by Fritz Freiheit
Mar. 13th, 1995
(A background resource for the Guha & Lenat .)
"Commonsense" - I can't tell you what it is, but I know it when I see it.
Initially I would like to compare the Cyc project, as an Artificial Intelligence undertaking, to two other seminal technological events: the first heavier-than-air flight by man, accomplished by the Wright brothers shortly after the turn of the century, and the Manhattan Project, carried out by the United States during the Second World War. Neither is a complete analogy, but each allows some interesting insights into the nature of the Cyc project. In addition, the very process of creating and exploring these analogies illuminates the problem that the Cyc project was launched to resolve. Unfortunately, it is impossible to carry these analogies too far, and in particular to draw from them any real indication of the future success, or failure, of the Cyc project, as it has yet to reach the end of its planned lifetime.
Like the rush to accomplish the first heavier-than-air flight and the rush to build the first atomic bomb, the AI community is striving to build the first example of machine intelligence comparable to our own. In each case there were (or are) strong camps aligned with the engineering/pragmatic end of things, and others who feel that a firm theoretical understanding of the problems is absolutely necessary. In both historical cases, the winning camp was the engineering "let's just build it, we'll deal with the problems as they come up" camp. The Cyc project is also firmly in the engineering camp. The climate that pervaded the search for the first (heavier-than-air) aircraft, more than that of the Manhattan Project, featured many competing and frequently mutually exclusive ideas and experiments, as is the case in the AI field today. It was felt by Guha, Lenat & Feigenbaum (see ) that the only way to make significant progress towards the goal was to bite the bullet and throw a large amount of resources at it, as with the Manhattan Project (this is one of the areas where the analogy with the first aircraft breaks down: the initial flight goal could be accomplished with meager resources).
To return to the Wright brothers, it is interesting to note that all three men involved were essentially mechanics, and in fact the ultimate solution was accomplished by a machinist who had no training in how to build gasoline engines. This is not to say that Guha, Lenat and many of the others who have been working on the Cyc project do not have strong theoretical backgrounds, only that the similarity derives from the fact that the participants in both cases had no reason to believe that their theories were (or are) sufficient, or even close. The gas engine for the Wright brothers' plane, like the knowledge in the Cyc project, was built from scratch, and in spite of the theorists saying that it would not work (the experts of the day did not believe that a power-to-weight ratio sufficient to achieve flight was possible, just as many experts today do not believe that sufficient power can be derived from captured knowledge to allow a significant subset of human intelligence to be simulated).
Which brings us to the question of what motivated the Cyc project in the first place. Much as heavier-than-air flight was demonstrated by birds and insects, the example set by "natural intelligence" as demonstrated by animals, and more importantly by humans, has long been held up as a possible man-made goal. The immediate motivations for the Cyc project derive from the experience of the AI field in general during the 1970's and early 1980's, and more specifically from failures to accomplish desired goals, such as natural language understanding and expert systems that demonstrated anything but the narrowest expertise. These failures included excessively brittle expert systems (i.e. systems which performed well in narrow domains, but which got lost when encountering anything outside of those domains). Another important limitation has been in the area of sharable ontologies. The reason that expert systems cannot, in general, be combined is that they do not share the same semantics, even if they share a common syntax or implementation language.
So, what is the problem here? In general, one must address the limits of architecture, implementation and representation through content, i.e. knowledge. Things become clearer when the theoretical foundations for the Cyc project are articulated by Lenat & Feigenbaum in . They sum this up in the following principle and two hypotheses.
To put these three principles/hypotheses into a concrete framework: the Cyc project was founded on the belief that if progress was to be made towards the goal of full machine intelligence, then someone must attempt to capture that ephemeral concept "commonsense". The fact that this was undertaken as an empirical process does not detract from the attempt. Far from it: it presents the opportunity to test the hypothesis that commonsense can be captured and, if so, whether it will actually produce the desired effects. Thus, the Cyc project is inherently, and intentionally, falsifiable.
The Knowledge Principle and the Breadth Hypothesis combine to create a notion of "commonsense knowledge", the knowledge necessary to understand an encyclopedia entry or a newspaper article. It can also be viewed as contextual knowledge and "consensus reality", the reality that we all assume is around us. This concept of commonsense knowledge forms the core of what the Cyc project is attempting to capture.
Research in AI has been going on since the inception of electronic computers, so why should it have taken this long for someone to attempt an undertaking like the Cyc project? One can certainly argue that there were insufficient computational resources to really attack it, but I think there are several other important reasons as well. One important reason is that it took until the beginning of the 1980's even to recognize that there was a problem (as described by Lenat & Feigenbaum in ) with the traditional methods of search and general representational systems. Once it was decided that there was a problem, it was difficult to find a place to start. The theory of knowledge was vague, and even knowing how to get a grip on "commonsense" was a non-trivial task, as can be seen from the number of changes that Cyc went through (see below). Some researchers (such as Brian Smith ) feel that a much firmer theoretical foundation is required before you can even hope to start a project of this size. This is a trap, for how can you know in advance what sort of interaction between theory and implementation will occur? It is the process of implementation that frequently reveals problems with ontology, representation, and other aspects of theory. And finally, one cannot dismiss the power of the fear of being wrong or failing, as it is much easier to work in areas that are already charted than to set out into the unknown.
Lenat & Feigenbaum (in ) present a grand vision for the future of AI research as initially embodied by the capture of commonsense knowledge. This vision is broken into three major stages (see Figure 1). These are:
(Insert figure 1)
Figure 1. Rate of learning vs. amount of knowledge
This grand vision can be directly mapped to the developmental phases of Cyc in the following manner:
Pre-Phase 1. Ontological and representational problem solving. (Continues through the lifetime of the Cyc project.)
Phase 1. Knowledge Entry. This is the primary goal of the Cyc project.
Pre-Phase 2. Application implementation using Cyc - to help define and extend the necessary knowledge, ontology, inferencing, etc.
Phase 2. Crossover to natural-language-based learning. This is the planned termination point of the Cyc project, by which time it is expected to be in use supporting other applications.
Phase 3. (Ultimately) Discovery on its own.
In the process of implementing the grand vision the specific original goals of the Cyc project were:
These goals reflect the engineering or pragmatic nature that has been emphasized so far. The actual implementation of Cyc essentially follows the declarative logicist approach with a heavy dose of pragmatism. For more specific details, refer to  and . It is not that the implementation details are uninteresting, but, in part because of the shifting-ground nature of many of the details pertaining to the implementation of Cyc, I prefer to stick with the higher-level motivational view. Guha, Lenat and Feigenbaum emphasize that Cyc has evolved quickly and pragmatically. Initially the Cyc system was almost entirely a (vanilla) frame language, but this has declined in use as a constraint language (based on first order predicate calculus) has risen. The constraint language was developed to address the needs for disjunction, negation, universal and existential quantification, etc. Changes were introduced into the representation language (CycL) and the ontology of Cyc only when they were felt to be absolutely necessary, such as when no way could be found to represent something, or when there was a need for more efficient inferencing. One of the important questions asked by McDermott  and Skuce  is how the process of developing Cyc could be carried out in an environment of shifting representation and ontology without forcing restarts. The answer to this question seems painfully obvious to me (as stated in ): it is the declarative nature of Cyc's knowledge representation, which allows the writing of new procedures over predicates that remain essentially the same. Changes in the ontology are/were easier than changes in the representation language. The following indicate the major reasons why:
As an indication of this, Guha and Lenat  state that only 5 major changes to the ontology/representation had occurred (as of 1993); the last big change was in 1990, when contexts/microtheories were introduced (the KB had over half a million assertions in it at the time). The others were the elimination of probabilities, the addition of default reasoning based on argumentation, and allowing predicates of arity greater than 2. Not surprisingly, Guha and Lenat indicate that the tools developed to support these modifications have also helped to make change manageable.
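Why a declarative store tolerates this kind of change can be sketched in a few lines. This is not Cyc's actual code, and the predicates (`isa`, `genls`) are merely in the spirit of CycL: because assertions are plain data, a new inference procedure can be written over the same predicates without touching the stored knowledge.

```python
# A minimal sketch (not Cyc's implementation) of writing new procedures
# over predicates that remain the same. Assertions are plain tuples.
KB = [
    ("isa", "Fido", "Dog"),
    ("genls", "Dog", "Mammal"),      # "generalizes": Dog is a kind of Mammal
    ("genls", "Mammal", "Animal"),
]

def direct(pred, arg1):
    """An early, naive procedure: only directly asserted values."""
    return {a[2] for a in KB if a[0] == pred and a[1] == arg1}

def all_genls(collection):
    """A later, transitive procedure over the SAME 'genls' assertions."""
    seen, frontier = set(), {collection}
    while frontier:
        c = frontier.pop()
        for parent in direct("genls", c):
            if parent not in seen:
                seen.add(parent)
                frontier.add(parent)
    return seen

print(all_genls("Dog"))   # {'Mammal', 'Animal'}
```

Replacing `direct` with `all_genls` required no change to the assertions themselves, which is the point being made about Cyc's restarts-free evolution.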
Guha and Lenat emphasize the increasing importance of the Epistemological Level (EL) and Heuristic Level (HL) mechanisms. The EL provides a common interface, so that, in general, the users of Cyc (humans or other applications) don't have to know about the underlying HL mechanisms that are being used to resolve queries and assertions, while the HL allows specialized techniques to be applied to specific domains, contexts, etc. within Cyc so that efficient (and timely) results can be obtained.
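The EL/HL division of labor can be sketched as follows. The class and module names here are invented for illustration; the point is only the shape of the interface: callers pose queries at a single epistemological level, which silently dispatches to whichever specialized heuristic-level module can answer efficiently.

```python
# A sketch (assumed structure, not Cyc's code) of the EL/HL split.
class TaxonomyHL:
    """Specialized HL module: fast subsumption over a hierarchy."""
    handles = {"isa"}
    def __init__(self, parents):
        self.parents = parents
    def ask(self, pred, x, y):
        while x is not None:          # walk up the hierarchy
            if x == y:
                return True
            x = self.parents.get(x)
        return False

class TableHL:
    """Fallback HL module: plain lookup in a fact table."""
    handles = {"capitalOf"}
    def __init__(self, facts):
        self.facts = facts
    def ask(self, pred, x, y):
        return (pred, x, y) in self.facts

class EpistemologicalLevel:
    """Single interface; callers never see which HL module answers."""
    def __init__(self, modules):
        self.modules = modules
    def ask(self, pred, x, y):
        for m in self.modules:
            if pred in m.handles:
                return m.ask(pred, x, y)
        raise KeyError(f"no HL module for {pred}")

el = EpistemologicalLevel([
    TaxonomyHL({"Dog": "Mammal", "Mammal": "Animal"}),
    TableHL({("capitalOf", "Paris", "France")}),
])
print(el.ask("isa", "Dog", "Animal"))          # True
print(el.ask("capitalOf", "Paris", "France"))  # True
```

The design choice mirrors the one described above: efficiency lives in the HL modules, while the EL keeps the logical interface stable for users and applications.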
Extensions were made to FOPC to prevent "intolerably slow inference" speed. The extensions include meta-level assertions (reification, reflection on internal inferencing), modal operators (Believes, Desires), a context mechanism, and limited quantification over predicates. In addition, the FOPC used by Cyc evolved to have some n-th-order predicate calculus features.
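The meta-level flavor of these extensions can be illustrated by making assertions first-class objects, so the KB can hold assertions *about* assertions (reification) and apply modal operators such as Believes. The predicate names here are illustrative, not drawn from CycL.

```python
# Sketch of reification and modal operators over first-class assertions.
from dataclasses import dataclass

@dataclass(frozen=True)          # frozen => hashable, usable in sets
class Assertion:
    pred: str
    args: tuple

flat = Assertion("earthIsFlat", ())                 # a base-level claim
# Modal operator: a belief ABOUT that claim, itself an ordinary assertion.
belief = Assertion("Believes", ("AncientMariner", flat))
# Meta-level: the KB can record provenance of the belief assertion.
meta = Assertion("assertedBy", (belief, "folklore-context"))

KB = {belief, meta}              # note: 'flat' itself is NOT asserted
print(flat in KB)                # False: the KB need not endorse the claim
print(belief in KB)              # True
```

The key property is that the system can reason about who believes what without the believed propositions leaking into its own stock of accepted facts.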
Some additional changes:
During the time that  span, many changes occurred. But by the time that  was published, it would be fair to say that the changes had settled down and were more representative of a maturing Cyc. During this midterm period, as noted above, the importance of the Epistemological/Heuristic level bifurcation grew. Another important insight that Cyc seems to bear out is the importance of local consistency over global consistency. Instead of trying to maintain some sort of global consistency, Cyc maintains local consistency. This is indicated by the fact that, despite the inconsistencies that exist in a global sense within Cyc (various contexts/microtheories disagree), Cyc is still capable of carrying out meaningful inferences. This was made possible by the introduction of contexts/microtheories. Guha and Lenat (in ) show a strong parallel between the implementation of the ontologies and that of the inferencing mechanisms.
Empirically derived, increasingly stable set of collections
Large Count (1993 - 8000 collections, over 5000 predicates, several tens of thousands of individuals)
Additions are easy (good tools, occurs in parallel, Cyc monitors updates to control the effects of each change)
Context/Microtheory is important (allows multiple ontologies, which is useful in permitting multiple views of a given domain, i.e. strictly correct vs. commonly useful, or multiple participant vantage points, or local vs. global consistency; new ontologies can be created by "budding" off a new one, rather than by modifying "The One True Ontology")
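The local-vs-global consistency point can be made concrete with a toy sketch. The microtheory names below are invented: two contexts flatly contradict each other, yet each one answers queries consistently on its own, which is all that is required for meaningful inference.

```python
# Sketch of local consistency via contexts/microtheories (assumed
# structure, not Cyc's code). Microtheory names are hypothetical.
KB = {
    "NaivePhysicsMt":  {("liftable", "Table"): True},
    "StrictPhysicsMt": {("liftable", "Table"): False},   # disagrees!
}

def ask(context, query):
    """Queries are resolved inside ONE microtheory only."""
    return KB[context].get(query)

# Globally inconsistent, locally fine:
print(ask("NaivePhysicsMt",  ("liftable", "Table")))   # True
print(ask("StrictPhysicsMt", ("liftable", "Table")))   # False
```

No global consistency check ever runs; contradictions between microtheories are simply never visible to a query confined to one of them.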
Additional Midterm implementation notes:
What criticisms were laid at the door of Cyc at its midterm?
Use of the KB as a shared information pool, as opposed to Levesque and Brachman's view of the KB as a service that only inference engines have direct access to. (I.e. the reasoning mechanism is coupled to the knowledge that is represented in the KB.)
How can we measure the success of Cyc? This is an important aspect of the Cyc project that has been somewhat neglected. While Lenat and Feigenbaum admit that there is something of a shortfall in this area, and suggest some ways to measure it, there does not seem to have been any real attempt to measure the progress of Cyc except in a "head-count" sort of way. While it is true that we will have to wait until Cyc actually "crosses over" into reading and asking questions as its primary mode of learning to judge a number of these success measurements, it would seem that the Cyc project team could publish more incremental results along the way.
Success measurement methods:
In the end, it is important to keep in mind that the Cyc project is worth doing regardless of its success. Cyc provides an important locus for progress in AI research, something to compare techniques and strategies against. Another reason is that big projects bring out problems that small projects don't; we can thus expect to learn a great deal about the process of engineering large knowledge bases. It is also expected that we can get some handle on the size of a commonsense knowledge base, whether the project succeeds or fails. Finally, in the assessment of Guha & Lenat in , success can be measured in the following way:
There are a number of things that I did not address due to time and space considerations. These include any significant information on specific implementation details, such as explicit internal representation, or user interfaces. Instead, I preferred to emphasize the high-level and theoretical considerations that surround and support the Cyc project. If this had been a "how to" rather than a "how come" paper, I would have spent more time on them.
BH - Breadth Hypothesis
CSK - Commonsense Knowledge
EH - Empirical Inquiry Hypothesis
EL - Epistemological Level
HL - Heuristic Level
KB - Knowledge Base
KP - Knowledge Principle
 C. Elkan and R. Greiner, Book Review: D.B. Lenat & R.V. Guha, Building Large Knowledge-Based Systems, Artificial Intelligence 61 (1993) 41-52.
 R.V. Guha and D.B. Lenat, Enabling Agents to Work Together, Communications of the ACM, July 1994/Vol. 37, No. 7, 127-142.
 R.V. Guha and D.B. Lenat, Cyc: a midterm report, AI Magazine 11 (3) (1990) 32-59.
 R.V. Guha and D.B. Lenat, Response: Re: CycLing paper reviews, Artificial Intelligence 61 (1993) 149-174.
 D.B. Lenat and E.A. Feigenbaum, On the thresholds of knowledge, Artificial Intelligence 47 (1990) 185-250.
 D.B. Lenat and R.V. Guha, Building Large Knowledge-Based Systems (Addison-Wesley, Reading, MA, 1990).
 D. McDermott, Book Review: D.B. Lenat & R.V. Guha, Building Large Knowledge-Based Systems, Artificial Intelligence 61 (1993) 53-63.
 D. Skuce, Book Review: D.B. Lenat & R.V. Guha, Building Large Knowledge-Based Systems, Artificial Intelligence 61 (1993) 81-94.
 B.C. Smith, The owl and the electric encyclopedia, Artificial Intelligence 47 (1990) 251-288.
Copyright (C) 1997 by F.E. Freiheit IV