A Brief on The Knowledge Level by Allen Newell

I think the strength of The Knowledge Level lies in its insight into the structure and direction of AI. An insight I particularly like is that computer science has grown largely through the accumulation of empirical knowledge, occasionally augmented by theoretical structures and predictions. This is not to say that there is no "science" in computer science, but that the traditions that have been the foundation of the physical sciences, such as physics and chemistry over the past two centuries, have not really characterized computer science. The role of theoretical prediction has been replaced by that of theoretical explanation (something much more prevalent in the older sciences during the 17th century), i.e., the construction of structured theory to explain observed phenomena and empirical knowledge. This is precisely the role Allen Newell plays by postulating the existence of the "knowledge level".

Newell takes the concept of levels in computer science, starting with the Device Level, and with increasing complexity identifies the Circuit, Logic-Circuit, Register-Transfer, and Symbolic Levels. Each of these levels shares some important, if not crucial, features: a system definition is sufficient to determine the behavior of the system; the behavior of the system is an algebraic sum of the behaviors of its components; the variety of a system's behavior derives from the complexity of its structure; and systems realize (i.e., are constructed from) physical properties inherent in the physical nature of their components. Newell then proceeds to define a new system level, which I think derives from his empirical observations of what an agent (in the AI sense) should be, and which he calls the "Knowledge Level". This new level, unlike the preceding levels, does not entirely share these common properties. In fact, the characteristics ascribed to the previous levels are, for all practical purposes, contradicted at the knowledge level. (Though one could argue that this is really a matter of degree rather than a truly qualitative difference between the knowledge level and the preceding levels.) This high degree of contradiction seems to imply one of two things: either the knowledge level is not really a system level like the previously defined levels (the reasoning behind the derivation of those levels appearing to be obvious), or a fundamental change has occurred in the hierarchy.

Of the two possibilities, I lean toward the latter, as I find the concept of a knowledge level compelling. It makes sense that at some point one must talk about systems that derive their properties from the interaction of the content of their components with respect to some global rule system (i.e., "rationality"), rather than from the interaction of the structure of their components. (But I cannot help thinking that an alternative viewpoint or methodology here might be one of context-dependent symbol or language systems.) Another factor supporting the idea of a fundamental change in the hierarchy is that, since knowledge is derived from the meta-structure of an agent, one can postulate (albeit less clearly) the existence of a level above the knowledge level. This level, which I will call the social level, is modeled on the interactions of (rational) agents. (I suspect this is a better model for human intelligence; see below.)

Although Newell initially seems to imply that, unlike the previous levels (each of which can be used to completely model the level above it), the knowledge level is independent of, and impossible to model in, the prior symbol level, he eventually admits that the symbol level does play an important role in reasoning about the knowledge level. In fact, I think Newell attempts to maintain too much distance between the symbol and knowledge levels. He uses the short story "The Lady, or the Tiger?" as an example of why the levels are as independent as he implies, but I do not find it a compelling argument. The claim that having a complete knowledge-level model of the princess would still prevent us from telling what decision she made is deceptive. While it is true that humans do not appear to be deterministic (an open question, in my opinion), it is shortsighted to say that we cannot make highly accurate predictions about their behavior when we have sufficient information, such as previous behavior, upbringing, and societal pressures. In any case, it seems deceptive to put humans strictly into the category of systems that can be captured by the knowledge level. The complex and contradictory behavior of humans seems to imply that humans are not capable of examining the internal workings of their own mental processes, which Newell's definition of agents/knowledge-level systems requires.

It is my contention that interactions between the symbol system(s) and the knowledge system are non-trivial and critical to the knowledge system's, or agent's, behavior. The validity of many of Newell's statements depends on the agents in question sharing a significantly overlapping symbol system, for example the ability of one agent to predict another agent's behavior based on observing the same inputs. This is a reasonable assumption when the agents share reasonably similar symbol systems, i.e., they speak the same language (natively) and come from a common society. Without this commonality, all bets are off.

Newell raises many other interesting and important points, from the role of logic systems in AI to that of R&D in externalizing various procedures for extracting knowledge and structure from data structures, which I do not have the space to go into. I would like to close with the following observation: the platform that "personhood" operates on, like the structure that defines the knowledge level, is not as important as the "person", or the knowledge.