It has long been a goal in the artificial intelligence community to produce a general model, or better yet, an actual example, of intelligence. Despite the problems entailed by this goal (not the least of which is deciding what, exactly, intelligence is), Rosenbloom et al. present an analysis of the Soar architecture in this context. They describe Soar as a sequence of levels, a framing in which Allen Newell played an important part. These levels are, in increasing order of complexity: memory (i.e. the storage of facts), decisions (i.e. the generation and/or selection of actions), and goals (i.e. the direction and/or evaluation of various courses of action). In addition to these architectural levels, Soar includes the ability to learn, to perceive and act upon its environment, and a set of default behaviors or knowledge.

The primary unit of Soar's operation is the production (which may embody procedures and operators), but unlike traditional production systems, Soar uses its productions as pointers into its long-term memory. That is to say, if a production's antecedents are met, then Soar adds the production's consequent to the current "active" set of data. This allows Soar to build up a state of "facts" representing only what seems necessary to achieve a goal, rather than always having to search through its entire long-term knowledge base. When no clear answer is available (the productions allow for built-in preferences), Soar generates sub-goals in an attempt to construct a path to a solution. I think this process of sub-goal generation (or meta-operators) is a critical component of Soar's candidacy as an architecture for general intelligence. Another important strength of the sub-goal process is that it can examine prior (i.e. higher-level) goals and decide that they have been satisfied or are unnecessary, thus jumping out of the current sub-goal search.
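
To make the match/fire/sub-goal cycle concrete for myself, I sketched out how I picture it working. This is strictly my own toy illustration, not Soar's actual implementation: the names (Production, run_productions, solve, subgoal_for) and the representation of facts as tuples of strings are all my assumptions.

    class Production:
        """A condition/action pair: if all antecedent facts are present in
        working memory, the consequent fact may be added."""
        def __init__(self, antecedents, consequent):
            self.antecedents = frozenset(antecedents)  # facts required to fire
            self.consequent = consequent               # fact added on firing

    def run_productions(productions, working_memory):
        """Fire satisfied productions until quiescence (no new facts),
        pulling into the "active" set only what the rules deem relevant."""
        changed = True
        while changed:
            changed = False
            for p in productions:
                if p.antecedents <= working_memory and p.consequent not in working_memory:
                    working_memory.add(p.consequent)
                    changed = True
        return working_memory

    def solve(goal, productions, working_memory, subgoal_for, depth=10):
        """Elaborate working memory; if the goal is still unmet (an impasse),
        recurse into a sub-goal. Note the re-check of the parent goal after
        each sub-goal: a higher-level goal found to be satisfied terminates
        the current sub-goal search."""
        if depth == 0:
            return False
        run_productions(productions, working_memory)
        if goal in working_memory:
            return True                    # parent goal satisfied: jump out
        sub = subgoal_for(goal, working_memory)   # caller-supplied decomposition
        if sub is None or sub in working_memory:
            return False                   # impasse cannot be resolved further
        if solve(sub, productions, working_memory, subgoal_for, depth - 1):
            return solve(goal, productions, working_memory, subgoal_for, depth - 1)
        return False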

The second critical component of Soar's architecture (and thus its general intelligence) is its ability to learn in a generalized manner. Soar uses a "chunking" process to bundle up the steps required to satisfy a sub-goal and save them to long-term memory, making it unnecessary to re-derive the solution by search the next time the same sub-goal is encountered. The fact that Soar can do this in a reasonably general manner is very powerful, as the generalization reduces the number of chunks that must be stored (and thus the cost of searching for an applicable chunked sub-goal). Interestingly, the method Soar uses to generalize is pointers (i.e. symbolic representations) to the parameters of its operators. Thus when a chunked sub-goal is applied, any object that meets the criteria of the sub-goal can be taken as a parameter to it.
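
Again, a hedged sketch of how I imagine chunking might work, continuing the toy code above. The variablization scheme ("?x" strings) and the brute-force matcher are my own inventions for illustration; the paper does not spell out Soar's mechanism at this level of detail.

    def variablize(fact, parameters):
        """Replace the specific constants of one solved instance with
        variable slots, e.g. ("clear", "A") -> ("clear", "?A")."""
        return tuple("?" + part if part in parameters else part for part in fact)

    def chunk(used_facts, result, parameters):
        """Bundle the facts that satisfied a sub-goal, plus its result, into
        one generalized production saved to long-term memory."""
        antecedents = frozenset(variablize(f, parameters) for f in used_facts)
        return Production(antecedents, variablize(result, parameters))

    def match(antecedents, working_memory):
        """Bind variables so that every antecedent pattern matches some fact
        in working memory; returns a binding dict or None. This is the step
        that lets any object meeting the criteria fill a parameter slot."""
        def unify(pattern, fact, bindings):
            if len(pattern) != len(fact):
                return None
            b = dict(bindings)
            for p, f in zip(pattern, fact):
                if p.startswith("?"):
                    if b.get(p, f) != f:
                        return None
                    b[p] = f
                elif p != f:
                    return None
            return b
        def search(remaining, bindings):
            if not remaining:
                return bindings
            for fact in working_memory:
                b = unify(remaining[0], fact, bindings)
                if b is not None:
                    out = search(remaining[1:], b)
                    if out is not None:
                        return out
            return None
        return search(list(antecedents), {})

    def apply_chunk(c, working_memory):
        """Fire a chunked production on a new, concrete instance."""
        bindings = match(c.antecedents, working_memory)
        if bindings is not None:
            working_memory.add(tuple(bindings.get(p, p) for p in c.consequent))

    # E.g. a chunk learned while working on block "A" now fires for block "B":
    #   wm = {("on", "junk", "B")}
    #   c = chunk([("on", "junk", "A")], ("can-clear", "A"), {"A"})
    #   apply_chunk(c, wm)   # wm now also contains ("can-clear", "B")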

An important extension of the above learning process is Soar's ability to learn the solution to a problem at an abstract level and then apply that solution to a more specific or concrete problem. This seems to be an effective technique: the abstraction of the problem produces a much smaller search space than the real problem would, yet provides Soar with a chunked set of sub-goals that are still effective for the real problem. (I found this rather intriguing and would have liked to have seen more about it.)
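
Here too, a small sketch helped me think it through. The idea, as I read it, is to solve a projection of the problem first and let the chunks learned there carry over to the full problem; abstract() and its "keep only the essential predicates" rule are my own assumptions, not the paper's formulation.

    def abstract(facts, essential_predicates):
        """Project a concrete problem onto its essential predicates only;
        the abstract space is far smaller, so the sub-goal search is cheap."""
        return {f for f in facts if f[0] in essential_predicates}

    def solve_with_abstraction(goal, productions, facts, essential, subgoal_for):
        # Phase 1: search the abstracted problem. In a fuller sketch, the
        # chunks learned here (see chunk() above) would be appended to
        # `productions` as each abstract sub-goal is satisfied.
        solve(goal, productions, abstract(facts, essential), subgoal_for)
        # Phase 2: the concrete search is now guided by those chunks, whose
        # variablized conditions still apply to the detailed problem.
        return solve(goal, productions, set(facts), subgoal_for)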

One thing that bothered me about the Soar architecture is that it seems to lack the ability to deal with fuzzy or metaphorical applications of its knowledge. This could be very important, in that it would seem to limit how general Soar could get in applying procedures or operators. The inability to construct metaphors would also seem to make environment modeling from experimentation, scientific discovery, and theory formation (things that Soar cannot yet do) all much more difficult.

Another thing that bothered me was that concentrating on keeping Soar a simple, one-technique system would seem to lead to "pounding nails with a screwdriver" syndrome. I admit that attempting to find a solution within a single framework has its advantages, but there is only so far that this should be taken. This seems particularly true when dealing with a domain of competence as huge as that represented by general intelligence.