The Nature of Consciousness

Piero Scaruffi


These are excerpts and elaborations from my book "The Nature of Consciousness"

Production

Soon, the most abused model of cognitive psychology became one in which a memory system containing knowledge is operated upon by an inference engine; the results are added to the knowledge base and the cycle resumes indefinitely. For example, I may infer from my knowledge that it is going to rain and therefore add to my knowledge base that there is a need for umbrellas.

In this fashion, knowledge is continuously created, and pieces of it represent solutions to problems. Every new piece of knowledge, whether acquired from the external world or inferred from the existing knowledge, may trigger any number of inferential processes, which can proceed in parallel.  Since knowledge is mainly represented via “production rules” (rules that state that something becomes true when something else has become true), these systems are referred to as “production systems”. A production rule is, ultimately, a formula of classical Logic.
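To make the cycle concrete, here is a minimal sketch of such a production system in Python. The rules and the rain/umbrella facts are illustrative inventions, not drawn from any particular architecture:

```python
# A minimal forward-chaining production system (illustrative sketch).
# Each rule fires when all of its conditions are already in the knowledge
# base; its conclusion is then added, and the cycle repeats until no rule
# can add anything new.

rules = [
    ({"dark clouds", "falling pressure"}, "it is going to rain"),
    ({"it is going to rain"}, "an umbrella is needed"),
]

def run(knowledge):
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= knowledge and conclusion not in knowledge:
                knowledge.add(conclusion)   # new knowledge may trigger further rules
                changed = True
    return knowledge

facts = run({"dark clouds", "falling pressure"})
print(facts)   # now includes "it is going to rain" and "an umbrella is needed"
```

Each conclusion added to the knowledge base can itself satisfy the conditions of other rules, which is why the cycle can, in principle, resume indefinitely.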

John Anderson's ACT (1976) was a cognitive architecture capable of dealing with both declarative knowledge (represented by propositional networks) and procedural knowledge (represented by production rules). Declarative knowledge ("knowing that") can be consulted, whereas procedural knowledge ("knowing how") must be enacted in order to be used.

The relationship between the two types of knowledge is twofold. On the one hand, the production system acts as the interpreter of the propositional network to determine action. On the other hand, knowledge is continuously compiled into ever more complex procedural chunks through an incremental transformation of declarative knowledge into procedural knowledge. Complex cognitive skills can thus develop from a simple architecture, as new production rules are continuously learned.
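A schematic sketch of that division of labor and of knowledge compilation follows; the fact, goal and rule names are deliberately simplified stand-ins invented for illustration, not Anderson's actual representations:

```python
# Illustrative sketch of ACT-style declarative vs. procedural knowledge
# and of "knowledge compilation" (composing two productions into one).

# Declarative memory: propositions that can be consulted ("knowing that").
declarative = {
    ("add", 3, 4): 7,          # a remembered fact: 3 + 4 = 7
}

# Procedural memory: condition -> action rules ("knowing how").
def p1(goal, state):
    # retrieve a needed fact from declarative memory
    if goal[0] == "sum" and "fact" not in state:
        state["fact"] = declarative[("add", goal[1], goal[2])]
        return True

def p2(goal, state):
    # use the retrieved fact to answer the goal
    if "fact" in state and "answer" not in state:
        state["answer"] = state["fact"]
        return True

def compile_rules(r1, r2):
    # Composition: a single new production that does in one step
    # what previously took an interpretive cycle through r1 and r2.
    def compiled(goal, state):
        return r1(goal, state) and r2(goal, state)
    return compiled

state = {}
for rule in (p1, p2):
    rule(("sum", 3, 4), state)
print(state["answer"])         # 7, via two interpretive steps

fast = compile_rules(p1, p2)
state2 = {}
fast(("sum", 3, 4), state2)
print(state2["answer"])        # 7, via one compiled production
```

The compiled production does in one step what previously required interpreting the declarative fact, which is the sense in which practice turns "knowing that" into "knowing how".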

Anderson, therefore, thought of a cognitive system as having two long-term memories: a “declarative” memory (which remembers experience) and a “procedural” memory (which remembers the rules learned from experience).

Anderson also developed a probabilistic method to explain how categories are built and how prototypes are chosen. Anderson's model maximizes the “inferential potential” of categories (i.e., their "usefulness"): the more a category helps predict the features of an object, the more the existence of that category makes sense.  For each new object, Anderson's model computes the probability that the object belongs to one of the known categories and the probability that it belongs to a new category: if the latter is greater than the former, a new category is created.
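A toy version of that decision rule might look as follows. It follows the description above (compare the probability of each known category with the probability of a brand-new one), but the coupling constant and the smoothed feature likelihoods are simplifying assumptions, not Anderson's exact formulation:

```python
# Simplified sketch in the spirit of Anderson's rational model of
# categorization; the coupling constant and the Laplace-smoothed feature
# likelihoods are illustrative assumptions.

COUPLING = 0.5           # prior tendency of two objects to share a category
categories = []          # each category is a list of objects (feature dicts)

def likelihood(obj, members):
    # How well does this category predict the object's features?
    p = 1.0
    for feature, value in obj.items():
        matches = sum(1 for m in members if m.get(feature) == value)
        p *= (matches + 1) / (len(members) + 2)   # smoothed prediction
    return p

def categorize(obj):
    n = sum(len(c) for c in categories)
    denom = (1 - COUPLING) + COUPLING * n
    scores = [COUPLING * len(c) / denom * likelihood(obj, c) for c in categories]
    new_score = (1 - COUPLING) / denom * likelihood(obj, [])
    if not scores or new_score > max(scores):
        categories.append([obj])                  # a new category is created
    else:
        categories[scores.index(max(scores))].append(obj)

categorize({"wings": True, "flies": True})
categorize({"wings": True, "flies": True})
categorize({"wings": False, "flies": False})
print(len(categories))   # 2: the third object starts a category of its own
```

A category earns its keep here exactly as in the text: the better its members predict the new object's features, the higher its score relative to the option of starting a new category.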

Later versions of the architecture organize knowledge at three levels: a knowledge level (information acquired from the environment plus innate principles of inference), an algorithmic level (internal deductions, inductions and compilations) and an implementation level (the setting of parameters for the encoding of specific pieces of information).

Newell, working with John Laird and Paul Rosenbloom, proposed a similar architecture, SOAR (1987), based on three powerful concepts. The “universal weak method” is an organizational framework whereby knowledge determines the inferential methods employed to solve a problem, i.e. knowledge controls the behavior of the rational agent. “Universal sub-goaling” is a process whereby goals can be created automatically to deal with the difficulties that the rational agent encounters during problem solving. Finally, a model of practice is built on the concept of “chunking”, the creation of new production rules; this process is calibrated to reproduce the “power law of practice” that characterizes the improvement of human performance with practice at a given skill: the more you practice, the better you get.

Within SOAR, each task has a goal hierarchy. When a goal is successfully completed, a chunk representing the results of the task is created. The next time that task arises, the system does not need to process it in full, because the corresponding chunk already contains the instructions for achieving its goal. Chunking proceeds bottom-up in the goal hierarchy and eventually produces a chunk for the top-level goal in every situation the system can encounter.
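A toy illustration of chunking follows; the situations, the cache and the search stand-in are invented for the example (real SOAR chunks are new production rules built from the elements that produced the sub-goal's result):

```python
# Illustrative sketch of SOAR-style chunking (not the actual SOAR code):
# when a sub-goal has to be solved by deliberate search, the result is
# cached as a "chunk"; the next time the same situation arises, the chunk
# fires directly and the search is skipped.

chunks = {}                       # situation -> learned result

def solve_by_search(situation):
    # stand-in for costly problem-space search inside a sub-goal
    return sorted(situation)

def achieve(situation):
    key = tuple(situation)
    if key in chunks:             # a chunk already encodes how to do this
        return chunks[key]
    result = solve_by_search(situation)
    chunks[key] = result          # chunking: a new production-like rule
    return result

achieve([3, 1, 2])                # solved by search, then chunked
achieve([3, 1, 2])                # solved immediately by the learned chunk

# Accumulated over many trials, this kind of speed-up is calibrated to
# follow the power law of practice: time on trial N is roughly a * N**(-b).
```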

Production systems of this kind are the architectures advanced by proponents of the symbolic-processing approach to explain how the mind goes about acting, solving problems and learning how to solve new problems.

 

