(These are excerpts from my book "Intelligence is not Artificial")
The Boom of Expert Systems
At the same time, expert systems were beginning to make inroads, at first in academia, notably with Bruce Buchanan's Mycin (1972) at Stanford University for medical diagnosis and John McDermott's Xcon (1978) at Carnegie Mellon University for product configuration, both written in LISP. By the 1980s they had also spread to the industrial and financial worlds at large, thanks especially to many innovations in knowledge representation (Ross Quillian's semantic networks at Carnegie Mellon University, Minsky's frames at MIT, Roger Schank's scripts at Yale University, Barbara Hayes-Roth's blackboards at Stanford University, etc).
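The core mechanism of systems like Mycin was a base of if-then rules chained backward from a hypothesis to the evidence supporting it. The following is only a minimal sketch of that goal-driven idea, not Mycin's actual code (the real system also weighed conclusions with certainty factors); all rule contents and fact names here are invented examples.

```python
# A tiny backward-chaining rule engine in the style of early expert
# systems. Each rule is (set_of_premises, conclusion); facts are strings.
# Note: this sketch assumes the rule base has no cycles.

RULES = [
    ({"fever", "stiff neck"}, "suspect meningitis"),
    ({"suspect meningitis"}, "recommend lumbar puncture"),
]

def backward_chain(goal, facts, rules):
    """Try to prove `goal` from known facts by chaining rules backward."""
    if goal in facts:
        return True
    for premises, conclusion in rules:
        if conclusion == goal and all(
            backward_chain(p, facts, rules) for p in premises
        ):
            return True
    return False

print(backward_chain("recommend lumbar puncture",
                     {"fever", "stiff neck"}, RULES))  # → True
```

Goal-driven chaining of this kind is what made such systems feel like consultants: asked about a diagnosis, the engine works backward to the observations that would justify it.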
A frame, a variant of Otto Selz's "schema", is a structure that helps identify a situation; and, once the situation is recognized, the frame also tells us what we can do in and with that situation. A script, a social variant of Minsky's frame, represents stereotypical knowledge of situations as a sequence of actions and a set of roles. Once the situation is recognized, the script prescribes the actions that are sensible and the roles that are likely to be played. The script helps understand the situation and predicts what will happen in it: a script performs "anticipatory reasoning".
In this view, reasoning in ordinary daily life is not formal logical reasoning but "case-based" reasoning, a form of analogical reasoning in which each new situation is matched with known ones to predict what will happen next. Reasoning is "expectation-driven". There is a fundamental unity among cognitive phenomena such as perception, recognition, reasoning, understanding and memory: they occur at the same time, and you cannot have one without the others. Minsky and Schank were influenced by the "New Look" movement of Jerome Bruner and others: expectations determine what we perceive.
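The key computational idea behind frames is a structure of named slots with default values, where a specific frame inherits defaults from a more general one. A minimal sketch of that idea follows; the class and all the slot names are invented for illustration, not taken from Minsky's papers.

```python
# Illustrative sketch of a Minsky-style frame: named slots with defaults,
# plus inheritance from a more general parent frame.

class Frame:
    def __init__(self, name, parent=None, **slots):
        self.name = name
        self.parent = parent       # more general frame to inherit from
        self.slots = dict(slots)   # slot -> filler (value or default)

    def get(self, slot):
        """Look up a slot, falling back to the parent frame's default."""
        if slot in self.slots:
            return self.slots[slot]
        if self.parent is not None:
            return self.parent.get(slot)
        return None

# A generic "restaurant" frame supplies defaults...
restaurant = Frame("restaurant", payment="cash or card", has_menu=True)
# ...which a more specific "fast-food" frame can override.
fast_food = Frame("fast-food", parent=restaurant, payment="pay at counter")

print(fast_food.get("payment"))   # overridden: "pay at counter"
print(fast_food.get("has_menu"))  # inherited default: True
```

Recognizing a situation amounts to selecting the right frame; the inherited defaults then supply the expectations ("restaurants have menus") that the text above calls expectation-driven reasoning.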
In 1975 the first A.I. startups appeared: Leon Cooper of Brown University (the physicist who in 1957 co-developed the first theory of superconductivity and in 1981 would co-develop the BCM learning rule for neurons) co-founded Nestor with Charles Elbaum to develop neural-network technology, and Larry Harris of Dartmouth College founded Artificial Intelligence Corp to develop natural-language interfaces. In 1980 Edward Feigenbaum at Stanford co-founded the first major start-ups for expert systems: Intellicorp and Teknowledge.
Brian McCune, an alumnus of the Stanford Artificial Intelligence Laboratory, was in 1980 one of the founders of Advanced Information and Decision Systems (AIDS) in Mountain View, later renamed Advanced Decision Systems. He and Richard Tong, a Cambridge University graduate, designed a concept-based text-retrieval system, Rubric, a progenitor of search engines.
This A.I. boom did not extend much beyond the USA: in 1973 a report for the British government by James Lighthill, a Cambridge University mathematician, "Artificial Intelligence - A General Survey", had pretty much killed A.I. research in Britain.
In the USA, by contrast, there were several expert-system projects, especially in the medical field. In 1972 Saul Amarel's student Casimir Kulikowski and Sholom Weiss developed Casnet (Causal-ASsociational NETwork) at Rutgers University. In 1974 Harry Pople at the University of Pittsburgh built Dialog (later renamed Internist) and in 1976 Stephen Pauker debuted PIP (Present Illness Program) at Tufts University. All the excitement about expert systems for medicine culminated in the first Artificial Intelligence in Medicine workshop at Rutgers University, organized in 1975 by Kulikowski.
Notable expert systems of the 1980s included: Prospector (1981), developed at SRI by the Shakey team (Nils Nilsson, Richard Duda, Peter Hart, etc) to help geologists in mineral exploration, written in a dialect of LISP (Interlisp) and running on a PDP-10; and DELTA (Diesel Electric Locomotive Troubleshooting Aid), deployed in 1983 by Piero Bonissone at General Electric for maintenance of locomotives, written in Forth and running on a PDP-11.
There was also progress in search algorithms to compensate for the slow speed of computers. In 1980 Judea Pearl introduced the Scout algorithm, the first algorithm shown capable of outperforming alpha-beta pruning, and in 1983 Alexander Reinefeld further improved the technique with his NegaScout algorithm.
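NegaScout's refinement over alpha-beta is to search only the first move of each position with a full window and to probe the remaining moves with a minimal "null window", re-searching only when a probe suggests a better move. The sketch below illustrates that idea under stated assumptions; it is not Reinefeld's original formulation, and the toy Node class is invented for the example, with leaf values given from the root player's point of view.

```python
# Minimal NegaScout sketch over an invented toy game tree.

class Node:
    def __init__(self, value=None, children=()):
        self.value = value          # leaf score (root player's perspective)
        self.kids = list(children)

    def is_terminal(self):
        return not self.kids

def negascout(node, depth, alpha, beta, color):
    """Negamax form: `color` is +1 for the root player, -1 for the opponent."""
    if depth == 0 or node.is_terminal():
        return color * node.value
    first = True
    for child in node.kids:
        if first:
            score = -negascout(child, depth - 1, -beta, -alpha, -color)
            first = False
        else:
            # Probe with a null window (width 1)...
            score = -negascout(child, depth - 1, -alpha - 1, -alpha, -color)
            if alpha < score < beta:
                # ...and re-search with a wider window only if the probe
                # suggests this move could improve on the best so far.
                score = -negascout(child, depth - 1, -beta, -score, -color)
        alpha = max(alpha, score)
        if alpha >= beta:
            break  # cutoff, as in alpha-beta pruning
    return alpha

# A depth-2 toy tree: the root player maximizes, the opponent minimizes.
tree = Node(children=[
    Node(children=[Node(3), Node(5)]),   # opponent would choose 3
    Node(children=[Node(2), Node(9)]),   # opponent would choose 2
])
print(negascout(tree, 2, float("-inf"), float("inf"), 1))  # → 3
```

The null-window probes fail cheaply on most moves, which is where the savings over plain alpha-beta come from when moves are well ordered.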
One factor that certainly helped the symbolic-processing approach and condemned the connectionist approach was that the latter is far more computationally demanding: it requires computational power that at the time was rare and expensive.
(Personal biography: i entered the field in 1985 and went on to lead the Silicon Valley-based Artificial Intelligence Center of the largest European computer manufacturer, Olivetti, and i later worked at Intellicorp for a few years).