(These are excerpts from my book "Intelligence is not Artificial")
Footnotes in the History of Artificial Intelligence
There were many side tracks that didn't become as popular as expert systems and neural networks.
At the famous Dartmouth conference on A.I. of 1956 there was a third proposal for A.I. research: the Boston-based mathematician Ray Solomonoff presented "An Inductive Inference Machine", a method for machine learning. Induction is the kind of learning that allows us to apply what we learned in one case to other cases. His method used Bayesian reasoning, i.e. it introduced probabilities into machine learning. Alas, Solomonoff's inductive inference is not computable, although algorithms exist that approximate it so that it can run on a computer.
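The Bayesian reasoning at the heart of Solomonoff's proposal can be illustrated with ordinary probability updating (a toy example with invented numbers, not Solomonoff's actual scheme, which is incomputable):

```python
# A minimal illustration of Bayesian updating (hypothetical example,
# not Solomonoff's induction itself): two candidate hypotheses about
# a coin, updated after observing a single "heads".

def bayes_update(priors, likelihoods):
    """Return posterior probabilities via Bayes' rule."""
    unnormalized = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(unnormalized)
    return [u / total for u in unnormalized]

# Hypotheses: "fair coin" vs. "biased coin (75% heads)", equal priors.
priors = [0.5, 0.5]
likelihood_of_heads = [0.5, 0.75]

posteriors = bayes_update(priors, likelihood_of_heads)
print(posteriors)  # the biased-coin hypothesis gains probability
```

Each new observation shifts probability mass toward the hypotheses that predicted it best, which is the sense in which such a learner "applies what it learned in one case to other cases".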
The robot Shakey (1969), built at the Stanford Research Institute (SRI) by Charles Rosen's team, was the vanguard of autonomous vehicles.
In 1971 the Shakey project at SRI made the leap to a more powerful machine (a PDP-10) with a hard disk of almost 1 megabyte (which in those days cost about a million dollars) and made some valuable contributions to the field: the STRIPS planner, developed by Richard Fikes and Nils Nilsson, and the A* heuristic search algorithm (which would remain the most used algorithm in its class for half a century).
Cordell Green experimented at Stanford with automatic programming, software that can write software the same way a software engineer does (“Application of Theorem Proving to Problem Solving”, 1969).
In 1961 Melvin Maron, a philosopher working at the RAND Corporation, suggested a statistical approach to analyzing language (technically speaking, a "naive Bayes classifier"). IBM's Shoebox (1964) debuted speech recognition.
Conversational agents such as Daniel Bobrow's Student (1964), Joe Weizenbaum's Eliza (1966) and Terry Winograd's Shrdlu (1972), all from MIT,
as well as LUNAR (1973), built by William Woods at nearby Bolt Beranek and Newman to answer questions about moon rocks,
were the first practical implementations of natural language processing.
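The naive Bayes classifier of the kind Maron proposed scores a text by multiplying a class's prior probability by the probability of each word under that class. A minimal sketch (the toy corpus and labels below are invented for illustration):

```python
import math
from collections import Counter

# Toy training data (invented for illustration): label -> documents.
training = {
    "sports": ["the team won the game", "a great game"],
    "politics": ["the vote passed", "the team of ministers met"],
}

# Per-class word counts and a shared vocabulary.
word_counts = {label: Counter(w for doc in docs for w in doc.split())
               for label, docs in training.items()}
vocab = {w for counts in word_counts.values() for w in counts}

def log_score(words, label):
    """log P(label) + sum of log P(word|label), with add-one smoothing."""
    counts = word_counts[label]
    total = sum(counts.values())
    num_docs = sum(len(docs) for docs in training.values())
    score = math.log(len(training[label]) / num_docs)  # class prior
    for w in words:
        score += math.log((counts[w] + 1) / (total + len(vocab)))
    return score

def classify(text):
    words = text.split()
    return max(training, key=lambda label: log_score(words, label))

print(classify("the game"))  # words seen mostly in the "sports" docs
```

The "naive" assumption is that words are independent given the class, which is linguistically false but works surprisingly well in practice.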
In 1968 Peter Toma founded Systran to commercialize machine-translation systems. The discipline of Machine Translation actually predates Artificial Intelligence. Yehoshua Bar-Hillel organized the first International Conference on Machine Translation in 1952 at MIT. In 1954 Leon Dostert's team at Georgetown University and Cuthbert Hurd's team at IBM demonstrated a machine-translation system, one of the first non-numerical applications of the digital computer. (For the record, in 1958 the same Bar-Hillel who had jumpstarted the field published a "proof" that machine translation is impossible without common-sense knowledge).
Refining an idea pioneered by the German engineer Ingo Rechenberg at the Technical University of Berlin in his thesis "Evolution Strategies" (1971), John Holland at the University of Michigan introduced a different way to construct programs by using "genetic algorithms" (1975), the software equivalent of the rules used by biological evolution: instead of writing a program to solve a problem, let a population of programs evolve (according to some algorithms) to become more and more "fit" (better and better at finding solutions to that problem). In 1976 Richard Laing at the same university introduced the paradigm of self-replication by self-inspection ("Automaton Models of Reproduction by Self-inspection") that 27 years later would be employed by Jackrit Suthakorn and Gregory Chirikjian at Johns Hopkins University to build a rudimentary self-replicating robot ("An Autonomous Self-Replicating Robotic System", 2003).
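Holland's evolutionary loop can be sketched in a few lines (an illustrative toy, not Holland's own formulation): a population of bit strings evolves toward the all-ones string through the classic operators of selection, crossover, and mutation.

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 20, 30, 60, 0.02

def fitness(genome):
    return sum(genome)  # count of 1-bits; the maximum is GENOME_LEN

def crossover(a, b):
    """Single-point crossover: splice a prefix of one parent onto the other."""
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

def mutate(genome):
    """Flip each bit independently with small probability."""
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in genome]

# Random initial population of bit strings.
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    def pick():
        # Tournament selection: fitter individuals breed more often.
        return max(random.sample(population, 3), key=fitness)
    population = [mutate(crossover(pick(), pick())) for _ in range(POP_SIZE)]

best = max(population, key=fitness)
print(fitness(best))  # close to GENOME_LEN after evolution
```

No individual program is ever "written" to solve the problem; the population as a whole drifts toward fitter solutions, which is the core of Holland's insight.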
In 1990 Carver Mead at Caltech described a "neuromorphic" processor, a chip whose analog circuits mimic the signal processing of biological neurons.