Intelligence is not Artificial

Why the Singularity is not Coming any Time Soon And Other Meditations on the Post-Human Condition and the Future of Intelligence

by piero scaruffi
(Copyright © 2018 Piero Scaruffi | Terms of use )

(These are excerpts from, and additions to, my book "Intelligence is not Artificial")

Another Failure: Machine Learning

Machine learning lay mostly dormant until the 1970s. Earl Hunt, then a psychologist at UCLA, had developed the Concept Learning System, first described in his book "Concept Learning" (1962), for inductive learning, i.e. the learning of concepts from examples. In 1975 Ross Quinlan, at the University of Sydney in Australia, extended it into Iterative Dichotomiser 3 (ID3). Patrick Winston's thesis at MIT under Minsky, "Learning Structural Descriptions From Examples" (1970), introduced "difference networks".
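ID3's core idea, recursively splitting the training examples on the attribute that yields the highest information gain, can be conveyed by a minimal sketch (an illustration of the technique, not Quinlan's actual code; the dictionary-based data representation is an assumption):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def best_attribute(examples, labels, attributes):
    """Pick the attribute whose split leaves the least residual entropy
    (equivalently, the one with the highest information gain)."""
    def remainder(attr):
        total = 0.0
        for value in set(ex[attr] for ex in examples):
            subset = [lab for ex, lab in zip(examples, labels) if ex[attr] == value]
            total += len(subset) / len(labels) * entropy(subset)
        return total
    return min(attributes, key=remainder)

def id3(examples, labels, attributes):
    """Recursively build a decision tree (nested dicts; leaves are class labels)."""
    if len(set(labels)) == 1:
        return labels[0]                              # pure node: stop
    if not attributes:
        return Counter(labels).most_common(1)[0][0]   # no attributes left: majority vote
    attr = best_attribute(examples, labels, attributes)
    tree = {}
    for value in set(ex[attr] for ex in examples):
        sub = [(ex, lab) for ex, lab in zip(examples, labels) if ex[attr] == value]
        sub_ex, sub_lab = zip(*sub)
        tree[value] = id3(list(sub_ex), list(sub_lab),
                          [a for a in attributes if a != attr])
    return {attr: tree}
```

On a toy dataset such as `id3([{"outlook": "sunny"}, {"outlook": "rain"}], ["no", "yes"], ["outlook"])` the sketch returns a one-level tree keyed on the splitting attribute.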

Polish-born Ryszard Michalski at University of Illinois built the first practical system that learned rules from examples, AQ11 (1978).

Meanwhile, at Carnegie Mellon University, John Anderson had been developing his own cognitive architecture, called ACT*, since 1973.

Another school was started at Stanford by Bruce Buchanan, a veteran of the Dendral project, with his paper "Model-directed Learning of Production Rules" (1978). His student Tom Mitchell graduated with a thesis on "version spaces" (1978), a model-based method for concept learning (as opposed to Michalski's data-based method).
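The flavor of Mitchell's version-space approach can be conveyed by Find-S, the sub-procedure that computes the specific boundary S of the version space for conjunctive hypotheses (a toy sketch under an assumed tuple-based representation, not Mitchell's implementation, which also maintains a general boundary G using the negative examples):

```python
def find_s(examples):
    """Compute the most specific conjunctive hypothesis (the S boundary of the
    version space) consistent with the positive examples.
    Each example is (attribute_tuple, is_positive); '?' means 'any value'."""
    hypothesis = None
    for attrs, positive in examples:
        if not positive:
            continue  # Find-S ignores negatives; full candidate elimination uses them for G
        if hypothesis is None:
            hypothesis = list(attrs)  # first positive example: maximally specific
        else:
            # Generalize minimally: keep matching values, wildcard the rest
            hypothesis = [h if h == a else "?" for h, a in zip(hypothesis, attrs)]
    return tuple(hypothesis) if hypothesis else None
```

Given positives ("sunny", "warm") and ("sunny", "cold"), the sketch generalizes to ("sunny", "?"): the concept requires a sunny sky but allows any temperature.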

After the pioneering work of Herbert Simon and Walter Reitman at Carnegie Institute of Technology, reasoning by analogy was studied at MIT by Patrick Winston ("Learning and Reasoning by Analogy", 1980) and at Carnegie Mellon University by Jaime Carbonell, who had graduated from Roger Schank's case-based reasoning systems ("Learning and Problem Solving by Analogy", 1980) and who in 1981 organized the first conference on machine learning (held at Carnegie Mellon University); and also by Ken Forbus at Northwestern University, who developed the Structure Mapping Engine (SME), based on the theories of psychologist Dedre Gentner ("Structure-mapping", 1983). Decades later, Douglas Hofstadter and the French psychologist Emmanuel Sander would argue that analogy is the foundation of all thinking in their book "Surfaces and Essences" (2013), but by then the A.I. community would be absorbed in deep learning.

In 1981 Allen Newell and Paul Rosenbloom at Carnegie Mellon University formulated the "chunking theory of learning" to model the so-called "power law of practice", and in 1983 John Laird and Paul Rosenbloom started building a system called Soar that implemented chunking.
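The power law of practice that chunking was meant to explain states that the time needed to perform a task falls off as a power function of the number of practice trials, commonly written as:

```latex
T(N) = A + B \, N^{-\alpha}
```

where $T(N)$ is the time taken on the $N$-th trial, $A$ is the asymptotic (irreducible) time, $B$ scales the performance on the first trial, and $\alpha > 0$ is the learning rate. Newell and Rosenbloom's claim was that the accumulation of ever-larger chunks reproduces this curve.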

Then came "explanation-based learning" systems such as Lex2 (1986), developed by Tom Mitchell at CMU, and Kidnap (1986), developed at the University of Illinois by Gerald DeJong, whose thesis at Yale University had been the Frump system for natural language processing based on Schank's scripts; and "learning apprentice" systems such as Leap (1985) by Tom Mitchell at CMU and Disciple (1986) by Yves Kodratoff in France.

An influential theory of learning was the "probably approximately correct" (PAC) model, introduced in 1984. Leslie Valiant at Harvard University ("A Theory of the Learnable", 1984) viewed inductive learning as the process of deducing a program for performing a task: the learner must select the best generalization function (the "hypothesis") for the data at hand out of a class of possible functions (the "hypothesis space"), a task traditionally done by hand but here analyzed with the tools of computational complexity theory. Some PAC ideas had been anticipated by Vladimir Vapnik in Russia ("Estimation of Dependences Based on Empirical Data", originally published in Russian in 1979).
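Valiant's definition can be made concrete with the textbook example of learning an interval on the real line: the learner outputs the tightest interval around the positive examples, and a PAC bound says how many random samples suffice for the error to be at most epsilon with probability at least 1 - delta (a hedged sketch; the bound shown is the standard one for this simple hypothesis class, not a formula from Valiant's paper):

```python
import math

def tightest_interval(samples):
    """Hypothesis: the tightest interval containing all positive examples.
    Each sample is (x, label), with label True if x belongs to the target interval."""
    pos = [x for x, label in samples if label]
    if not pos:
        return (0.0, 0.0)  # degenerate hypothesis when no positives were seen
    return (min(pos), max(pos))

def pac_sample_size(epsilon, delta):
    """Number of samples sufficient for the tightest-interval learner to be
    'probably approximately correct': error <= epsilon with prob >= 1 - delta."""
    return math.ceil((2 / epsilon) * math.log(2 / delta))
```

For instance, `pac_sample_size(0.1, 0.05)` says 74 random examples suffice to guarantee at most 10% error with 95% confidence, regardless of which target interval generated the data. This distribution-free guarantee is what made the PAC model influential.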

None of these attempts proved successful at building programs that learn. There seemed to be something about learning that still eluded both computer scientists and cognitive psychologists. "Education is paradoxical in that it is largely composed of things that cannot be learned" (Roberto Calasso), or, if you prefer, "Education is what survives when what has been learned has been forgotten" (Burrhus Skinner).
