Intelligence is not Artificial

by piero scaruffi



(These are excerpts from my book "Intelligence is not Artificial")

Artificial Shallow Intelligence

Progress in artificial neural networks, from the first Perceptron to Fukushima's convolutional network to Hinton's deep learning, has been based on models of how animals learn by trial and error in the simplest cases of conditioning. It is not based on studies of general intelligence. No surprise, therefore, that deep learning is not giving us anything that even remotely resembles general intelligence.
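
To make the conditioning analogy concrete, here is a minimal sketch of the classic perceptron learning rule: weights are nudged only when the output is wrong, a simple error-driven scheme in the spirit of trial-and-error learning. The toy task (the logical AND function) and the NumPy encoding are illustrative choices, not anything from the original literature.

    # A minimal sketch of the perceptron learning rule on a toy task.
    import numpy as np

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    y = np.array([0, 0, 0, 1])  # logical AND

    w = np.zeros(2)
    b = 0.0
    for epoch in range(10):
        for x_i, y_i in zip(X, y):
            pred = int(w @ x_i + b > 0)
            error = y_i - pred      # adjust only on mistakes, as in conditioning
            w += error * x_i
            b += error

    print([int(w @ x_i + b > 0) for x_i in X])  # -> [0, 0, 0, 1]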

Training a neural network is a highly unnatural process. If you want a deep-learning system to recognize a cake, you have to train it with thousands if not millions of pictures of cakes. A child learns to recognize a cake after seeing (and tasting) a few of them.
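
As an illustration of how data-hungry this process is, here is a minimal training sketch in PyTorch. The "dataset" is synthetic stand-in data (random tensors with made-up cake/not-cake labels), since the point is only the shape of the procedure: thousands of labeled examples and thousands of gradient updates before the network captures any statistical regularity.

    # A minimal sketch of supervised training on thousands of examples.
    import torch
    import torch.nn as nn

    # Hypothetical stand-in: 10,000 random 32x32 RGB "images" with binary labels
    # (cake / not-cake). A real classifier would need real labeled photographs.
    images = torch.randn(10_000, 3, 32, 32)
    labels = torch.randint(0, 2, (10_000,))

    model = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(16 * 16 * 16, 2),
    )
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    # Thousands of gradient updates: the network slowly accumulates statistical
    # regularities, unlike a child's handful of exposures to actual cakes.
    for epoch in range(5):
        for i in range(0, len(images), 64):
            batch_x, batch_y = images[i:i+64], labels[i:i+64]
            optimizer.zero_grad()
            loss = loss_fn(model(batch_x), batch_y)
            loss.backward()
            optimizer.step()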

Artificial Intelligence often ignores (and sometimes reinvents decades later) the body of research produced by psychologists. In the 1950s Jerome Bruner held that a category is defined by the set of features that are individually necessary and jointly sufficient for an object to belong to it ("A Study of Thinking", 1956), whereas Roger Brown found that we "naturally" name objects based on the "distinctive action" that we perform on them ("Words and Things", 1958): the actions that we perform on flowers are pretty much all the same, and certainly different from the actions that we perform on a cat. It is our basic actions that tell us that a cat is a cat and a flower is a flower. In the 1960s Brent Berlin found that people everywhere in the world categorize plants at the same "basic level", a level at which only shape, substance and pattern of change are involved ("Covert Categories and Folk Taxonomies", 1968). In the 1970s Eleanor Rosch argued that a concept is represented by a prototype: membership of an object in a category is determined by the perceived distance of resemblance of that object from the prototype of the category ("Cognition and Categorization", 1978). George Lakoff, author of "Women, Fire and Dangerous Things" (1987), argued that categories also depend on the bodily experience of the "categorizer". Frank Keil argued that no concept can be understood in isolation from all other concepts: concepts embody "systematic sets of causal beliefs" about the world ("Concepts, Kinds and Cognitive Development", 1989). My book "Thinking about Thought" contains a lengthy survey of these theories.
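
Rosch's prototype idea is easy to state computationally. Below is a minimal sketch of prototype-based categorization, under the simplifying assumption (not part of the theory itself) that objects are encoded as numeric feature vectors and that "resemblance" is Euclidean distance to the mean of observed category members; the two-feature cat/flower encoding is purely hypothetical.

    # A minimal sketch of prototype-based categorization (after Rosch).
    import numpy as np

    def prototype(examples: np.ndarray) -> np.ndarray:
        """The prototype as the mean of observed category members."""
        return examples.mean(axis=0)

    def categorize(obj: np.ndarray, prototypes: dict) -> str:
        """Assign obj to the category whose prototype it most resembles."""
        return min(prototypes, key=lambda c: np.linalg.norm(obj - prototypes[c]))

    # Hypothetical two-feature encoding (e.g., size, elongation) of exemplars.
    cats = np.array([[0.3, 0.8], [0.35, 0.75], [0.28, 0.85]])
    flowers = np.array([[0.1, 0.2], [0.12, 0.15], [0.08, 0.25]])
    prototypes = {"cat": prototype(cats), "flower": prototype(flowers)}

    print(categorize(np.array([0.32, 0.8]), prototypes))  # -> "cat"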

A program that, after training on thousands of examples, learns to play weichi/go on a 19x19 board (and beats the world's champion), but cannot play the game on a different board size unless it is trained again on thousands of examples, can hardly be called "intelligent". A system that is powerless the moment the problem changes slightly can hardly be called intelligent simply because it can solve the problem in one particular configuration.
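
One concrete reason for this brittleness is that a trained network's input dimension is fixed. Here is a minimal PyTorch sketch, a toy stand-in rather than AlphaGo's actual architecture, showing that a policy layer sized for a 19x19 board cannot even accept a 13x13 position: the learned weight matrix assumes exactly 361 inputs.

    # A minimal sketch of why a network trained for one board size cannot
    # process another: the input dimension is baked in at training time.
    import torch
    import torch.nn as nn

    # A toy policy head for a 19x19 board: exactly 361 inputs.
    policy = nn.Linear(19 * 19, 19 * 19)

    board_19 = torch.zeros(1, 19 * 19)
    print(policy(board_19).shape)  # works: torch.Size([1, 361])

    board_13 = torch.zeros(1, 13 * 13)
    try:
        policy(board_13)  # fails: the learned weights assume 361 inputs
    except RuntimeError as e:
        print("shape mismatch:", e)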
