Intelligence is not Artificial

by piero scaruffi

(Copyright © 2018 Piero Scaruffi | Terms of use )

(These are excerpts from my book "Intelligence is not Artificial")

The Origins of Digital Life

In 1936 Alan Turing had shown how a universal computing machine could work. John Von Neumann wanted to go further and build a self-replicating machine, a machine capable of creating a copy of itself.

In 1944 Erwin Schroedinger, one of the founders of Quantum Mechanics, published a book titled "What is Life?" in which he mused that chromosomes must contain the instructions to build the future organism as well as the machinery to execute them. Von Neumann differed from Schroedinger in that he separated the code and the machinery. Von Neumann came up with the design for what came to be known as a "cellular automaton" after discussing his idea with Stanislaw Ulam at Los Alamos National Laboratory, and in 1948 delivered a lecture titled "The General and Logical Theory of Automata" at a symposium in Pasadena (published in 1951).
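
To give a concrete feel for what a cellular automaton is, here is a minimal sketch (mine, not from the book): a one-dimensional grid of cells, each 0 or 1, where every cell's next state depends only on itself and its two neighbors according to a fixed rule table. This toy uses Wolfram's "rule 110" for the table; Von Neumann's actual design was far richer, a two-dimensional automaton with 29 states per cell.

```python
# Minimal 1-D cellular automaton (illustrative only; Von Neumann's
# automaton used 29 states on a 2-D grid).
# RULE encodes the 8-entry lookup table in binary: bit v of RULE is the
# next state of a cell whose (left, self, right) neighborhood reads as
# the binary number v.
RULE = 110

def step(cells):
    """Apply the rule once to every cell, with wrap-around boundaries."""
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Start from a single live cell and print a few generations.
cells = [0] * 15
cells[7] = 1
for _ in range(5):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```

The point of the exercise is that nothing "global" is programmed: each cell follows the same local rule, yet structured patterns emerge, which is exactly the property Von Neumann exploited.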

Von Neumann's "universal constructor" consists of three parts: a description of itself; a decoder that constructs a machine based on that description; and a copier that inserts a copy of the description inside the new machine. From Kurt Goedel's incompleteness theorem of 1931 he took the concept of storing a description of the organism within the organism, i.e. of storing the instructions for constructing an organism inside the organism itself. Von Neumann proved that a cellular automaton doesn't need to be a Turing machine, and also that a cellular automaton can implement a Turing machine. Note that Von Neumann came up with the vision of this self-replicating machine several years before Watson and Crick discovered the self-replicating process of DNA (1953).
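
The idea of a description stored inside the thing it describes has a familiar software analog: a "quine", a program that prints its own source. This two-line sketch (my illustration, not Von Neumann's construction) plays both roles at once: the string s is the description, and the print statement is the copier that reassembles the whole from the description plus itself.

```python
# The two lines below form a "quine": run together, they print exactly
# these same two lines (this comment excluded).
s = 's = %r\nprint(s %% s)'
print(s % s)
```

The trick, like Von Neumann's, is that the description is used twice: once as data (quoted, via %r) and once as the active part of the machine.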

Von Neumann's cellular automaton was never built in his lifetime. The first self-reproducing cellular automaton would be implemented much later, in 1994, by Renato Nobili of the University of Padova in Italy and Umberto Pesavento of Princeton University.

Refining an idea pioneered by the German engineer Ingo Rechenberg at the Technical University of Berlin in his dissertation "Evolution Strategies" (1971), John Holland at the University of Michigan introduced a different way to construct programs by using "genetic algorithms" (1975), the software equivalent of the rules used by biological evolution: instead of writing a program to solve a problem, let a population of programs evolve (according to some algorithm) to become more and more "fit", i.e. better and better at finding solutions to that problem. His thesis advisor was Arthur Burks, who in 1946 had worked with John Von Neumann at the Institute for Advanced Study in Princeton on the theory of automata. In 1976 Richard Laing at the same university introduced the paradigm of self-replication by self-inspection ("Automaton Models of Reproduction by Self-inspection") that 27 years later would be employed by Jackrit Suthakorn and Gregory Chirikjian at Johns Hopkins University to build a rudimentary self-replicating robot ("An Autonomous Self-Replicating Robotic System", 2003). It took a decade for these ideas to be appreciated. The first international conference on genetic algorithms was held in 1985 at CMU.
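
The evolutionary loop Holland described can be sketched in a few lines. This is a generic toy version of a genetic algorithm (not Holland's original formulation, which centered on schemata and fitness-proportionate selection): a population of bit strings evolves toward the all-ones string through selection, crossover, and mutation.

```python
# Toy genetic algorithm: evolve bit strings toward all ones ("OneMax").
import random

random.seed(0)  # fixed seed so the run is reproducible
GENES, POP, GENERATIONS = 20, 30, 60

def fitness(ind):
    return sum(ind)  # fitness = number of 1 bits

def crossover(a, b):
    """Single-point crossover: splice a prefix of one parent to a suffix of the other."""
    cut = random.randrange(1, GENES)
    return a[:cut] + b[cut:]

def mutate(ind, rate=0.02):
    """Flip each bit with a small probability."""
    return [g ^ 1 if random.random() < rate else g for g in ind]

# Random initial population.
pop = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
for _ in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[: POP // 2]  # selection: keep the fitter half
    children = [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(POP - len(parents))
    ]
    pop = parents + children

best = max(pop, key=fitness)
print(fitness(best))
```

No individual program is ever "written" to solve the problem; the solution emerges from variation and selection pressure, which is the shift in perspective Holland introduced.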

"Each writer creates his precursors" (Jorge Luis Borges).
