Intelligence is not Artificial

Why the Singularity is not Coming any Time Soon And Other Meditations on the Post-Human Condition and the Future of Intelligence

by piero scaruffi


(These are excerpts from my book "Intelligence is not Artificial")

Computation as Deduction

The idea of turning computation into deduction (into a form of theorem proving) goes back to Kurt Goedel's "completeness theorem" in Austria (his PhD thesis "On the Completeness of the Calculus of Logic", 1929) and to Jacques Herbrand's work in France (his PhD thesis "Research on the Theory of Demonstration", 1929).

Coming on the heels of these pioneering conferences on machine intelligence, in 1957 Alfred Tarski (who was then at UC Berkeley) organized a five-week retreat at Cornell University that brought together 85 mathematical logicians and computer scientists, the first time that a large number of computer scientists came together with logicians. Although the logicians generally did not appreciate how much computers such as the IBM 704 and the scientific programming language FORTRAN were changing the world, that conference laid the foundations for automated reasoning. For example, a young Richard Friedberg, a summer intern at IBM, explained how to design a learning machine, a forerunner of genetic programming (published as "A Learning Machine", 1958).
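Friedberg's idea was to mutate a small program at random and keep the mutations that improve its behavior on a task. The following Python sketch illustrates that general idea on a toy problem; the mini-language, the target function and all the names are invented for illustration and are not Friedberg's actual instruction set.

```python
# Toy illustration of a Friedberg-style learning machine (hypothetical mini-language):
# randomly mutate a small "program" and keep mutations that better reproduce
# a target input-output behavior.
import random

OPS = ["inc", "dec", "double", "nop"]          # instructions of the toy machine

def run(program, x):
    for op in program:
        if op == "inc":      x += 1
        elif op == "dec":    x -= 1
        elif op == "double": x *= 2
    return x

def score(program, examples):
    # Number of input-output pairs the program reproduces exactly.
    return sum(run(program, x) == y for x, y in examples)

examples = [(0, 2), (1, 4), (3, 8)]            # target behavior: y = 2*(x+1)
program = [random.choice(OPS) for _ in range(4)]

for _ in range(5000):                          # random mutation with hill-climbing
    mutant = list(program)
    mutant[random.randrange(len(mutant))] = random.choice(OPS)
    if score(mutant, examples) >= score(program, examples):   # keep neutral or better
        program = mutant

print(program, [run(program, x) for x, _ in examples])
```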

The fundamental algorithm for automated deduction (for automatically proving theorems of first-order predicate logic) is the "resolution principle", a uniform proof procedure developed in 1964 by Alan Robinson while at the Argonne National Laboratory ("A Machine-Oriented Logic Based on the Resolution Principle", 1965). Cordell Green's QA3 at SRI (1969) was one of the first resolution-based systems for reasoning. A few years later Robert Kowalski at the University of Edinburgh showed how to use this algorithm to perform computation ("Predicate Logic as Programming Language", 1974), thereby completing the process started by Goedel and Herbrand. Kowalski also helped Alain Colmerauer's team in France to develop the programming language Prolog (1972). Meanwhile, two schools of logic programming were developing, represented on the West Coast by Cordell Green at SRI Intl ("Application of Theorem Proving to Problem Solving", 1969) and on the East Coast by Carl Hewitt at MIT with his Planner system (also 1969). A subset of Planner was later used by Terry Winograd to implement his SHRDLU. In 1973 Hewitt invented the actor model, a formalism for concurrent computation that later became a foundation for the design of distributed systems ("A Universal Modular Actor Formalism for Artificial Intelligence", 1973).
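To see how deduction can be mechanized, here is a minimal propositional version of the resolution idea in Python: a fact follows from a knowledge base if adding its negation lets resolution derive the empty clause. This is only a sketch; Robinson's full method operates on first-order clauses and requires unification of terms, which is omitted here.

```python
# Minimal propositional resolution (a sketch of the idea behind Robinson's method).
# Literals are strings; "-p" is the negation of "p"; a clause is a frozenset of literals.

def negate(lit):
    return lit[1:] if lit.startswith("-") else "-" + lit

def resolve(c1, c2):
    """Return all resolvents of two clauses."""
    out = []
    for lit in c1:
        if negate(lit) in c2:
            out.append(frozenset((c1 - {lit}) | (c2 - {negate(lit)})))
    return out

def refutes(clauses):
    """Saturate the clause set; True if the empty clause (a contradiction) is derived."""
    clauses = set(clauses)
    while True:
        new = set()
        for a, b in [(a, b) for a in clauses for b in clauses if a != b]:
            for r in resolve(a, b):
                if not r:
                    return True       # empty clause: the negated goal is refuted
                new.add(r)
        if new <= clauses:
            return False              # nothing new can be derived: no proof
        clauses |= new

# Knowledge base: p, and p -> q (written as the clause -p or q).
# Goal: q, added in negated form for a proof by refutation.
kb = [frozenset({"p"}), frozenset({"-p", "q"}), frozenset({"-q"})]
print(refutes(kb))   # True: q follows from the knowledge base
```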

At Carnegie Mellon University, Richard Waldinger developed PROW (PROgram Writing), a theorem prover able to generate LISP programs from logical specifications of their input-output behavior, one of the first applications of Robinson's resolution method ("A Step Toward Automatic Program Writing", 1969).

Inductive logic programming was pioneered by Gordon Plotkin at the University of Edinburgh (his PhD thesis "Automatic Methods of Inductive Inference", 1972), building on the work of Mark Gold at the RAND Corporation ("Language Identification in the Limit", 1967, a famous paper on language learnability). Inductive logic programming attempts to infer logic programs from positive and negative examples. The name "inductive logic programming" was coined in 1991 by Donald Michie's former student Stephen Muggleton while at the Turing Institute in Glasgow.
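The goal of inductive logic programming can be illustrated with the classic toy task of learning a "daughter" rule from family facts: find a clause body, built from background predicates, that covers every positive example and none of the negative ones. The Python sketch below enumerates candidate bodies by brute force; it illustrates the problem setting, not Muggleton's actual algorithms, and the facts are invented.

```python
# A much simplified illustration of the inductive logic programming setting:
# search for a conjunction of background literals that covers the positive
# examples of daughter(X, Y) and none of the negative ones.
from itertools import combinations

# Background knowledge as ground facts.
parent = {("tom", "ann"), ("tom", "eve"), ("ann", "ian")}
female = {"ann", "eve"}

positives = [("ann", "tom"), ("eve", "tom")]                # daughter(X, Y) holds
negatives = [("ian", "ann"), ("tom", "ann"), ("eve", "ann")]  # it does not hold

# Candidate body literals for a clause daughter(X, Y) :- ...
candidates = {
    "female(X)":    lambda x, y: x in female,
    "female(Y)":    lambda x, y: y in female,
    "parent(X, Y)": lambda x, y: (x, y) in parent,
    "parent(Y, X)": lambda x, y: (y, x) in parent,
}

def covers(body, example):
    return all(candidates[lit](*example) for lit in body)

def learn():
    # Enumerate candidate bodies from shortest to longest; return the first one
    # consistent with all the examples.
    for size in range(1, len(candidates) + 1):
        for body in combinations(candidates, size):
            if all(covers(body, p) for p in positives) and \
               not any(covers(body, n) for n in negatives):
                return body
    return None

print("daughter(X, Y) :-", ", ".join(learn()))   # female(X), parent(Y, X)
```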

Cordell Green was also experimenting with automatic programming, software that can write software the same way a software engineer does: his QA3 system could write simple programs. This had a practical application. As computers were getting more complex, they were also becoming less reliable, and testing them for quality assurance was becoming more important. One way to test them was to generate thousands of test programs, simulate the expected results, and then compare the simulation with the actual behavior. The roots of combinatorial testing lay in statistics, in venerable textbooks such as Ronald Fisher's "The Design of Experiments" (1935) and William Cochran's and Gertrude Cox's "Experimental Designs" (1957), but the automatic generation of combinatorial test programs for quality assurance was an attempt to have computers write programs for other computers.
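The "design of experiments" tradition is about covering combinations of factors with as few trials as possible. The Python sketch below illustrates that idea on a handful of invented machine-configuration parameters, greedily selecting test cases until every pair of parameter values has been exercised at least once (so-called pairwise coverage); it is a toy, not any historical test-generation system.

```python
# Toy combinatorial test generation (hypothetical configuration parameters):
# greedily pick configurations until every pair of parameter values appears
# in at least one test case ("pairwise" coverage).
from itertools import combinations, product

factors = {
    "cpu":    ["model_a", "model_b"],
    "memory": ["16k", "32k", "64k"],
    "io":     ["tape", "disk", "card"],
}
names = list(factors)

def pairs(case):
    """All (parameter, value) pairs appearing together in one test case."""
    return set(combinations(list(zip(names, case)), 2))

uncovered = set()
for case in product(*factors.values()):
    uncovered |= pairs(case)

tests = []
while uncovered:
    # Choose the configuration that covers the most still-uncovered pairs.
    best = max(product(*factors.values()), key=lambda c: len(pairs(c) & uncovered))
    tests.append(best)
    uncovered -= pairs(best)

for t in tests:
    print(dict(zip(names, t)))
print(len(tests), "test cases instead of", len(list(product(*factors.values()))))
```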

For the record, neural networks are actually a form of program synthesis: a neural network is an algorithm capable of changing itself in response to data, in effect a computer that programs itself. In fact, a common definition of program synthesis is: given a dataset of input-output pairs, produce a program whose behavior generates those pairs. That is precisely what a neural network does to itself.
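That definition can be made concrete as a search problem. The sketch below enumerates tiny arithmetic programs until one reproduces a set of input-output pairs; a neural network does the analogous thing by fixing the form of the program and adjusting its continuous weights by gradient descent. The candidate space and the examples are invented for illustration.

```python
# A minimal sketch of program synthesis as search over candidate programs:
# given input-output pairs, enumerate small expressions until one fits.
from itertools import product

examples = [(1, 3), (2, 5), (3, 7)]          # target behavior: y = 2*x + 1

# Candidate programs: expressions a*x + b with small integer constants.
for a, b in product(range(-5, 6), repeat=2):
    if all(a * x + b == y for x, y in examples):
        print(f"synthesized program: lambda x: {a}*x + {b}")
        break
```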

"Begin at the beginning," the King said, very gravely, "and go on till you come to the end: then stop." (Lewis Carroll, "Alice in Wonderland").
