(These are excerpts from my book "Intelligence is not Artificial")
Footnote: Phase Transitions and A.I.
An exponential increase (especially if it is only about computational speed)
is not enough to demonstrate that a qualitative change will ever take place.
If you build exponentially faster cars, you will eventually get a car that
can travel close to the speed of light, but it would still be a car, not a
qualitatively different kind of machine.
Should machines ever reach a superhuman level of intelligence, one would imagine that this would entail a sequence of phase transitions, e.g. from mere arithmetic calculation to pattern recognition to higher and higher forms of mental faculties.
The most famous model of cognitive development in children is due to the Swiss psychologist Jean Piaget, who explained it in "The Language and Thought of the Child" (1923). He posited that the child's mind undergoes what a physicist would call "phase transitions", after which the mind thinks differently. Cognitive faculties are not fixed at birth but evolve during the lifetime of the individual, and that evolution is not smooth but proceeds by quantum jumps in cognitive skills. Specifically, the development of children's intellect proceeds from simple mental arrangements to progressively more complex ones not by gradual evolution but by sudden rearrangements of mental operations that produce qualitatively new forms of thought. First a child lives a "literal" sensorimotor life, then the child begins to deal with internal symbols, then the child learns to perform internal manipulations on symbols that represent real objects, and, finally, the child's mental life extends to abstract objects. Piaget's four stages start with one in which the dominant factor is perception, which is irreversible, and end with one in which the dominant factor is thought, which is reversible.
At the end of the century, the Canadian neuropsychologist Merlin Donald reached the conclusion in his book "Origins of the Modern Mind" (1991) that a similar path can be seen in the growth of the human mind through history: the human mind developed in four stages (which roughly correspond to Piaget's stages of cognitive growth in children) from a non-symbolic form of intelligence to the modern mind of symbolic thought, through the gradual absorption of new representational systems.
Neural networks belong to the class of complex systems, which are characterized by nonlinear dynamics. Bernardo Huberman and Tad Hogg at Xerox PARC studied the phase transitions that take place in large-scale cognitive systems ("Phase Transitions in Artificial Intelligence Systems", 1987), but there was generally little interest in studying phase transitions in neural networks. Elizabeth Gardner at the University of Edinburgh, not coincidentally an expert in spin-glass theory, applied statistical mechanics to neural networks, but she died a few weeks before her two papers were published (notably "The Space of Interactions in Neural Network Models", 1988). Michael Biehl at the Institute for Theoretical Physics of Wuerzburg in Germany ("Statistical Mechanics of Unsupervised Structure Recognition", 1994) has studied phase transitions in networks in general (whether the World-Wide Web, ecological nets, social nets, cellular nets, linguistic nets or neural nets).
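To make the notion of a phase transition concrete, here is a minimal numerical sketch (not from the book) of the textbook mean-field Ising magnet, the simplest system studied in the spin-glass tradition that Gardner worked in: solving the self-consistency equation m = tanh(Jm/T) by fixed-point iteration shows an abrupt qualitative change at the critical temperature T_c = J, where an ordered (m > 0) phase gives way to a disordered (m = 0) one. The function name and parameter values are illustrative choices, not anything from the sources cited above.

```python
import math

def magnetization(T, J=1.0, iters=200):
    """Solve the mean-field Ising self-consistency equation
    m = tanh(J*m/T) by fixed-point iteration, starting from the
    fully ordered state m = 1."""
    m = 1.0
    for _ in range(iters):
        m = math.tanh(J * m / T)
    return m

# Below T_c = J a nonzero magnetization survives; above T_c the
# iteration collapses to m = 0 -- a sharp qualitative change driven
# by a smooth change in a parameter, i.e. a phase transition.
for T in (0.5, 0.9, 1.1, 2.0):
    print(f"T = {T}: m = {magnetization(T):.4f}")
```

Below T_c the printed magnetization is of order one; above T_c it is numerically zero. The analogy drawn in the literature is that a network's collective behavior can likewise change abruptly as a control parameter (noise, connectivity, load) crosses a threshold.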
Both physics and neural networks study systems with many degrees of freedom: physics studies many-body interactions, while neural networks process data in high dimensions. Physics uses a trick called "renormalization" to handle complex systems with many degrees of freedom; neural networks use the approximation tricks of deep learning. The connection between the two fields has been mainly explored by physicists such as Pankaj Mehta of Boston University and David Schwab of Northwestern University ("An Exact Mapping between the Variational Renormalization Group and Deep Learning", 2014).
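The flavor of the renormalization trick can be sketched in a few lines. The following toy example (my illustration, not the variational scheme of the Mehta-Schwab paper) applies Kadanoff-style "block-spin" coarse-graining to a one-dimensional chain of +1/-1 spins: each block of three spins is replaced by its majority sign, discarding fine detail while keeping the coarse, relevant structure, much as successive pooling layers of a deep network do.

```python
import random

def majority_block(spins, b=3):
    """Coarse-grain a 1-D chain of +/-1 spins: replace each block of
    b spins by the sign of its sum (majority rule). b is odd, so no
    block can tie."""
    n = len(spins) - len(spins) % b  # drop any incomplete final block
    return [1 if sum(spins[i:i + b]) > 0 else -1 for i in range(0, n, b)]

# An ordered chain stays ordered under repeated coarse-graining, while
# a random chain stays disordered: the renormalization "flow" separates
# the two phases by keeping only large-scale information.
ordered = [1] * 81
random.seed(0)
noisy = [random.choice([1, -1]) for _ in range(81)]
for _ in range(3):  # 81 -> 27 -> 9 -> 3 spins
    ordered = majority_block(ordered)
    noisy = majority_block(noisy)
print(ordered)  # the ordered phase survives: [1, 1, 1]
```

Each pass throws away degrees of freedom but preserves the quantity that matters at large scales, which is the loose sense in which renormalization and deep learning's layer-by-layer feature extraction have been compared.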
"You must be the change you wish to see in the world" (Mahatma Gandhi).