(These are excerpts from my book "Intelligence is not Artificial")
Analog Computation / Reservoir Computing
In 1992 Hava Siegelmann of Bar-Ilan University in Israel and
Eduardo Sontag of Rutgers University developed "analog recurrent neural networks"
("Analog Computation via Neural Networks", a paper submitted in 1992 but published only in 1994).
For 60 years it had been assumed that no computing device could be more powerful than a universal Turing machine. Hava Siegelmann proved mathematically that analog RNNs can achieve super-Turing computation ("On the Computational Power of Neural Nets", 1992). Alan Turing himself had tried to imagine a way to extend the computational power of his universal machine ("Systems of Logic Based on Ordinals", 1938), but his idea cannot be implemented in practice. Siegelmann's system was not the first system to break the Turing limit by using real numbers, and nobody has yet built a computer that can perform operations on real numbers in a single step.
Recurrent networks are harder to train with gradient-descent methods than feed-forward networks. Luckily, two techniques introduced a new paradigm for training recurrent neural networks: "echo state networks", developed in 2001 by the German chaos theorist Herbert Jaeger at the University of Bremen for classifying and forecasting time series such as speech ("The Echo State Approach to Analysing and Training Recurrent Neural Networks", 2001), and "liquid state machines", developed in 2002 by the German mathematician Wolfgang Maass and the South African neuroscientist Henry Markram at the Graz University of Technology in Austria as a biologically plausible model of spiking neurons ("Real-time Computing Without Stable States", 2002). They came from different disciplines (computer science and neuroscience) but arrived at the same trick: train only the final, non-recurrent output layer (the "readout layer"), while the other layers (the "reservoir") are randomly initialized. Most of the weights in the network are therefore assigned only once, and at random. The difference between the two reservoir models is minimal, but, in a nutshell, liquid state machines are the more general model and therefore encompass echo state networks.
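The echo-state trick can be sketched in a few lines of NumPy. This is only an illustration: the sizes, the spectral-radius scaling of 0.9, and the toy sine-prediction task are all invented for the example, not taken from Jaeger's paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes, not from any paper.
n_in, n_res = 1, 200

# Reservoir and input weights: random, assigned once, never trained.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
# Scale the spectral radius below 1 so the reservoir has the "echo state" property.
W *= 0.9 / max(abs(np.linalg.eigvals(W)))

def run_reservoir(inputs):
    """Drive the reservoir with an input sequence and collect its states."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(W_in @ np.atleast_1d(u) + W @ x)
        states.append(x)
    return np.array(states)

# Toy task: predict the next value of a sine wave.
series = np.sin(np.arange(0, 30, 0.1))
inputs, targets = series[:-1], series[1:]

X = run_reservoir(inputs)
# Only the readout layer is trained: a single ridge-regression solve.
W_out = np.linalg.solve(X.T @ X + 1e-8 * np.eye(n_res), X.T @ targets)

# Mean squared error after a short washout of the initial transient.
mse = np.mean((X[50:] @ W_out - targets[50:]) ** 2)
print("readout MSE:", mse)
```

Training reduces to one linear solve, which is why reservoir computing sidesteps the difficulties of gradient descent through recurrent connections.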
The idea of random networks with a trained readout layer had already been mentioned by Frank Rosenblatt in his 1962 book "Principles of Neurodynamics", and both the "context reverberation network" developed by Kevin Kirby at Wright State University in Ohio ("Context Dynamics in Neural Sequential Learning", 1991) and the neural network developed by Peter Dominey at the French National Institute of Health and Medical Research to model "complex sensory-motor sequences" in the brain such as speech recognition ("Complex Sensory-Motor Sequence Learning Based on Recurrent State Representation and Reinforcement Learning", 1995) were de facto implementations of that idea; but the idea became accepted only three decades after Rosenblatt, once Dean Buonomano, in Michael Merzenich's laboratory at UC San Francisco, explained how the brain encodes time ("Temporal Information Transformed into a Spatial Code by a Neural Network with Realistic Properties", 1995). Reservoir computing greatly facilitated the practical application of recurrent neural networks, providing a much-needed alternative to gradient-descent methods for training them. Echo state networks have also been implemented in hardware, e.g. by the team of the Zambia-born physicist Serge Massar at the University of Brussels in Belgium ("Brain-inspired Photonic Signal Processor for Generating Periodic Patterns and Emulating Chaotic Systems", 2017).
Reservoir computing wasn't just a cute trick for training neural networks. It stealthily represented a devastating critique of Artificial Intelligence. Turing machines are not well-suited to modeling the behavior of brain circuits, which are analog, not digital, and which work continuously, not in discrete time steps. Brains perform real-time computations on continuous input streams (on "time series"), whereas Turing machines perform off-line computations on discrete input values (basically, zeroes and ones). Brain states are "liquid", and that makes them well-suited to computing with perturbations.
Reservoir computing provided a simpler way to train recurrent neural networks, but it was soon made obsolete by the rise of deep learning. However, A.I. witnessed a resurgence of reservoir computing in 2017, when the team of the physicist Edward Ott at the University of Maryland showed that reservoir computing can closely simulate the evolution of a chaotic system ("Model-free Prediction of Large Spatiotemporally Chaotic Systems from Data", 2018). The "butterfly effect", named after an Edward Lorenz lecture ("Does the Flap of a Butterfly's Wings in Brazil Set Off a Tornado in Texas?", 1972), expresses the fact that a small change in the initial conditions can grow exponentially quickly, which makes long-term prediction of chaotic systems impossible. That's why weather predictions are still so unreliable: a mathematical model of the atmosphere is a chaotic model. Why reservoir computing is so good at learning the dynamics of chaotic systems is not yet well understood. To be fair, Themistoklis Sapsis and his student Zhong-Yi Wan at MIT achieved similar results with an LSTM neural network ("Data-assisted Reduced-order Modeling of Extreme Events in Complex Dynamical Systems", 2018).
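The scheme behind such chaotic forecasting can be sketched on a toy system. Ott's team worked with large reservoirs and spatiotemporal systems; the sketch below merely illustrates the idea on the chaotic logistic map, with all sizes and constants invented for the example: train a random reservoir's readout to predict one step ahead, then let the network run autonomously on its own predictions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Chaotic logistic map x_{t+1} = 4 x_t (1 - x_t): a toy stand-in for the
# large spatiotemporal systems studied by Ott's team.
series = np.empty(1100)
series[0] = 0.3
for t in range(1099):
    series[t + 1] = 4 * series[t] * (1 - series[t])
train, test = series[:1000], series[1000:]

n_res = 300
W_in = rng.uniform(-1, 1, (n_res, 1))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.8 / max(abs(np.linalg.eigvals(W)))  # spectral radius below 1

def step(x, u):
    return np.tanh(W_in @ [u] + W @ x)

# Training: drive the reservoir with the true series ("teacher forcing")
# and fit only the linear readout, one step ahead.
x = np.zeros(n_res)
states = []
for u in train[:-1]:
    x = step(x, u)
    states.append(x)
X = np.array(states)
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ train[1:])

# Forecasting: feed the model's own output back as the next input.
u, preds = train[-1], []
for _ in range(len(test)):
    x = step(x, u)
    u = float(x @ W_out)
    preds.append(u)

# Short-term forecasts track the true orbit; the error then grows
# exponentially (the butterfly effect) and the forecast decorrelates.
print("error at step 1:", abs(preds[0] - test[0]))
print("error at step 50:", abs(preds[49] - test[49]))
```

The forecast is accurate for a few steps and then diverges from the true orbit, which is exactly the behavior the butterfly effect predicts: the model can learn the dynamics, but not escape exponential error growth.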
Neural networks in computer science are sets of mathematical equations that represent state changes with continuous variables. Real neurons, instead, interact primarily via discontinuous "spiking" (action potentials). Software simulations of real neurons were pioneered by Neuron, first published in 1984 and mainly developed by the mathematician Michael Hines in the lab of Kenneth "Kacy" Cole's student John Moore, now at Duke University; and by GENESIS (the GEneral NEural SImulation System), developed in 1988 by James Bower's team at Caltech. Then came NEST (NEural Simulation Tool), developed in 2002 by Markus Diesmann and Marc-Oliver Gewaltig at EPFL in Switzerland; and Brian, developed in 2008 by Romain Brette and Dan Goodman at the École Normale Supérieure in France. Hardware implementations include SpiNNaker (Spiking Neural Network Architecture), designed in 2005 by Steve Furber at the University of Manchester, and Neurogrid, built in 2009 by Kwabena Boahen's team at Stanford University.
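The discontinuity is easiest to see in the simplest spiking model, the leaky integrate-and-fire neuron, a minimal version of the models that simulators like NEST and Brian integrate. The constants below are purely illustrative, not taken from any specific simulator or paper.

```python
# Leaky integrate-and-fire neuron: the membrane potential integrates input
# continuously, but spiking is a discontinuous threshold-and-reset event.
dt = 0.1          # integration step (ms)
tau = 10.0        # membrane time constant (ms)
v_rest, v_thresh, v_reset = -70.0, -55.0, -70.0  # mV (illustrative values)

v = v_rest
spikes = []
for step in range(1000):                      # 100 ms of simulated time
    i_input = 20.0                            # constant injected current (arbitrary units)
    v += dt / tau * (v_rest - v + i_input)    # continuous leaky integration (Euler step)
    if v >= v_thresh:                         # threshold crossing -> emit a spike
        spikes.append(step * dt)
        v = v_reset                           # discontinuous reset: the "spiking" nonlinearity
print("spike times (ms):", spikes)
```

With a constant input current the neuron fires at regular intervals; the continuous variable is the membrane potential, but what the neuron communicates is the discrete train of spike times.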
"The brain is an organ of minor importance" (Aristotle in "De Motu Animalium")