Intelligence is not Artificial

by piero scaruffi



(These are excerpts from my book "Intelligence is not Artificial")

Artificial and Natural Neural Networks: The Myth of Backpropagation

Despite the success of deep learning in so many fields, saying that artificial neural networks are similar to the neural networks of the brain is like saying that linear regression or multiplication are similar to the way the brain works. Artificial neural networks are just one approximation and optimization method that works pretty well in some cases.

They are mathematical procedures. They are not what happens in the brain. Neither backpropagation nor Boltzmann machines reflect what our brain does for unsupervised learning. Neuroscience has not discovered any biological mechanism for errors to be backpropagated any further than a single synapse. Backpropagation implements a precise, symmetric model of connectivity among neurons, which is not what we see in the brain. Our neurons are wildly interconnected, but the connections are far from symmetric. All the methods that evolved out of "gradient descent" share very little with the processes discovered by neuroscience in the brain.
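To make the contrast concrete, here is a minimal sketch (my illustration, not from the book) of a two-layer network trained with backpropagation in plain NumPy: the whole procedure is matrix algebra plus gradient descent, and the backward pass reuses the transpose of the forward weights, exactly the kind of precise, symmetric connectivity that neuroscience does not observe in the brain.

```python
# Minimal sketch (illustrative only): a two-layer network trained with
# backpropagation, written out in plain NumPy to show that it is just
# matrix algebra plus gradient descent. Note how the backward pass reuses
# the *transpose* of the forward weights (W2.T): this exact, symmetric
# "weight transport" has no known biological counterpart.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 10))           # 64 toy examples, 10 features
y = rng.normal(size=(64, 1))            # toy regression targets

W1 = rng.normal(scale=0.1, size=(10, 32))
W2 = rng.normal(scale=0.1, size=(32, 1))
lr = 0.01

for step in range(1000):
    # forward pass: signals flow through W1 and W2
    h = np.tanh(X @ W1)
    y_hat = h @ W2
    err = y_hat - y                     # prediction error

    # backward pass: the error is propagated back through W2.T,
    # i.e. through the very same weights used in the forward pass
    grad_W2 = h.T @ err / len(X)
    delta_h = (err @ W2.T) * (1 - h**2)  # derivative of tanh
    grad_W1 = X.T @ delta_h / len(X)

    # gradient descent: pure numerical optimization, no neuroscience
    W1 -= lr * grad_W1
    W2 -= lr * grad_W2
```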

One of the most criticized videos in the history of A.I. was probably Geoff Hinton's "Here's how the Brain Implements Backpropagation" (2015, now withdrawn). Oddly enough, Hinton himself had recognized in 1989 that backpropagation is biologically implausible: there is no evidence that synapses can propagate signals in the reverse direction, nor that neurons can propagate error derivatives backwards while at the same time propagating signals forward according to a nonlinear function ("Connectionist Learning Procedures", published in the journal Artificial Intelligence). A Boltzmann machine is a fantastic mathematical technique, and perhaps it more closely resembles the working of the brain, but, again, neuroscience has found no such formulas in the brain.

Yoshua Bengio's "Towards Biologically Plausible Deep Learning" (2016) begins with this sentence: "Neuroscientists have long criticised deep learning algorithms as incompatible with current knowledge of neurobiology".

More importantly, a lot of "machine learning" uses techniques that are not neural networks and still work very well. Nobody claims that linear regression or support vector machines mirror the way the brain works.

The biological process that adjusts the strength of connections between neurons in the brain is called Spike-Timing-Dependent Plasticity (STDP), and it might not be the only one. STDP is an extension of Hebbian learning first suggested in 1973 by the Canadian psychologist Martin Taylor ("The Problem of Stimulus Structure in the Behavioural Theory of Perception", 1973). Hebbian learning is the observation that a synapse gets stronger when a presynaptic spike occurs just before a postsynaptic spike often enough. Taylor envisioned a symmetric process that would weaken the synapse whenever the opposite happened often enough (anti-Hebbian learning). Basically, an input that is likely the cause of a postsynaptic output is made even more likely to contribute to that output in the future, whereas an input that is certainly not the cause of the postsynaptic output is made less likely to contribute in the future.
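As a toy illustration of Hebbian versus anti-Hebbian updates (my sketch, not Taylor's own formalism; the learning rate and spike times are made up):

```python
# Toy sketch (illustrative only, not Taylor's formalism): a single
# synaptic weight updated by a Hebbian / anti-Hebbian rule. If the
# presynaptic spike precedes the postsynaptic spike, the synapse is
# strengthened (it likely contributed to the output); if it follows,
# the synapse is weakened (it certainly did not contribute).

def update_weight(w, t_pre, t_post, lr=0.05):
    """t_pre, t_post: spike times in milliseconds (hypothetical units)."""
    if t_pre < t_post:
        return w + lr      # Hebbian: pre before post -> potentiate
    else:
        return w - lr      # anti-Hebbian: post before pre -> depress

w = 0.5
w = update_weight(w, t_pre=10.0, t_post=12.0)   # pre leads: w increases
w = update_weight(w, t_pre=15.0, t_post=13.0)   # pre lags:  w decreases
print(round(w, 2))                               # back to 0.5 in this toy run
```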

Wulfram Gerstner at EPFL in Switzerland (who in 1996 rediscovered the differential Hebbian rules) expressed this fact in terms of a competition between "good" and "bad" timings: good timing is when the presynaptic spike arrives before the postsynaptic spike, and vice versa for bad timing ("A Neuronal Learning Rule for Sub-millisecond Temporal Coding", 1996). Gerstner wrote a book that every A.I. scientist should read: "Neuronal Dynamics" (2014).
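A common textbook way to write this competition down is a pair-based STDP window, in which the weight change decays exponentially with the interval between the two spikes. The following sketch uses illustrative parameters, not Gerstner's own:

```python
# Sketch of a pair-based STDP window, a textbook formalization of
# "good" vs "bad" timing (the amplitudes and time constants below are
# illustrative, not taken from Gerstner's papers). The weight change
# decays exponentially with the gap between pre- and postsynaptic spikes.
import math

def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012,
            tau_plus=20.0, tau_minus=20.0):
    """Return the weight change for one pre/post spike pair.

    Times are in milliseconds. "Good" timing (pre before post) gives
    potentiation; "bad" timing (post before pre) gives depression.
    """
    dt = t_post - t_pre
    if dt > 0:    # good timing: the presynaptic spike arrives first
        return a_plus * math.exp(-dt / tau_plus)
    else:         # bad timing: the postsynaptic neuron fired first
        return -a_minus * math.exp(dt / tau_minus)

print(stdp_dw(t_pre=0.0, t_post=5.0))    # small potentiation
print(stdp_dw(t_pre=5.0, t_post=0.0))    # small depression
```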
