The Nature of Consciousness

Piero Scaruffi

(Copyright © 2013 Piero Scaruffi)

These are excerpts and elaborations from my book "The Nature of Consciousness"

Deep Learning

In 1986 Paul Smolensky modified the Boltzmann Machine into what became known as the “Restricted Boltzmann machine”, which lends itself to much easier computation. The network is restricted to one visible layer and one hidden layer, and each unit is connected only to units in the other layer, never to units in its own layer. Because there are no connections within a layer, the hidden units are conditionally independent given the visible units (and vice versa), which is what makes inference and learning tractable.
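
To make the structure concrete, here is a minimal sketch of a Restricted Boltzmann Machine in Python (NumPy), trained with one step of contrastive divergence, the procedure later popularized by Geoffrey Hinton rather than Smolensky's original formulation. The layer sizes, learning rate and toy data are illustrative assumptions, not values taken from the text.

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def cd1_step(v0, W, b_v, b_h, lr=0.1):
        # Hidden units are conditionally independent given the visible units
        # (no intra-layer connections), so the conditional is one matrix product.
        h0_prob = sigmoid(v0 @ W + b_h)
        h0 = (rng.random(h0_prob.shape) < h0_prob).astype(float)

        # One Gibbs step: reconstruct the visible layer, then re-infer the hidden layer.
        v1 = sigmoid(h0 @ W.T + b_v)
        h1_prob = sigmoid(v1 @ W + b_h)

        # Update: correlations under the data minus correlations under the reconstruction.
        W = W + lr * (v0.T @ h0_prob - v1.T @ h1_prob) / len(v0)
        b_v = b_v + lr * (v0 - v1).mean(axis=0)
        b_h = b_h + lr * (h0_prob - h1_prob).mean(axis=0)
        return W, b_v, b_h

    # Toy usage: 6 visible units, 3 hidden units, random binary training patterns.
    n_visible, n_hidden = 6, 3
    W = rng.normal(0.0, 0.1, size=(n_visible, n_hidden))
    b_v, b_h = np.zeros(n_visible), np.zeros(n_hidden)
    data = (rng.random((20, n_visible)) < 0.5).astype(float)
    for _ in range(100):
        W, b_v, b_h = cd1_step(data, W, b_v, b_h)

Because the connection graph is bipartite, each training step reduces to a pair of matrix products rather than a lengthy sampling procedure, which is exactly the computational advantage the restriction buys.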

By the end of the 1980s, neural networks had established themselves as a viable computing technology, and a serious alternative to expert systems as a mechanical approximation of the brain. The probabilistic approach to neural network design had won out.

“Learning” is reduced to the classic statistical problem of finding the model that best fits the data. There are two main ways to go about this. A generative model is a full probabilistic model of the problem, a model of how the data are actually generated (for example, a table of frequencies of English word pairs can be used to generate a “likely” sentence). A discriminative algorithm, instead, classifies data without providing any model of how the data are generated. Discriminative models are inherently supervised: they need labeled examples to learn the boundary between classes. Traditionally, neural networks were discriminative algorithms.
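
As an illustration of the generative example above, the following sketch builds a table of word-pair (bigram) frequencies from a tiny invented corpus and samples a “likely” sentence from it; a discriminative model would instead learn only the probability of a label given the data, with no machinery for producing sentences at all. The corpus and function names here are hypothetical.

    import random
    from collections import defaultdict

    random.seed(0)
    corpus = "the cat sat on the mat . the dog sat on the rug .".split()

    # Count word-pair frequencies: bigram[w1][w2] = how often w2 follows w1.
    bigram = defaultdict(lambda: defaultdict(int))
    for w1, w2 in zip(corpus, corpus[1:]):
        bigram[w1][w2] += 1

    def generate(start="the", max_words=8):
        # Sample each next word in proportion to how often it followed the
        # current word in the corpus; stop at a period or at max_words.
        words = [start]
        while len(words) < max_words and words[-1] in bigram:
            followers = bigram[words[-1]]
            nxt = random.choices(list(followers), weights=list(followers.values()))[0]
            if nxt == ".":
                break
            words.append(nxt)
        return " ".join(words)

    print(generate())  # e.g. "the cat sat on the rug"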

In 1996 the developmental psychologist Jenny Saffran showed that eight-month-old infants segment words out of continuous speech by tracking the probabilities with which syllables follow one another: babies use statistics to learn about the world, and they learn a great deal very quickly. So Bayes had stumbled onto an important fact about the way the brain works, not just a cute mathematical theory.
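
The statistical cue in Saffran's experiment was the transitional probability between syllables, which is high inside a word and drops at word boundaries. The sketch below computes it over a continuous stream built from three made-up words; the syllables are invented stand-ins in the style of the original stimuli, not the actual experimental items.

    import random
    from collections import Counter
    from itertools import pairwise  # requires Python 3.10+

    random.seed(0)
    words = ["bi da ku", "pa do ti", "go la bu"]  # invented three-syllable "words"

    # Concatenate the words in random order into one continuous syllable stream,
    # mimicking the familiarization phase of a statistical-learning experiment.
    stream = " ".join(random.choice(words) for _ in range(300)).split()

    pair_counts = Counter(pairwise(stream))
    first_counts = Counter(stream[:-1])

    def transitional_probability(s1, s2):
        # P(s2 | s1) = count of the pair (s1, s2) / count of s1.
        return pair_counts[(s1, s2)] / first_counts[s1]

    print(transitional_probability("bi", "da"))  # within a word: 1.0
    print(transitional_probability("ku", "pa"))  # across a word boundary: roughly 1/3

A learner that notices the drop in transitional probability can posit a word boundary there, without any labels or explicit teaching, which is the sense in which the infants' learning is probabilistic.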

 

