The Nature of Consciousness

Piero Scaruffi

(Copyright © 2013 Piero Scaruffi | Legal restrictions )

These are excerpts and elaborations from my book "The Nature of Consciousness"

In 2006 Hinton (“A Fast Learning Algorithm for Deep Belief Nets”) made Deep Belief Networks the talk of the town: basically a generative model built by stacking Restricted Boltzmann Machines, together with a fast greedy algorithm for training them, which suddenly relaunched neural networks and led to new, sophisticated applications in unsupervised learning.
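
The workhorse of Hinton's recipe is training each Restricted Boltzmann Machine with "contrastive divergence", which approximates the gradient of the data likelihood with a single step of Gibbs sampling. The following is a minimal sketch of an RBM trained with one-step contrastive divergence (CD-1); the layer sizes, learning rate and toy data are illustrative assumptions, not Hinton's original code.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    def __init__(self, n_visible, n_hidden, lr=0.1):
        # Small random weights, zero biases
        self.W = 0.01 * rng.standard_normal((n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)   # visible biases
        self.b_h = np.zeros(n_hidden)    # hidden biases
        self.lr = lr

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b_h)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.b_v)

    def cd1_update(self, v0):
        # Positive phase: hidden probabilities and a binary sample
        h0_prob = self.hidden_probs(v0)
        h0 = (rng.random(h0_prob.shape) < h0_prob).astype(float)
        # Negative phase: one step of Gibbs sampling (reconstruction)
        v1_prob = self.visible_probs(h0)
        h1_prob = self.hidden_probs(v1_prob)
        # Approximate gradient: <v h>_data minus <v h>_reconstruction
        batch = v0.shape[0]
        self.W   += self.lr * (v0.T @ h0_prob - v1_prob.T @ h1_prob) / batch
        self.b_v += self.lr * (v0 - v1_prob).mean(axis=0)
        self.b_h += self.lr * (h0_prob - h1_prob).mean(axis=0)

# Toy usage: random binary data, 6 visible units, 3 hidden features
data = rng.integers(0, 2, size=(32, 6)).astype(float)
rbm = RBM(n_visible=6, n_hidden=3)
for epoch in range(100):
    rbm.cd1_update(data)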

Deep Belief Networks are layered hierarchical architectures that stack Restricted Boltzmann Machines one on top of the other, each one feeding its output as input to the one immediately higher, with the two top layers forming an associative memory.  The features discovered by one RBM become the training data for the next one.
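
In code, the greedy layer-wise procedure amounts to a simple loop: train an RBM, push the data through it, and use the resulting activations to train the next RBM. The sketch below reuses the RBM class and toy data from the previous sketch; the layer sizes and number of training epochs are arbitrary assumptions.

def train_dbn(data, layer_sizes, epochs=100):
    """Greedy layer-wise pre-training: each RBM is trained on the
    hidden activations (the 'features') of the RBM below it."""
    rbms = []
    layer_input = data
    n_visible = data.shape[1]
    for n_hidden in layer_sizes:
        rbm = RBM(n_visible, n_hidden)
        for _ in range(epochs):
            rbm.cd1_update(layer_input)
        # The features discovered by this RBM become the training
        # data for the next RBM in the stack
        layer_input = rbm.hidden_probs(layer_input)
        n_visible = n_hidden
        rbms.append(rbm)
    return rbms

# e.g. a three-layer stack trained on the toy data above: 6 -> 8 -> 4 -> 2
dbn = train_dbn(data, layer_sizes=[8, 4, 2])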

DBNs are still limited in one respect: they are “static classifiers”, i.e. they operate on inputs of a fixed dimensionality. However, speech and images don’t come in a fixed dimensionality, but in a (wildly) variable one. They require “sequence recognition”, i.e. dynamic classifiers, which DBNs cannot provide. One method to extend DBNs to sequential patterns is to combine deep learning with a “shallow learning architecture” such as the Hidden Markov Model.
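
A hedged illustration of that hybrid idea: a static per-frame classifier (standing in here for the DBN) scores every frame of a variable-length sequence, and the HMM's Viterbi decoder strings those frame-by-frame scores into a single label sequence. The transition matrix, priors and frame scores below are toy assumptions, not a real speech system.

import numpy as np

def viterbi(frame_log_probs, log_trans, log_prior):
    """frame_log_probs: (T, S) per-frame log scores from the static
    classifier; log_trans: (S, S) HMM log transition matrix;
    log_prior: (S,) initial log state probabilities."""
    T, S = frame_log_probs.shape
    delta = log_prior + frame_log_probs[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_trans          # (S, S)
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + frame_log_probs[t]
    # Trace back the best state sequence
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Toy example: 5 frames, 2 HMM states
frame_log_probs = np.log(np.array([[0.9, 0.1],
                                   [0.8, 0.2],
                                   [0.4, 0.6],
                                   [0.2, 0.8],
                                   [0.1, 0.9]]))
log_trans = np.log(np.array([[0.7, 0.3],
                             [0.3, 0.7]]))
log_prior = np.log(np.array([0.5, 0.5]))
print(viterbi(frame_log_probs, log_trans, log_prior))  # -> [0, 0, 1, 1, 1]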

Meanwhile, in 2006 Osamu Hasegawa introduced the Self-Organising Incremental Neural Network (SOINN), an incremental neural network for unsupervised learning that grows its own structure as new data arrives, and in 2011 his team created a SOINN-based robot that learned functions it had not been programmed to perform.
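
A drastically simplified sketch of the incremental idea behind SOINN (not Hasegawa's full algorithm, which also manages edges between nodes, adaptive similarity thresholds and noise removal): a new node is inserted whenever an input is too far from the existing nodes, so the network grows only when genuinely novel data appears. The fixed threshold and learning rate are assumptions for illustration.

import numpy as np

class IncrementalNodes:
    def __init__(self, new_node_threshold=1.0, lr=0.1):
        self.nodes = []                      # prototype vectors learned so far
        self.threshold = new_node_threshold  # assumed fixed novelty threshold
        self.lr = lr

    def learn(self, x):
        x = np.asarray(x, dtype=float)
        if not self.nodes:
            self.nodes.append(x.copy())
            return
        # Find the nearest existing node (the "winner")
        dists = [np.linalg.norm(x - n) for n in self.nodes]
        winner = int(np.argmin(dists))
        if dists[winner] > self.threshold:
            # Input is novel: insert it as a new node
            self.nodes.append(x.copy())
        else:
            # Input is familiar: move the winner toward it
            self.nodes[winner] += self.lr * (x - self.nodes[winner])

# Toy usage: two well-separated clusters yield two nodes
net = IncrementalNodes(new_node_threshold=1.0)
for point in [[0, 0], [0.1, 0.1], [5, 5], [5.1, 4.9]]:
    net.learn(point)
print(len(net.nodes))  # -> 2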

 

