The Nature of Consciousness

Piero Scaruffi


These are excerpts and elaborations from my book "The Nature of Consciousness"

Non-sequential Programming

Neural networks are fundamentally different from the sequential von Neumann computer. Information is processed in parallel rather than sequentially. The network can modify itself (i.e., learn) based on its performance. Information is spread across the network rather than being localized in a particular storage place. And the network as a whole can still function even if part of it fails.

The technology of neural networks promised to lead to a type of computer capable of learning and, in general, one that more closely resembles our brain.
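As a concrete illustration (mine, not the book's), here is a minimal sketch of such a learning machine: a single artificial neuron trained with the classic perceptron rule on an assumed toy task, learning the logical AND of two inputs. The point is that the machine improves by adjusting its connection weights, and that what it has learned is spread across those weights rather than stored at any single address.

```python
import numpy as np

# Toy task (assumed for this sketch): learn the logical AND of two inputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)

w = np.zeros(2)   # connection weights: the network's only "memory"
b = 0.0           # bias

def predict(x):
    # Every weight contributes to every answer: what the network knows is
    # distributed across its connections, not stored at a single address.
    return 1.0 if x @ w + b > 0 else 0.0

# Classic perceptron learning rule: after each wrong answer, nudge the
# connections in the direction that would have reduced the error.
for _ in range(25):
    for x, target in zip(X, y):
        error = target - predict(x)
        w += 0.1 * error * x
        b += 0.1 * error

print([predict(x) for x in X])   # -> [0.0, 0.0, 0.0, 1.0]
```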

The brain is a neural network that exhibits one important property: all the changes that occur in the connections eventually "converge" towards some kind of stable state. For example, the connections may change every time I see a friend's face from a different perspective, but they "converge" towards the stable state in which I always recognize him as the same person. Some kind of stability is necessary for memory to exist, and for any type of recognition to be performed. Neural networks must exhibit the same property if they are to be useful for practical purposes and plausible as models of the brain. Several different mathematical models were proposed in the quest for the optimal neural network.
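One classic model with exactly this convergence property is the Hopfield network. The sketch below (my illustration, with small hand-made patterns as an assumption) stores two patterns in the connection weights and shows that a corrupted version of one of them settles back into the stored stable state, a toy analogue of recognizing the same face seen from a new perspective.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two stored "memories": bipolar (+1/-1) patterns of 16 units each.
patterns = np.array([
    [1,  1, 1,  1, -1, -1, -1, -1, 1,  1, 1,  1, -1, -1, -1, -1],
    [1, -1, 1, -1,  1, -1,  1, -1, 1, -1, 1, -1,  1, -1,  1, -1],
])
n = patterns.shape[1]

# Hebbian storage: each pair of units gets a connection strength that
# reflects how often they agree across the stored patterns.
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)

def recall(state, sweeps=5):
    state = state.copy()
    for _ in range(sweeps):
        # Asynchronous updates: each unit aligns with its weighted input.
        for i in rng.permutation(n):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# Corrupt the first pattern by flipping a few units, then let the
# network settle back into the stored stable state.
noisy = patterns[0].copy()
noisy[[0, 5, 10]] *= -1
print(np.array_equal(recall(noisy), patterns[0]))   # -> True
```

The convergence here is guaranteed because the weights are symmetric and the units are updated one at a time, so each update can only lower the network's "energy" until it reaches a stable state.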

The discipline of neural networks quickly picked up steam, and more and more complex machines were built, until in 1969 the US mathematician Marvin Minsky and his colleague Seymour Papert proved (or were thought to have proved) some intrinsic limitations of the simple, single-layer neural networks ("perceptrons") of the time. All of a sudden, research on neural networks became unpopular, and for more than a decade the discipline languished.
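Perhaps the best known of those limitations is that a single-layer perceptron cannot compute even a function as simple as the exclusive OR (XOR) of two inputs. The sketch below, a brute-force search over an assumed grid of toy weights, only illustrates the point; the real argument is the contradiction noted in the final comment.

```python
import itertools

def some_threshold_unit_computes(targets):
    # Search a coarse grid of weights and biases for a single threshold
    # unit w1*x1 + w2*x2 + b > 0 that reproduces the target truth table.
    inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
    grid = [x / 4 for x in range(-8, 9)]   # -2.0 ... 2.0 in steps of 0.25
    for w1, w2, b in itertools.product(grid, repeat=3):
        outputs = [1 if w1 * x1 + w2 * x2 + b > 0 else 0 for x1, x2 in inputs]
        if outputs == list(targets):
            return True
    return False

print(some_threshold_unit_computes([0, 0, 0, 1]))   # AND -> True
print(some_threshold_unit_computes([0, 1, 1, 0]))   # XOR -> False: the four
# required inequalities imply both w1 + w2 + 2*b > 0 and w1 + w2 + 2*b <= 0,
# so no choice of weights (on or off this grid) can work.
```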

 

