The Nature of Consciousness

Piero Scaruffi


These are excerpts and elaborations from my book "The Nature of Consciousness"

Psychological Models

Computational models of neural activity soon proliferated. From the “neuronic equations” devised in 1961 by the Italian physicist Eduardo Caianiello (“Outline of a Theory of Thought-Processes and Thinking Machines”) to Stephen Grossberg's non-linear quantitative descriptions of brain processes, mathematical theories of how neurons work have multiplied almost beyond our capacity to test them. Now that the mathematics has matured, the emphasis is shifting towards psychological plausibility. At first the only requirement was that a neural network be guaranteed to find a solution to every problem, but soon psychologists began to demand that it do so in a fashion similar to the way the human brain does. Grossberg’s models, for example, are informed by Ivan Pavlov’s experiments on conditioning.
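
Caianiello’s equations, for instance, model neurons as binary threshold units updated in discrete time. The following is a minimal sketch of that idea, ignoring the delay terms and the separate “mnemonic” (learning) equations of the original; the three-neuron network, its weights, and its thresholds are arbitrary illustrations:

```python
import numpy as np

def step(weights, thresholds, state):
    """One synchronous update of a network of binary threshold neurons:
    x_i(t+1) = H(sum_j w_ij * x_j(t) - theta_i), H = Heaviside step."""
    return (weights @ state > thresholds).astype(int)

# Illustrative 3-neuron network with arbitrary weights and thresholds.
rng = np.random.default_rng(0)
weights = rng.normal(size=(3, 3))
thresholds = np.zeros(3)
state = np.array([1, 0, 1])
for t in range(5):
    state = step(weights, thresholds, state)
    print(t, state)
```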

Besides proving computationally that a neural network can learn, one has to build a plausible model of how the brain as a whole represents the world.  In Teuvo Kohonen's “adaptive maps” (better known as self-organizing maps), nearby units respond to similar inputs, thereby explaining how the brain represents the topography of a situation.  His unsupervised architecture, inspired by Christoph von der Malsburg's studies on the self-organization of cells in the cerebral cortex, is capable of organizing itself into regions. Kohonen assumes that the overall synaptic resources of a cell are approximately constant and that what changes is the relative “efficacies” of its synapses.
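
A minimal sketch of such a map shows how the topography emerges: the unit whose weights best match the input, and its grid neighbors, are nudged toward that input, so nearby units come to respond similarly. This is a simplified version of Kohonen’s scheme (the grid size, decay schedules, and Gaussian neighborhood are illustrative choices, not his exact formulation):

```python
import numpy as np

def train_som(data, grid=(10, 10), dim=2, epochs=1000,
              lr0=0.5, radius0=5.0, seed=0):
    """Minimal self-organizing map: a grid of units whose weight
    vectors come to form a topographic map of the input space."""
    rng = np.random.default_rng(seed)
    weights = rng.random((grid[0], grid[1], dim))
    # Grid coordinates of every unit, used for neighborhood distances.
    coords = np.stack(np.meshgrid(np.arange(grid[0]), np.arange(grid[1]),
                                  indexing="ij"), axis=-1)
    for t in range(epochs):
        x = data[rng.integers(len(data))]
        # Best-matching unit: the unit whose weights are closest to x.
        d = np.linalg.norm(weights - x, axis=-1)
        bmu = np.unravel_index(np.argmin(d), grid)
        # Learning rate and neighborhood radius decay over time.
        frac = t / epochs
        lr = lr0 * (1 - frac)
        radius = radius0 * (1 - frac) + 1e-9
        # Units near the BMU on the grid move toward x; far ones barely move.
        g = np.exp(-np.sum((coords - bmu) ** 2, axis=-1) / (2 * radius ** 2))
        weights += lr * g[..., None] * (x - weights)
    return weights

# Example: map 2-D points onto a 10x10 grid of units.
data = np.random.default_rng(1).random((500, 2))
trained = train_som(data)
```

The Gaussian neighborhood factor g is what turns a set of independent units into a map: learning is shared as a function of distance on the grid, not distance in the input space.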

The British computer scientist Igor Aleksander has attempted to build a neural state machine, “Magnus” (1996), that duplicates the most important features of a human being, from consciousness to emotions. 
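
A neural state machine combines learned neural responses with state feedback: the network’s own state is fed back as part of its next input, so learned transitions can be replayed later. The toy sketch below illustrates only that general idea; the class names and the lookup-table “weightless” neurons are assumptions for illustration, not Magnus’s actual design:

```python
class RamNeuron:
    """A 'weightless' neuron: a lookup table from bit patterns to a bit."""
    def __init__(self):
        self.table = {}

    def teach(self, pattern, output):
        self.table[tuple(pattern)] = output

    def fire(self, pattern):
        return self.table.get(tuple(pattern), 0)

class NeuralStateMachine:
    """Each neuron sees (current state + input) and emits one bit of
    the next state, so taught transitions are recalled step by step."""
    def __init__(self, state_bits):
        self.neurons = [RamNeuron() for _ in range(state_bits)]
        self.state = [0] * state_bits

    def teach(self, state, inputs, next_state):
        for neuron, bit in zip(self.neurons, next_state):
            neuron.teach(state + inputs, bit)

    def step(self, inputs):
        pattern = self.state + inputs
        self.state = [n.fire(pattern) for n in self.neurons]
        return self.state

nsm = NeuralStateMachine(2)
nsm.teach([0, 0], [1], [0, 1])   # from state 00 on input 1, go to 01
nsm.teach([0, 1], [1], [1, 0])   # from state 01 on input 1, go to 10
print(nsm.step([1]), nsm.step([1]))  # [0, 1] then [1, 0]
```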

Resurrected by Andrew Barto and Richard Sutton in the early 1980s from ideas by the US mathematician Harry Klopf, Reinforcement Learning was goal-directed learning driven by the interaction between the learning agent and its environment. The goal was represented by a reward that the agent had to maximize. The four pillars of reinforcement learning were: a policy, a reward function, a value function, and a model of the environment. Using a deep network to represent the value function and/or the policy and/or the model, i.e. applying deep learning to reinforcement learning, yielded Deep Reinforcement Learning, of which Deep Q-Networks (DQN), developed at DeepMind by Volodymyr Mnih and others in 2013, constituted a particularly appealing class.
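
The interplay of policy, reward, and value function is easiest to see in tabular Q-learning, the algorithm that DQN scales up by replacing the table of values with a deep network. A minimal sketch (the five-state corridor environment is invented here for illustration, not taken from the DQN work):

```python
import random

# Toy environment: a 5-state corridor; move left/right, reward 1
# only for reaching the rightmost state.
N_STATES, ACTIONS = 5, [0, 1]          # action 0 = left, 1 = right
q = [[0.0, 0.0] for _ in range(N_STATES)]   # value function (a table)
alpha, gamma, epsilon = 0.1, 0.9, 0.1

def env_step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)
    reward = 1.0 if s2 == N_STATES - 1 else 0.0
    return s2, reward, s2 == N_STATES - 1   # next state, reward, done

for episode in range(500):
    s, done = 0, False
    while not done:
        # Epsilon-greedy policy: mostly exploit, sometimes explore.
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda a: q[s][a])
        s2, r, done = env_step(s, a)
        # Update the value toward reward + discounted best future value.
        q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
        s = s2

print(q)  # learned action values for each state
```

DQN keeps essentially this update rule but approximates the value function with a deep convolutional network trained on batches of replayed experience, which is what made it workable on high-dimensional inputs such as Atari screens.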

