The Nature of Consciousness

Piero Scaruffi

(Copyright © 2013 Piero Scaruffi | Legal restrictions )

These are excerpts and elaborations from my book "The Nature of Consciousness"

 

Artificial Neural Networks

An artificial "neural network" is a piece of software or hardware that simulates the neural network of the brain. Several simple units are connected together, each unit connecting to any number of other units. The "strength" of each connection can vary anywhere from zero to arbitrarily large. Initially the connections and their strengths are set randomly. Then the network is either "trained" or forced to train itself. "Training" a network means using some kind of feedback to adjust the strengths of the connections: every time an input is presented, the network is told what the output should be and asked to adjust its connections accordingly.

For example, the input could be a picture of an apple and the required output the string of letters A-P-P-L-E. The first time, equipped with random connections, the network produces some random output. The requested output (A-P-P-L-E) is fed back, and the network reorganizes its connections to produce that output. Another image of an apple is then presented, and the output is again forced to be the string A-P-P-L-E. Each time this happens, the connections are adjusted to produce the same output even though every image of an apple is slightly different. The theory predicts that at some point the network will start recognizing images of apples even if they differ slightly from the ones it saw before.
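The training cycle described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration (a single unit with numeric inputs standing in for images, and a target of 1 standing in for "apple"), not an actual image recognizer:

```python
import random

random.seed(0)  # for reproducibility

def train(examples, epochs=50, rate=0.1):
    """Adjust a unit's connection strengths from (input, desired output) pairs."""
    n = len(examples[0][0])
    # initially the connections are set randomly
    weights = [random.uniform(-1, 1) for _ in range(n)]
    for _ in range(epochs):
        for inputs, target in examples:
            output = sum(w * x for w, x in zip(weights, inputs))
            error = target - output  # feedback: requested output minus actual output
            # adjust the connections to move the output toward the target
            weights = [w + rate * error * x for w, x in zip(weights, inputs)]
    return weights

# Slightly different "apples" (target 1.0) and "non-apples" (target 0.0).
examples = [([1.0, 0.9], 1.0), ([0.9, 1.0], 1.0),
            ([0.1, 0.0], 0.0), ([0.0, 0.2], 0.0)]
weights = train(examples)

# A new, slightly different apple should still score close to 1.
new_apple_score = sum(w * x for w, x in zip(weights, [0.95, 0.85]))
```

After training, the network generalizes: inputs resembling the training apples score high even though they never appeared during training.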

Formally: a neural net is a nonlinear directed graph in which each processing element (each node) receives signals from other nodes and emits a signal towards other nodes, and each connection between nodes has a weight that can vary in time.
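This formal definition can be sketched directly in code. The node names and the sigmoid nonlinearity below are illustrative choices, not part of the definition:

```python
import math

def forward(weights, signals):
    """One propagation step over a weighted directed graph.

    weights: {(source_node, target_node): weight} -- the graph's edges
    signals: {node: emitted signal}
    Returns the signal each receiving node emits in turn.
    """
    totals = {}
    for (src, dst), w in weights.items():
        totals[dst] = totals.get(dst, 0.0) + w * signals[src]
    # each node passes its weighted sum through a nonlinearity (a sigmoid here)
    return {node: 1.0 / (1.0 + math.exp(-t)) for node, t in totals.items()}

# Two nodes feeding a third; the weights may be changed between calls,
# reflecting the fact that they vary in time.
edge_weights = {("a", "out"): 0.5, ("b", "out"): -0.3}
out_signals = forward(edge_weights, {"a": 1.0, "b": 2.0})
```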

A number of algorithms have been proposed for adjusting the strengths of the connections based on the expected output. Such an algorithm must eventually "converge" on a proper, stable configuration of the neural network. The network can continue learning forever, but it must be capable of not forgetting what it has already learned. The larger the network (in terms of both units and connections), the easier it is to reach a point of stability.
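One classic such algorithm is Rosenblatt's perceptron rule, which provably converges when the examples are linearly separable. A minimal sketch (the AND function serves as toy data) that stops once a full pass over the examples produces no errors:

```python
def perceptron(examples, rate=1.0, max_epochs=100):
    """Perceptron rule: adjust weights only on misclassified examples."""
    n = len(examples[0][0])
    weights, bias = [0.0] * n, 0.0
    for epoch in range(max_epochs):
        errors = 0
        for inputs, target in examples:
            activation = sum(w * x for w, x in zip(weights, inputs)) + bias
            predicted = 1 if activation > 0 else 0
            if predicted != target:
                errors += 1
                delta = rate * (target - predicted)
                weights = [w + delta * x for w, x in zip(weights, inputs)]
                bias += delta
        if errors == 0:  # a full error-free pass: the network has stabilized
            return weights, bias, epoch
    return weights, bias, max_epochs

# The AND function is linearly separable, so the rule is guaranteed to converge.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias, epochs_used = perceptron(data)
```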

Artificial neural networks are typically used to recognize an image, a sound, or a written word. But since everything is ultimately a pattern of information, there is virtually no limit to their applications. For example, they can be used to build expert systems. An expert system built with the technology of knowledge-based systems (a "traditional" expert system) relies on a knowledge base that represents the knowledge acquired over a lifetime by a specialist. An expert system built with neural-network technology is a neural network that has been initialized with random values and trained on a historical record of "cases". Instead of relying on an expert, one relies on a long list of previous cases in which a certain decision was made. If the network is fed this list and "trained" to learn that it corresponds to a certain action, the network will eventually start recommending that action for new cases that somehow match the "pattern".

Imagine a credit scoring application: the bank’s experts use some criteria for deciding whether a business is entitled to a loan or not. A knowledge-based system would rely on the experience of one such expert and use that knowledge to examine future applications. A neural network would rely on the historical record of loans and train itself from that record to examine future applications.
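A hypothetical sketch of the neural-network approach to this scenario: the feature names and historical cases below are invented for illustration. The network trains on past (application, decision) pairs and then scores a new application:

```python
import random

random.seed(1)  # for reproducibility

# Invented historical record: (revenue, debt ratio, years in business),
# each scaled to 0..1, paired with the decision made (1 = loan granted).
history = [
    ((0.9, 0.1, 0.8), 1), ((0.8, 0.2, 0.9), 1), ((0.7, 0.3, 0.7), 1),
    ((0.2, 0.9, 0.1), 0), ((0.3, 0.8, 0.2), 0), ((0.1, 0.7, 0.3), 0),
]

# Start from random connection strengths and train on the record.
weights = [random.uniform(-0.5, 0.5) for _ in range(3)]
for _ in range(200):
    for features, decision in history:
        output = sum(w * x for w, x in zip(weights, features))
        weights = [w + 0.1 * (decision - output) * x
                   for w, x in zip(weights, features)]

# Score a new application that resembles the historically approved ones.
new_application = (0.85, 0.15, 0.75)
score = sum(w * x for w, x in zip(weights, new_application))
```

No expert ever stated a rule; the recommendation emerges from the pattern in the historical record.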

The two approaches are almost complete opposites, even though they should lead to much the same behavior.

 

