Jaime Carbonell:
"Machine Learning" (MIT Press, 1989)

This compilation consists of nine articles written by prominent researchers in the area of machine learning. Carbonell's introduction compares the traditional inductive paradigm (constructing the symbolic description of a concept from a set of positive and negative instances) with the newer analytic (i.e., deductive) paradigms, which exploit past problem-solving experience to guide the search through the space of potential solutions. Deductive learning systems include Jerry DeJong's "explanation-based learning", Allen Newell's "chunking", and Carbonell's own "derivational analogy".

Pat Langley and others cover concept formation, reviewing historical systems such as Langley's own BACON, Doug Lenat's AM, Ed Feigenbaum's EPAM, Michael Lebowitz's UNIMEM, Doug Fisher's COBWEB, and Jan Zytkow's FAHRENHEIT.

An explanation-based learning system is given a high-level description of the target concept, a single positive instance of the concept, a criterion for what counts as an acceptable concept definition, and domain knowledge. The system generates a proof that the positive instance satisfies the target concept and then generalizes the proof. Richard Fikes' STRIPS is recognized as a forerunner of explanation-based learning.
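
To make the mechanism concrete, here is a minimal Python sketch of the "prove, then generalize" cycle over an invented two-rule domain theory. The predicates, rules and facts are made up for illustration, and the naive "replace constants with variables" step stands in for the goal-regression machinery that real explanation-based generalizers use:

```python
# Domain theory as Horn clauses (head, [body...]); variables start with '?'.
# Rules and facts are invented for illustration only.
RULES = [
    (("safe_to_stack", "?x", "?y"), [("lighter", "?x", "?y")]),
    (("lighter", "?x", "?y"), [("weight", "?x", "?wx"),
                               ("weight", "?y", "?wy"),
                               ("less", "?wx", "?wy")]),
]
# One positive training instance, given as ground facts.
FACTS = {("weight", "box1", "1"), ("weight", "table1", "5"), ("less", "1", "5")}

def unify(pattern, ground, subst):
    """Match a pattern atom against a ground atom, extending subst."""
    if pattern[0] != ground[0] or len(pattern) != len(ground):
        return None
    s = dict(subst)
    for p, g in zip(pattern[1:], ground[1:]):
        p = s.get(p, p)
        if p.startswith("?"):
            s[p] = g
        elif p != g:
            return None
    return s

def prove(goal, subst, leaves):
    """Backward-chain over FACTS and RULES, recording the proof's leaf facts.
    (Rule variables should be renamed apart in general; this tiny theory
    happens to use them consistently.)"""
    g = tuple(subst.get(t, t) for t in goal)
    for fact in FACTS:                       # is the goal an operational leaf?
        s = unify(g, fact, subst)
        if s is not None:
            leaves.append(fact)
            return s
    for head, body in RULES:                 # otherwise explain it via a rule
        s = unify(head, g, subst)
        if s is None:
            continue
        for subgoal in body:
            s = prove(subgoal, s, leaves)
            if s is None:
                break
        if s is not None:
            return s
    return None

def variablize(atoms):
    """Naive generalization: replace each distinct constant with a variable."""
    mapping = {}
    return [(a[0],) + tuple(mapping.setdefault(t, "?v%d" % len(mapping))
                            for t in a[1:]) for a in atoms]

target, leaves = ("safe_to_stack", "box1", "table1"), []
assert prove(target, {}, leaves) is not None
head, *body = variablize([target] + leaves)
print(head, ":-", body)   # a new macro-rule that skips the intermediate proof
```

The learned rule compiles the whole proof into one step, which is exactly the speedup that distinguishes this deductive paradigm from induction over many examples.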

Derivational analogy solves a problem by tweaking a plan (represented as a hierarchical goal structure) used to solve a previous problem. Jack Mostow surveys a few applications.
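
As a rough illustration, the following Python sketch replays a stored derivation on a new problem, reusing each recorded decision whose justification still holds and re-deciding where it fails. The travel domain, the decisions, and the justification format are all invented; Carbonell's systems record far richer derivational traces:

```python
# A stored derivation: each step records the goal, the decision taken,
# and the condition that justified it (all hypothetical).
old_derivation = [
    ("travel", "drive",     lambda s: s["own_car"]),
    ("park",   "long_term", lambda s: s["days_away"] > 3),
]
# Fallback decisions per goal, tried when the old justification fails.
alternatives = {
    "travel": [("taxi",       lambda s: True)],
    "park":   [("short_term", lambda s: True)],
}

def replay(derivation, state):
    """Reuse old decisions whose justifications still hold; patch the rest."""
    plan = []
    for goal, decision, justification in derivation:
        if justification(state):
            plan.append((goal, decision))      # old step transfers as-is
        else:
            alt = next(d for d, ok in alternatives[goal] if ok(state))
            plan.append((goal, alt))           # step re-derived for new problem
    return plan

# A new problem that invalidates both old justifications:
print(replay(old_derivation, {"own_car": False, "days_away": 1}))
# -> [('travel', 'taxi'), ('park', 'short_term')]
```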

John Holland and Geoffrey Hinton touch briefly on two alternative and extreme paradigms, respectively genetic algorithms and connectionism.

Holland's "Classifier systems and genetic algorithms" provides his definitive version of classifier systems.

Classifier systems are defined as "massively parallel, message-passing, rule-based systems that learn through credit assignment (the bucket brigade algorithm) and rule discovery (the genetic algorithm)". When a message from the environment matches the antecedent of a rule, the message specified in the consequent of the rule is produced. Some messages produced by the rules cycle back into the classifier system, some generate action on the environment. A message is a string of characters from a specified alphabet. The rules are not written in the first-order predicate logic of expert systems, but in a language that lacks descriptive power and is limited to simple conjunctive expressions.
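
Here is a minimal Python sketch of one match-and-post cycle, assuming the customary {1, 0, #} condition alphabet over fixed-length binary messages ('#' stands for "don't care"; the specific rules are invented):

```python
def matches(condition, message):
    """A condition matches a message if every non-'#' position agrees."""
    return all(c == "#" or c == m for c, m in zip(condition, message))

classifiers = [
    ("1##0", "0110"),   # (condition, message to post when the rule fires)
    ("10#1", "1111"),
]
messages = ["1010", "1001"]     # current list: environment + internal messages
posted = [out for cond, out in classifiers
          for msg in messages if matches(cond, msg)]
print(posted)   # ['0110', '1111'] -> the message list for the next cycle
```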

Credit assignment is the process whereby the system evaluates the effectiveness of its rules. The bucket brigade algorithm assigns a strength (a measure of its past usefulness) to each rule. Each rule then makes a bid (proportional to its strength and to its relevance to the current situation) and only the highest-bidding rules are allowed to pass their messages on. The strengths of the rules are modified according to an economic analogy: every time a rule bids, its strength is reduced by the value of the bid, while the strengths of its "suppliers" (the rules that sent the messages matched by this bidder) are increased. The bidder's strength will in turn increase if its "consumers" (the rules that receive its messages) become bidders themselves. This leads to a chain of suppliers and consumers whose success ultimately depends on the success of the rules that act directly on the environment.
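
The following toy Python sketch shows the economic bookkeeping of one such transaction. It is a bare-bones version: Holland's bids also factor in rule specificity, and the rule names, strengths and bid ratio here are invented:

```python
# Rule strengths (invented); each cycle a winning bidder pays its bid upstream.
strengths = {"r1": 12.0, "r2": 8.0, "r3": 10.0}
BID_RATIO = 0.1                     # fraction of strength bid per cycle

def settle(winner, suppliers, reward=0.0):
    """The winning bidder pays its bid to its suppliers (the rules whose
    messages it matched) and collects any external reward."""
    bid = BID_RATIO * strengths[winner]
    strengths[winner] += reward - bid
    for s in suppliers:
        strengths[s] += bid / len(suppliers)

settle("r2", suppliers=["r1"])              # r2 consumed r1's message
settle("r3", suppliers=["r2"], reward=5.0)  # r3 acted on the environment
print(strengths)   # payoff propagates back along the supplier chain
```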

The system then replaces the least useful (weak) rules with newly generated rules that are based on the system's accumulated experience, i.e., built by recombining "building blocks" taken from "strong" rules according to a genetic algorithm.
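
A sketch of this discovery step in Python, using standard one-point crossover and point mutation over the {1, 0, #} condition strings (the population, the parameters, and the choice of average strength for the offspring are all illustrative):

```python
import random
random.seed(0)                      # reproducible sketch

def crossover(a, b):
    """One-point crossover of two condition strings."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(rule, rate=0.05, alphabet="10#"):
    """Flip each position to a random symbol with small probability."""
    return "".join(random.choice(alphabet) if random.random() < rate else c
                   for c in rule)

# Rule conditions with their strengths (invented values).
population = {"1##0": 14.0, "10#1": 11.0, "0#00": 3.0, "#111": 2.0}
parents = sorted(population, key=population.get, reverse=True)[:2]
child = mutate(crossover(*parents))                   # recombine strong rules
del population[min(population, key=population.get)]   # drop the weakest rule
population[child] = sum(population.values()) / len(population)  # seed strength
print(parents, "->", child)
```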

Hinton focuses on the gradient-descent learning procedures of connectionist systems. Each connection computes the derivative, with respect to its strength, of a global measure of error in the performance of the network, and then adjusts its strength in the direction that decreases the error. Hinton is interested in learning procedures that lead to internal representations of the environment. His survey starts with associative memories without hidden units (linear and nonlinear associators) and supervised networks without hidden units (least-squares and perceptron-convergence algorithms) and demonstrates the deficiencies of such approaches. Backpropagation (a multi-layer least-squares algorithm) can instead lead to the discovery of semantic features, but it too exhibits limitations, notably computational intractability and biological implausibility.
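
A minimal numpy sketch of such a gradient-descent procedure: a one-hidden-layer sigmoid network trained by backpropagation on XOR, a task that no network without hidden units can solve. The layer sizes, learning rate and iteration count are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)   # XOR inputs
T = np.array([[0], [1], [1], [0]], float)               # XOR targets

W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)         # hidden layer
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)         # output layer
sigmoid = lambda z: 1 / (1 + np.exp(-z))

lr = 1.0
for _ in range(5000):
    h = sigmoid(X @ W1 + b1)                 # forward pass
    y = sigmoid(h @ W2 + b2)
    d_y = (y - T) * y * (1 - y)              # error derivative at the output
    d_h = (d_y @ W2.T) * h * (1 - h)         # error propagated backwards
    W2 -= lr * h.T @ d_y; b2 -= lr * d_y.sum(0)
    W1 -= lr * X.T @ d_h; b1 -= lr * d_h.sum(0)

print(np.round(y.ravel(), 2))   # should approach [0, 1, 1, 0]
```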

Hinton also surveys Boltzmann machines (where units update their state based on a stochastic decision rule), Hebbian learning (where weight modification depends on both presynaptic and postsynaptic activity), competitive learning (where units in a hidden layer compete to become active) and reinforcement learning (where credit is assigned to a local decision by measuring how it correlates with the global reinforcement signal).
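
Of these, competitive learning is the easiest to sketch. In this toy Python version (the data, unit count and learning rate are invented), each input activates only the hidden unit whose weight vector is closest, and that winner moves its weights toward the input, so the units come to stand for clusters in the input:

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(0, 1, (200, 2))            # synthetic 2-D inputs...
data[:100] += (3, 3)                         # ...forming two clusters

W = rng.normal(0, 1, (2, 2))                 # one weight vector per hidden unit
lr = 0.1
for _ in range(5):                           # a few passes over the data
    for x in data[rng.permutation(len(data))]:
        winner = np.argmin(((W - x) ** 2).sum(axis=1))   # the competition
        W[winner] += lr * (x - W[winner])    # only the winner learns
print(np.round(W, 1))   # weight vectors typically settle near the cluster means
```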

John Anderson's "A theory of the origins of human knowledge" generalizes the results of his systems, in particular ACT and his latest PUPS. They organize knowledge in three levels: the knowledge level (information acquired from the environment and innate principles of inference), the algorithm level (internal deductions, inductions and compilations) and the implementation level (setting strengths for the encoding of specific pieces of information).
