The British philosopher Andy Clark wants to bring the body back into the reasoning brain.
We could dispense with the body and still find ways for a brain to calculate
how to perform actions, but the very reason we have bodies is that
bodies make it far easier to perform those actions without calculating
every single movement. The fact that a body's movements are constrained by
the body's structure is actually an advantage: once the brain directs
a general action, there are only so many ways the body can carry it
out. There is no need to calculate ways that are beyond the
capabilities of the body.
Clark's book begins with an attack on the kind of Artificial Intelligence that wants to equip machines with logic and "problem solving" techniques (usually based on an abstract representation of the world). This is a way to build a brain without taking the body into account: a very intelligent brain (possibly more intelligent than its creators), but pathetically out of touch with the reality of its body and its possible interactions with the environment.
Clark, instead, envisions a road to artificial intelligence via "autonomous agents": controllers of bodily action along the lines of Rodney Brooks' "subsumption architectures". They have simpler "brains", but their behavior is largely driven by their interaction with the environment rather than by pure logic. Where logical systems take an input from perception and calculate an action, these agents use action as perception. Thus the distinction between perception and action fades away: they are two sides of the same coin. Cognition becomes simply interaction with the environment, not a system of logic. Learning occurs while we act. This, Clark reminds us, is far closer to the "quick and dirty" strategies employed by Nature.
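The flavor of a subsumption architecture can be conveyed with a minimal sketch: a stack of simple behaviors, each reacting directly to sensor readings, where a higher-priority layer suppresses the layers below it. This is only an illustration of the idea, not Brooks' actual implementation; the sensor names and actions are invented for the example.

```python
# Minimal sketch of a subsumption-style controller (illustrative only).
# Each layer is a simple sensor-to-action reflex; a higher-priority
# layer that produces an action "subsumes" (suppresses) the ones below.

def avoid(sensors):
    # Highest priority: steer away from an imminent obstacle.
    if sensors["obstacle_distance"] < 0.5:
        return "turn_left"
    return None  # no opinion: defer to lower layers

def seek_light(sensors):
    # Middle priority: head toward the brighter side.
    if sensors["light_left"] > sensors["light_right"]:
        return "turn_left"
    if sensors["light_right"] > sensors["light_left"]:
        return "turn_right"
    return None

def wander(sensors):
    # Lowest priority: default behavior, always produces an action.
    return "forward"

LAYERS = [avoid, seek_light, wander]  # highest priority first

def act(sensors):
    for layer in LAYERS:
        action = layer(sensors)
        if action is not None:  # this layer subsumes those below it
            return action

# With an obstacle nearby, the avoidance reflex overrides light-seeking.
print(act({"obstacle_distance": 0.3, "light_left": 1, "light_right": 2}))
```

Note that there is no world model and no planner anywhere: each layer maps perception directly to action, and the "intelligence" of the whole is just the priority ordering of reflexes.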
Clark then makes the connection between his robots and emergent systems by looking at the way complex behavior emerges in biological systems (e.g., how termites build their nests without any one termite directing the work).
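The termite example can be made concrete with a classic toy simulation (my illustration, not from the book): agents that follow two purely local rules, with no blueprint and no coordinator, and yet scattered wood chips tend to collect into piles over time.

```python
import random

# Toy emergence demo (illustrative, not from Clark's book): "termites"
# follow two local rules -- (1) pick up a chip you bump into while
# empty-handed, (2) drop your chip on a free cell next to other chips.
# No termite directs the work, yet chips tend to cluster into piles.
random.seed(0)
SIZE = 20
chips = set()
while len(chips) < 80:  # 80 chips at distinct random positions
    chips.add((random.randrange(SIZE), random.randrange(SIZE)))

def neighbors(x, y):
    return [((x + dx) % SIZE, (y + dy) % SIZE)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]

def step(pos, carrying):
    nxt = random.choice(neighbors(*pos))  # random walk on a torus
    if not carrying and nxt in chips:
        chips.discard(nxt)                # rule 1: pick up
        return nxt, True
    if carrying and nxt not in chips and any(n in chips for n in neighbors(*nxt)):
        chips.add(nxt)                    # rule 2: drop next to other chips
        return nxt, False
    return nxt, carrying

def mean_chip_neighbors():
    # Average number of adjacent chips per chip: a crude clustering measure.
    return sum(sum(n in chips for n in neighbors(*c)) for c in chips) / max(len(chips), 1)

termites = [((random.randrange(SIZE), random.randrange(SIZE)), False)
            for _ in range(10)]
print("clustering before:", mean_chip_neighbors())
for _ in range(5000):
    termites = [step(pos, carrying) for pos, carrying in termites]
print("clustering after:", mean_chip_neighbors())
```

Typically the clustering measure rises over the run: piles are an emergent, aggregate effect of the two local rules, which is precisely the point of the termite-nest example.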
I find that all these theories of "situated agents" that criticize the "problem-solving" approach (the approach in which a "brain" plans everything the body does) tend to miss an important point. They look to biological organisms for inspiration, but they forget that biological organisms evolve. Robots do not evolve. To me that is the very reason computer scientists came up with the need for a problem solver: an entity capable of solving every possible problem on purely logical grounds. Biological organisms embody their interaction with the environment: their bodies have been sculpted by evolution to optimize that interaction. It is hard to create a robot that displays the same degree of "fitness". What we don't have is evolving robots: robots that, given an environment, will build better and better fit robots. It is just very difficult to have a robot build another robot. At best, we have software inside a robot that evolves and fine-tunes itself. But that is not what happens in nature, where it is the hardware itself (not just the software) that evolves. The reason A.I. originally adopted the view of a "disembodied brain" is that a robot "is" disembodied: it is just a container for a mind. Our bodies are not mere containers of minds: our bodies have been shaped by evolution to be the natural object and subject of the mind.
When Clark and Brooks build their robots, they commit the same sin as the "problem-solving" crowd: they design the robot, instead of letting evolution design it. The problem-solving crowd builds a system that uses logic to decide behavior. Clark and Brooks use their own logic (the logic of their brains) to design a robot that (according to their logical thinking) will behave correctly in its environment. They have simply moved the problem-solving activity from the brain of the machine to the brains of the humans who design it. Nothing substantial has changed.
That is why Clark's chapter "Evolving Robots" does not discuss evolving robots: we do not know how to build (hardware) robots that build robots that build robots, and so on, each slightly different from its "parents".
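The "software inside a robot that evolves and fine-tunes itself" conceded above can be sketched as a simple genetic algorithm over controller parameters. Everything here is an assumption for illustration: the "ideal gains" stand in for performance in some environment, and only the software genome evolves, never the hardware, which is exactly the limitation being argued.

```python
import random

# Illustrative genetic algorithm: evolving a robot's *software*
# (controller gains), not its hardware. The fitness function is an
# assumed stand-in for "performance in the environment": here, just
# closeness to a hypothetical set of ideal gains.
random.seed(1)

TARGET = [0.7, -0.2, 0.5]  # hypothetical ideal controller gains

def fitness(genome):
    # Higher is better: negative squared distance from the ideal gains.
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    # Offspring are slight random variations of a parent.
    return [g + random.gauss(0, rate) for g in genome]

# Start from 30 random controllers; select and mutate for 100 generations.
population = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(30)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]  # keep the fittest controllers unchanged
    population = parents + [mutate(random.choice(parents)) for _ in range(20)]

best = max(population, key=fitness)
print("best gains:", [round(g, 2) for g in best])
```

The point of the sketch is what it leaves fixed: the genome only parameterizes a controller whose body, sensors, and overall design a human chose in advance, whereas in nature it is the hardware itself that is subject to variation and selection.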
But Clark's book is much more interesting than a trivial defense of autonomous agents against problem solvers. The second half of the book is a breathtaking tour of modern theories of mind, with detours into economics and linguistics. Clark offers a deeply intellectual analysis of thought (the epilogue alone is worth the price of the book). The drawback is that it is not easy to tell what the conclusions are. In the end it reads more like a study of methodology than a study of intelligent architectures.