(These are excerpts from my book "Intelligence is not Artificial")
How not to Build an Artificial General Intelligence - Part II: Smart, not Deep, Learning
Simply asserting that Artificial Intelligence and robotics research will keep producing better and smarter devices (that are fundamentally not "intelligent" the way humans are) tells me little about the chances of a breakthrough towards a different kind of machine, one that will match (general) human intelligence.
I don't know what such a breakthrough should look like, but i know what it doesn't look like. The machine that beat the world champion of go/weichi was programmed with knowledge of virtually every major go/weichi game ever played, and it was allowed to run millions of logical steps before making any move. That obviously put the human contender at a huge disadvantage. Even the greatest go/weichi champion with the best memory can only remember so many games. The human player relies on intuition and creativity, whereas the machine relies on massive doses of knowledge and processing. Shrink the knowledge base that the machine uses to the knowledge base that we have, limit the number of logical steps it can perform to the number that the human mind can perform before it is timed out, and then let's test how often it wins against ordinary players, let alone world champions.
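The thought experiment of capping the machine's "logical steps" can be sketched as a game-tree search with an explicit node budget. The game (a trivial take-1-to-3-stones contest) and the budget numbers are my own hypothetical illustration, not anything from the book; a real go program is vastly more complex, but the principle of charging the machine for every position it examines is the same.

```python
# Toy sketch: players alternately remove 1-3 stones from a pile; whoever
# takes the last stone wins. The search is charged one unit of "budget"
# per position visited, so the machine can be "timed out" like a human.

def search(pile, budget):
    """Negamax over the toy game. Returns (value, remaining_budget):
    value is +1 if the player to move can force a win within the budget,
    -1 if they are lost, and 0 when the budget ran out (a mere guess)."""
    budget -= 1                    # every position examined costs one step
    if pile == 0:
        return -1, budget          # the previous player took the last stone
    if budget <= 0:
        return 0, budget           # out of logical steps: no real analysis
    best = -1
    for take in (1, 2, 3):
        if take <= pile:
            value, budget = search(pile - take, budget)
            best = max(best, -value)
    return best, budget

# With a generous budget: 4 stones is a lost position for the player to
# move (any take leaves 1-3, and the opponent takes the rest); 5 is won.
print(search(4, 10_000)[0])  # -1
print(search(5, 10_000)[0])  # 1
```

Starve the budget (say, `search(5, 3)`) and the search returns only guesses: the "champion" collapses as soon as brute enumeration is taken away, which is exactly the handicap the passage proposes.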
Having a computer (or, better, a huge knowledge base) play chess against a human being is like having a gorilla fight a boxing match with me: i'm not sure what conclusion you could draw from the result of the boxing match about our respective degrees of intelligence.
I wrote that little progress has been made in Natural Language Processing. The key word is "natural". Machines can actually speak quite well in unnatural language, a language that is grammatically correct but from which all creativity has been removed: "subject verb object - subject verb object - subject verb object - etc." The catch is that humans don't do that. If i ask you ten times to describe a scene, you will use different words each time.
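The formulaic "subject verb object" generation described above can be made concrete with a toy sketch. Everything here (the scene dictionary, the templates) is my own hypothetical illustration, not any real language system:

```python
import random

def machine_describe(scene):
    # Rigid "subject verb object" template: grammatically correct, no variety.
    return f"{scene['subject']} {scene['verb']} {scene['object']}."

scene = {"subject": "the dog", "verb": "chases", "object": "the ball"}

# Ask the machine ten times: it produces the identical sentence every time.
machine_outputs = {machine_describe(scene) for _ in range(10)}
print(machine_outputs)  # a set containing exactly one sentence

# A human asked ten times varies the wording; faked here with canned
# templates, which is precisely what a human does NOT need.
human_phrasings = [
    "the dog chases the ball.",
    "there goes the dog, after the ball again.",
    "the ball rolls away, dog in hot pursuit.",
]
print(random.choice(human_phrasings))
```

The point of the sketch is the asymmetry: the machine's ten answers collapse into one sentence, while modeling human variety already requires smuggling the creativity in by hand.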
Language is an art. That is the problem. How many machines do we have that can create art? How far are we from having a computer that switches itself on in the middle of the night and writes a poem or draws a picture just because the inspiration came? Human minds are unpredictable. And not only adult human minds: pets often surprise us, and children surprise us all the time. When was the last time that a machine surprised you (other than surprising you because it was still so dumb)? Machines simply do their job, over and over again, with absolutely no imagination.
Here is what would constitute a real breakthrough: a machine that has only a limited knowledge of all the go/weichi games ever played, that is allowed to run only so many logical steps before making a move, and that can still play well. That machine would have to use intuition and creativity. That's a machine that would probably wake up in the middle of the night and write a poem. That's a machine that would probably learn a human language in a few months, just like even the most disadvantaged children do. That is a machine that would not translate "'Thou' is an ancient English word" into "'Tu' è un'antica parola Inglese", and that would not stop at a red traffic light if stopping creates a dangerous situation.
I suspect that this will require some major redesigning of the very architecture of today's computers. For example, a breakthrough could be a transition from digital architectures to analog architectures. Another breakthrough could be a transition from silicon (never used by Nature to construct intelligent beings) to carbon (the stuff of which all natural brains are made). And another one, of course, could be the creation of an artificial being that is self-conscious.
Today it is commonplace to argue that in the 1970s A.I. scientists gave up too quickly on neural networks and connectionism. My gut feeling is that in the 2000s we gave up a bit too quickly on the symbolic-processing (knowledge-based) program. Basically, we did to the logical approach what we had done before to the connectionist approach: in the 1970s neural networks fell into oblivion because knowledge-based systems were delivering practical results, only to find out later that knowledge-based systems were very limited and that neural networks were capable of doing more.
My guess is that there was nothing wrong with the knowledge-based approach. Unfortunately, we never figured out an adequate way to represent human knowledge. Representation is one of the oldest problems in philosophy, and I don't think we got any closer to solving it now that we have powerful computers. The speed of the computer does little to fix a wrong theory of representation.
So we decided that the knowledge-based approach was wrong and opted for neural networks (deep learning and the like). And neural networks have proven very good at simulating specialized tasks: each neural network does one thing well, but it cannot do what every human, even the dumbest one, and even animals, can do: use the exact same brain to carry out thousands (potentially an infinite number) of different tasks.
"If a machine is expected to be infallible, it cannot also be intelligent" (Alan Turing, 1947)