(These are excerpts from my book "Intelligence is not Artificial")
Before analyzing what it will take (and how long it will take) to get machine intelligence, we need to define what we are talking about.
A man, wearing a suit and tie, walks out of a revolving hotel door dragging his rolling suitcase. Later, another man, wearing a shabby uniform and gloves, walks out of the side door dragging a garbage can. It is obvious even to the dumbest human being that one is a guest of the hotel and the other one is a janitor. Do we require this simple kind of understanding of ordinary situations from a machine in order for it to qualify as "intelligent"? Or is it irrelevant, just like matching the nightingale's song is irrelevant to solving differential equations? If we require that kind of understanding, we push machine intelligence dramatically forward into the future: just figuring out that one man is wearing a suit and tie and the other a uniform is not trivial at all for a machine. It takes an enormous computational effort to achieve just this one task. There are millions of situations like this one that we recognize in a split second.
Let us continue our thought experiment. Now we are in an underdeveloped country and the janitor is dragging an old broken suitcase full of garbage. He has turned an old suitcase into his garbage can. Seeing such a scene, we would probably just smile at the man's ingenuity; but imagine how hard it is for a machine to realize what is going on. Even if the machine is capable of telling that someone dragging a suitcase is a hotel guest, the machine now has to understand that a broken suitcase carried by a person in a janitor's uniform no longer functions as a suitcase.
There are millions of variants on each of those millions of situations that we effortlessly understand, but that are increasingly tricky for a machine.
The way that today's A.I. scientists would go about it is to create one specific software program for each of the millions of situations, and then one for each of the millions of variants. Given enough engineers, time and processors, this is feasible. Whenever a critic like me asks "but can your machine do this too?", today's A.I. scientists rush out to create a new program that can do it. "But can your machine also do this other thing?" The A.I. scientists rush out to create another program. And so forth.
Given enough engineers, time and processors, it is indeed possible to create a million machines that can do everything we naturally do.
After all, the Web plus a search engine can answer any question: someone, sooner or later, will post the answer on the Web, and the search engine will find it. Billions of Web users are providing all the answers to all the possible questions. The search engine is not particularly intelligent in any field but can find the answer to questions in all fields.
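The point about the search engine can be made concrete with a toy sketch: the "answers" live in a corpus written by humans, and the engine itself does nothing smarter than matching words. The corpus and questions below are invented for illustration, and real search engines are of course far more sophisticated; the sketch only shows that retrieval alone, with no understanding, can surface a correct answer.

```python
import string

def words(text):
    """Lowercase the text, strip punctuation, and return its set of words."""
    cleaned = text.lower().translate(str.maketrans("", "", string.punctuation))
    return set(cleaned.split())

def score(question, document):
    """Count how many of the question's words appear in the document."""
    return len(words(question) & words(document))

def search(question, corpus):
    """Return the document with the largest word overlap with the question."""
    return max(corpus, key=lambda doc: score(question, doc))

# A tiny stand-in for the Web: answers posted by humans.
corpus = [
    "The capital of France is Paris.",
    "Water boils at 100 degrees Celsius at sea level.",
    "The Moon orbits the Earth roughly every 27 days.",
]

print(search("What is the capital of France?", corpus))
# Prints: The capital of France is Paris.
```

The program "answers" the question correctly while understanding none of it: all the intelligence is in the corpus, contributed by the humans who wrote it.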
I doubt that this is the way in which my mind works (or any animal's mind works), but, yes, those millions of software programs will be "functionally" equivalent to my mind. In fact, they will be better than my mind because they will be able to recognize all the situations that all the people in the world recognize, not just the ones that i recognize, just like the Web will eventually contain the answers to all questions that all humans can answer, not only the answers that i know.
This is exactly what "brute-force A.I." is doing today: creating a specific software program for each intelligent task that humans perform. The method is different, but the rationale is reminiscent of Marvin Minsky's "The Society of Mind" (1985), which viewed an artificial general intelligence as a society of specialized agents.
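The one-program-per-situation approach can be sketched in a few lines. Each situation gets its own hand-written recognizer, and a dispatcher tries them all; the situations, the feature sets, and the rules below are invented for illustration. Note how the janitor with the broken suitcase, the variant from the thought experiment above, slips past both rules.

```python
# One specialized program per situation.
def hotel_guest(scene):
    """Hand-written rule: a suit plus a suitcase means a hotel guest."""
    return "suit" in scene and "suitcase" in scene

def janitor(scene):
    """Hand-written rule: a uniform plus a garbage can means a janitor."""
    return "uniform" in scene and "garbage can" in scene

# The "society" of specialized programs: whenever a critic points out a
# situation the machine cannot handle, add one more entry here.
RECOGNIZERS = {
    "hotel guest": hotel_guest,
    "janitor": janitor,
}

def recognize(scene):
    """Return the label of every specialized program that matches the scene."""
    return [label for label, fn in RECOGNIZERS.items() if fn(scene)]

print(recognize({"suit", "tie", "suitcase"}))          # ['hotel guest']
print(recognize({"uniform", "gloves", "garbage can"})) # ['janitor']
# The janitor dragging a broken suitcase fools both rules:
print(recognize({"uniform", "suitcase"}))              # []
```

Each new variant demands a new rule or a new program, which is precisely why the catalog must grow into the millions.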
Luckily, the effect on the economy will be to create millions of jobs because those millions of machines will need to be designed, tested, stored, marketed, sold, and, last but not least, repaired.