Intelligence is not Artificial

Why the Singularity is not Coming any Time Soon And Other Meditations on the Post-Human Condition and the Future of Intelligence

by piero scaruffi
(Copyright © 2018 Piero Scaruffi)


(These are excerpts from my book "Intelligence is not Artificial")

Brute-force A.I.

Despite all the hoopla, to me machines are still way less "intelligent" than most animals. Recent experiments with neural networks were hailed as sensational triumphs because a computer finally managed to recognize cats in videos (at least a few times). How long does it take for a mouse to learn what a cat looks like? And that's despite the fact that computers use the fastest possible communication technology, whereas the neurons of a mouse's brain use hopelessly old-fashioned chemical signaling.

One of the very first applications of neural networks was to recognize numbers. Sixty years later the ATM (automatic teller machine) of my bank still cannot recognize the amounts on many of the cheques that I deposit, but any human being can. Ray Kurzweil is often (incorrectly) credited with inventing "optical character recognition" (OCR), a technology that dates back to the 1950s: the first commercial OCR system was introduced by David Shepard's Intelligent Machines Research Corporation and became the basis for the Farrington Automatic Address Reading Machine delivered to the Post Office in 1953, and the term "OCR" itself was coined by IBM for its IBM 1418 product. Buy the most expensive OCR software and feed it the easiest possible case: a cleanly printed page from a book or magazine. It will probably make some mistakes that humans don't make, but, more interestingly, now slightly bend a corner of the page and try again: any human can still read the text, but the most sophisticated OCR software on the market will go berserk.
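
(Readers can reproduce the "bent page" experiment at home with free tools. The sketch below is only an illustration: it assumes the open-source Tesseract engine via the pytesseract wrapper rather than any expensive commercial package, and "page.png" is a hypothetical scan of a cleanly printed page; a small rotation stands in for the bent corner.)

```python
# A minimal sketch of the "bent page" experiment, using the open-source
# Tesseract OCR engine (via the pytesseract wrapper) and Pillow.
# "page.png" is a hypothetical scan of a cleanly printed page.
from PIL import Image
import pytesseract

clean = Image.open("page.png")
# Simulate the "bent corner": a slight rotation, gaps filled with white.
warped = clean.rotate(4, expand=True, fillcolor="white")

text_clean = pytesseract.image_to_string(clean)
text_warped = pytesseract.image_to_string(warped)

print("clean scan :", text_clean[:80])
print("warped scan:", text_warped[:80])
# A human reads both versions effortlessly; the OCR output for the warped
# version typically degrades far more than a 4-degree tilt would suggest.
```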

For similar reasons we still don't have machines that can read cursive handwriting, despite the fact that devices with handwriting recognition features already appeared in the 1990s (GO's PenPoint, Apple's Newton). Most people don't even know that their tablet or smartphone has such a feature: it is so inaccurate that very few people ever use it. And, yet, humans (even not very intelligent ones) can usually read other people's handwriting with little or no effort.

What has significantly improved is vision recognition and speech recognition. Fei-Fei Li's 2014 algorithm generates natural-language descriptions of images such as "A group of men playing frisbee in a park". This result builds on ImageNet, the large dataset of labeled images that she started in 2009, and on large collections of images paired with sentence descriptions. In the 1980s it would have been computationally impossible to train a neural network with such a large dataset. The result may initially sound astounding (the algorithm even recognized the frisbee) but, even with the "brute force" of today's computers, it is still a far cry from human performance: we easily recognize that those are young men, and many other details. And Peter Norvig of Google showed at Stanford's L.A.S.T. festival of 2015 a funny collection of images that were wrongly tagged by the machine because the machine has no common sense.
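
(To see what this kind of "recognition" amounts to in practice, here is a minimal sketch, assuming a recent version of torchvision and a hypothetical photo file, that runs an off-the-shelf network pre-trained on ImageNet and prints its five most confident labels. The machine is simply ranking a fixed list of categories; it has no notion of "young men" or "park", which is exactly why images outside its experience get tagged with nonsense.)

```python
# Minimal sketch: tag an image with a network pre-trained on ImageNet.
# "park_scene.jpg" is a hypothetical photo; any image file will do.
import torch
from torchvision import models
from PIL import Image

weights = models.ResNet50_Weights.IMAGENET1K_V2
model = models.resnet50(weights=weights)
model.eval()

preprocess = weights.transforms()   # standard ImageNet preprocessing
image = preprocess(Image.open("park_scene.jpg").convert("RGB")).unsqueeze(0)

with torch.no_grad():
    probs = model(image).softmax(dim=1)[0]

# The network only chooses among ~1,000 fixed categories.
values, indices = probs.topk(5)
for p, idx in zip(values.tolist(), indices.tolist()):
    print(f"{weights.meta['categories'][idx]}: {p:.1%}")
```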

We are flooded with news of robots performing all sorts of human tasks, except that most of those tasks are useless. On the other hand, commenting on the ongoing unmanned Mars mission, in April 2013 NASA planetary scientist Chris McKay told me that "what Curiosity has done in 200 days a human field researcher could do in an easy afternoon." And that is the most advanced robotic explorer ever built.

What today's "deep learning" A.I. does is very simple: lots of number crunching. It is a smart way to manipulate large datasets for the purpose of classification. It was not enabled by a groundbreaking paradigm shift but simply by increased computing power.
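
(Stripped of the jargon, the "learning" in question is a loop of multiplications and additions. The toy sketch below, with synthetic data and a single-layer logistic classifier in plain NumPy, is not how any production system is written, but it shows the entire mechanism: multiply matrices, measure the error, nudge the numbers, repeat.)

```python
# Toy illustration of what "learning" means here: pure number crunching.
# A single-layer logistic classifier on synthetic 2-D data, plain NumPy.
import numpy as np

rng = np.random.default_rng(0)
# Two clusters of points: class 0 around (-1,-1), class 1 around (+1,+1).
X = np.vstack([rng.normal(-1, 0.7, (200, 2)), rng.normal(1, 0.7, (200, 2))])
y = np.array([0] * 200 + [1] * 200)

w = np.zeros(2)   # weights
b = 0.0           # bias

for step in range(1000):
    # Forward pass: a dot product squashed into a probability.
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    # Gradient of the cross-entropy loss, i.e. "measure the error".
    grad_w = X.T @ (p - y) / len(y)
    grad_b = np.mean(p - y)
    # Nudge the numbers and repeat. That is all "training" is.
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b

accuracy = np.mean((p > 0.5) == y)
print(f"training accuracy: {accuracy:.2%}")   # typically around 97% on this toy data
```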

The "Google Brain" project started at Google in 2011 by Andrew Ng (real name Wú Ēndá) is the quintessential example of this approach. In June 2012 a combined Google/Stanford research team used an array of 16,000 processors to create a neural network with more than one billion connections and let it loose on the Internet to analyze millions of still frames of videos (it recognized that many of them had a similar feature, the shape of a cat). Given the cost, size and speed of computers back then, 30 years ago nobody would have tried to build such a system. The difference between then and now is that today A.I. scientists can use thousands of powerful computers to get what they want. It is, ultimately, brute force with little or no sophistication. Whether this is how the human mind does it is debatable. And, again, we should be impressed that 16,000 of the fastest computers in the world took a few months to recognize a cat, something that a kitten with a still underdeveloped brain can do in a split second. I would be happy if the 16,000 computers could just simulate the 302-neuron brain of the roundworm, no more than 5000 synapses that nonetheless can recognize with incredible accuracy a lot of very interesting things.

The real innovation in Ng's approach was the idea of using GPUs. That simple idea made it possible to train multi-layer neural networks. In fact, one could argue that the real turning point in the history of Artificial Intelligence came when Andrew Ng's group at Stanford ("Large-scale Deep Unsupervised Learning using Graphics Processors", 2009) and Juergen Schmidhuber's group at IDSIA ("Deep Big Simple Neural Nets Excel on Handwritten Digit Recognition", 2010) showed that fast processors (GPUs) and large datasets were more important than all the philosophical tweaking of architectures: "brute force" was more important than elegant math. They tackled well-known problems and showed that, without any theoretical improvement, these problems could be solved simply by throwing enough computational power and training data at the neural network.
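
(The sketch below is a crude illustration of the point, not a reproduction of either paper: it times the same batched matrix multiplications that dominate neural-network training on the CPU and, if one is available, on a GPU. No new theory is involved; the GPU simply grinds through the same arithmetic orders of magnitude faster.)

```python
# Crude illustration of the GPU argument: the same workload, timed on the
# CPU and (if available) on a GPU. Matrix multiplication is the operation
# that dominates neural-network training.
import time
import torch

def time_matmuls(device: str, size: int = 2048, reps: int = 20) -> float:
    a = torch.randn(size, size, device=device)
    b = torch.randn(size, size, device=device)
    if device == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(reps):
        _ = a @ b
    if device == "cuda":
        torch.cuda.synchronize()   # wait for the GPU to finish its queue
    return time.perf_counter() - start

print(f"CPU: {time_matmuls('cpu'):.2f} s")
if torch.cuda.is_available():
    print(f"GPU: {time_matmuls('cuda'):.2f} s")
else:
    print("No GPU available on this machine.")
```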

The human brain consumes about 20 watts. I estimate that AlphaGo's 1,920 processors and 280 GPUs consumed about 440,000 watts (and that's not including the energy spent during the training process). What else can AlphaGo do besides playing Go? Absolutely nothing. What else can you do besides playing games? An infinite number of things, from cooking a meal to washing the car. AlphaGo consumed 440,000 watts to do just one thing. Your brain uses 20 watts and does an infinite number of things. What would you call someone who has to use some 20,000 times more resources than you to do just one thing? What AlphaGo did is usually called "stupidity", not "intelligence". Let both the human and AlphaGo run on 20 watts and see who wins. If it takes 440,000 watts to play Go, how many watts will it take to do everything else that the go/weiqi master can do with his brain? Like driving a car, cooking a meal, jogging in the park, reading the news, chatting about literature with a friend, etc.? A ridiculous number of machines would be needed to match the human capability, an amount of power perhaps exceeding the 15 terawatts that all nations combined consume. Perhaps it would take more machines than we can possibly build with all the materials available on the planet.
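
(The back-of-the-envelope arithmetic behind those figures, using the rough power estimates quoted above rather than measured values, is trivial:)

```python
# Back-of-the-envelope arithmetic for the power comparison above.
# These are the rough estimates quoted in the text, not measured values.
human_brain_watts = 20        # approximate power draw of a human brain
alphago_watts = 440_000       # rough estimate for 1,920 CPUs + 280 GPUs

ratio = alphago_watts / human_brain_watts
print(f"AlphaGo used roughly {ratio:,.0f}x the power of a brain")  # ~22,000x
# And the brain plays go *and* does everything else on the same 20 watts.
```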

DeepMind's network that learned to play Atari videogames like a master was widely publicized. Less publicized was a study by Joshua Tenenbaum's student Pedro Tsividis at MIT, in collaboration with Harvard psychologists, which showed that humans can learn the same Atari videogames to the level of DeepMind's program in a few minutes, whereas DeepMind's program needs hundreds of hours of game-playing ("Human Learning in Atari", 2017).

Brute force is the paradigm that now dominates A.I. After all, by indexing millions of webpages, a search engine is capable of providing an answer to the vast majority of questions (even "how to" questions), something that no expert system came close to achieving.
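
(A search engine's "answer" is, at bottom, a lookup in an inverted index built by scanning every page in advance; the brute force is in the indexing, not in any understanding of the question. Here is a minimal sketch of the idea; the tiny document collection is made up for illustration.)

```python
# Minimal sketch of the brute-force idea behind a search engine:
# an inverted index mapping every word to the pages that contain it.
from collections import defaultdict

# A made-up, tiny "web" for illustration.
pages = {
    "page1": "how to boil an egg in five minutes",
    "page2": "the history of the game of go",
    "page3": "how to change a flat tire on a car",
}

index = defaultdict(set)
for url, text in pages.items():
    for word in text.lower().split():
        index[word].add(url)

def search(query: str) -> set:
    """Return the pages containing every word of the query."""
    words = query.lower().split()
    results = [index.get(w, set()) for w in words]
    return set.intersection(*results) if results else set()

print(search("how to boil an egg"))   # {'page1'}
# No understanding of the question is involved: just massive lookup.
```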

One wonders if slow and cumbersome computers were a blessing for the scientific community of the 1960s, because those archaic machines forced computer scientists to come up with creative models instead of simply letting high-speed computers crunch numbers until a solution emerges.

John McCarthy was right to complain that, once A.I. solves a problem, the world no longer considers it "artificial intelligence". But he didn't realize why: because, so far, whenever A.I. solved a problem (e.g., playing chess better than a master), the world realized that the solution wasn't special at all: it was just a matter of implementing very demanding mathematics on very fast computers. If 1+1=2 is not A.I., then playing chess is not A.I. A.I. has become synonymous with "running computationally intensive programs on supercomputers". We are impressed by the results, but we correctly don't consider them A.I. for the simple reason that human intelligence is something else. The name of the discipline is misleading. Not our fault.


