Nick Bostrom:

"Superintelligence" (2014)

(Copyright © 2014 Piero Scaruffi)

General rant against the Singularity

The "Singularity" seems to have become a new lucrative field for the struggling publishing industry (and, i am sure, soon, for the equally struggling Hollywood movie studios). To write a bestseller, you have to begin by warning that machines more intelligent than humans are coming soon. That is enough to get everybody's attention. Then you can write pretty much about Malaysian recipes or where to buy cheap shoes in Hong Kong. People will read the rest no matter what, major news media will talk about it, and reviewers like me will be forced to review your book and further legitimize what you wrote (at least in the case of this book it was a pleasant chore, something i cannot say for Ray Kurzweil's embarrassing "How to Create a Mind").

I wrote everything i had to say on the subject in "Intelligence is not Artificial": this is mostly a new religion, about as good at predicting the future as the Bible was (a religion probably best encapsulated by Ted Chu's "Human Purpose and Transhuman Potential"); it is highly unscientific (based on very vague definitions and dubious experiments), and its claims about what it has achieved so far (and about its rate of progress) are wildly exaggerated. I know: that's not the way to sell a lot of copies and be reviewed by panicking journalists.

As i have written ad nauseam, super-intelligence already exists and has always existed: there are countless animals that can do things that we cannot do, and we coexist with them. We have already built machines that can do things that we cannot even dream of doing, such as keeping precise time, something that clocks have been doing for centuries. And of course computers can make calculations millions of times faster than the fastest mathematician (the traditional definition of "intelligence" being the ability to make computations). We coexist with these machines that can do things that no human being can do. Of course there are problems related to every machine that we ever invented, from the clock to social media; and there will be problems related to future machines.

Nothing that historians and philosophers haven't discussed before. But an age that is rapidly losing faith in the traditional God desperately needs to find and found a new religion, and the Singularity is the best option that some people have in the 21st century. The human mind is programmed to believe in the supernatural. That is one of the limitations of the human mind, and all this talk about the Singularity is nothing but a modern proof of that limitation.

To be fair to Nick Bostrom, who obviously has a broad knowledge of both technology and philosophy, he has written a more technical book than most (much more technical than his publisher wants the reader to believe).

Hopefully, our governments will keep spending more money on fighting real threats such as Ebola, and our philosophers will keep spending more time analyzing human stupidity than machine intelligence. Those are much bigger threats to future generations than "superintelligent" machines.

My version of the facts is here.


General meditations from reading Bostrom's book

Nick Bostrom became famous for "Are you Living in a Computer Simulation?" (2003), in which he argued that we may well be living in a simulation: if the human race survives long enough to achieve a "posthuman" stage of virtually infinite computational power, it will be able to build simulations of its ancestors, i.e. of us; the question is only whether it will want to do so or not, and, if it does, simulated minds will vastly outnumber real ones.
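
For readers who prefer the logic to my paraphrase, the core of that paper condenses (in Bostrom's own notation; this is my summary, not a substitute for the paper) into a single expression for the fraction of human-like observers who are simulated:

    \[
      f_{\mathrm{sim}} \;=\; \frac{f_P\,\bar{N}}{f_P\,\bar{N} + 1}
    \]

where $f_P$ is the fraction of human-level civilizations that survive to reach a posthuman stage and $\bar{N}$ is the average number of ancestor-simulations that such a civilization runs (the average population of a pre-posthuman civilization appears in both numerator and denominator and cancels out). Bostrom's trilemma follows: either $f_P \approx 0$ (we go extinct first), or $\bar{N} \approx 0$ (posthumans don't bother), or $f_{\mathrm{sim}} \approx 1$ (almost all minds like ours are simulated).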

Bostrom writes that the reason A.I. scientists have failed so badly at predicting the future of their own field is that the technical difficulties have been greater than they expected. I don't think so. I think those scientists had a good understanding of what they were trying to build. The reason why "the expected arrival date [of Artificial Intelligence] has been receding at a rate of one year per year" (Nick Bostrom's estimate) is that we keep changing the definition. There never was a proper definition of what we mean by "Artificial Intelligence", and there still isn't. Bostrom notes that the original A.I. scientists were not concerned with safety or ethical issues: of course, the machines that they had in mind were chess players and theorem provers. That is what Artificial Intelligence originally meant. Being poor philosophers and poor historians, they did not realize that they belonged to the centuries-old history of automation, leading to greater and greater automata. And they could not foresee that within a few decades all these automata would become millions of times faster, billions of times cheaper, and massively interconnected. The real progress has not been in A.I. but in miniaturization: miniaturization has made it possible to use thousands of tiny cheap processors and to connect them massively. The resulting "intelligence" is still rather poor, but its consequences are much more intimidating.

The statistical method that became popular in Artificial Intelligence during the 2000s is simply an admission that previous methods were not wrong, just too difficult to apply to problems in general. This new method, like its predecessors, can potentially be applied to every kind of problem... until scientists admit that it cannot. The knowledge-based method proved inadequate for recognizing things and was eventually abandoned (nothing wrong with it at the theoretical level). The traditional neural networks proved inadequate for just about everything because of their high computational costs. In both cases dozens of scientists had to tweak the method to make it work in a narrow and very specific problem domain. When generalized, the statistical methods in vogue in the 2010s turn out to be old-fashioned mathematics such as statistical classification and optimization algorithms. These might indeed be more universal than previous methods but, alas, hopelessly expensive in computational resources. It would be ironic if Thomas Bayes' theorem, published posthumously in 1763, turned out to be the most important breakthrough in Artificial Intelligence. Unfortunately, it is easy to find real-world problems in which the repeated application of that theorem leads to a combinatorial explosion of the space of potential solutions that is computationally intractable. We are now waiting for the equivalent of John Hopfield's 1982 model (and the "annealing" techniques it inspired) that made neural networks easier to implement. That will keep this Bayesian kind of Artificial Intelligence going for a little longer, but i am skeptical that it will lead to a general A.I. The most successful algorithms used in the 2010s to perform machine translation require virtually no linguistic knowledge: the very programmers who create and improve the system may have no knowledge of the two languages being translated; it is only a statistical game. Translations between pairs of languages for which millions of translated texts exist are beginning to be decent enough, while translations between rarely translated pairs (such as Italian and Chinese) are still pitiful. I doubt that this is how human interpreters translate one language into another, and i doubt that this approach will ever be able to match human-made translations, let alone surpass them (i assume that the Singularity is also supposed to be better at translating from any language to any language).
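
To make the "statistical game" concrete, here is a minimal sketch of the Bayesian classification that underlies these methods (the code and the toy "spam" data are mine, invented for illustration; they are not from Bostrom's book or from any particular system). The "naive" independence assumption in the last step is exactly the kind of simplification needed to avoid the combinatorial explosion mentioned above:

    import math

    # Toy corpus of (words, label) pairs; the data is invented for illustration.
    TRAINING = [
        (["cheap", "pills", "buy"], "spam"),
        (["meeting", "tomorrow", "agenda"], "ham"),
        (["buy", "cheap", "watches"], "spam"),
        (["lunch", "tomorrow"], "ham"),
    ]

    def train(corpus):
        """The 'learning' is mere counting: word frequencies per class."""
        word_counts = {"spam": {}, "ham": {}}
        token_totals = {"spam": 0, "ham": 0}
        doc_counts = {"spam": 0, "ham": 0}
        for words, label in corpus:
            doc_counts[label] += 1
            for w in words:
                word_counts[label][w] = word_counts[label].get(w, 0) + 1
                token_totals[label] += 1
        return word_counts, token_totals, doc_counts

    def classify(words, word_counts, token_totals, doc_counts):
        """Bayes' theorem with the 'naive' independence assumption:
        P(class|words) is proportional to P(class) * product of P(word|class).
        Without that assumption we would need the joint distribution over
        every combination of words: a combinatorial explosion."""
        vocab = len({w for c in word_counts.values() for w in c})
        n_docs = sum(doc_counts.values())
        best, best_score = None, -math.inf
        for label in word_counts:
            score = math.log(doc_counts[label] / n_docs)  # log prior
            for w in words:  # Laplace-smoothed log likelihoods
                score += math.log((word_counts[label].get(w, 0) + 1)
                                  / (token_totals[label] + vocab))
            if score > best_score:
                best, best_score = label, score
        return best

    counts, totals, docs = train(TRAINING)
    print(classify(["buy", "cheap"], counts, totals, docs))  # -> spam

The same counting-without-understanding logic, scaled up to millions of aligned sentence pairs, is roughly what statistical machine translation does: the program never "knows" either language.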

Bostrom quotes Donald Knuth's famous observation that A.I. seems better at emulating "thinking" than at emulating the things we do without thinking. As Bostrom points out in Note 60 of Chapter 1, even that is optimistic, but there is a larger truth in that statement: it is relatively easy to write an algorithm when we can tell how we do things. The real hard problem is that we don't know how we do the vast majority of the things that we do; otherwise philosophers and psychologists would not have a job. A conversation is the typical example. We do it effortlessly. We shape strategies, we construct sentences, we understand the other party's strategy and sentences, we get passionate, we get angry, we try different strategies, we throw in jokes and we quote others. Anybody can do this without any training or education. Check what kind of conversation can be carried out by the most powerful computer ever built. It turns out that most of the things that we do by "thinking" (such as proving theorems and playing chess) can be emulated with a simple algorithm (especially if the environment around us has been shaped by society to be highly structured and to allow only for a very small set of moves). The things that we do without thinking are still a mystery. We can't even explain how children learn in the first place. Artificial Intelligence scientists have a poor philosophical understanding of what humans do and a poor neurophysiological understanding of how humans do it.
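
To show how little machinery the "thinking" games actually require, here is a complete game-playing "mind" in a dozen lines; i use a trivial pile-of-stones game instead of chess (my own illustrative example, not anything from the book), but the minimax idea is the same one behind classic chess programs:

    def minimax(stones, maximizing):
        """Exhaustive game-tree search for a trivial game: a pile of
        stones, each player removes 1 or 2, whoever takes the last
        stone wins. The entire 'thought process' is trying every
        legal move and assuming the opponent replies optimally."""
        if stones == 0:
            # The player to move has no stones left: the previous mover won.
            return -1 if maximizing else 1
        scores = (minimax(stones - take, not maximizing)
                  for take in (1, 2) if take <= stones)
        return max(scores) if maximizing else min(scores)

    print(minimax(3, True))  # -> -1: multiples of 3 are losing positions
    print(minimax(4, True))  # -> 1: the first player can force a win

A real chess program adds pruning, heuristics and enormous speed, but no different kind of "thought"; nothing in this loop resembles what we do "without thinking".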


Detailed review of Bostrom's book

In a nutshell, Bostrom first tries to estimate how far we are from achieving super-intelligence. Even if he sounds cautious, i still think he is wildly optimistic. When he talks about whole brain emulation, he seems to reduce the problem to figuring out the connectionist structure of the human brain, ignoring that the brain uses more than 50 kinds of neurotransmitters to operate that network. I don't think that is a negligible detail. He seems confident in "biotechnical enhancements", but ours is a species that can't even figure out how to defend its brain from a tiny virus like Ebola. We are more likely to produce a race of idiots than a race of geniuses if we use today's science for "biological cognitive enhancements", as he calls them.

Like everybody else, he can't quite define what a super-intelligence is (how will i know it when i see one? again, a clock is already a machine that does something that no human being can do), but he does some sophisticated analysis of how it might "take off". Except that the conclusion is a kind of "I don't know". He speculates that most likely there will be only one super-intelligence (and that, if there are multiple ones, it won't be good news), but his speculations are based on the assumption that his knowledge and his logic are good enough to understand how a super-intelligence will behave, which sounds like a contradiction in terms. He even employs historical precedents (from human civilization) and Malthusian theories to analyze the super-intelligence: isn't the super-intelligence supposed to be a different kind of intelligence that we cannot understand? The way Bostrom treats it, super-intelligence sounds like nothing more than a faster car or a stronger weapon, something that we can know how to handle if we think hard enough.

He even closes the book with interesting discussions about morality: how do we create a moral machine? Philosophers will love these chapters. His methods for addressing the problem (capability control and motivation selection) have many predecessors in ethical philosophy (i.e. philosophers offering advice on how society can create better citizens) and, of course, at the end of the day the discussion shifts to "who is the supreme moral authority to decide which moral values are to be taught?" Moral values have changed over the centuries. It used to be perfectly normal to marry and have sex with a woman under 18: now you go to jail and are listed as a sexual offender for the rest of your life. Most societies punished homosexuality and most religions consider it a horrible sin, but an increasing number of states are recognizing same-sex marriage. Given the horrible track record of the human species, why would it be bad if the super-intelligence simply wiped out the human race from the face of the Earth? Philosophers have a job precisely because they spend their careers discussing topics like these.

Regardless of the answers to these centuries-old questions, the fundamental contradiction is that Bostrom treats the super-intelligence as something whose behavior is human-like. Hence we are just talking about yet another human-made technology and the effects it may have on human society. Every technology ever invented by humans has had unwanted consequences, and it is certainly a good idea to prevent them instead of having to fix them later. But what exactly will be different with this "super-intelligence" is never explained. Exactly like nobody really knows what will happen when the Messiah, Jesus or the Mahdi comes.
For a much better Bostrom, read his essay on alien life (Technology Review, 2017).
