Intelligence is not Artificial

Why the Singularity is not Coming any Time Soon And Other Meditations on the Post-Human Condition and the Future of Intelligence

by piero scaruffi
Cognitive Science and Artificial Intelligence | Bibliography and book reviews | My book on consciousness | Contact/feedback/email
(Copyright © 2018 Piero Scaruffi | Terms of use )

(These are excerpts from my book "Intelligence is not Artificial")

The Singularity as the Outcome of Exponential Progress

The Singularity crowd is driven to enthusiastic prognostications about the evolution of machines: machines will soon become intelligent, and then will rapidly become superhumanly intelligent, acquiring a higher form of intelligence than ours.

There is an obvious disconnect between the state of the art and what the Singularity crowd predicts. We are not even remotely close to a machine that can troubleshoot and fix an electrical outage or simply your washing machine, let alone a software bug. We are not even remotely close to a machine that can operate any of today's complex systems without human supervision. One of the premises of the theory of the Singularity is that machines will not only become intelligent but will even build other, smarter machines by themselves; but right now we don't even have software that can write other software.

The jobs that have been automated are repetitive and trivial. And in most cases the automation of those jobs has required the user/customer to accept a lower (not higher) quality of service. Witness how customer support is rapidly being reduced to a "good luck with your product" kind of service. The more automation around you, the more you (yes, you) are forced to behave like a machine in order to interact with machines, precisely because they are still so dumb.

The reason that we have a lot of automation is that (in developed countries like the USA, Japan and the European countries) it saves money: machine labor is a lot cheaper than human labor. Wherever the opposite is true, there are no machines. The reason we are moving to online education is not that university professors failed to educate their students but that universities are too expensive. And so forth: in most cases it is the business plan, not the intelligence of machines, that drives automation.

Wildly optimistic predictions are based on the exponential progress in the speed and miniaturization of computers. In 1965 Gordon Moore predicted that the number of transistors on a chip would keep doubling at regular intervals ("Moore's law", popularly restated as computer processing power doubling every 18 months), and so far his prediction has held. Look closer and there is little in what they say that has to do with software. It is mostly a hardware argument. And that is not surprising: predictions about the future of computers have been astronomically wrong in both directions but, in general, the ones that were too conservative were about hardware (its progress has surprised us), the ones that were too optimistic were about software (its progress has disappointed us). What is amazing about today's smartphones is not that they can do what computers of the 1960s could not do, but that they are small, cheap and fast. The fact that there are many more software applications downloadable for a few cents means that many more people can use them, a fact that has huge sociological consequences; but it does not mean that a conceptual breakthrough has been reached in software technology. It is hard to name one software program that exists today that could not have been written in Fortran fifty years ago. If it wasn't written, the reason, probably, is that it would have been too expensive or that some required hardware did not exist yet.
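As a purely illustrative back-of-the-envelope sketch (the 18-month doubling figure is the popular paraphrase, not Moore's original transistor-count formulation), the compounding that this kind of exponential implies can be computed directly:

```python
# Back-of-the-envelope: growth implied by a doubling every 18 months.
# (Illustrative only; the inputs are the popular paraphrase of Moore's law,
# not figures from Moore's 1965 paper.)

def doublings(years, months_per_doubling=18):
    """Number of doublings in the given span of years."""
    return years * 12 / months_per_doubling

def growth_factor(years, months_per_doubling=18):
    """Total multiplicative growth over the span."""
    return 2 ** doublings(years, months_per_doubling)

print(f"Doublings in 50 years: {doublings(50):.1f}")
print(f"Growth factor over 50 years: {growth_factor(50):.3g}")
```

Fifty years of 18-month doublings works out to roughly a factor of ten billion, which is why the hardware side of the argument sounds so dramatic.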

Accelerating technological progress in computer science has largely been driven by the rising cost of labor, not by genuine scientific innovation. The higher labor costs go, the stronger the motivation to develop "smarter" machines. Those machines, and the underlying technologies, were already feasible ten or twenty or even thirty years ago, but back then it did not make economic sense to adopt them.

There has certainly been a lot of progress in computers getting faster, smaller and cheaper. Even assuming that this will continue "exponentially" (as the Singularity crowd is quick to claim), the argument that this kind of (hardware) progress is enough to make a shocking difference in terms of machine intelligence is based on an indirect assumption: that faster/smaller/cheaper will lead first to a human-level intelligence and then to a superior intelligence. After all, if you join together many many many dumb neurons you get the very intelligent brain of Albert Einstein. If one puts together millions of superfast GPUs, maybe one gets superhuman intelligence. Maybe.

In any event, we'd better prepare for the day that Moore's Law stops working. Moore's Law was widely predicted to continue for the foreseeable future, but its future does not look so promising anymore. It is not only that technological limits might be approaching. The original spirit behind Moore's Law was to show that the "cost" of making transistor-based devices would continue to decline. Even if the industry finds a way to keep doubling the number of transistors etched on a chip, the cost of doing so might start increasing soon: the technologies needed to deal with microscopic transistors are inherently expensive, and heat has become the main problem to solve in ultra-dense circuits. In 2016 William Holt of Intel announced that Intel would not push beyond 7-nanometer technology and cautioned that processors may get slower in the future in order to save energy and reduce heat, i.e. costs.
In October 2014 DARPA announced the first 1 THz computer chip (i.e. a circuit operating at a frequency of one terahertz). In June 2016 Bevan Baas' team at UC Davis unveiled the KiloCore chip, with a maximum computation rate of 1.78 trillion instructions per second. But it is telling that in 2017 it was impossible to find out, using search engines, which processor was the fastest in the world: no company seemed to have made that claim. So i looked up the most expensive ones: the AMD Ryzen Threadripper 1950X at 4 GHz and the Intel Core i9-7900X at 4.3 GHz; quite far from a terahertz, in fact more than 200 times slower.
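The clock-speed gap in the paragraph above can be checked with a few lines of arithmetic (a sketch only; clock frequency is of course not the same thing as instructions per second, but it is the comparison being made):

```python
# Rough comparison of a 1 THz research chip's operating frequency
# to the clock rates of 2017 consumer CPUs.

terahertz = 1.0e12          # 1 THz, in Hz
threadripper_1950x = 4.0e9  # AMD Ryzen Threadripper 1950X, 4.0 GHz
core_i9_7900x = 4.3e9       # Intel Core i9-7900X, 4.3 GHz

for name, freq in [("Ryzen Threadripper 1950X", threadripper_1950x),
                   ("Core i9-7900X", core_i9_7900x)]:
    print(f"{name}: {terahertz / freq:.0f}x slower than 1 THz")
```

The ratios come out around 250x and 233x, i.e. "more than 200 times slower" either way.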
For 70 years computers have been getting smaller and smaller, but in 2014 smartphones started getting bigger again (the iPhone 6 generation). If Moore's Law stops working, will there still be progress in "Brute-force A.I.", e.g. in deep learning? In 2016 Scott Phoenix, the CEO of Silicon Valley-based AI startup Vicarious, declared that "In 15 years, the fastest computer will do more operations per second than all the neurons in all the brains of all the people who are alive." What if this does not come true?
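Taking Phoenix's claim at face value requires plugging in numbers that are not in the text; the neuron count, population, and nominal "operations per neuron per second" below are all assumptions chosen for illustration. Under those assumptions, the claim implies a doubling pace considerably faster than the classic 18-month one:

```python
import math

# Hypothetical back-of-envelope check of Phoenix's 2016 claim.
# All inputs below are assumptions, not figures from the text:
neurons_per_brain = 8.6e10     # commonly cited estimate for the human brain
world_population = 7.4e9       # circa 2016
ops_per_neuron_per_s = 100.0   # nominal firing-rate proxy, purely illustrative

target_ops = neurons_per_brain * world_population * ops_per_neuron_per_s

# Fastest supercomputer in 2016 (Sunway TaihuLight): ~93 petaflops.
current_ops = 9.3e16

gap = target_ops / current_ops
doublings_needed = math.log2(gap)
months_per_doubling = 15 * 12 / doublings_needed

print(f"Target: {target_ops:.2g} ops/s; gap to 2016: {gap:.2g}x")
print(f"Implied doubling every {months_per_doubling:.1f} months for 15 years")
```

Under these (debatable) assumptions the target is about 6e22 operations per second, which would require a doubling roughly every 9 months for 15 years straight, i.e. a pace well beyond the historical 18-month rhythm.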

The discussion about the Singularity is predicated upon the premise that machines will soon be able to perform "cognitive" tasks that were previously exclusive to humans. This, however, has already happened. We just got used to it. The early computers of the 1950s were capable of computations that traditionally only the smartest and fastest mathematicians could even think of tackling, and the computers quickly became millions of times faster than the fastest mathematician. If computing is not an "exclusively human cognitive task", i don't know what would qualify. Since then computers have been programmed to perform many more of the tasks that used to be exclusive to human brains. And no human expert can doublecheck in a reasonable amount of time what the machine has computed. Therefore there is nothing new about a machine performing a "cognitive" task that humans cannot match. Either the Singularity already happened in the 1950s or it is not clear what cognitive task would represent the coming of the Singularity.

To assess the progress in machine intelligence one has to show something (some intelligent task) that computers can do today that, given the same data, they could not have done fifty years ago. There has been a lot of progress in miniaturization and cost reduction, so that today it has become feasible to use computers for tasks for which we didn't use them fifty years ago; not because they were not intelligent enough to do them but because it would have been too expensive and it would have required several square kilometers of space. If that's "artificial intelligence", then we invented artificial intelligence in 1946. Today's computers can do a lot more things than the old ones just like new models of any machine (from kitchen appliances to mechanical reapers) can do a lot more things than old models. Incremental engineering steps lead to more and more advanced models for lower prices. Some day a company will introduce coffee machines on wheels that can make the coffee and deliver the cup of coffee to your desk. And the next model will include voice recognition that understands "coffee please". Etc. This kind of progress has been going on since the invention of the first mechanical tool. It takes decades and sometimes centuries for the human race to fully take advantage of a new technology. "Progress" often means the process of mastering a new technology (of creating ever more sophisticated products based on that technology). The iPhone was not the first smartphone, and Google was not the first search engine, but we correctly consider them "progress".

There is no question that progress has accelerated with the advent of electrical tools and further accelerated with the invention of computers. Whether these new classes of artifacts will eventually constitute a different kind of "intelligence" probably depends on your definition of "intelligence".

The way the Singularity would be achieved by intelligent machines is by these machines building more intelligent machines capable of building more intelligent machines and so forth. A similar loop has existed since about 1776. The steam engine enabled the mass production of iron and steel, which in turn enabled the mass production of better steam engines, and this recursive loop continued for a while. James Watt himself, whose improved steam engine revolutionized the world, worked closely with John Wilkinson, who bored the iron cylinders for Watt's engines and used Watt's engines to power his ironworks. Today this loop of machines helping build other machines takes place on a large scale. For example, a truck carries the materials that the factory will use to make better trucks. The human beings in this process can be viewed as mere intermediaries between machines that are evolving into better machines. This positive-feedback loop is neither new nor necessarily "exponential". In the 19th century that loop of machines building (better) machines building (better) machines accelerated for a while. Eventually, the steam engine (no matter how sophisticated that accelerating positive-feedback loop had made it) was made obsolete by a new kind of machine, the electrical motor. Then electrical motors were used to manufacture the parts that went into better electrical motors, and the loop started over.

We have been surrounded by machines that built better machines for a long time... but with human intermediaries designing the improvements.

Despite the fact that no machine has ever created another machine of its own will, and no software has ever created a software program of its own will, the Singularity crowd seems to have no doubts that soon there will be a machine created by a machine that was created by a machine, and so forth, each generation of machines being smarter than the previous one.

i certainly share the concern that the complexity of a mostly automated world could get out of hand. This concern has nothing to do with the degree of intelligence but just with the difficulty of managing complex systems. Complex, self-replicating systems that are difficult to manage have always existed. For example: cities, armies, post offices, subways, airports, sewers, economies...
