(These are excerpts from my book "Intelligence is not Artificial")
Why the Singularity is a Waste of Time and Why we Need A.I. - A Call to Action
The first and immediate reason why obsessive discussions about the coming of machine super-intelligence and human immortality are harmful is that they completely miss the point.
We live in an age of declining innovation. Fewer and fewer people have the means or the will to become the next Edison or Einstein. The great success stories in Silicon Valley (Google, Facebook, Apple) are of companies, started by individuals with very limited visions, that introduced small improvements over existing technologies. Entire nations (China and India, to name the obvious ones) are focusing on copying, not inventing.
Scholars from all sorts of disciplines are discussing the stagnation of innovation. A short recent bibliography: Tyler Cowen's e-book "The Great Stagnation" (2010) by an economist; Neal Stephenson's article "Innovation Starvation" (2011) by a sci-fi writer; Peter Thiel's article "The End of the Future" (2011) by a Silicon Valley venture capitalist; Max Marmer's "Reversing The Decline In Big Ideas" (2012) by another Silicon Valley entrepreneur; Jason Pontin's "Why We Can't Solve Big Problems" (2012) by a technology magazine editor; Rick Searle's article "How Science and Technology Slammed into a Wall and What We Should Do About It" (2013) by a political scientist.
Robert Gordon's book "The Rise and Fall of American Growth" (2016) shows that the pace of innovation has slowed since 1970 after a rapid rise between 1800 and 1970.
A study by Ashish Arora and Sharon Belenzon at Duke University, "Killing the Golden Goose?" (2015), shows that in the 27 years after 1980 the USA experienced a shift from long-term invention towards short-term innovation. In 2016 Nicholas Bloom and Charles Jones of Stanford University and John Van Reenen of MIT wrote a paper titled "Are Ideas Getting Harder to Find?" in which they calculate the productivity of scientists and conclude that "research productivity is declining sharply".
Then there is the fundamental issue of priorities. The hypothetical world of the Singularity distracts us from the real world. The irrational exuberance about the coming Singularity distracts a lot of people from realizing the dangers of unsustainable growth, dangers that may actually wipe out all forms of intelligence from this planet.
Let's assume for a second that climate scientists like Paul Ehrlich and Chris Field (to name two I met in person at Stanford) are right about the coming apocalypse. Their science is ultimately based on the same science that happens to be right about what that bomb would do to Hiroshima (as unlikely as Einstein's formula may look), that is right about what happens when you speak in that rectangular device (as unlikely as it may seem that someone far away will hear your voice), that is right about what happens when someone broadcasts a signal in that frequency range to a box sitting in your living room (as unlikely as it may seem that the box will then display the image of someone located far away), that is right about what happens when you turn on that switch (as unlikely as it is that turning on a switch will light up a room); and it's the same science that got it right on the polio vaccine (as unlikely as it may look that invisible organisms cause diseases) and many other incredible affairs.
The claims about the Singularity, on the other hand, rely on a science (Artificial Intelligence) whose main achievement has been to win board games. One would expect that whoever believes wholeheartedly in the coming of the Singularity would believe tenfold stronger that the human race is in peril.
Let's assume for a second that the same science that has been right on just about everything it predicted is also right on the consequences of rapid climate change, and therefore the situation is exactly the opposite of the optimistic, mostly speculative one depicted by A.I. science: the human race may actually go extinct before it even produces a single decent artificial intelligence.
In about one century the Earth's mean surface temperature has increased by about 0.8 degrees. Since it is increasing faster today than it was back then, the next 0.8 degrees will come even faster, and there is widespread agreement that 2 degrees above what we have today will be a significant tipping point. Recall that a simple heat wave in summer 2003 led to 15,000 deaths in France alone. Noah Diffenbaugh and Filippo Giorgi (authors of "Heat Stress Intensification in the Mediterranean Climate Change Hotspot", 2007) have created simulations of what will happen to the Earth with a mean temperature 3.8 degrees above today's temperature: it would be unrecognizable. That temperature, as things stand, is coming for sure, and coming quickly, whereas super-intelligence is just a theoretical hypothesis and, in my humble opinion, is not coming any time soon.
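The arithmetic behind "coming quickly" is simple enough to sketch. The 0.8-degree-per-century figure is from the text; the faster recent rate (roughly 0.18 degrees per decade) is my own round assumption for illustration, not a number from the book:

```python
# Back-of-the-envelope: years until the +2 degree tipping point,
# under two illustrative warming rates.
past_rate = 0.8 / 100      # deg C per year, averaged over the last century (from the text)
current_rate = 0.018       # deg C per year, an assumed recent rate (~0.18 per decade)

years_at_past_rate = 2.0 / past_rate        # if warming stayed at the century average
years_at_current_rate = 2.0 / current_rate  # if it continues at the faster recent pace

print(round(years_at_past_rate))     # 250 years
print(round(years_at_current_rate))  # 111 years
```

The point of the sketch is only that an accelerating rate moves the tipping point from a comfortably distant century-plus horizon to within the lifetime of people alive today.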
Climate scientists fear that we may be rapidly approaching a "collapse" of civilization as we know it. There are, not one, but several environmental crises. Some are well known: extinction of species (with unpredictable biological consequences, such as declining populations of bees that may pose a threat to fruit farms), pollution of air and water, epidemics, and, of course, anthropogenic (human-made) climate change. See the "Red List of Threatened Species" published periodically by the International Union for Conservation of Nature (IUCN). See the University of North Carolina's study "Global Premature Mortality Due To Anthropogenic Outdoor Air Pollution and the Contribution of Past Climate Change" (2013), which estimated that air pollution causes the deaths of over two million people annually. A Cornell University study led by David Pimentel, "Ecology of Increasing Diseases" (2007), estimated that water, air and soil pollution account for 40% of worldwide deaths. A 2004 study by the Population Resource Center found that 2.2 million children die each year from diarrhea caused by contaminated water and food. And, lest we think that epidemics are a thing of the past, it is worth reminding ourselves that AIDS (according to the World Health Organization) killed about 35 million people between 1981 and 2012, and in 2012 about 34 million people were infected with HIV (Human Immunodeficiency Virus, the cause of AIDS), which makes it the fourth worst epidemic of all time. Cholera, tuberculosis and malaria are still killing millions every year; and "new" viruses routinely pop up in the most unexpected places (Ebola, West Nile virus, Hantavirus, Avian influenza, Zika virus, etc).
Some environmental crises are less advertised but no less terrifying. For example, global toxification: we filled the planet with toxic substances, and now the odds that some of them interact/combine in some deadly runaway chemical experiment never tried before are increasing exponentially every year. Many scientists point out the various ways in which humans are hurting our ecosystem, but few single out the fact that some of these ways may combine and become something that is more lethal than the sum of its parts. There is a "non-linear" aspect to what we are doing to the planet that makes it impossible to predict the consequences.
The effect of exposure to lead, mercury and pesticides on fetal brain development has been known for a while. In 2006 Philippe Grandjean at Harvard and Philip Landrigan at Mount Sinai in New York wrote of a "silent pandemic" of "neurotoxins" that are damaging the brains of unborn children ("Developmental Neurotoxicity of Industrial Chemicals", 2006). In 2012 the Harvard neurologist David Bellinger showed that the intelligence quotients of children whose mothers had been exposed to "neurotoxins" while pregnant were lower than the IQs of children whose mothers were not exposed to those toxins ("A Strategy for Comparing the Contributions of Environmental Chemicals and Other Risk Factors to Neurodevelopment of Children", 2012).
But now the CDC (Centers for Disease Control and Prevention) warns that 212 industrial chemicals are widely disseminated in the natural environment and can routinely be found inside the bodies of people ("The Fourth National Report on Human Exposure to Environmental Chemicals", 2009).
In 1937 Lowell Thomas made a film on the story of "bakelite", the first synthetic plastic, invented in 1907 by Leo Baekeland using fossil-fuel derivatives. Thomas talked of a fourth kingdom (besides animals, plants and fungi): the kingdom of synthetics. Within a century someone could make a film to show how the kingdom of synthetics infiltrated all other kingdoms. (In what was basically a preview of the Singularity movement, Baekeland chose the infinity symbol as the logo of his plastic-manufacturing firm.)
The word "petro-topia" was popularized by the book "Petrochemical America" (2012), composed by the photographer Richard Misrach and the landscape architect Kate Orff, and by the book "Living Oil - Petroleum Culture in the American Century" (2014), written by an environmental scientist at the University of Oregon, Stephanie LeMenager. Petro-topia could well turn into "petro-calypse".
The next addition of one billion people to the planet's population will have a much bigger impact than the previous one billion. The reason is that human civilizations have already used up the cheap, rich and ubiquitous resources. Naturally enough, humans started with the cheap, rich and ubiquitous ones, whether forests or oil wells. A huge amount of resources is still left, but those will be much more difficult to harness. For example, oil wells have to be drilled much deeper than they used to be. Therefore one liter of gasoline today does not equal one liter of gasoline a century from now: a century from now it will take a lot more work to obtain that liter of gasoline. It is not only that some resources are being depleted, but even the resources that will be left are, by definition, those that are difficult to extract and use (a classic case of "diminishing returns").
The United Nations' "World Population Prospects" (2013) estimated that the current population of 7.2 billion will reach 9.6 billion by 2050, and population growth will mainly come from developing countries, particularly in Africa: the world's 49 least developed countries may double in size from around 900 million people in 2013 to 1.8 billion in 2050.
A catastrophic event is not only coming, but the combination of different kinds of environmental problems makes it likely that it is coming even sooner than the pessimists predict and in a fashion that we cannot quite predict.
For the record, the environmentalists are joined by an increasingly diversified chorus of experts from all sorts of disciplines. For example, Jeremy Grantham, an investor who manages about 100 billion dollars. His main point (see, for example, his 2013 interview on Charlie Rose's television program) is that the "accelerated progress" that the Singularity crowd likes to emphasize started 250 years ago with the exploitation of coal and then truly accelerated with the exploitation of oil. The availability of cheap and plentiful energy made it possible to defy, in a sense, the laws of physics. Without fossil fuels the human race would not have experienced such dramatic progress in merely 250 years. Now the planet is rapidly reaching a point of saturation: there aren't enough resources for all these people. Keeping what we have now is a major project in itself, and those who hail the coming super-intelligence miss the point the way a worker about to be fired misses the point if he is planning to buy a bigger house.
We are rapidly running out of cheap resources, which means that the age of steadily falling natural resource costs is coming to an end. In fact, the price of natural resources declined for a century until about 2002 and then in just 5 or 6 years that price regained everything that it had lost in the previous century (I am still quoting Grantham). This means that we may return to the world of 250 years ago, before the advent of the coal (and later oil) economy, when political and economic collapses were the norm; a return to, literally, the ages of starvation.
It is not only oil that is a finite resource: phosphates are a finite resource too, and the world's agriculture depends on them.
Population growth is actually a misleading parameter, because "overpopulation" is measured more in terms of material resources than in number of people: most developed countries are not overcrowded, not even crowded Singapore, because they are rich enough to provide a good life to their population; most underdeveloped countries are overcrowded because they can't sustain their population. In this sense, overpopulation will increase even in countries where population growth is declining: one billion Indians who ride bicycles is not the same as one billion Indians who drive cars, run A/C units and wrap everything in plastic. If you do it, why shouldn't they?
The very technologies that should improve people's lives (including your smartphone and the robots of the future) are likely to demand more energy, which for now comes mainly from the very fossil fuels that are leading us towards a catastrophe.
All those digital devices will require more "rare earths", more coltan, more lithium and many other materials that are becoming scarcer.
We also live in the age of Fukushima, when the largest economies are planning to get rid of nuclear power, which is the only form of clean alternative energy as effective as fossil fuels. Does anyone really think that we can power all those coming millions of robots with wind turbines and solar panels?
Chris Field has a nice diagram (expanded in the 2012 special report of the Intergovernmental Panel on Climate Change titled "Managing the Risks of Extreme Events and Disasters to Advance Climate Change Adaptation") that shows "Disaster Risk" as a function of "Climate Change" and "Vulnerability" (shown, for example, at a seminar at the Energy Biosciences Institute in 2013). It is worth pondering the effects of robots, A.I. and the like on that equation. Manufacturing millions of machines will have an impact on anthropogenic climate change; economic development comes at the cost of exploitation of finite resources; and, if high technology truly succeeds in increasing the longevity of the human race, the population will keep expanding. In conclusion, the race to create intelligent machines might exacerbate the risk of disasters before these super-intelligent machines can find a way to reduce it.
The Paris accord on climate change of 2015 (COP21) was a wildly optimistic agreement, and not even an enforceable one.
Economists such as Robin Hanson ("Economics Of The Singularity", 2008) have studied the effects of the agricultural, industrial and digital revolutions. Each caused an acceleration in economic productivity. The world's GDP may double every 15 years on average in this century. That's an impressive feat, but it's nothing compared with what would happen if machines could replace people in every single task. Productivity could then double even before we can measure it. The problem with that scenario is that the resources of the Earth are finite, and most wars have been caused by scarcity of resources. Natural resources are already strained by today's economic growth. Imagine if that growth increased tenfold, and, worse, if those machines were able to mine ten or 100 times faster than human miners. It could literally lead to the end of the Earth as a livable planet. Imagine a world full of machines that rapidly multiply and improve, and basically use all of the Earth's resources within a few years.
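Hanson's scenario is just compound growth, and the numbers get large quickly. A minimal sketch, taking the 15-year doubling period from the text and a one-century horizon as my own choice:

```python
# Compound growth: if GDP doubles every 15 years, how much larger
# is the economy after one century?
doubling_period_years = 15
horizon_years = 100

multiplier = 2 ** (horizon_years / doubling_period_years)
print(f"GDP multiplier over a century: about {multiplier:.0f}x")
```

Roughly a hundredfold increase in a century; and that is the slow, human-driven scenario, before any machine-driven acceleration. A finite planet cannot supply a hundredfold increase in material throughput, which is exactly the author's point.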
Ehrlich calls it "growthmania": the belief that there can be exponential growth on a finite planet.
The optimists counter that digital technology can be "cleaner" than the old technology. For example, the advent of email has dramatically reduced the amount of paper that is consumed, which has reduced the number of trees that we need to fell. It is also reducing the number of mail trucks that drive around cities to deliver letters and postcards. Unfortunately, in order to check email and text messages you need devices like laptops, notepads and smartphones. The demand for materials such as lithium and coltan has risen exponentially.
Technological progress in the internal combustion engine (i.e., in fuel-efficient vehicles), in hybrid cars, in electric cars and in public transportation is credited for the reduction in oil consumption since 2007 in developed countries. But Asia Pacific as a whole posted a 46% increase in oil consumption in the first decade of the 21st century. In 2000 oil consumption in China was 4.8 million bpd (barrels per day), or 1.4 barrels per person per year. By 2010 China's consumption had grown to 9.1 million bpd. China and India together have about 37% of the world's population. The rate of cars per person in China (0.09) is about one tenth of the rate in the USA (0.8), and India's rate is one of the lowest in the world (0.02). Hence analysts such as Ken Koyama, chief economist at the Institute of Energy Economics Japan, predict that global petroleum demand will grow 15% over the next two decades ("Growing Oil Demand and SPR Development in Asia", 2013).
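The per-capita figures above can be sanity-checked with a few lines of arithmetic. The consumption numbers are from the text; the population figures are my own approximations, not the book's:

```python
# Sanity-check of China's per-capita oil consumption quoted above.
bpd_2000 = 4.8e6    # barrels per day in 2000 (from the text)
bpd_2010 = 9.1e6    # barrels per day in 2010 (from the text)
pop_2000 = 1.27e9   # assumed population of China in 2000
pop_2010 = 1.34e9   # assumed population of China in 2010

per_capita_2000 = bpd_2000 * 365 / pop_2000
per_capita_2010 = bpd_2010 * 365 / pop_2010
print(round(per_capita_2000, 1))  # 1.4 barrels/person/year, matching the text
print(round(per_capita_2010, 1))  # 2.5 barrels/person/year
```

Even after nearly doubling in a decade, China's per-capita consumption remains a fraction of the American level, which is why analysts expect demand to keep growing.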
Services such as Zipcar and Uber were hailed as technology that empowers people, but the side-effect has been to shift thousands of people from public transportation towards cars. Many of the Zipcar and Uber customers are people who used to take the bus. These services have caused an increase in road traffic. Ironically, they may have also caused longer (not shorter) travel times because of increased traffic jams. If self-driving cars ever do happen, they will further compound the problem of traffic congestion: it should be obvious even to the dumbest Silicon Valley engineer that 200 people riding a bus occupy less asphalt than 200 people riding 200 self-driving cars.
There are loops that are obviously spinning out of control. I am writing this in China. One can hear the constant buzz of the giant air conditioners of China's large train stations, blowing cold air nonstop in order to lower the indoor temperature. However, those giant air conditioners contribute to global warming, i.e. to higher temperatures; which will require even bigger air conditioners; which will cause even more global warming; and so on. China has just relaxed the "one-child policy" because its aging society will soon need more youngsters to take care of the elderly (and to pay taxes to support their health care). Europeans and Japanese are encouraging their people to have children for the same reason. We were scared of the population explosion, but we have come to realize that a declining population is no less dangerous. We are doomed either way: if the population keeps increasing and gets wealthier, we will run out of resources; if it doesn't increase, our economies will collapse.
George Mitchell pioneered fracking in 1998, releasing huge amounts of natural gas that were previously thought inaccessible. Natural gas may soon replace oil in power stations, petrochemical factories, domestic heaters and perhaps motor vehicles. The fact that there might be plenty of this resource in the near future proves that technology can extend the life expectancy of natural resources, but it does not change the fact that those resources are finite, and it might reduce the motivation to face the inevitable.
Technology is also creating a whole new biological ecosystem around us, a huge laboratory experiment never tried before. Humans have already experienced the annihilation of populations by viruses. Interestingly, the three most famous pandemics took hold at times of intense global trade: the plague of 1348 (the "black death") was probably brought to Europe by Italian traders who picked it up in Mongol-controlled regions at a time when travel between Europe and Asia was relatively common and safe; the flu pandemic of 1918, which infected about 30% of the world's population and killed 50 million people, took hold thanks to the globalized world of the British and French empires and to World War I; and HIV emerged in the 1980s, when the Western economies had become thoroughly entangled, and spread to the whole world during the globalization decade of the 1990s. By the end of 2012 AIDS had killed 35 million people worldwide.
We now live in the fourth experiment of that kind: the most globalized world of all times, in which many people travel to many places; and they do so very quickly. There is one kind of virus that could be worse than the previous ones: a coronavirus, whose genes are written in RNA instead of DNA. The most famous epidemic caused by a coronavirus was the Severe Acute Respiratory Syndrome (SARS): in February 2003 it traveled in the body of a passenger from Hong Kong to Toronto, and within a few weeks it had spread all over East Asia. Luckily both Canada and China were equipped to deal with it and all the governments involved did the right thing; but we may not be as lucky next time. In 2012 a new coronavirus appeared in Saudi Arabia, the Middle East Respiratory Syndrome (MERS).
All of these race-threatening problems remain unsolved because we don't have good models for them. One would hope that the high-tech industry invested as much in creating good computational models that can be used to save the human race as in creating ever more lucrative machines. Otherwise, way before the technological singularity happens, we may enter an "ecological singularity".
Discussing super-human intelligence is a way to avoid discussing the environmental collapse that might lead to the disappearance of human intelligence. We may finally find the consensus to act on environmental problems only when the catastrophe starts happening. Meanwhile, the high-tech world will keep manufacturing, marketing and spreading the very items that make the problem worse (more vehicles, more electronic gadgets, and, soon, more robots); and my friends in Silicon Valley, firmly believing that we are living in an era of accelerating progress, will keep boasting about the latest gadgets, the things that environmental scientists call "unnecessarily environmentally damaging technologies".
Fans of high technology fill their blogs with news of ever more ingenious devices to help doctors, not realizing that the proliferation of such devices will require even more energy and cause even more pollution (of one sort or another). They might be planning a world in which we will have fantastic health care tools but we will all be dead.
I haven't seen a single roadmap that shows how technology will evolve in the next decades, leading up to the Singularity (to super-human intelligence). I have, instead, seen many roadmaps that show in detail what will happen to our planet under current trends.
In 2005 Norman Myers of Oxford University delivered a speech at the Economic Forum in which he predicted that climate change alone would cause the displacement of 250 million people by 2050. The number sounded hard to believe. However, a 2015 report by the Internal Displacement Monitoring Centre estimated that 157.8 million people had been forced to flee their homes between 2008 and 2014 because of natural disasters. In 2015 The Economics of Land Degradation Initiative estimated that land affected by serious drought doubled between the 1970s and the early 2000s. A 2015 study by the UN Environment Program (UNEP) found that 70% of Africa's migrants left their countries because of poverty or unemployment. In 2015 a surge in illegal immigrants from Central America into the USA followed catastrophic crop failures. The biggest exodus of the 2010s took place in Syria, devastated by a civil war: it might be a coincidence or not, but Syria suffered a crippling drought in the five years preceding the civil war, so even the civil war itself may be due to some extent to climate. Millions of people move because changing climate has destroyed their traditional livelihood. Their job has not been stolen by robots but by climate change.
There is also plenty to worry about the Internet. As Ted Koppel wonderfully explained in his book "Lights Out" (2015), the chances of a massive cyber-attack that would leave the USA without electricity, communications and even water for weeks are very high. There are dozens of hacking incidents every day. Banks, retail chains, government agencies, even the smartphone of the director of the CIA and Mark Zuckerberg's Facebook account have been hacked. And they are usually hacked by amateurs in search of publicity. Spy agencies can cause a lot more damage than amateurs. They are probably monitoring the system right now, and they will strike only when it is worth it. Companies that boasted about being invulnerable to hacking attacks have frequently been subjected to humiliating hacking attacks. The fact is that the Internet cannot be defended. It was probably a strategic mistake to make so much of the economy and of the infrastructure depend on a computer network (any computer network). Computers are vulnerable in a way that humans are not. You need to capture me and torture me in order to extract information from me that would harm my friends, relatives and fellow citizens; but you don't need to capture and torture a computer. It is much easier than that. Computer networks can be easily fooled into providing access and information. The more intelligent you make the network of computers, the bigger the damage it can cause to the humans who use it.
A.I.'s promises of dramatic economic and social change have been very effective in obtaining public and private funding, but that has come at the expense of other disciplines. Steven Weinberg's book "Dreams of a Final Theory" (1993) failed miserably to convince the political establishment to fund a new expensive project, the Superconducting Super-Collider. He failed because he narrated the reality of scientific research. Ray Kurzweil's "The Age of Spiritual Machines" (1999), a provocative and enthusiastic (and wildly self-congratulatory) reaction to IBM's Deep Blue beating the world chess champion in 1997, was totally out of touch with reality but impressed the political establishment enough that many A.I. scientists obtained funding for their research. Research in A.I. in the USA has always relied on funding from the government (mainly through its "defense" arm called DARPA, which is really a designer of weapons). It was true of the original A.I. labs at the MIT and Stanford, it was true of the A.I. research at SRI that yielded the autonomous robot Shakey and eventually the conversational agent Siri, and it was true of Nicholas Negroponte's Media Lab at the MIT. Capturing the imagination of the political and military establishment is imperative for the progress of a scientific program (in Europe a similar phenomenon is at work, although it is the social impact rather than the military one that is valued more). The media's passion for A.I. may end up draining legitimate disciplines of the funding they need to improve the lives of millions of people. Imagine if enthusiasm for early A.I. had diverted the funds that were spent on the Interstate Highway System or Social Security; or if today's enthusiasm ends up diverting some of the $7 billion that the government pays to the Centers for Disease Control and Prevention (CDC), our front line in fighting infectious diseases.
I for one think that, in the grand scheme of things, the Superconducting Super-Collider would have been more useful than Siri.
Last but not least, we seem to have forgotten that a nuclear war (even if contained between two minor powers) would shorten the life expectancy of everybody on the planet, and possibly even make the planet uninhabitable. Last time I checked, the number of nuclear powers had increased, not decreased, and, thanks to rapid technological progress and to the electronic spread of knowledge, there are now many more entities capable of producing nuclear weapons.
The unbridled optimism of the Artificial Intelligence community, and of the media that propagate it, is not justified because A.I. is not helping to solve any of these pressing problems. We desperately need machines that will help us solve these problems. Unbridled optimism is not a replacement for practical solutions.
The enthusiastic faith that Rome was the "eternal city" and the firm belief that Venice was the "most serene republic" did not keep those empires from collapsing. Unbridled optimism can be the most lethal weapon of mass destruction.