Automation and Jobs in the Age of Intelligent Machines
(This became a chapter in "Demystifying Machine Intelligence")
Jobs in the Digital Age
Many contemporary thinkers fear
that we (humans) are becoming obsolete because machines
will soon take our place.
Irving John Good in "Speculations Concerning the First Ultraintelligent Machine" (1965): "the first ultraintelligent machine is the last invention that man need ever make".
Hans Moravec in "Mind Children" (1988): "robots will eventually succeed us: humans clearly face extinction".
Federico Pistono in "Robots Will Steal Your Job But That's OK" (2012): "as we speak, millions of algorithms created by computer scientists are frantically running on servers all over the world, with one sole purpose: do whatever you used to do, but better."
Actually, this idea has been repeated often since
the invention of the assembly line and of the typewriter.
In order to understand what we are talking about, we need to define what
"us" means. Assembly lines, typewriters, computers, search engines and whatever
comes next have replaced jobs that have to do with material life.
I could simply say
that they have replaced "jobs". They have not replaced "people". They replaced
their jobs. Therefore what went obsolete has been jobs, not people, and what
is becoming obsolete is jobs, not people. Humans, to me, are biological
organisms who (and not "that") write novels, compose music, make films, play
soccer, ride the Tour de France, discover scientific theories, hike on
mountains and recommend restaurants. Which of these activities are becoming
obsolete because machines are doing them better? My favorite question in
private conversations on machine intelligence is: when will a machine be able
to cross a street that doesn't have a traffic light? Machines are not even
remotely close to doing anything of what i consider "human". In fact, there
has been virtually no progress in building a machine that can cross that street.
Machines are certainly good at processing big data at lightning speed.
Fine. We are rapidly becoming obsolete at doing that. Soon we will have a
generation that cannot do arithmetic. In fact, we never really did that kind of work.
Very few humans spent their time analyzing big data. The vast majority of
people are perfectly content with small data: the price of gasoline, the name
of the president, the standings in the soccer league, the change in my pocket,
the amount of my electricity bill, my address, etc. Humans have mostly
been annoyed by big data. That was, in fact, a motivation to invent a machine
that would take care of big data. The motivation to invent a machine that rides
the Tour de France is minimal because we actually enjoy watching (human)
riders sweat on those steep mountain roads, and many of us enjoy emulating
them on the hills behind our home.
So we can agree that what is becoming obsolete is not "us" but our current jobs.
That has been the case since the invention of the first farm (that made obsolete
the prehistoric gatherers) and, in fact, since the invention of the wheel (that
probably made obsolete many who were making a living carrying goods on their
backs). Automation is responsible for making many jobs obsolete, but it is not
the only cause. The first and major one is the end of the Cold War.
In 1991 the capitalist world started expanding: before 1991 the economies
that really counted were a handful (USA, Japan, Western Europe). After 1991
the number of competitors for the industrialized countries has skyrocketed,
and they are becoming better and better. Technology might have "stolen"
some jobs, but that factor pales by comparison with the millions of jobs that
were exported to Asia. In fact, if one considers the totality of the capitalist
world, an incredible number of jobs have been created precisely during the
period in which critics routinely claim that millions of jobs have been lost.
If Kansas loses one thousand jobs but California creates two thousand, we
consider it an increase in employment. These critics make the mistake of using
the old nation-based logic for the globalized world. When counting jobs
lost or created during the last twenty years, one needs to consider the
entire interconnected economic system. In its first pages this book mentions
employment data for the USA but has nothing to say about employment over
the same period in China, India, Mexico, Brazil, etc. Those now rank among
the main trading partners of the USA, and, more importantly, business is
multinational. If General Motors lays off one thousand employees in Michigan
but hires two thousand in China, it is not correct to simply conclude
that "one thousand jobs have been lost". If the car industry in the USA
loses ten thousand jobs but the car industry in China gains twenty thousand,
it is not correct to simply conclude that ten thousand jobs have been lost
in the car industry. In all of these cases jobs have actually been created.
There are other factors that one has to keep in mind, although not as pivotal
as globalization. For example, energy. This is the age of energy. Energy has
always been important for economic activity but never like in this century.
The cost and availability of energy are one of the main factors that
determine growth rates and therefore employment. The higher the cost of
energy, the lower the amount of goods that can be produced, the lower the
number of people that we employ. If forecasts by international agencies are
correct, the coming energy boom might have a bigger impact on employment
in the USA than computing technology.
Then there are sociopolitical factors. Unemployment is high in Western Europe,
especially among young people, not because of technology but because of
rigid labor laws and government debt. A company that cannot lay off workers
is reluctant to hire any. A government that is indebted cannot pump money into
the economy.
Another major factor that accounts for massive losses of jobs in the developed
world is the management science that emerged in the 1920s in the USA. That
science (never mentioned in this book) is the main reason that today companies
don't need as many employees as comparable companies employed a century ago.
Each generation of companies has been "slimmer" than the previous generation.
As those management techniques get codified and applied massively, companies
become more efficient at manufacturing (across the world) and selling (using
the most efficient channels) and at predicting business cycles. All of this
results in fewer employees not because of automation but because of
management science. As i wrote years ago, the
Gift Economy is the
scariest of the factors that emerged in the 2000s.
Unemployment cannot be explained simply by looking at the effects of technology.
Technology is one of many factors and, so far, not the main one. There have
been periods of rapid technological progress that have actually resulted in
very low unemployment, most recently the 1990s, when e-commerce boomed.
Anyway, it is undeniable that automation has contributed dramatically to
eliminating jobs, so let's look at "intelligent machines". But first a little
digression on the brain's container: the body.
A lot of what books on machine intelligence say
is based on a brain-centered view of the human being.
I may agree that my brain is the most important organ of my body
(i'm ok with transplanting just about any organ of my body but not my brain).
However, this is not what evolution had in mind. The brain is one of the many
organs designed to keep the body alive so that the body can find a mate and
make children. The brain is not the goal but one of the tools to achieve that goal.
(Incidentally, i always remind people, especially when the discussion is about
"progress" and "immortality", that the longest-living beings have no brain,
trees and bacteria).
Focusing only on mental activities when comparing humans and machines is
a category mistake. Humans do have a brain but don't belong to the
category of brains: they belong to the category of animals, which are mainly
recognizable by their bodies. Therefore, one should compare machines and
humans based on bodily actions and not just on printouts, screenshots and files.
Of all the things that i do during a day (from running to reading a book)
what can a machine do? what will a machine be able to do in ten years? in 20
years? in 200 years? I suspect we are very far from the day that a machine
can simply play soccer in any meaningful way with six-year-old children, let
alone with champions. Playing a match of chess with the world champion of chess
is actually easy. It is much harder for a machine to do any of the things that we routinely
do in our home.
Furthermore, there's the meaning of action. The children who play soccer
actually enjoy it. They scream, they are competitive, they cry if they lose,
they can be mean, they can be violent. There is passion in what we do.
Will an android that plays decent soccer in 3450 (that's a realistic date
in my opinion) also have all of that? Let's take something simpler, that
might happen in 50 or 100 years: at some point we'll have machines capable
of reading a novel; but will they understand what they are reading?
Is it the same "reading" that i do?
The body is the reason why i think the Turing Test is not very meaningful.
The Turing Test locks a computer and a human being in two rooms, and, by doing so,
it removes the body from the test. My test (let's immodestly
call it the Scaruffi Test) would be different: we give a soccer ball to both
the robot and the human and see who dribbles better.
I am not terribly impressed that a computer beat the world champion of chess.
I will be impressed the day a robot dribbles better than Messi.
If you remove the body from the test, you are removing pretty much everything
that defines a human being as a human being. A brain kept in a jar is not
a human being: it is a gruesome tool for classrooms of anatomy.
Now we can talk about what it means for a machine to be "intelligent".
In private conversations about "machine intelligence" i like to quip that
it is not intelligent to talk about intelligent machines: whatever they do
is not what we do, and therefore is neither "intelligent" nor "stupid"
(attributes invented to define human behavior). Talking about the intelligence
of a machine is like talking about the leaves of a person: trees have leaves,
people don't. "Intelligence" and "stupidity" are not properties of machines:
they are properties of humans.
We apply to machines many words invented for humans simply because we don't
have a vocabulary for the states of machines. For example, we buy "memory"
for our computer, but that is not a memory at all: it doesn't remember (it
simply stores) and it doesn't even forget, the two defining properties of
memory. We call it memory for lack of a better word. We talk about the "speed"
of a machine but it is not the "speed" at which a human being rides or drives.
We don't have the vocabulary for machine behavior. We borrow words from the
vocabulary of human behavior. It is a mistake to assume that, because we use
the same word to name them, then they are the same thing.
If i see a new kind of fruit and call it "apple" because there is no word in
my language for it, it doesn't mean it is an apple.
A computer does not "learn": what it does when it refines its data
representation is something else (that we don't do).
One of the fundamental states of human beings is "happiness". When is a machine
"happy"? The question is meaningless: it's like asking when does a human being
need to be watered? You water plants, not humans. Happiness is a meaningless
word for machines. Of course, some day we may start using the word "happy" to
mean, for example, that the machine has achieved its goal or that it has
enough electricity; but it would simply be a linguistic expedient. The fact
that we may call it "happiness" does not mean that it "is" happiness.
If you call me Peter because you can't spell my name, it does not mean that
my name is Peter.
The objection to what i am saying is typically behavioristic in nature:
who cares what a machine does and how it does it, let's just measure its
performance and see if it matches human performance.
Well, the current performance is far less exciting than the specialists
would have you believe.
Despite all the hoopla,
to me machines are still way less "intelligent" than a chimp. Recent
experiments with neural networks were hailed as incredible triumphs by
computer scientists because a computer finally managed to recognize a cat
(at least a few times) after being shown thousands of images of cats.
How long does it take a chimp to learn what a cat looks like?
And that's despite the fact that computers use the fastest possible
communication technology, whereas the neurons of a chimp's brain use
hopelessly old-fashioned chemical signaling.
One of the very first applications of neural networks was to recognize numbers.
Sixty years later the ATM of my bank still cannot recognize the amounts on
50% of the cheques that i deposit.
"Machines will be capable, within twenty years, of doing any
work that a man can do" (Herbert Simon, 1965). Slightly optimistic back then.
I haven't seen anything yet that makes me think that statement (by a Nobel
Prize winner) is any more accurate today.
It is interesting how different generations react to the stupidity of machines:
the generation that grew up without machines around gets extremely upset
(because they are so much more stupid than humans),
my generation (that grew up with machines) gets somewhat upset (because they
are not much smarter than they were when i was a kid), and the younger
generations are progressively less upset, with the youngest ones simply
taking for granted that customer support has to be what it is (lousy)
and that many things (pretty much all the things that require common sense,
expertise, and what we normally call "intelligence") are simply impossible.
Incidentally, the modern "machine learning" techniques that the media hail
as revolutionizing the field are actually old. The mathematical
foundations come from work by Geoffrey Hinton, who discovered algorithms to
work efficiently with Restricted Boltzmann Machines and stack them one on top
of the other. Boltzmann Machines originated from
Hopfield's and Smolensky's work in the 1980s.
So we are talking about a technology that is 30+ years old.
What that technology does is very simple: lots of number crunching.
It is a smart way to manipulate large datasets for the purpose of
classification. It was not enabled by a groundbreaking paradigm shift but simply
by increased computing power: given the computers of 30 years ago, nobody
would have tried to build something like Andrew Ng's cat-recognition experiment.
(See my article Artificial Intelligence and Brute Force)
There has been very little progress in machine learning over the last 30 years.
I fail to see the "accelerating progress".
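To make concrete the claim that this technology is "lots of number crunching", here is a minimal, illustrative sketch of the contrastive-divergence update used to train a Restricted Boltzmann Machine (assuming numpy; the toy patterns and all parameters are invented for illustration, and real systems differ mainly in scale):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# a toy RBM: 6 binary visible units, 4 hidden units
n_visible, n_hidden = 6, 4
W = 0.01 * rng.standard_normal((n_visible, n_hidden))
b_v = np.zeros(n_visible)   # visible biases
b_h = np.zeros(n_hidden)    # hidden biases

def cd1_step(v0, lr=0.1):
    """One contrastive-divergence (CD-1) update: pure matrix arithmetic."""
    global W, b_v, b_h
    ph0 = sigmoid(v0 @ W + b_h)                        # positive phase
    h0 = (rng.random(n_hidden) < ph0).astype(float)    # sample hidden units
    pv1 = sigmoid(h0 @ W.T + b_v)                      # one Gibbs step back
    v1 = (rng.random(n_visible) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W + b_h)
    W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))  # the whole "learning"
    b_v += lr * (v0 - v1)
    b_h += lr * (ph0 - ph1)

# training data: two tiny binary "images"
data = np.array([[1, 1, 1, 0, 0, 0],
                 [0, 0, 0, 1, 1, 1]], dtype=float)
for _ in range(2000):
    for v in data:
        cd1_step(v)
```

After a few thousand of these updates the toy model reconstructs its two training patterns; stacking several such layers gives a "deep" network. Nothing in the loop is more exotic than multiplication, addition and a squashing function.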
Last but not least, when we discuss whether machines are becoming more
intelligent than humans, let us not forget that there are two ways to achieve
that goal: 1. make smarter machines; 2. make dumber humans. As i have written
(See Machine Intelligence and Human Stupidity (The Turing Test Revisited) ), i see more evidence of 2 than of 1.
Even if humans eventually become so dumb that they cannot ride a bike anymore,
my question will remain the same: how long does it take for a computer to
learn how to ride a bike in regular traffic? or, more realistically, how long will it take before
we have computers that are at least capable of trying to learn how to ride a bike? (Read note 1 before you send me to the video of a miniature bike-riding toy).
Note that the solution envisioned by many is to reduce traffic to a very
predictable algorithm: again, make people dumber.
And that's a good segue to discuss progress.
A postulate at the basis of this book and of many other contemporary books (particularly those by futurists and self-congratulating technologists) is that
we live in an age of unprecedented rapid change and progress. But is our age truly so unique?
One century ago in a relatively short time the world adopted the car, the airplane, the telephone, the radio and the record, while at the same time the visual arts went through Impressionism, Cubism and Expressionism, while at the same time Quantum Mechanics and Relativity happened in science. The years since World War II have witnessed a lot of innovation, but most of it has been gradual and incremental. We still drive cars and make phone calls. Cars still have four wheels and planes still have two wings. We still listen to the radio and watch television. While the Computer and Genetics have introduced powerful new concepts, and computers have certainly changed lifestyles, i wonder if any of these "changes" compare with the notion of humans flying in the sky and of humans located in different cities talking to each other.... There has been rapid and dramatic change before.
Then one should discuss "change" versus "progress". Change for the sake of change is not necessarily "progress" (most changes in my software applications have
negative, not positive effects, and we all know what it means when our bank
announces "changes" in policies).
If i randomly change all the cells in your body, i may boast of "very rapid and
dramatic change" but not necessarily of "very rapid progress".
Assuming that any change equates with progress is not only optimism: it's the
recipe for ending up with exactly the opposite of progress.
As i wrote in my essay titled "Regress":
Ray Kurzweil has been popularizing the idea that exponential growth is leading
towards the "singularity".
The expression "exponential growth" is often used to describe our age. Trouble is: it has been used to describe just about every age since the invention of
exponentials. In every age, there are always some things that grow
exponentially, but others don't. For every technological innovation there was
a moment when it spread "exponentially", whether it was church clocks or
windmills, reading glasses or steam engines; and their "quality" improved
exponentially for a while, until the industry matured or a new technology
took over. Moore's law (which translates into the doubling of processing power
every 18 months) is nothing special: similar laws can be found for many of
the old inventions. Think how quickly radio receivers spread.
In the USA there were only five radio stations in 1921 but already 525 in 1923.
Cars? The USA produced 11,200 in 1903, but already 1.5 million in 1916.
By 1917 a whopping 40% of households in the USA had a telephone, up from 5% in 1900.
There were fewer than one million subscribers to cable television in
1984, but more than 50 million by 1989.
The Wright brothers flew the first plane in 1903. During World War I
(1915-18) France built 67,987 planes, Britain 58,144, Germany 48,537, Italy 20,000 and the USA 15,000, for a grand total of almost 200 thousand planes, just 15 years after the airplane's invention.
I am sure that similar statistics can be found for old inventions, all the way
back to the invention of writing.
Perhaps each of those ages thought that growth in those fields would continue
at the same pace forever. The wisest, though, must have foreseen that eventually
growth starts declining in every field. In a sense Kurzweil claims that
computing is the one field in which growth will never slow down; in fact, it will keep accelerating.
David Deutsch's "The Beginning of Infinity" (Viking, 2011) is a much more
powerful defense of that thesis (see my review).
In my Alan Turing tribute of early 2012
(Machine Intelligence vs Human Stupidity: Are we building smarter machines or dumber humans?) i argued that
it is not so much "intelligence" that has accelerated in machines (their
intelligence is the same that Alan Turing gave them when he invented his
"universal machine") but miniaturization. In fact, Moore's law has nothing
to do with machine intelligence, but simply with how many transistors one
can squeeze on a tiny integrated circuit.
There is very little that machines can do today that they could not have done
in 1950 when Turing published his paper on the "intelligence test". What has
truly changed is that today we have extremely powerful computers squeezed into
a palm-size smartphone at a fraction of the cost. That's miniaturization.
Equating miniaturization to intelligence is like equating an improved
wallet to wealth.
Kurzweil used a diagram titled "Exponential Growth in
Computing" over a century, but that is bogus because it starts with the electromechanical
tabulators of a century ago: it is like comparing the power of a windmill with
the power of a horse. Sure there is an exponential increase in power, but
it doesn't mean that windmills will keep improving by the difference between
horsepower and windpower.
Yes, technologies have changed rapidly in a very short time compared with the
millions of years that it took humans to evolve their skills.
It is, however, a fallacy to claim that "machines evolved rapidly".
Read note 2.
Predictions about future exponential
trends have almost always been wrong. Remember the prediction that the world's
population would "grow exponentially"? Now we are beginning to fear that it
will actually start shrinking (it already is in Japan and Italy). Or the
prediction that energy consumption in the West will grow exponentially?
It peaked a decade ago. As a percentage of GDP, it is actually declining
rapidly. Life expectancy? It rose rapidly in the West between 1900 and 1980
but since then it has barely moved. War casualties were supposed to grow
exponentially with the invention of nuclear weapons: since the invention of
nuclear weapons the world has experienced the lowest number of casualties ever.
Places like Europe that had been at war for 1,500 years have not had a major war
in 60 years.
Digital devices have spread dramatically over the last decade, but so did
cars at some point: in 1900 no household in the USA owned a car, in 1930 one
in two did, but by 2012 the density of cars was no longer increasing.
(For those who don't know, Kurzweil's "The Singularity Is Near" of 2005 is a revision of his 1999 book "The Age of Spiritual Machines", which was a revision of his 1990 book "The Age of Intelligent Machines", and that indeed is a kind of exponential growth.)
Marketing and Fashion
What is truly accelerating at exponential speed is fashion. This is another
point where many futurists and high-tech bloggers confuse a sociopolitical event with a technological
event. We live in the age of marketing. Even if we invented nothing,
absolutely nothing, there would still be hectic change. Change is driven by
marketing. The industry desperately needs consumers to go out and keep buying
newer models of old products or new products. Therefore we buy things we don't
need. The younger generation is always more likely to be duped by marketing
and soon the older generations find themselves unable to communicate with
young people unless they too buy the same things. Sure: many of them are
convenient and soon come to be perceived as "necessities"; but the truth is
that humans have lived well (sometimes better) for millennia without those
"necessities". The idea that an mp3 file is better than a compact disc which is
better than a record is just that: an idea, and mainly a marketing idea.
The idea that a streamed movie is better than a DVD which is
better than a VHS tape is just that: an idea, and mainly a marketing idea.
Steve Jobs was not necessarily a master of technological innovation (it is
debatable whether he ever invented anything) but he was certainly a master of
marketing new products to the masses. What is truly accelerating is the ability
of marketing strategies to create the need for new products. Therefore, yes,
our world is changing more rapidly than ever; not because we are surrounded
by better machines but because we are surrounded by better snake-oil peddlers
(and dumber consumers).
If you take into account the real causes of the high unemployment rate in the USA and Europe, you reach different conclusions about the impacts that robots (automation in general) will have. In the USA robots are likely to bring back jobs. The whole point of exporting jobs to Asia was to benefit from the lower wages of Asian countries; but a robot that works for free 24 hours a day 7 days a week beats even the exploited workers of communist China. As they become more affordable, these "robots" (automation in general) will displace Chinese workers, not Michigan workers.
The short-term impact will be to make outsourcing of manufacturing an obsolete concept. The large corporations that shifted thousands of jobs to Asia will bring them back. In the mid term, if this works out well, a secondary effect will be to put Chinese products out of the market and create a manufacturing boom in the USA: not only will old jobs come back but a lot of new jobs will be created. In the long term robots might create new kinds of jobs that today we cannot foresee. Not many people in 1946 realized that millions of software engineers would be required by the computer industry in 2012. My guess is that millions of "robot engineers" will be required in a heavily robotic future. Those engineers will not be as "smart" as their robots at whatever task those robots were designed for, just like today's software engineers are not as fast as the programs they design.
And my guess is that robots will become obsolete too at some point, replaced by something else that today doesn't even have a name.
Futurists have a unique way to completely miss the scientific revolutions
that really matter.
If i had to bet, i would bet that robots (intelligent machines in general) will become obsolete way before humans become obsolete.
I would be much more worried about the
2011 Statistics on Sales of Robots (International Federation of Robotics)
Predictions on diffusion of robots (The Atlantic)
Robo-arm controlled by thought (BBC)
Human-looking automata that mimic human behavior have been built since ancient
times and some of them could perform sophisticated movements. They were
mechanical. Today we have sophisticated electromechanical toys that can do all
sorts of things. There is a (miniature) toy that looks like a robot riding a
bicycle. Technically speaking, the whole toy is the "robot". Philosophically
speaking, there is no robot riding a bicycle.
The robot-like thing on top of the bicycle is redundant, it's there just
for show: you can remove the android and put the same gears in the bicycle
seat or in the bicycle pedals and the bike with no passenger would go around
and balance itself the exact same way: the thing that rides the bicycle
is not the thing on top of the bike (designed to trick the human eye)
but the gear that can be anywhere on the bike.
The toy is one piece: instead of one robot, you could put ten robots on top
of each other, or no robot at all.
Any modern toy store has toys that behave like robots doing some amazing thing
(amazing for a robot, ordinary for a human). It doesn't
require intelligence: just Japanese or Swiss engineering.
This bike-riding toy never falls, even when it's not moving.
It is designed with a gyroscope to always stand vertical.
Or, better, it falls when it runs out of battery.
That's very old technology. If that's what we mean by
"intelligent machines", then they have been around for a long time.
We even have a machine that flies in the sky using that technology
(so much for "exponential progress").
Does that toy represent a quantum leap
in intelligence? Of course not. It is operated by a remote control,
just like a tv set. It never "learned" how to bike. It was designed to
bike. And that's the only thing it can do. Ever.
If you want it to do something else, you'll have to add more gears of a
different kind, specialized in doing that other thing. Maybe it's possible
(using existing technology or even very old mechanical technology) to build
radio-controlled automata that have one million different gears to do every
single thing that humans do and that all fit in a size comparable to my body's
size. Congratulations to the engineer.
It would still not be me. And the only thing that is truly amazing in these toys
is the miniaturization, not the "intelligence".
A human is NOT a toy (yet).
In all cases of rapid progress in the functionalities of a machine, it is
tempting to say that the machine achieved in a few years what took humans
millions of years of evolution to achieve.
However, any human-made technology is indirectly using the millions of years
of evolution that it took to evolve its creator. No human being, no machine.
Therefore it is incorrect to claim
that the machine came out of nowhere: it came out of millions of years of
evolution, just like my nose.
The machine that is now so much better than previous models of a few years ago
did NOT evolve: WE evolved it (and continue to evolve it).
There is no machine that has
created another machine that is superior. WE create a better machine.
We are capable
of doing that because those millions of years of evolution equipped us with
some skills (that the machine does NOT have). If humans go extinct tomorrow
morning, the evolution of machines ends.
Right now this is true of all technologies. If all humans die, all technologies
die with us (until a new form of intelligent life arises from millions of years
of evolution and starts rebuilding all those watches, bikes, coffeemakers,
airplanes and computers).
Hence, technically speaking, there has been no evolution of technology.
This is yet another case in which we are applying an attribute invented for one
category of things to a different category: living beings evolve;
machines do something else, which we
call "evolve" by recycling a word that actually has a different meaning.
It would be more appropriate to say that a technology "has been evolved" rather
than "evolved": computers have been evolved rapidly (by humans) since their invention.
Technologies don't evolve (as of today): we make them evolve.
The day we have machines that survive without human intervention and that build
other machines without human intervention, we can apply the word "evolve"
to those machines.
As far as i know those machines don't exist yet, which means that there has been zero
evolution in machines so far (using the word "evolution" in its correct meaning).
Humans can build and use very complex machines.
The machine is not intelligent, the engineer that designed it is.
That engineer is the product of millions of years of evolution,
the machine is a by-product of that engineer's millions of years of evolution.