Intelligence is not Artificial

Why the Singularity is not Coming any Time Soon And Other Meditations on the Post-Human Condition and the Future of Intelligence

by piero scaruffi


(These are excerpts from my book "Intelligence is not Artificial")

Teaser: Machine Ethics

If we ever create a machine that is a fully functioning brain, equivalent in every respect to a human brain, will it be ethical to experiment on it? Will it be ethical to program it? Will it be ethical to modify it, and to destroy it when we are done with it?

The discussion about "machine ethics" is usually about ethics towards humans, not about ethics towards machines (what is good behavior by a human towards a machine?) or between machines (what is good behavior by a machine towards another machine?). Morality between machines would entail a discussion of what machine values are, which would be a fascinating topic, but we are humans and don't really care what machines do to each other.

Morality towards humans should be an easier discussion, except that, alas, humans are far from having reached a consensus on what is ethically correct. For example, i personally don't consider religions to be very ethical (often, just the opposite), but apparently i am outnumbered by at least one billion Muslims and more than one billion Christians and Jews. And many of them, in turn, may not consider Darwin or Einstein role models. In other words, we humans are a far cry from agreeing on what is good and what is bad.

Last but not least, we humans change our minds all the time about what is ethical and what is not. Not long ago it was really bad for an unmarried woman not to be a virgin (now it is almost the opposite), and not long ago it was really bad to say that gods don't exist (soon it will be the opposite). I am not even sure that our changing morality constitutes "moral progress": the West, for example, has gone back and forth on homosexuality since Greek times. We cannot rationally prove that premarital sex is good or bad, only that right now and right here, to this generation of Westerners, it looks ok. It seems obvious to me that we shouldn't even think of teaching ethics to machines: we humans have killed, enslaved and oppressed way too many fellow humans in the name of our ethical principles.

In 2001 Eliezer Yudkowsky founded the field of "Friendly A.I.", according to which it is imperative to design A.I. systems in such a way that they can never become dangerous to us. Steve Omohundro, however, warned that machines programmed to improve their skills, no matter how narrow those skills originally are, may develop an uncontrolled utilitarian "drive" that will make them the equivalent of human sociopaths ("The Basic A.I. Drives", 2008). Wendell Wallach, Colin Allen and Iva Smit introduced the term "artificial moral agents" ("Machine Morality", 2008). With all due respect for these thinkers (much smarter and more knowledgeable than me), i find their philosophical arguments weak and their mathematical "proofs" even weaker. I am not sure that i would want a machine that will never, under any circumstances, kill a human being: my father would have died in a Nazi concentration camp if bombs and machine guns had refused to kill Nazis. Hence sometimes it is ok to harm humans. But we have no consensus on when that is the case, and the last people whom i would trust with that decision are the software engineers.

Even creating an omnipotent, supremely good god (if we could do it) isn't quite as "good" an idea as it sounds. In 2010 a user named Roko on the blog LessWrong, founded by the same Eliezer Yudkowsky one year earlier, imagined an artificial intelligence, the Basilisk, created for only one purpose: to achieve the greatest good for the greatest number of people. One of the conclusions that such a machine is likely to reach is that, in order to achieve its goal, it has to torture and possibly eliminate anybody who does not contribute to that goal, for example any software engineer who doesn't help improve the Basilisk itself, or any writer like me who merely wonders whether the Basilisk would be a good or bad idea. Would you want this omnipotent machine programmed to achieve the greatest good for the greatest number of people, at the cost of torturing and eliminating anybody who doubts or (anathema) opposes it? Trivia: Roko's Basilisk became an Internet meme precisely because Yudkowsky banned discussion of it. Obviously Yudkowsky never studied the history of taboos.

Incidentally, we humans still have to agree on the source of morality. Several biologists, from Charles Darwin in person to Frans de Waal (who published "Primates and Philosophers" in 2006), think that morality evolved naturally, and not only in humans; whereas Thomas Hobbes thought that we are amoral animals kept in line by the brute force of the state (the "leviathan"). That debate never ended.


