(These are excerpts from my book "Intelligence is not Artificial")
Living with Machines that have no Common Sense
In 2017 it is virtually impossible to get the caption "gorilla" from Google's photo indexing system, even when the picture is obviously a picture of gorillas. The reason is simple: in July 2015 that program identified a black couple as gorillas, and this became a widely publicized scandal. Google's software engineers simply changed the code of the system to make sure that a similar racial blunder would never happen again. Unfortunately, that decision also severely handicapped the program in every application that requires recognizing gorillas.
Face detection is not hard at all: it is a problem that Artificial Intelligence solved a long time ago.
The real issue is that the system is incredibly stupid: it knows absolutely nothing of the real world, and therefore it has no way to differentiate someone who looks like a gorilla from a gorilla (or me from some animal that looks like me). If you know what people and gorillas are and what they do, you can tell in a split second which ones are people and which ones are gorillas. For example, it is unlikely that a gorilla would be staring straight into the camera against a background of city traffic. In fact, you can even tell when they are people dressed in gorilla costumes, or when they are gorillas dressed in human attire.
But, because programs have no common sense, the only way to make sure that they don't say or do anything stupid is to physically erase the possibility, which inevitably affects the usefulness of the program.
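The bluntness of this kind of fix can be sketched in a few lines. This is a minimal, hypothetical illustration (the labels, names, and scores are invented; Google's actual implementation is not public), assuming the "fix" amounts to striking the offending label from the classifier's output:

```python
# Hypothetical sketch: a classifier whose embarrassing label has been
# physically erased from its vocabulary. All names and scores are
# invented for illustration.

BLOCKED_LABELS = {"gorilla"}  # labels the system may never emit

def classify(label_scores):
    """Return the highest-scoring label, skipping any blocked ones.

    label_scores: dict mapping label -> confidence score.
    """
    allowed = {label: score for label, score in label_scores.items()
               if label not in BLOCKED_LABELS}
    if not allowed:
        return None  # the program has nothing left it is allowed to say
    return max(allowed, key=allowed.get)

# A photo that really is of a gorilla now gets the second-best guess,
# however wrong, because the correct answer has been erased:
scores = {"gorilla": 0.97, "dog": 0.02, "cat": 0.01}
print(classify(scores))  # prints "dog"
```

The program never says anything offensive again, but only because it can no longer say the correct thing either: its usefulness for anything involving gorillas is gone.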
Recognizing the North Korean dictator as a watermelon could trigger a world war, so let's make sure that the program will never spit out the word "watermelon", even when it is shown pictures of watermelons in a supermarket called "Watermelonland". Recognizing the president of the USA as a wanted terrorist could lead to a grotesque shootout between the FBI and the presidential detail, so let's make sure that nobody will ever be recognized as that terrorist, not even the terrorist himself!
That would be the future.
I wrote "would be" because the other possible future is that we simply ban people from taking pictures that can confuse the program. In fact, a Google executive immediately recommended that Google users watch a video on how to light and photograph black faces.
If we ban everything that could confuse or mislead the machines, then intelligent machines are already here.
However, both solutions fail to address many other subtle issues of machines with no common sense. For example, in 2016 a high-school student, Kabir Alli, ran a very simple test: he searched for images of “three white teenagers” and “three black teenagers”. The search for “three white teenagers” turned up pictures of nice smiling teens, whereas the search for “three black teenagers” turned up mugshots of juvenile delinquents. The poor Google engineers had to intervene again, and so now (one year later) both searches turn up nothing that is even remotely offensive... but also nothing that is even remotely interesting. This was widely discussed in the media as "A.I. mirrors the racial bias hidden in society", when in fact it was more a problem of "A.I. doesn't understand the question at all". Ask any person who lives in that racially-biased society to pick a picture of three black teenagers, and that person would look for normal ones, not pictures of juvenile delinquents.