(These are excerpts from my book "Intelligence is not Artificial")
Demystifying the Turing Test
The Turing Test is the best-known test for determining whether a machine has become as intelligent as a human: a person asks questions until he or she can tell whether the answers are coming from a human or from a machine (which, of course, must be hidden from view). If the questioner cannot reach a conclusion (or reaches the wrong one), the machine has passed the test. Any apprentice philosopher can tell you that everything depends on which questions are asked. If you ask the questions that make us human, all computer programs fail the Turing Test, and they fail in awkward ways.
Linguists like to talk about the difficulty of understanding ambiguous sentences such as "Prostitutes appeal to Pope" and "Iraqi head seeks arms". But the machine's job gets even more difficult when common sense is involved. In the sentence "Carl, who died last year, was a great scientist, and his son Dale has fond memories, and he now takes care of the center", it is pretty clear to whom the "he" refers, because one of the two men is dead and therefore cannot take care of the center (or of anything else). This is not obvious to a machine that doesn't know what "dying" implies.
Ask the machine: "The doll will not fit in the box because it is too big: which one is too big, the doll or the box?" On questions like this one, a human being will be right almost 100% of the time, but the machine will be right only 50% of the time because it will simply be guessing (like flipping a coin). Ask just two such questions and, most likely, you will know whether you are talking to a machine or to a human being: a pure guesser answers both correctly only 25% of the time. The machine has no common sense: it doesn't know that, in order to fit inside a box, an object has to be smaller than the box. This is the essence of the Winograd Schema Challenge devised by Hector Levesque at the University of Toronto in 2011.
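The arithmetic above can be made concrete with a small sketch. The mini-benchmark below is in the style of a Winograd schema: each item pairs an ambiguous pronoun with two candidate referents, and only common sense picks the right one. The data format, field names, and the second example sentence are illustrative assumptions, not part of the official challenge; the point is simply that a machine with no common sense, reduced to coin-flipping between two candidates, hovers around 50% accuracy while a human scores close to 100%.

```python
import random

# A hypothetical mini-benchmark in the style of the Winograd Schema Challenge.
# The schemas and field names here are illustrative, not the official test set.
SCHEMAS = [
    {
        "sentence": "The doll will not fit in the box because it is too big.",
        "question": "Which one is too big?",
        "candidates": ["the doll", "the box"],
        "answer": "the doll",
    },
    {
        # Flipping one word flips the referent -- the hallmark of a
        # Winograd schema pair (this variant is an assumed example).
        "sentence": "The doll will not fit in the box because it is too small.",
        "question": "Which one is too small?",
        "candidates": ["the doll", "the box"],
        "answer": "the box",
    },
]

def random_guesser(schema):
    # A machine with no common sense can only flip a coin
    # between the two candidate referents.
    return random.choice(schema["candidates"])

def accuracy(answerer, schemas, trials=10_000):
    # Fraction of correct answers over many repeated trials.
    correct = sum(
        answerer(s) == s["answer"]
        for _ in range(trials)
        for s in schemas
    )
    return correct / (trials * len(schemas))

print(f"random guesser accuracy: {accuracy(random_guesser, SCHEMAS):.2f}")
```

With two candidates per question, the guesser's accuracy converges to about 0.50, and its chance of answering two independent questions correctly is 0.50 × 0.50 = 0.25, which is why a couple of such questions usually suffice to unmask it.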