Intelligence is not Artificial

(These are excerpts from my book "Intelligence is not Artificial")

### Intermezzo: What is Intelligence?

Philosophically, progress in trendy A.I. techniques such as reinforcement learning is quite shocking: they are actually expressed in very simple mathematics. You can write the formulas in a few lines. Those formulas may look complicated to non-mathematicians, but they are actually infinitely easier than, say, Einstein's equations of gravitation. Then you need to run those few lines millions of times over a large dataset of examples (e.g. of Atari games), and the algorithms start behaving like masters of the game. Except that these algorithms don't even know the rules of the games. The Atari program was "learning" to play the games by looking at the pixels on the screen of the computer. That program has no idea what the rules of the game are, and no idea that it is playing a game. It is just a mathematical formula that gets repeated millions of times over thousands of examples. You can legitimately question whether this is "intelligence". And here the philosophers divide into two schools: those who think that "intelligence" requires a full understanding of what you are doing, and those who think that, ultimately, all our "understanding" is simply a massive iteration of simple neural algorithms. The former group keeps hoping that one of these days we will find a game that cannot be solved by simply repeating an algorithm millions of times. So far we have been humbled by the machines: machines "learned" to play increasingly difficult games and became better than us, even though they don't actually know the rules of the game (and don't even know that they are playing a game).
However, don't overrate the machine: the machine algorithm learns to play the game, and eventually beats the human masters, only if someone has designed the machine algorithm correctly. The machine is just the "learning agent" that will interact with its environment until it successfully solves the problem. The fundamental step in reinforcement learning is to capture the key features of the problem and shape the behavior of the "learning agent" accordingly. This is done by a human expert, who these days frequently frames the problem as a "Markov decision process". Machines are becoming "learning agents", but not (yet) designers of learning agents.
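To make the "Markov decision process" framing concrete, here is a minimal sketch of tabular Q-learning, the kind of simple formula repeated many times that the previous pages describe. The environment below (a five-state line world, its actions and its reward) is invented for illustration, not taken from the Atari work: the agent knows only the goal signal and the possible actions, and "selects" better behavior by repeating a one-line update thousands of times.

```python
import random

# A toy Markov decision process (invented for illustration):
# states 0..4 on a line; actions: 0 = left, 1 = right.
# Reaching state 4 yields reward +1 and ends the episode.
N_STATES, GOAL = 5, 4
ACTIONS = [0, 1]

def step(state, action):
    """Environment dynamics: return (next_state, reward, done)."""
    next_state = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
    if next_state == GOAL:
        return next_state, 1.0, True
    return next_state, 0.0, False

def q_learning(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action]
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # epsilon-greedy: mostly exploit what was learned, sometimes explore
            if rng.random() < epsilon:
                action = rng.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: Q[state][a])
            next_state, reward, done = step(state, action)
            # the core update: one line of mathematics, repeated many times
            target = reward + (0.0 if done else gamma * max(Q[next_state]))
            Q[state][action] += alpha * (target - Q[state][action])
            state = next_state
    return Q

Q = q_learning()
# The greedy policy after training: which action each state prefers.
policy = [max(ACTIONS, key=lambda a: Q[s][a]) for s in range(GOAL)]
print(policy)
```

Note what the human expert supplied here: the state space, the actions, the reward signal and the learning parameters. The agent itself never "knows" it is walking toward a goal; it only adjusts numbers in a table.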
And there is still a fundamental difference in the "learning agent" itself. We humans learn by a lot more than just "trial and error". Humans and machines use two different approaches to learning a game. Humans employ a lot of common-sense knowledge and intuition (after all, the games were designed by humans who share our knowledge of the world). The human approach is initially "instructional": someone told us how to play, or we can guess by ourselves in a few seconds how the game works. Reinforcement learning does not need to know "what" it is doing: it just needs to know what the goal is and what the possible actions are, and the machine's task is to "select" the behavior that best achieves the goal. The machine's approach is "selectional". A human player starts playing well in a few minutes. Reinforcement learning will eventually play very well, but it may take hours, days or months to learn (depending on how fast the computer is). The way you learned to ride a bicycle is a mixture of these two approaches: your parents probably told you (instructed you on) how a bicycle works, but then you had to keep trying until you got it right, every time adjusting your behavior to avoid falling and to improve stability (punishment and reward).
Reinforcement learning has always fascinated psychologists because it works only if the learning agent has a "holistic" understanding of the environment. An Atari videogame or the moves of weichi constitute a very simple environment. Humans can apply reinforcement learning in much more complex environments.
There is another reason to be fascinated (not alarmed) by reinforcement learning algorithms. As Tambet Matiisen (University of Tartu in Estonia) wrote: "Watching them figure out a new game is like observing an animal in the wild".