The Year 2017 in A.I.

by Piero Scaruffi



The year 2017 was the year of A.I. The New York Times called it "The Great A.I. Awakening" (14 December 2016). Having lived through the euphoria of the 1980s, i cannot remember a year when A.I. so clearly dominated the conversation, both in the USA and abroad.

In fact, any review of 2017 should start with China, not the USA. In July 2017, the Chinese government launched a national program to become the leading A.I. power by 2030. In August, the People's Daily announced that China plans to add A.I. courses to primary education. This comes after China had already been very active in the field: in 2016 it published more papers on neural networks than the USA. No surprise, then, that, mostly unnoticed in the USA, Chinese teams won most if not all of the major A.I. competitions: in May 2017 Tsinghua University won the Arnold Foundation's million-dollar challenge; in August 2017 Nanjing University won the ILSVRC ("ImageNet"); in October 2017 the Harbin Institute of Technology and iFlyTek took first place on Stanford's reading-comprehension test (SQuAD); in October 2017 Megvii beat Facebook and Google at Microsoft's COCO object-recognition challenge; in November 2017 Yitu won the first Face Recognition Prize Challenge; and in January 2018 models from Alibaba and from Microsoft's Chinese labs not only topped Stanford's reading-comprehension test but also surpassed human performance at it for the first time in history. The Chinese media are reporting A.I. applications in just about every sector. In October 2017 the robot Xiaoyi (made by Tsinghua University and iFlyTek) passed the medical licensing examination. A new hotel in Hangzhou is called "The A.I. Hotel" and the Shanghai airport has an "A.I. Store". We can argue about the real achievements of Chinese A.I. so far, but there is no doubt that China is probably the country where A.I. is having the biggest impact. For the record, China graduates ten times more STEM students than the USA (and it uses the metric system and writes the date year/month/day, whereas the USA is stuck with gallons, miles, feet, and a way to write the date that makes no sense whichever way you read it - rationality begins with the simplest things).

In the West the main topic of conversation was still DeepMind's AlphaGo or, rather, its follow-ups: AlphaGo Zero (which learned to play go/weichi like a master in a few hours, by itself) and AlphaZero (which also learned to play other games). All articles on these systems should begin with the disclaimer that Google has not released them, so nobody outside Google has been able to use them. I am not implying that Google's claims are false, just that there is only so much we can say about them. All opinions about their performance are as tentative as any opinion on the Trump-Russia collusion scandal without seeing Trump's tax returns.
Obviously, a system that can only do one thing, no matter how well it does it, is not much more impressive than a clock or a pocket calculator (two devices that do only one thing and do it a lot better than any human being). But playing go/weichi is considered much more difficult than other tasks, so the achievement is undeniable. AlphaZero also learned to play other games, although, again, this remains "narrow A.I." until the day that it can learn anything, not just three or four tasks. Personally, i was intrigued by two aspects of the AlphaGo project. The first one is that it is based on very old ideas. Reinforcement learning has been around (in computation) since at least the 1950s (Minsky's own thesis was on reinforcement learning). Monte Carlo search methods have been around even longer. DeepMind has certainly refined both but, at the end of the day, the ideas are relatively simple; which makes you wonder: if the A.I. of 1956 had had the datasets and the computational power of 2016, would McCarthy and Minsky have solved go/weichi back then? Or, more humbly: how much of AlphaGo's progress is due to the datasets and the GPUs (i.e. to "brute force"), and how much to intellectual breakthroughs? The second notable aspect is that the later systems learned without any need for a dataset. AlphaGo learned from a dataset of games played by human masters, but AlphaGo Zero and AlphaZero didn't even need to learn from humans: they simply played against themselves, nonstop, at astronomical speed. Hence the philosophical question: is that really all there is to intelligence? Is it only about "brute force"?
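
To make the "old ideas" concrete, here is a minimal sketch in Python (obviously not DeepMind's code) of reinforcement learning driven purely by self-play: a tabular agent that learns the toy game of Nim (21 sticks, take 1 to 3, whoever takes the last stick wins) by playing only against itself, with no human games at all. The game, the hyper-parameters and all the names are my own illustrative choices.

import random
from collections import defaultdict

N_STICKS = 21
ACTIONS = (1, 2, 3)
ALPHA, EPSILON, EPISODES = 0.1, 0.2, 20000

# Q[(sticks_left, action)] is the value from the perspective of the player to move.
Q = defaultdict(float)

def legal(s):
    return [a for a in ACTIONS if a <= s]

def best(s):
    return max(legal(s), key=lambda a: Q[(s, a)])

for _ in range(EPISODES):
    s = N_STICKS
    while s > 0:
        # epsilon-greedy self-play: the same Q table plays both sides
        a = random.choice(legal(s)) if random.random() < EPSILON else best(s)
        s_next = s - a
        if s_next == 0:
            target = 1.0  # the mover took the last stick and wins
        else:
            # negamax backup: the opponent's best outcome is our worst
            target = -max(Q[(s_next, b)] for b in legal(s_next))
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])
        s = s_next

# After training, the greedy policy should leave a multiple of 4 sticks
# whenever possible, which is the known optimal strategy for this game.
for s in range(5, 22):
    print(s, "-> take", best(s))

Of course this toy agent masters a game whose optimal strategy fits in one sentence; the point is only that the recipe (play against yourself, back up the outcomes) is conceptually the same, and the rest is scale.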

The other much-talked-about program of the year was Pix2pix. Alexei Efros' students at UC Berkeley created a neural network (CycleGAN) that can turn the picture of a horse into the picture of a zebra using the so-called "generative adversarial networks" that became popular in 2015. They also developed another kind of "adversarial network", the "Pix2pix" model, capable of generating images from sketches or abstract diagrams: you sketch a house, and the neural network generates the picture of a house. When they released the related Pix2pix software, it started a wave of experiments (many of them by professional artists) in creating images: sketch your desired handbag and the system displays what appears to be a real handbag, and even colors it. Later in 2017 Ming-Yu Liu's team at Nvidia used a slightly different architecture for their image-to-image translation system UNIT, i.e. variational autoencoders coupled with generative adversarial networks. (P.S. of February 2018: A friend alerted me to the phenomenon of "deepfake videos" and to the Reddit community /r/deepfakes for creating fake porn videos, started in November 2017 by a "redditor" called Deepfakes, who popularized face-swapping software called FakeApp created by another user, Deepfakeapp.)
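
For readers curious about what "adversarial" means in practice, here is a minimal, hedged sketch in Python with PyTorch (not the Berkeley or Nvidia code): a generator learns to turn random noise into samples from a target distribution, here a one-dimensional Gaussian rather than images, while a discriminator learns to tell real samples from generated ones. Network sizes, learning rates and the target distribution are arbitrary choices for illustration.

import torch
import torch.nn as nn

torch.manual_seed(0)

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))  # noise -> fake sample
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))  # sample -> real/fake logit

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def real_batch(n=64):
    return 3.0 + 0.5 * torch.randn(n, 1)  # "real" data: samples from N(3, 0.5^2)

for step in range(5000):
    # 1) train the discriminator to separate real samples from generated ones
    real = real_batch()
    fake = G(torch.randn(64, 8)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) train the generator to fool the discriminator
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

samples = G(torch.randn(1000, 8))
# mean and std should roughly approach 3 and 0.5
print("generated mean/std:", samples.mean().item(), samples.std().item())

Pix2pix and CycleGAN add to this basic two-player game a conditioning image (the sketch, or the horse photo) and extra reconstruction losses, but the adversarial core is the same loop.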

Again, a few words of caution. This kind of neural network can generate realistic images of objects that don't exist. However, computer-based visual effects have been around in the entertainment industry since at least the 1980s, such as Lucasfilm's "particle systems" method of 1983 and CalTech's "solid primitives" method of 1984. The Quantel Paintbox workstation was used for the visual effects of the video of Dire Straits' "Money For Nothing" (1985), and Pacific Data Images created the morphing visual effects in the video for Michael Jackson's "Black or White" (1991) using the Beier-Neely algorithm.

In fact, 2017 was also the year when a few more people came out to warn against the A.I. hype (when i published my book "Intelligence is not Artificial" in 2013, i felt terribly lonely!). First came Geoffrey Hinton's talk "What is Wrong with Convolutional Neural Nets". Then he was quoted as saying of back-propagation (https://www.axios.com/artificial-intelligence-pioneer-says-we-need-to-start-over-1513305524-f619efbd-9db0-4947-a9b2-7a4c310a28fe.html): "My view is throw it all away and start again". And then he published his paper on "Capsule Nets", which overnight became the new hot topic of discussion in A.I. Meanwhile, in October, the other famous founder of "deep learning", Yann LeCun, told The Verge (https://www.theverge.com/2017/10/26/16552056/a-intelligence-terminator-facebook-yann-lecun-interview): "All you're seeing now - all these feats of AI like self-driving cars, interpreting medical images, beating the world champion at Go and so on - these are very narrow intelligences, and they're really trained for a particular purpose... We're very far from having machines that can learn the most basic things about the world in the way humans and animals can do". (Tip for LeCun: you don't sell a lot of copies of your book if you say this, i know from personal experience.)

Jaron Lanier posted a scathing critique on Edge titled "The Myth Of AI" (https://www.edge.org/conversation/jaron_lanier-the-myth-of-ai), which is complemented by similarly skeptical comments from an interdisciplinary cast of scholars (George Church, Peter Diamandis, Lee Smolin, Rodney Brooks, Nathan Myhrvold, George Dyson, Pamela McCorduck, Sendhil Mullainathan, Steven Pinker, Neil Gershenfeld, D.A. Wallach, Michael Shermer, Stuart Kauffman, Kevin Kelly, and others).

For those who are beginning to get annoyed by the A.I. hype, there is also a funny article by Rachel Metz, "And the Award for Most Nauseating Self-driving Car Goes to", published in January 2018 in the MIT Technology Review (https://www.technologyreview.com/s/609938/and-the-award-for-most-nauseating-self-driving-car-goes-to): she tested a few self-driving cars and reported how annoyingly stupid they are.

The last piece of bad news came from an innocent-looking paper that took a while to resonate with the A.I. crowd. One of the most promising techniques to overcome some of the limitations of A.I. is "variational inference", a descendant of probabilistic inference. The fundamental limitation of neural networks is that they need to be "trained" with thousands if not millions of examples before they can "learn". A child can usually learn a new concept from a single example, creating a generalization that she will be able to apply to similar objects or situations. Some think that probabilistic inference may be a better way to simulate human intelligence than neural networks. In fact, probabilistic induction was one of the very first proposals for artificial intelligence: Ray Solomonoff presented his "Inductive Inference Machine" at the first A.I. conference of 1956. Geoffrey Hinton himself was involved when he co-invented the "wake-sleep" algorithm (1995). Michael Jordan at UC Berkeley, Joshua Tenenbaum at MIT, David Blei at Columbia, Max Welling at the University of Amsterdam, and Daan Wierstra at DeepMind are some of the towering figures in this field. A boost to the field came ten years ago from the "Theory Theory" of how children learn new concepts, advanced by one of the world's most famous developmental psychologists, Alison Gopnik of UC Berkeley. Joshua Tenenbaum was one of the authors of the paper "How to Grow a Mind" (2011), which argued that hierarchical Bayesian models underlie all of our cognitive life. Alas, one of our greatest chaos mathematicians, Jim Crutchfield of UC Davis, may have just demonstrated that probabilistic induction is not suited for non-linear systems, no matter what.
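
To give a flavor of what "variational inference" actually computes, here is a minimal, hedged sketch in Python with PyTorch (not taken from any of the papers above): we fit an approximate Gaussian posterior to a toy model whose exact posterior is known, by maximizing the evidence lower bound (ELBO) with the reparameterization trick. The model (mu ~ N(0,1), observations x_i ~ N(mu,1)), the data and all the settings are illustrative assumptions.

import torch

torch.manual_seed(0)
x = 2.0 + torch.randn(20)  # 20 observations drawn around mu = 2

m = torch.zeros(1, requires_grad=True)      # variational mean of q(mu)
log_s = torch.zeros(1, requires_grad=True)  # log of variational std of q(mu)
opt = torch.optim.Adam([m, log_s], lr=0.05)

for step in range(2000):
    s = log_s.exp()
    eps = torch.randn(64)
    mu = m + s * eps  # reparameterized samples from q(mu)
    # Monte Carlo estimate of E_q[log p(x|mu)], up to an additive constant
    log_lik = -0.5 * ((x.unsqueeze(0) - mu.unsqueeze(1)) ** 2).sum(dim=1).mean()
    # KL(q || prior) in closed form for two Gaussians
    kl = 0.5 * (s**2 + m**2 - 1.0 - 2.0 * log_s).sum()
    loss = -(log_lik - kl)  # negative ELBO
    opt.zero_grad(); loss.backward(); opt.step()

# The exact posterior here is N(sum(x)/(n+1), 1/(n+1)); q should roughly match it.
n = x.numel()
print("q mean/std:    ", m.item(), log_s.exp().item())
print("exact mean/std:", (x.sum() / (n + 1)).item(), (1.0 / (n + 1)) ** 0.5)

Note that the same machinery yields a sensible (just broader) posterior even with a single observation, which is the kind of one-shot updating that this paragraph alludes to.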

Yes, i know: there were also WaveNet, PixelCNN and PixelRNN, Lip Reading, Relation Networks, and SketchRNN (one corporation is really good at P.R.); but all of these things were possible before, and it is too early to claim that deep learning does them better.

Is A.I. an existential threat to humanity? Not any time soon. Is A.I. overhyped? Of course, big time. Just ask Google: how many lives has DeepMind's A.I. saved so far? Or, if you are the Wall Street kind of person: how many billions of dollars has DeepMind's A.I. made so far? Unfortunately, the answer is zero on both counts. To put things in perspective, in 1909 Fritz Haber invented (and Carl Bosch then industrialized) a process to produce ammonia fertilizer, which is probably the single most important reason why today's world population is more than four times what it was in that year.

Is A.I. useful? Of course. And it will be more and more useful as it becomes more robust, reliable and... hmmm... intelligent! NASA and Google used a neural network to find an eighth planet around the star Kepler-90. Insilico Medicine's Russian labs used a neural network to discover molecules that can fight cancer. A few years ago i was writing about the possibility of using neural networks to scan medical images for health diagnostics, and this year we have seen multiple papers about just that, one from Stanford about detecting skin cancer (and this can indeed save millions of lives). Machine translation and speech recognition have certainly improved, although they are still wildly unreliable (it's not just that they make many mistakes: the silly mistakes they make could start a nuclear war). Maybe it is time to stop calling it "Artificial Intelligence" and just start calling it "Automation". Call it "automation" and the philosophical discussions become more sensible.

Last but not least: study math. That's what A.I. is: no magic, just computational math. And maybe start studying Chinese...