Intelligence is not Artificial

by piero scaruffi



(These are excerpts from my book "Intelligence is not Artificial")

Footnote: Simplifying the (Artificial) Brain

Convolutions are not fun to implement on machines: convolutional neural networks demand large amounts of memory and computational power. Krizhevsky's AlexNet of 2012 used 61 million parameters and performed 1.5 billion operations to classify a single image. DeepFace, one of the first systems to apply deep learning to face recognition, developed in 2014 by Yaniv Taigman of Facebook and Lior Wolf of Tel Aviv University (who had both worked at Face.com before it was acquired by Facebook in 2012), classified human faces using 120 million parameters and was trained on four million facial images, with its accuracy measured on the Labeled Faces in the Wild (LFW) dataset. Karpathy's NeuralTalk system of 2014 used 130 million convolutional parameters and 100 million recurrent parameters to generate captions for images. The deep networks that came after these were orders of magnitude more complex and therefore even more demanding.
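In plain 32-bit arithmetic those parameter counts translate directly into hundreds of megabytes of weights, as a back-of-the-envelope calculation shows (a minimal Python sketch, assuming 4-byte floating-point weights and the parameter counts quoted above):

  # Rough memory footprint of the networks mentioned above,
  # assuming 32-bit (4-byte) floating-point weights.
  BYTES_PER_WEIGHT = 4
  networks = {
      "AlexNet (2012)": 61_000_000,
      "DeepFace (2014)": 120_000_000,
      "NeuralTalk (2014)": 130_000_000 + 100_000_000,  # convolutional + recurrent
  }
  for name, parameters in networks.items():
      megabytes = parameters * BYTES_PER_WEIGHT / (1024 ** 2)
      print(f"{name}: {parameters / 1e6:.0f}M parameters, about {megabytes:.0f} MB of weights")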

Several tricks were proposed to make it easier for these monsters to do their job. For example, Song Han at Stanford pioneered a technique of network compression that made it possible to fit these networks on existing chips ("Learning both Weights and Connections for Efficient Neural Networks", 2015), but it was neither easy nor cheap. In 2016 Han published "deep compression", a method to compress deep networks by an order of magnitude without losing accuracy, and designed a dedicated hardware architecture for it, the Efficient Inference Engine (EIE). Han shrank the memory requirements of AlexNet and VGG-16 by 35 times and 49 times respectively while retaining the same accuracy. In 2016 Kurt Keutzer's group at UC Berkeley (including Forrest Iandola, founder of DeepScale.ai), in collaboration with Song Han, published SqueezeNet, which achieved the same accuracy as AlexNet with 50 times fewer parameters.
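The basic observation behind this kind of compression is that most weights in a trained network are close to zero and can simply be removed. A minimal Python sketch of magnitude-based pruning only (the published "deep compression" pipeline also retrains the surviving weights, clusters them into a small codebook and Huffman-codes the result):

  import numpy as np

  def magnitude_prune(weights, sparsity=0.9):
      # Zero out the smallest-magnitude weights, keeping the largest (1 - sparsity) fraction.
      threshold = np.quantile(np.abs(weights), sparsity)
      mask = (np.abs(weights) >= threshold).astype(weights.dtype)
      return weights * mask, mask

  # Example: prune a random 256x256 weight matrix down to roughly 10% of its weights.
  w = np.random.randn(256, 256).astype(np.float32)
  pruned, mask = magnitude_prune(w, sparsity=0.9)
  print("surviving weights:", int(mask.sum()), "of", w.size)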

Another approach, low-precision convolutional networks, was pioneered by Yoshua Bengio's student Matthieu Courbariaux at the University of Montreal. His "BinaryConnect" (2015) was a "binarized neural network": it constrained each weight to just two values, +1 or -1, dramatically improving the computational efficiency of the algorithm and reducing the amount of memory required. This project evolved into BinaryNet, developed jointly with Ran El-Yaniv's group at the Technion in Israel ("Binarized Neural Networks", 2016). Another binarized neural network, called XNOR-Net, was proposed in 2016 by Ali Farhadi's group at the University of Washington, resulting in 58 times faster convolutional operations and 32 times memory savings. Meanwhile, also in 2016, Fengfu Li and Bo Zhang of the Institute of Applied Mathematics in Beijing published a study of ternary-weight networks (neural networks whose weights can take only three values, +1, 0 and -1) showing that their performance was only slightly worse than that of their high-precision counterparts ("Ternary Weight Networks", 2016). Also in 2016, Eriko Nurvitadhi of Carnegie Mellon University, working in Debbie Marr's group at Intel Labs, achieved an even better result by combining low precision with sparsity ("Accelerating Deep Convolutional Networks Using Low-precision and Sparsity", 2016).
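The quantization step at the heart of these schemes is almost embarrassingly simple. A minimal Python sketch of the forward-pass quantization only (during training the full-precision weights are kept around for the gradient updates, and the ternary-weight paper also learns a per-layer scaling factor, omitted here):

  import numpy as np

  def binarize(weights):
      # BinaryConnect-style deterministic binarization: every weight becomes +1 or -1.
      return np.where(weights >= 0, 1.0, -1.0).astype(np.float32)

  def ternarize(weights):
      # Ternary quantization to {-1, 0, +1}; the threshold 0.7 * mean(|w|)
      # follows the rule suggested in the ternary-weight-networks paper.
      delta = 0.7 * np.mean(np.abs(weights))
      return np.where(weights > delta, 1.0,
                      np.where(weights < -delta, -1.0, 0.0)).astype(np.float32)

  w = np.random.randn(4, 4).astype(np.float32)
  print(binarize(w))
  print(ternarize(w))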
