(These are excerpts from my book "Intelligence is not Artificial")
Cultural Background: The Network Age
Is the brain a neural network? No, of course not.
The brain is not just a network (and the neuron is not just a node) but we live in the age of the network. We think of everything in terms of networks. It is probably a consequence of liberal capitalism. When people lived under the dictatorship of kings and popes, the pyramid was the preferred topological model. Everything was supposed to be a hierarchy. For example, there was a hierarchy of nature with animals at the bottom, humans above them, spirits above humans, and a supreme god at the top. Now that we got rid of the hierarchy, we think of societies, economies and cities as networks. The network metaphor has become pervasive even in Physics and in Linguistics.
The millennial generation is often referred to as "the digital natives", but they are
more properly the "network natives". Whether data, texts and images are digital
or analog would make no big difference to their lives: does anyone know
the difference between the new digital and the old analog television?
The difference is that their lives (they are told) are networked. They grow
up thinking of networks: the encyclopedia is a network (Wikipedia), their social
life is a network (Facebook), their government structures health care in
"provider networks", public transportation is a network, etc.
The network is a modern invention.
The Silk Road was actually a network of trade routes, but nobody called it "Silk Network". The so-called "Peutinger Map" at the Austrian National Library in Vienna (a medieval French copy of a Roman original dating probably from the age of Augustus) is a parchment scroll, 34-cm high and 675-cm long, representing the road network of the Roman empire squeezed so as to look like a series of straight lines converging on Rome. When in 1820 Becquey proposed to build a network of canals in France, his 75-page "Rapport au Roi" never used the word "network".
Until recently, a machine was routinely shown as neatly divided into modules and the workings of the machine were routinely represented as flows of material, i.e. lines; now any manual begins with a diagram that shows the network of processes.
The buzzword (and the ruling metaphor) of the 1980s was still modularity. The French sociologist Emile Durkheim's "The Division of Labor in Society" (1893) had hailed the ubiquity of modularity (not of networking) in all human societies. For Georg Simmel, in "The Metropolis and Mental Life" (1903), city life led to the division of labor, whereas today almost all sociologists think of city life as creating networks.
Modularity had been made popular in the 1920s by architects such as
Le Corbusier, Walter Gropius and Buckminster Fuller, leading to the
modular mass housing of the 1960s such as Moshe Safdie's "Habitat 67"
in Montreal and Kisho Kurokawa's "Nakagin Capsule Tower" (1972) in Tokyo.
Modularity was also made popular by computer science: the hardware (as per John von Neumann's architecture) was represented as a set of modules, and programming languages such as Niklaus Wirth's Modula (1976) encouraged modularity in software.
Finally, the metaphor of modularity infiltrated cognitive science with
David Marr's "Vision" (1982) and Jerry Fodor's "The Modularity of Mind" (1983).
They didn't know it, but, indirectly and involuntarily, Emile Durkheim and Georg Simmel had founded "social network analysis", the study of patterns of social interactions, although it was only in 1954 that the term "social network" was coined (by the British anthropologist John Barnes in the article "Class and Committees in a Norwegian Island Parish"); and many consider "Who Shall Survive?" (1934), published by the Romanian-born psychiatrist Jacob Moreno, as the founding text of social network analysis.
As usual, it was Marshall McLuhan's "Understanding Media" (1964), the book that popularized the adage "the medium is the message", that pioneered today's "network" metaphor: "It is a principal aspect of the electric age that it establishes a global network that has much of the character of our central nervous system." The emphasis on cities came from Edward Laumann at the University of Chicago who wrote "Bonds of Pluralism - The Form and Substance of Urban Social Networks" (1973).
During the 1990s, with the invention of the World Wide Web, connectivity began to replace modularity as the ruling paradigm and now it's all about connectivity. The beneficial properties of network topologies are routinely hailed like the ideological dogmas at a meeting of a communist party. Publishers relish books such as Dutch sociologist Jan van Dijk's "The Network Society" (1991), Spanish sociologist Manuel Castells' "The Rise of the Network Society" (1996), Belgian sociologist Armand Mattelart's "Networking the World 1794-2000" (2001); or "Information Rules" (1998), subtitled "A strategic guide to the network economy", by UC Berkeley economists Carl Shapiro and Hal Varian, and "Platform Revolution" (2016), subtitled "How networked markets are transforming the economy", by Dartmouth College's Geoffrey Parker.
In 2003 the Organisation for Economic Co-operation and Development (OECD) published a report titled "Networks of Innovation" that begins with the sentence: "OECD countries are increasingly characterised as network societies". In 2014 the "Oxford Handbook of Innovation Management" included a chapter by Timothy Kastelle and John Steen on "Networks of Innovation" which declares that "Networks are fundamental to understanding and managing innovation".
There is a quasi-religious yearning in the prophecies of how the forces of connection will defeat the forces of division, a yearning expounded by books such as Parag Khanna's "Connectography - Mapping the Future of Global Civilization" (2016).
Social-network analysis rediscovered graph theory, invented by the Swiss mathematician Leonhard Euler in 1736 to solve the problem known as "The Seven Bridges of Koenigsberg", as well as the more recent "random graphs" introduced in 1959 by the Hungarian mathematicians Paul Erdos and Alfred Renyi.
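The "random graph" of Erdos and Renyi is simple enough to sketch in a few lines. In the G(n, p) model, each of the n*(n-1)/2 possible edges between n nodes is included independently with probability p. A minimal Python sketch (the function name is mine, not from any library):

```python
import random

def erdos_renyi(n, p, seed=0):
    """Sample a G(n, p) random graph: each of the n*(n-1)/2 possible
    edges among n nodes is included independently with probability p."""
    rng = random.Random(seed)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if rng.random() < p]

edges = erdos_renyi(100, 0.05)
# expected number of edges: 0.05 * 100*99/2 = 247.5
print(len(edges))
```

The striking property Erdos and Renyi studied is how global features (like a giant connected component) appear abruptly as p crosses a threshold.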
In 1998 David Krackhardt and Kathleen Carley of Carnegie Mellon University invented Dynamic Network Analysis (aka DNA, but not related to the genetic kind) and preached that networks occur across multiple domains and at different levels (meta-networks or high-dimensional networks).
In 1999 the Hungarian physicist Albert-Laszlo Barabasi at the University of Notre Dame in Indiana focused on "scale-free" networks, i.e. networks whose degree distribution follows a power law: according to him these networks are ubiquitous in natural, technological and social systems. Barabasi founded the Center for Complex Network Research at Northeastern University (to study how networks emerge, what they look like, and how they evolve) and the Network Science Society; and wrote the book "Linked - The New Science of Networks" (2002). Quote: "Networks are present everywhere."
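The mechanism behind Barabasi's scale-free networks is "preferential attachment": the network grows one node at a time, and each new node links to existing nodes with probability proportional to their current degree, so well-connected nodes keep getting better connected. A toy sketch in Python (not Barabasi's own code, just an illustration of the growth rule):

```python
import random
from collections import Counter

def preferential_attachment(n, m=2, seed=0):
    """Grow a graph one node at a time; each new node attaches to m
    distinct existing nodes picked with probability proportional to
    their degree (the 'rich get richer' rule behind power laws)."""
    rng = random.Random(seed)
    edges = []
    # 'repeated' lists each node once per incident edge, so a uniform
    # pick from it is a degree-proportional pick
    repeated = list(range(m))  # seed nodes, so the first picks are possible
    for new in range(m, n):
        targets = set()
        while len(targets) < m:
            targets.add(rng.choice(repeated))
        for t in targets:
            edges.append((new, t))
            repeated.extend([new, t])
    return edges

edges = preferential_attachment(500, m=2)
degrees = Counter(node for edge in edges for node in edge)
# a few hubs end up with far more links than the average degree of ~4
print(max(degrees.values()))
```

Unlike an Erdos-Renyi graph, where degrees cluster around the average, this growth rule produces a handful of hubs and a long tail of poorly connected nodes.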
For the record, later studies, for example by Aaron Clauset and his student Anna Broido at the University of Colorado, seemed to prove the opposite, that the power law is rare in real-world networks ("Scale-free Networks are Rare", 2018). But power laws had become a popular meme in their own right after the physicist Kenneth Wilson at Cornell University had shown why they pop up in all sorts of phase transitions (regardless of the material involved) by linking them to a new mathematical object that the French mathematician Benoit Mandelbrot at IBM was studying at the same time for completely different reasons (see the legendary paper "How Long Is the Coast of Britain?" of 1967) and to which he would eventually give a name in his book "Fractals" (1975). Wilson's intuition marked the birth of "renormalisation group theory" which is now widely used in physics ("Renormalization Group and Critical Phenomena", 1971). Further boosting the popularity of power laws, one decade later the Danish physicist Per Bak observed power-law behavior in complex nonlinear systems and jumpstarted another whole new discipline (that took the name from another legendary paper, "Self-organized Criticality" of 1987).
The "network effect" is one of the most quoted "effects", although nobody really knows what it is. The first influential person to talk about the "network effect" was AT&T's president Theodore Vail in 1908: the value of a network is proportional to how many people use it. Similarly, David Sarnoff, the mogul of RCA from 1919 until 1971, stated that the value of a network in the broadcast industry is proportional to the number of viewers. In 1980 Robert Metcalfe, the inventor at Xerox PARC of the local-area networking technology Ethernet, stated that the value of a network is proportional to the square of the number of devices. In the age of the Internet and of social media, the network effect became a must in the business plan of Internet startups looking for venture capital, and also for honest economists trying to explain how trivial ideas like WhatsApp ended up being worth $18 billion.
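The competing formulas behind these claims are simple enough to state in code. Both laws only claim proportionality, so the constants here are mine, purely for illustration:

```python
def sarnoff_value(n):
    """Sarnoff: a broadcast network's value grows linearly
    with the size of the audience."""
    return n

def metcalfe_value(n):
    """Metcalfe: value grows with the number of possible pairwise
    connections among n devices, n*(n-1)/2, i.e. roughly n squared."""
    return n * (n - 1) // 2

for n in (10, 100, 1000):
    print(n, sarnoff_value(n), metcalfe_value(n))
# doubling the audience doubles the Sarnoff value
# but roughly quadruples the Metcalfe value
```

The quadratic growth is what makes the argument so seductive to startups: a modest increase in users promises a disproportionate increase in value.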
No wonder then that, in our age, we should be thinking of the brain as a network. The idea originated in neuroscience around 1911 with Edward Thorndike.
In a sense, the other school of A.I., the symbolic knowledge-based school, still belonged to a transitional era between the king and democracy, an era in which a code of laws (represented by mathematical logic) was driving all the processes. Maybe this school was doomed to fade away, regardless of its scientific merits, simply because it didn't fit well with the "network" paradigm.
However, the fact that the network has become the favorite topology of the 21st century doesn’t mean that everything is indeed just a network. A century from now our descendants might laugh at our simplistic view of the brain the same way that today we laugh at Descartes’ view of the brain as a hierarchy.
Gerald Edelman and Giulio Tononi in their book "A Universe of Consciousness" (2000) explain that there are many different types of neurons in the human brain (more than 70 in the eye's retina alone), that no two neurons are alike,
and that neurons don't even fire at a constant rate;
but also that there are different topologies in the brain: some regions are networks (notably the thalamo-cortical system, although this network is better viewed as a network of networks, a network of specialists), other regions are long loops (notably between the cortex, the cerebellum and the hippocampus) and other regions are fans (the nuclei responsible for categorization and action, which project into the whole brain).
The various regions of the neocortex are organized into columns and layers.
Communications between neurons can take place via more than 50 different kinds of neurotransmitters.
In 2017 Ido Kanter, a physicist at Bar-Ilan University, published a study that contradicts the stereotype of how neurons communicate. Traditionally, we assumed that each neuron sums up all the signals from other neurons and, when this quantity reaches a threshold, the neuron fires its own signal to other neurons; but Kanter's team discovered that a neuron contains many independent excitable places, each acting as a threshold unit that sums up the incoming signals.
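The difference between the two pictures can be caricatured in a few lines of Python; the numbers and the grouping of inputs are made up, purely for illustration:

```python
def point_neuron(inputs, threshold=1.0):
    """Classic caricature: the neuron has one summation site and
    fires when the total input crosses a single threshold."""
    return sum(inputs) >= threshold

def multi_site_neuron(grouped_inputs, threshold=1.0):
    """Toy version of Kanter's picture: several independent excitable
    sites, each summing only its own subset of inputs against its own
    threshold; the neuron fires if any site crosses its threshold."""
    return any(sum(group) >= threshold for group in grouped_inputs)

inputs = [0.4, 0.4, 0.4]
print(point_neuron(inputs))                     # True: 1.2 >= 1.0
print(multi_site_neuron([[0.4], [0.4, 0.4]]))   # False: no single site reaches 1.0
```

The same total input can fire the point neuron but leave the multi-site neuron silent, which is why the two models can behave very differently on identical signals.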
The neural networks of Artificial Intelligence resemble the neural networks of brains the same way that a car resembles a horse: the car can travel faster than a horse and offers a lot more comfort, but is not a horse.
A neural network needs to see thousands of images of an object before it can recognize that object. A human brain doesn't go through this kind of training.
A human can learn to recognize and write a letter of a foreign alphabet after just seeing one example of it.
I don't need to see a thousand different versions of the word "mouth" in Chinese: i have seen it once and memorized it, and now i can recognize it wherever i see it, in whichever size, color and shape, and even if the paint is faded and even if someone has scribbled over it.
A neural network treats an image as a pattern of pixels and, after seeing many patterns that belong to the same thing, constructs the model that will help correctly classify future instances of that pattern. My brain simply remembers how to write the character: a square with two strokes at the bottom.
Sometimes i simply remember a Chinese character by analogy with an object. The Chinese character for "enter" looks just like the letter lambda in Greek. If you straighten the top, it becomes the character for "person". And if you add a straight line where the two curves join, it becomes the character for "big".
My brain can employ several different forms of reasoning to remember a character that a neural network can learn only after seeing thousands of examples. More importantly, the human brain can learn quickly how to act in dangerous situations. Sure, each encounter makes us better at avoiding danger, just like a neural network, but we normally survive the first one. A neural network trained to learn how to cross the street has to die thousands of times before it learns how to do it without being run over by cars.
Perhaps a better paradigm for the 21st century would be the one proposed by the German sociologist Niklas Luhmann in his milestone "Theory of Society" (1997), which is really a theory of communication: social systems are systems of communication. Luhmann found a connection between human society, cybernetics, autopoiesis (the process popularized by Chilean biologist Humberto Maturana) and the favorite math book of the counterculture, "Laws of Form" (1969), a quasi-mystical treatise written by the British mathematician George Spencer-Brown.
"Humans cannot communicate; not even their brains can communicate; not even their conscious minds can communicate. Only communication can communicate" (Niklas Luhmann).