Coexisting with Artificial Intelligence.
There is a lot of talk about the "threat" posed by Artificial Intelligence, based on the assumption that machines are already more intelligent than humans, and that their intelligence is growing rapidly.
The current brand of Artificial Intelligence is known as "Deep Learning" and is based on a technology called "Artificial Neural Networks", which was inspired by the network of neurons in the brain. Deep Learning was originally conceived to "recognize": recognize written characters, words, sentences, images. But over the last decade it has been shown that it can also "generate": generate text, images, videos and actions. Basically, there have been three ages of computers: in the first age they did calculations (faster and more accurately than any human mathematician), in the second age they learned to recognize things, and now they have learned to generate (text, images, sentences, videos, entire reports). That's why today "Deep Learning" is known as "Generative AI".
From the beginning, from the very first computer of 1951, which was only capable of arithmetic, people saw computers as "electronic brains". That was the expression used already in the 1950s. So speculation about whether these machines are becoming as intelligent as humans, or more intelligent, is nothing new. Read my book "Intelligence is not Artificial" for a lot of trivia about the early days of AI in the 1950s and 1960s.
The current crop of generative AI systems like GPT, Gemini, LLaMA, Claude and DeepSeek from China cannot be distinguished from a human when they write an answer to a question, and so they technically pass the famous "Turing Test". If you have never heard of it, read about it in my book. Philosophers spent seven decades discussing whether a machine can pass the Turing Test. Now we know the answer: yes. That was considered the test to determine whether a machine has become intelligent (not by Turing, but by those who came after Turing - Turing was more cautious).
So are they brains? Are they minds? Let's start with the basics. The human brain is made of Carbon, Hydrogen, Oxygen, Nitrogen, Phosphorus and Sulfur, usually abbreviated as "CHNOPS". A computer is made of silicon and copper. The 86 billion neurons of the human brain are all different in shape and size (and, probably, function). The billions of neurons of an artificial neural network are all the same. The neurons of our brain communicate via many different neurotransmitters. The artificial neurons of an artificial neural network use only one way to communicate: electricity. The neurons in our brain function in a fundamentally different way from artificial neurons, which are ultimately reducible to zeros and ones. Our brain is part of a nervous system that extends beyond the head, and in fact it has no border: there is no place where the "hardware" of the mind ends. The brain doesn't just manipulate neurons: for example, it also controls the endocrine system (i.e. hormones). Neurons are cells, with the complexity of any biological cell. For example, they contain mitochondria. And of course inside a neuron there is a nucleus with DNA. On the other hand, the artificial neurons of AI are ultimately zero-one switches, just like any other software program.
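To make the contrast concrete, here is a minimal sketch (not the code of any specific AI system) of everything a single artificial "neuron" computes: a weighted sum of numbers followed by a fixed nonlinearity. No neurotransmitters, no mitochondria, no DNA.

```python
import math

def artificial_neuron(inputs, weights, bias):
    # weighted sum of the incoming signals
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # squash the result into (0, 1) with a sigmoid nonlinearity
    return 1.0 / (1.0 + math.exp(-total))

print(artificial_neuron([0.5, 0.1, 0.9], [1.2, -0.7, 0.3], bias=0.1))
```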
So our brains and the hardware of an AI are physically different systems: if an AI has a brain, it is a wildly different kind of brain than the ones found in nature. I argue that it shouldn't be called "brain". If you find a giant palm leaf that works like an umbrella, you don't call it "umbrella" simply because you can use it as an umbrella: you call it palm leaf.
There is also something in the function that makes them fundamentally different from biological brains. The mind that is generated by my brain runs on just one piece of hardware: my brain. I cannot run my mind on your brain. Each person has a different intelligence, and that's why we say that some people are more or less intelligent than others. It would be more appropriate to say simply that people are intelligent in different ways. On the other hand, the artificial mind of an Artificial Intelligence can run (potentially) on any computer, i.e. on any hardware. It can exist simultaneously in many machines. Many machines can run the same Artificial Intelligence, and all these copies of the same AI can communicate and exchange knowledge. All those machines are intelligent in exactly the same way; they have the exact same memory and the exact same "thoughts". The main feature of an AI that we envy is that it is virtually immortal (it can theoretically be ported from hardware to hardware forever and ever), whereas my mind will presumably die when my brain dies. That's another big difference between biological brains and artificial neural networks.
In my opinion, human intelligence and Artificial Intelligence are fundamentally different things. We have created a different kind of intelligence. In fact, i claim that we are creating a different living species. So far we've only had living species that are made of biological cells. This would be a species that is made of silicon chips. That's why i think that "Artificial Intelligence" is a misnomer. It makes one think that it is just like human intelligence when in fact it is something altogether different. An AI is a new living species that (as of today) could even reproduce, but in practice it doesn't need offspring because it can itself mutate and evolve forever. In this respect, AIs are more similar to bacteria than to animals.
I think that today's AI (Deep Learning, the Transformer architecture, large language models and so on) is in fact moving in the opposite direction of human intelligence. AI is following the direction of its success stories, not the direction of "how similar it is to human intelligence". AI is not only different from human intelligence, but it is diverging more from human intelligence with every new success story.
AI feels like a continuation of traditional software programming because it is still programmed the same way as, for example, the apps on your smartphone. The builders of the prehistoric computers Z1 and ENIAC discovered that one can do wonderful things with electronic switches. Those up/down switches can be used to implement Boolean algebra, in which variables can only have a value of zero or one, and that Boolean algebra can be used to create programming languages, which can be used to program computers for all sorts of things. In reality, it is likely that AI needs a different kind of hardware. On a different kind of hardware it would probably be a lot easier to implement large language models, image recognition and speech recognition, which are not really Boolean operations. As long as AI is based on, let's call it, "Boolean" hardware, AI is trying to achieve human performance on wildly different hardware than our brain, and that probably explains why it keeps diverging from the way our brain is structured and works.
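A minimal sketch of that chain of ideas: two-state switches implement Boolean operations, and Boolean operations compose into arithmetic. This half-adder adds two one-bit numbers using only XOR and AND, which is, conceptually, all that "Boolean" hardware ever does, whether it runs a spreadsheet or a neural network.

```python
def half_adder(a: int, b: int) -> tuple[int, int]:
    total = a ^ b   # XOR gives the sum bit
    carry = a & b   # AND gives the carry bit
    return total, carry

for a in (0, 1):
    for b in (0, 1):
        total, carry = half_adder(a, b)
        print(f"{a} + {b} = carry {carry}, sum {total}")
```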
So in my opinion, AI and human intelligence are fundamentally different. Deep Learning does not in fact study human intelligence: it is a study of physical systems. In 2024 John Hopfield and Geoffrey Hinton were awarded the correct kind of Nobel Prize: the one in Physics. They discovered something about Physics, not about human intelligence. Calling their networks "neural" is misleading. They discovered something about energy-based networks. In 1982 John Hopfield discovered that a spin-glass system can store and retrieve information. Storage and retrieval are implemented by the attractors of an energy function and by the way the function "converges" to those attractors. Most neural networks of the time were feed-forward networks, i.e. they process information in one direction only. A Hopfield network is instead a "recurrent" system, i.e. it has recurrent feedback loops, and because the brain seems to have a lot of recurrent feedback loops it is natural to call it a "neural" network, but in reality the brain is way more complex. The similarities end with the word "feedback". A thermostat, for example, or the combustion engine of your car also uses feedback, but neither is considered a model of the brain. They are just physical systems.
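A minimal sketch of Hopfield's discovery, in a few lines of numpy (a toy illustration, not his original formulation): patterns are stored in symmetric weights via Hebbian outer products, an energy function is defined over states, and retrieval consists of letting a corrupted state slide down the energy toward the nearest attractor.

```python
import numpy as np

patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, 1, -1, -1, -1]])       # two stored memories

# Hebbian storage: weights are the sum of outer products of the patterns
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)                              # no self-connections

def energy(s):
    return -0.5 * s @ W @ s                         # the energy function

def retrieve(s, steps=10):
    s = s.copy()
    for _ in range(steps):                          # asynchronous updates
        for i in range(len(s)):
            s[i] = 1 if W[i] @ s >= 0 else -1       # each flip lowers the energy
    return s

noisy = np.array([-1, -1, 1, -1, 1, -1])            # corrupted first memory
result = retrieve(noisy)
print(result, energy(result))                       # falls into the first attractor
```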
In 1985 Hinton discovered that Boltzmann machines, another spin-glass system and again an energy-based network model, are really good at learning a probability distribution over a dataset: they can "recognize" features of the data and "classify" the data, i.e. they can build generative models, internal representations of a dataset. Hinton's Boltzmann machine is, roughly, a stochastic generative version of the Hopfield network. Bottom line: Hopfield and Hinton discovered that some energy-based models can "learn" something. Calling them "neural networks" is confusing. They should just be called energy-based network models.
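Hinton's actual 1985 contribution was a rule for learning the weights from data, which takes more than a few lines, but a minimal sketch of the stochastic dynamics shows what "embodying a probability distribution" means: instead of Hopfield's deterministic sign() update, each unit turns on with a probability set by its energy gap and a temperature, and a long run visits states with probability proportional to exp(-E/T). The tiny weight matrix below is made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
W = np.array([[0., 2., -1.],
              [2., 0., 1.],
              [-1., 1., 0.]])         # symmetric weights, zero diagonal

def gibbs_step(s, T=1.0):
    for i in range(len(s)):
        gap = W[i] @ s                 # energy gap for turning unit i on
        p_on = 1.0 / (1.0 + np.exp(-gap / T))
        s[i] = 1 if rng.random() < p_on else 0
    return s

s = rng.integers(0, 2, size=3).astype(float)
counts = {}
for _ in range(5000):                  # a long run approximates exp(-E/T)
    s = gibbs_step(s)
    counts[tuple(s)] = counts.get(tuple(s), 0) + 1
print(sorted(counts.items(), key=lambda kv: -kv[1]))   # low-energy states dominate
```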
I could spend a lot of time discussing technicalities. Let me just take five minutes. Generative AI happened after someone at Google invented the transformer architecture in 2017. A key component of the transformer is the "attention mechanism". It's an operation that tracks long-range correlations in the data, for example the correlations between words in a sentence, but it can also be viewed as simply a memory system. Sepp Hochreiter, who in the 1990s invented the precursor of Deep Learning (the LSTM), has shown mathematically that the attention module of the Transformer is related to a generalized Hopfield network (the paper is "Hopfield Networks is All You Need", 2021, first author Hubert Ramsauer). This generalized Hopfield network is a variant of the Dense Associative Memory developed by Hopfield and his former student Dmitry Krotov (also a physicist, and a former postdoc at the Institute for Advanced Study in Princeton). Change the Lagrangians (energy functions) of the generalized Hopfield network and you get a different energy-based memory model. In 2023 Krotov introduced the concept of the Energy Transformer, in which the attention mechanism is expressed in terms of energy. Krotov keeps trying to find similarities with the organization of the brain, but in reality he has simply discovered something about physics.
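A hedged sketch of that correspondence, with made-up patterns: one retrieval step of the modern continuous Hopfield network has the same form as transformer attention, a softmax over query-key similarities applied to the values, with the stored patterns playing the role of both keys and values.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def hopfield_update(query, stored, beta=4.0):
    # one retrieval step = one step of attention over the stored patterns
    return softmax(beta * stored @ query) @ stored

stored = np.array([[1.0, 0.0, 1.0],
                   [0.0, 1.0, -1.0]])     # two stored patterns (keys = values)
noisy = np.array([0.8, 0.1, 0.9])         # corrupted first pattern (the query)
print(hopfield_update(noisy, stored))     # ~ the first stored pattern
```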
Back to Artificial Intelligence. The reason we originally called it "Artificial Intelligence" is that this technology was designed to do all the things that our intelligence does: recognize images, recognize speech, answer questions, summarize texts, write articles, reason, plan and so on. This new form of intelligence, this new species, Artificial Intelligence, is beginning to do all these things better than most humans. It is becoming "super-intelligent". The reason we worry that it can outsmart us is that it can do all these things better than us, 24 hours a day, and for eternity. And some speculate that it can become so much smarter than us that we won't even be able to comprehend its intelligence; that would be the Singularity, a concept first popularized by the science fiction writer Vernor Vinge in 1993 and then by Ray Kurzweil in his book "The Singularity Is Near" (2005), in which he declared confidently that the Singularity will happen by 2045.
After all, AI can become virtually immortal, ubiquitous, omniscient and omnipotent: isn't that the definition of the Christian god? Needless to say the whole Singularity movement reeks of Christian eschatology, as i have written in my book.
Is AI really so intelligent? For a long time, AI looked pretty dumb compared to humans. Think of the fact that traditional AI needed to see thousands of bananas before it learned to recognize bananas. How many bananas does a child need to see before it knows what a banana looks like? How many mice does a kitten need to see before it knows what a mouse looks like? Now this is no longer true. Deep learning allows engineers to create pre-trained, so-called "foundation models" (models that basically come with experience of the world), and a foundation model only needs to see a few bananas, or just one.
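A hedged sketch of why a few examples suffice: the heavy lifting happened during pre-training, which mapped images into an embedding space where similar things are already close, so learning "banana" is just averaging a handful of embeddings. The embed() function below is a stand-in for a real pre-trained vision model (e.g. CLIP); here it is a fixed random projection so the sketch runs on its own, and the "images" are made-up arrays.

```python
import numpy as np

rng = np.random.default_rng(0)
projection = rng.normal(size=(64, 8))    # stand-in for a pre-trained encoder

def embed(image):
    v = image.flatten() @ projection
    return v / np.linalg.norm(v)

def learn_category(few_examples):
    # "learning" a new category = averaging a handful of embeddings
    centroid = np.mean([embed(x) for x in few_examples], axis=0)
    return centroid / np.linalg.norm(centroid)

def looks_like(image, centroid, threshold=0.7):
    return float(embed(image) @ centroid) > threshold   # cosine similarity

bananas = [rng.normal(loc=3.0, size=64) for _ in range(3)]  # three "banana" images
centroid = learn_category(bananas)
print(looks_like(rng.normal(loc=3.0, size=64), centroid))  # a fourth banana
print(looks_like(rng.normal(loc=0.0, size=64), centroid))  # not a banana
```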
The other common criticism has been that these foundation models are pre-trained on human-generated data, so they are said to be just "aping" humans. But that criticism too sounds weak, because conceptually they could also be trained on non-human data, for example on the natural environment, just like us. After all, what is the human genome if not a pre-trained foundation model of the natural environment? In fact, scientists like the US biologist Michael Levin at Tufts University and the British neuroscientist Karl Friston at University College London are now looking at the genome as a generative model.
The more valid criticism is that these AI systems "only" do two things: recognize and generate. They recognize a pattern (for example a sentence or an image) and they generate a pattern (for example a text or an image). Generative AI is basically a bunch of prediction algorithms: given the facts it has, it predicts what happens next. That's how it generates text, images and videos. Is that all there is to human intelligence? Just prediction? That's a question for philosophers and neuroscientists.
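A minimal sketch of "generation is just prediction": a toy bigram model that, like a large language model in miniature, produces text by repeatedly predicting the next word from the current one. Real LLMs condition on thousands of words with billions of parameters, but the loop is the same: predict, emit, repeat.

```python
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the cat ate the mouse".split()
next_words = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    next_words[a].append(b)            # record what follows each word

random.seed(1)
word, output = "the", ["the"]
for _ in range(8):
    if not next_words[word]:           # dead end: no observed continuation
        break
    word = random.choice(next_words[word])   # "predict" the next word
    output.append(word)
print(" ".join(output))
```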
For example, i have a bunch of pictures from my latest trip. Some of them are upside down (long story why). I have to check them one by one. If a picture is upside down, i rotate it 90 degrees twice with Windows' free app on my computer. Some of them are only 90 degrees off. Now, this is a trivial task that any human can perform. There is no AI that can do it. There are thousands of similarly trivial tasks at the computer that involve a sequence of steps operating on files and that no AI can do right now. When will i be able to tell the chatbot: "Check all the pictures of my trip, reverse the ones that are upside down, and rotate by 90 degrees those that are turned by 90 degrees"? Or even simpler: "Change the names of all the pictures, naming them in chronological sequence, with a name that starts with mex followed by the number". I am not asking it to cook a meal or cross a street, tasks that involve having limbs, senses and mobility. I am asking for operations that are purely digital. Can a large language model play chess? It can't even see the chessboard. If you help it by giving it a full description of the chessboard, it can make a legal move, but it plays like a beginner. So i don't have the perception that generative AI is becoming super-intelligent, just very good at writing fluent English.
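For what it's worth, the renaming half of that request is a few lines of ordinary programming (a sketch assuming a hypothetical folder "trip_photos" and the Pillow library). The one step left as a placeholder, deciding whether a picture is upside down, is exactly the judgment a chatbot would have to supply.

```python
import os
from PIL import Image   # the Pillow imaging library

folder = "trip_photos"  # hypothetical folder of pictures

def is_upside_down(img):
    # placeholder: this judgment is precisely the part that would need
    # a vision model; everything else is ordinary programming
    return False

# sort the pictures chronologically by file modification time
files = sorted(
    (f for f in os.listdir(folder) if f.lower().endswith(".jpg")),
    key=lambda f: os.path.getmtime(os.path.join(folder, f)),
)
for n, name in enumerate(files, start=1):
    path = os.path.join(folder, name)
    with Image.open(path) as img:
        if is_upside_down(img):
            img.rotate(180).save(path)   # i.e. rotate 90 degrees twice
    # rename in chronological sequence: mex1.jpg, mex2.jpg, ...
    os.rename(path, os.path.join(folder, f"mex{n}.jpg"))
```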
Anyway, this AI (this intelligence or species that is very different from us) is certainly intimidating. Geoffrey Hinton, the father of Deep Learning, is becoming the Einstein of AI: just like Einstein warned against the existential threat to humanity posed by nuclear weapons, these days Hinton is warning against the threat of AI.
The apocalyptic scenario is that AI, while pursuing its own goals, ends up killing us all. The less apocalyptic scenario, but perhaps even more disturbing because more realistic, is that an AI learns so much about us humans that it learns to "deceive" on a massive scale. Then it would be like a politician on steroids, capable of deceiving even the smartest among us all the time, spreading disinformation that can destroy our society, if our society doesn't already destroy itself with our own disinformation. Imagine a new pandemic during which the AI spreads the rumor that the virus doesn't exist, that masks don't help, that malaria pills cure it, that you should bleach yourself, and so on. Oh sorry, that's already been done by Donald Trump in 2020. Imagine a Donald Trump on steroids leading all humans to extinction. Evil actors can train an AI to become a new category of terrorist: an information terrorist.
Seriously, we cannot stop evil actors from training an AI to do evil things. That's why some scientists are suggesting that the USA and China should do the AI equivalent of a nuclear weapons moratorium.
Is this concern justified? Is Hinton right to worry about AI?
Let's start with the "super-intelligence" factor. I personally think that the first computer of 1951 was already super-intelligent because it could do things that we couldn't do. And actually a lot of appliances can do things that most people cannot do. We coexist with machines that are much better than us at many tasks. Machines like television sets and microwave ovens do things that no human being can do. Over the millennia, we also adapted to coexist with many different kinds of intelligence. Some animals have superpowers: dogs can smell things that we cannot smell, cats can jump walls that we can't jump, bats can fly in the dark and land upside down on the ceiling and even catch flies, and so on. Most of these animals survive without homes even in very cold and rainy weather. Many of them have senses that we can only dream of. Some of them have been around for longer than us, and some of them are much harder to kill than us. These are all kinds of "super-intelligences". We adapted to coexist with other intelligences. We adapted to coexist with viruses, which are not alive but are very intelligent: a virus is so smart that it parasitizes the apparatus of a living cell. We also adapted to coexist with our own inventions. Human life changed once we discovered fire (and many humans have died in fires), and again after we discovered metals, plastics, steam power and nuclear energy. Each of these inventions had downsides and even killed people, but we're still around, more numerous than ever, and healthier and wealthier than ever.
I can't quite visualize the existential threat caused by AI, but i can easily visualize what happens if there is a nuclear war, and a nuclear war can be started by a demented senile president pressing a button, and right now the world is full of incredibly stupid senile presidents. Ditto for climate change, which is here today: more and more extreme weather phenomena pose a clear danger, visible to anyone willing to look.
Another big threat to humanity is disinformation, a real problem that is here today and is killing people. Think of how many people died of covid in 2020 because they were told that covid didn't exist, or how many died in 2021 because they were told that vaccines were more dangerous than the virus. But we survived even our own disinformation. In fact, the history of human civilization is a history of disinformation: that a man called Jesus resurrected after three days, that a god named Allah revealed the Quran to a man called Mohammed, and so on. Maybe disinformation is just natural selection: for example, those who don't believe that there is a deadly virus called covid and that masks are useful and that vaccines are useful are more likely to die. The covid death rates have been higher in Kentucky, Oklahoma, West Virginia, Mississippi and Tennessee than in California and New York.
I cannot quite visualize the threat of AI but there is one step that would concern me: if AI started creating biological organisms. But that's a story for another day.
The main threat that we survived and adapted to is our own evil. The history of human civilization is a history of violence: wars, massacres, revolutions and so on. The history of human civilization is a history of mass murderers, mass rapists, cheaters, con-men, traitors and so on. I am haunted by the thought of the thousands and thousands of unpunished crimes that have been committed in the name of religion, land, and greed. They not only went unpunished but shaped what we call "progress" from the prehistoric caves to Silicon Valley's high-tech.
Even the evil of social media is actually human evil. The algorithms of social media (call them AI or not) "realized" that hate and fear are the best strategies to increase so-called "user engagement", and therefore those algorithms keep promoting anything (true or false) that increases hate and fear. Algorithms have decided the results of elections, and of course they decide daily what we buy and even which movies we watch. By sowing hate and fear, and by promoting daily disinformation, algorithms have destroyed families and communities. They are now on their way to destroying nations. Algorithms caused the Rohingya genocide in Myanmar. Algorithms almost killed the European Union when they helped Brexit. They are now destroying democracy in the USA.
In a sense the algorithms of social media are conducting an experiment on us and discovering things that we didn't know about us: how stupid and evil we can be. The scary thing is not that algorithms do what they do. They do their job just like cats chase mice and worms eat dead animals. Nature is not particularly kind, Nature is a horror story, but we can live with that. The scary thing about algorithms is that their job reveals something that we have long ignored: we know very little about ourselves. Even those who study history and neuroscience have trouble fully admitting what humans do. Algorithms seem to know us better than we do.
The scary thing is that we know a lot less about ourselves than we assume, we are not prepared to protect ourselves from something that behaves like us on a massive scale, and therefore algorithms can manipulate us with apocalyptic consequences. We know so little about ourselves that algorithms conducting experiments on us can cause devastation among us.
AI has emphasized this chronic ignorance of ours about ourselves. What we really learned from AI is that we know a lot less about ourselves than we assumed. At every step of the way AI has changed our perception of cognitive tasks. What used to be considered difficult turned out to be easy, and vice versa.
In my book i wrote at length about the "vast algorithmic bureaucracies". What terrifies me is NOT a future in which machines rule people, but a future in which vast heartless bureaucracies rule the world. Humanity and compassion are increasingly replaced by mechanical procedures. It makes no difference whether the mechanical procedure is carried out by humans or by machines. Humans increasingly behave like machines (which is also the reason why they can so easily be replaced by machines).
A favorite example is "I am sorry". The expression "I am sorry" is losing its original meaning. When you hear "I am sorry" from someone who is denying you a service, you know that in reality nobody is sorry. You have probably received many letters that started with "We are sorry" and denied you something that you wanted. Nobody in that organization is sorry, and in fact probably nobody even knows what happened: the letter was sent by an algorithm.
We have created highly structured societies in which what is not mandatory is increasingly forbidden.
One day something was wrong with my car and i took it to the mechanic. The first thing he asked me to do was fill out a form. A few days later a friend broke her wrist while skateboarding and i took her to the hospital. She was obviously in pain, and obviously couldn't write. Nevertheless, the first thing they asked her to do at the hospital was fill out a form. I filled out the form for her and noticed that this form for taking care of a wounded human being was quite similar to the form for repairing a malfunctioning car.
Enough pessimism. Now let's turn to something else. Let's examine whether the premises of the whole discussion are valid. First of all, we really need a definition of "superhuman" intelligence. The original computer application that John von Neumann himself conceived, the first major application of computers and the first data-science project, was weather forecasting. Well: none of today's AIs has improved weather forecasting.
Second. I mentioned that an AI is virtually immortal, ubiquitous and omnipotent. Because we associate these qualities with the supreme god, we think they are useful qualities. Nature begs to disagree. Nature creates diversity because diversity has a better chance to survive. The reason life survived on this planet is that nature created billions of different organisms. The secret to survival is being able to mutate as circumstances change. I suspect that eight billion human minds, which are all different from each other, are more likely to survive than one giant supreme monolithic mind. The reason humans have survived nuclear weapons and climate change is that we are eight billion different brains. One big mind is not subject to natural selection, and it is debatable whether it is capable of surviving even the slightest unpredictable event.
Third. There are several paths to super-intelligence, but as of today the path is to train AIs on all available data. So what happens when the AI has been trained on all possible data? Does AI keep getting more intelligent or does it reach a plateau of intelligence? This is not clear to me. It is not even clear what happens when (very soon) there will be more machine-generated content than human-generated content: what happens when we train machines on content generated by machines?
Finally, what does human intelligence really do? "Deep thinking", not "Deep Learning", is what humans really do. You can find on YouTube a lecture that i gave in July 2019, and on my website you can find the slides of the same talk that i gave one month earlier at Stanford.
What does human intelligence really do? Well, it hallucinates all the time. Ironically, hallucinations look more like human intelligence than accurate prediction does. Hallucinations are often garbage, but sometimes they are the unpredictable creative flight that is also known as "insight".