The Metaverse

subtitled: "The Convergence of Virtual Reality and Blockchain Technologies"

by piero scaruffi


This primer includes the following articles:
  • Fundamentals
  • Philosophy of the Metaverse
  • Blockchain Technology
  • A brief History of Metaverses
  • Alternative Worlds in Literature and Cinema
  • Utopia
  • The Zeitgeist from Cyborgs to Cybernauts
  • Cyberspace as Migration and Dematerialization
  • The Cognitive Duality of Stories and Games (and Simulations)
  • A Critique of Immersion
  • Postmodernism, Cybertime, Utopia
  • Death of the Author and of the Reader
  • The Interface
  • The Future of Writing
  • Intelligent Metaverses

Light the first light of evening, as in a room
In which we rest and, for small reason, think
The world imagined is the ultimate good.

This is, therefore, the intensest rendezvous.
It is in that thought that we collect ourselves,
Out of all the indifferences, into one thing:

Within a single thing, a single shawl
Wrapped tightly round us, since we are poor, a warmth,
A light, a power, the miraculous influence.

Here, now, we forget each other and ourselves.
We feel the obscurity of an order, a whole,
A knowledge, that which arranged the rendezvous.

Within its vital boundary, in the mind.
We say God and the imagination are one...
How high that highest candle lights the dark.

Out of this same light, out of the central mind,
We make a dwelling in the evening air,
In which being there together is enough.

(Wallace Stevens, "Final Soliloquy of the Interior Paramour", 1923)


Fundamentals

A metaverse is a virtual space where large numbers of people can gather to play, work, socialize and trade. Potentially, the metaverse will become a digital/virtual 1-to-1 map of the real world. Your identity in the metaverse is an avatar that interacts with other avatars. (The term "avatar" is borrowed from ancient Indian religions, in which it refers to a mortal incarnation of a deity on Earth). The metaverse is thus a shared virtual space where people are represented by digital avatars. The difference between the traditional Internet and the metaverse is that "things" (user-generated content) have a location in the metaverse, just like in the real world (hence the metaverse is a "spatial Internet").
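To make the "spatial Internet" idea concrete, here is a minimal sketch of a world in which every piece of content has coordinates and is discovered by proximity, the way things are in the real world. It is illustrative only: the names and structures are hypothetical, not any platform's actual API.

    # A minimal, hypothetical sketch of the "spatial Internet": unlike a web page,
    # every piece of user-generated content in a metaverse has a location.
    from dataclasses import dataclass

    @dataclass
    class Entity:
        name: str
        x: float
        y: float
        z: float

    class World:
        def __init__(self):
            self.entities = []

        def place(self, entity):
            self.entities.append(entity)

        def nearby(self, avatar, radius):
            # content is discovered by proximity, as in the real world
            return [e for e in self.entities
                    if e is not avatar
                    and (e.x - avatar.x) ** 2 + (e.y - avatar.y) ** 2 + (e.z - avatar.z) ** 2 <= radius ** 2]

    world = World()
    me = Entity("my-avatar", 0, 0, 0)
    world.place(me)
    world.place(Entity("art-gallery", 3, 4, 0))     # 5 units away
    world.place(Entity("concert-stage", 40, 0, 0))  # too far away to see
    assert [e.name for e in world.nearby(me, radius=10)] == ["art-gallery"]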

The term "metaverse" was introduced by Neal Stephenson in his science fiction novel "Snow Crash" (1992), coincidentally in the same year when Cynthia Dwork at IBM invented the proof-of-work algorithm (the foundation of blockchain technology), in the same year when the CAVE opened at the University of Illinois, and in the same year when Nicole Stenger presented the first immersive movie, "Angels" (1992).

The first thing to notice is that the metaverse is not only a space on the Internet, but also a "shared" space, a space that many "avatars" share. These avatars do not merely access it: they actually shape it. The avatars create a shared space. Needless to say, this reflects what happens in societies where individuals create a town, their shared space.

Metaverses are community-building software.

User-generated content is a key feature of a metaverse, but, unlike in Web 2.0, it is the avatars of users, not the users themselves, that shape the metaverse, by creating and trading content.

An important premise is that right now there isn't just one metaverse, the way there is only one Internet: there are many attempts at creating a metaverse, from the generation of Second Life (2003) to the blockchain-based generation of The Sandbox (2014) and Decentraland (2017).

A metaverse generally implements an economic system that is modeled on the economic system of the real world. Avatars can make things, can sell things and can buy things. Each metaverse has a kind of digital money that can be used for such trades, and generally this is a cryptocurrency based on blockchain technology. More realistically, the Metaverse is a mosaic of economies that are evolving naturally out of decentralized finance, or DeFi.

The early attempts at molding a metaverse were made by games, typically games with ambitions of "virtual reality", although they rarely required the use of VR headsets. Of course, if one wants to truly replicate the experience of the three-dimensional real world on the two-dimensional screen, a sensory immersive experience would be a must; but most metaverses are content with simply replicating the processes and rituals of the real world, minus the bodily experience.

The underlying technology of the metaverse owes a lot to videogames. It is debatable whether we would have metaverses without the rise and progress of multi-user videogames, notably MUDs (multi-user dungeons) and MOOs (object-oriented MUDs). In fact, so far the difference between MUDs/MOOs and metaverses is mainly in ambition: a metaverse typically wants to control all digital experiences, not just a game's set of rules.

The metaverse is inherently limited: the two-dimensional "flat web" of the screen on which the avatar "lives" is obviously a mere approximation of the three-dimensional real world, just like a circle can only approximate a sphere. Total sensory immersion is a given in the real world, whereas in a metaverse it can only be achieved through artifices and stratagems such as VR headsets and clever programming. The individual is a three-dimensional body of flesh within a three-dimensional universe whereas the avatar is a two-dimensional picture made of pixels on a screen. The (embodied) individual is not replaced by the (disembodied) avatar: the individual is given a disembodied "second life" in an alternative universe, the metaverse. No matter how closely it mimics the real universe, the metaverse is a completely different kind of universe. In fact the common laws of physics may cease to exist in a metaverse, replaced by alternative laws of physics (for example, in Second Life avatars can fly).

The big limitation of metaverses has always been that a metaverse requires vast computational power. A new model of computation is required to develop a large-scale metaverse. For example, in 2021 Yuna Jiang at Huazhong University introduced a blockchain-based collaborative computing paradigm based on Coded Distributed Computing (a method of distributed computing in which redundant, coded work is spread across uneven processors to mitigate the "straggler" effect, devised in 2015 by Kangwook Lee, now at the University of Wisconsin-Madison) that would allow the metaverse's computational requirements (notably the computationally-intensive tasks of real-time graphic rendering) to be satisfied by exploiting idle resources from mobile devices.
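To illustrate the "straggler" problem, here is a minimal sketch of coded distributed computing: a matrix-vector product is split into two halves plus a coded "parity" task, so that any two of three workers suffice and the slowest one can simply be ignored. This is a toy example of the general idea under simplifying assumptions, not Jiang's or Lee's actual scheme.

    # A toy illustration of coded distributed computing (hypothetical, simplified):
    # encode 2 sub-tasks into 3 so that any 2 finished workers recover the result.
    import numpy as np

    def encode_tasks(A):
        # split A into two halves and add a "parity" block A1 + A2
        A1, A2 = np.vsplit(A, 2)
        return [A1, A2, A1 + A2]

    def decode(results):
        # recover A @ x from any two of the three worker results
        done = {i: r for i, r in results.items() if r is not None}
        assert len(done) >= 2, "need at least two workers to finish"
        if 0 in done and 1 in done:
            return np.concatenate([done[0], done[1]])
        if 0 in done and 2 in done:            # A2 @ x = (A1 + A2) @ x - A1 @ x
            return np.concatenate([done[0], done[2] - done[0]])
        return np.concatenate([done[2] - done[1], done[1]])

    A = np.arange(24.0).reshape(6, 4)
    x = np.ones(4)
    tasks = encode_tasks(A)
    # pretend worker 1 is a straggler and never returns its result
    results = {0: tasks[0] @ x, 1: None, 2: tasks[2] @ x}
    assert np.allclose(decode(results), A @ x)

The same redundancy trick, generalized with proper erasure codes, is what would let a swarm of unreliable mobile devices behave like a dependable rendering farm.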

The fundamental features of a viable metaverse are: persistence (typically, blockchain storage), multisensory realism (immersive user experience), user-generated content, economy, ubiquity (accessible from anywhere), massively multi-user remote interactivity, interoperability across virtual worlds, scalability.


Philosophy of the Metaverse

Two premises before we delve into the culture of the metaverse. Virtual reality was born when language was invented: before language, living beings could only experience the reality that they directly perceived around them with their senses. After the invention of language, humans could experience realities that they never personally observed. We experience realities through the descriptions of other people. We even experience facts of the past that have been recorded in words. We linguistic animals can experience things we will never see in person, simply by listening to or reading about them. We can experience distant places that we will never visit and we can even experience events of the past. We constantly live in a virtual reality created by language. I have no way of proving that the Roman empire truly existed and no way of knowing that there are tigers in India: language creates these realities for me, which are different from the realities of my neighborhood. Language also enables storytelling, the creation of an infinite number of stories and worlds.

Second premise. Since the 1980s we have been witnessing an exodus of people to cyberspace. Initially it was only the tech-savvy, the ones who knew how to operate a personal computer and how to connect it to the Internet via a modem (to "dial up"). Then in the 1990s shoppers began to emigrate to cyberspace thanks to the likes of eBay and Amazon. In the 2000s social life migrated to cyberspace thanks to the likes of Facebook and Twitter. Inevitably, politics too migrated to cyberspace. Then the covid pandemic of 2020-21 accelerated the migration of work to cyberspace. And so now the exodus to cyberspace is almost complete. The great migration of humankind from the physical space to cyberspace has ended and now humans are "developing" cyberspace.

Historically, the metaverse of "Snow Crash" came out about a decade after cyberpunk literature had been inaugurated by William Gibson's novel "Neuromancer" (1984), itself based on Vernor Vinge's novella "True Names" (1981), and just one year after the debut of Tim Berners-Lee's World Wide Web. It was preceded by several narratives that toyed with the notion of simulation, from Daniel Galouye's novel "Simulacron-3" (1964) to Paul Verhoeven's film "Total Recall" (1990). The idea of simulated life spilled over from science fiction into philosophy and the humanities in general. The physicist Frank Tipler in his book "The Physics of Immortality" (1994) even "calculated" that evolution would end with a simulation of all the conscious beings who ever existed, i.e. the resurrection of all the dead (the "omega point" theorized by Pierre Teilhard in the 1930s).

The technical difference between William Gibson's cyberspace of the 1980s and Neal Stephenson's metaverse of the 1990s was that cyberspace was an alternative universe whereas the metaverse is tightly coupled to the real one. But the fundamental difference was one of mood: Gibson was a nostalgic existentialist lost in a vast and hostile habitat (the hacker as a hiker in the wilderness) compared with Stephenson's enthusiastic endorsement of a busy videogame-inspired hyper-reality (the avatar as an everyman going about an ordinary life). However, Gibson's cyberspace looked like the modern capitalistic world whereas Stephenson's looked like a chaotic anarchic medieval dystopia of warlords and plagues.

The 1990s witnessed a lively debate about the fact that information was being "dematerialized" by the Internet, even before the advent of Wikipedia (2001). The mathematicians involved in inventing the computer had long reached the same conclusion, although from different directions: Alan Turing (the universal machine), Norbert Wiener (cybernetics) and Claude Shannon (information theory) all published their main works before the debut of the first programmable electronic computer (1951). In the 1960s the young school of Artificial Intelligence envisioned "expert systems" that encapsulated human knowledge with no need for bodies and thereby "cloned" human experts (again, with no need for bodies). Even when, in the 1990s, Artificial Intelligence veered towards neural networks, the artificial neurons were one-dimensional numbers, i.e. mere approximations of the real three-dimensional neurons. Once computers started doing more sophisticated things than crunching numbers, notably with the invention of databases, the assumption implicit in the practice of computer science, even the most mundane, was that information was indeed disembodied and could flow seamlessly from one substance to another, for example from the human mind to an electronic machine and vice versa, and of course from one machine to another, regardless of the machine's "body". It didn't take long to realize that self-regulating machinery wouldn't even need a human in the loop. In fact, at the same time that databases were evolving towards the world encyclopedia Wikipedia, a school of thought envisioned super-intelligent machines that would be better than humans at both learning and acting, a school of thought that developed via "Speculations Concerning the First Ultraintelligent Machine" (1965) by Jack Good (real name Isadore Jacob Gudak), Masahiro Mori's "The Buddha in the Robot" (1974), Hans Moravec's essay "Today's Computers, Intelligent Machines and Our Future" (1978), Ray Solomonoff's article "The Time Scale of Artificial Intelligence" (1985) and Marvin Minsky's essay "Will Robots Inherit the Earth" (1994), and culminated with Ray Kurzweil's "The Singularity is Near" (2005), the book that popularized the notion of the "singularity". One can trace this mindset all the way back to the early days of electronic computers, when (in 1957) Herbert Simon declared that "there are now in the world machines that think, that learn, and that create - moreover, their ability to do these things is going to increase rapidly".

Katherine Hayles's book "How We Became Posthuman" (1999) came out in the same year as the Hollywood blockbuster "The Matrix" (1999), a mediocre remake of Rainer Werner Fassbinder's masterpiece "World on a Wire" (1973) but much more discussed by philosophers, sociologists, etc. Hayles' nightmare was "a culture inhabited by posthumans who regard their bodies as fashion accessories rather than the ground of being". Hayles correctly described "how information lost its body", and then described the "post-human condition" (when information prevails over matter) as one in which the difference between bodily reality and virtual simulation becomes blurred.

Since then, the post-human condition has become the normal condition for a vast population of humans who are constantly plugged into the Internet, a condition that accelerated when the covid pandemic of 2020 forced millions of people to live and work in isolation at home; and the metaverse could be the natural culmination of the post-human condition.

On the other hand, whether with or without a body, humans seem to have a genetic propensity to "build". As Edward Casey discussed in "The Fate of Place" (1997), the ancient Greeks perceived the relationship between human and world in terms of "place" while the scientific revolution of Galileo, Descartes and Newton shifted the view towards the more abstract notion of "space" (Galileo's extraterrestrial space, Newton's absolute space, Descartes' "res extensa"). The philosophers of phenomenology returned to "place". Maurice Merleau-Ponty's "Phenomenology of Perception" (1945) placed the body at the center of experience: there is a world because there is a body, and the tools we use to interact with the world are prosthetic extensions of our body. Martin Heidegger's essay "Building Dwelling Thinking" (1951) defined "place" as both an artifact and a process: the process of "cultivating" it (the experience of living in it) is as significant as the outcome, the physical manifestation of living in it (the "construction"). The human condition is so tightly coupled with the notion of "place" that even the post-human condition requires the same notion to exist, although not necessarily in the same physical way.

One can argue that today the concept of "place" is being replaced by the "nonplace". Nonplaces are places where people coexist without physically living together: they share the space, not the identity and the history. We have witnessed a multiplication of nonplaces: highways, supermarkets, television shows... The artificial has been spreading like an infectious disease, coming to infect even the workplace (remote work turns the office into yet another nonplace). Meanwhile, a similar process has eroded the meaning of "time", replaced by several kinds of artificial time: for example, the abolition of seasons, and the abolition of work hours in remote work.

Note that the hyper-audiovisuality of the digital era can lead either to wildly fantastic abstract worlds, whose beings don't look like humans and whose objects bear no resemblance to the objects of human civilization, or to photorealism, an accurate reproduction of the world as we know it. All metaverses so far have chosen the latter. The metaverse is typically a place that looks a lot like the real place.

"Building" is not only about building the material order of a place, the order of roads and houses: that is only the exterior building of a place. It is also about building the interior, i.e. the social order that inhabits the place. The metaverse enacts the replica of social order through the virtual cloning of ordinary activities. The social order of our material world is the product of a historical process that stretched over thousands of years, through wars, revolutions, cultural movements, fads, etc. The social order of a metaverse is an experiment in employing a different route. One of the fundamental chores required to all members of a community is to learn to coexist with the other members. "Universal freedom" is a contradiction in terms: your degree of freedom depends on the amount of freedom that you want to grant to the people around you, to your neighbors, coworkers, relatives, etc. The more freedom you give them, the less freedom you have; the more freedom you have, the less freedom they have. A community organizes around a delicate balance of degrees of freedom. This in turn depends on what Harold Garfinkel called the "observable-reportable" character of practical reasoning and action in his "Studies in Ethnomethodology" (1967): our ability to interpret the actions of others, to be good psychologists, and our ability to make it easy for others to interpret our actions. In this aspect the metaverse is not any different from a community in the real world, except that one skips the adolescential training and is propelled immediately into adulthood.

The metaverse echoes the "rhizome" metaphor introduced by Gilles Deleuze and Felix Guattari in "Mille Plateaux" (1980): the rhizome does not have a beginning, an end, or a center, and can be entered from many different points, all of which connect back to each other. "The rhizome resists chronology and organization, instead favoring a nomadic system of growth and propagation."

The metaverse can also build "experiments". The metaverse is not limited to being an escapist illusion: it can be, directly or indirectly, a method to design the future. David Kirby introduced the concept of "diegetic prototypes" in "The Future is Now" (2010). He argued that imaginary devices presented in films indirectly help to usher in new technologies because those films present them to a large audience as a) feasible, b) useful and c) harmless. In other words, they justify a business plan to research and build them. At the same time, Julian Bleecker with his essay "Design Fiction" (2009) had argued for a kind of design that relied on narratives of speculative, and often provocative, scenarios to explore possible futures: designing ideas, not only artifacts. Both Kirby and Bleecker basically advocated abandoning the Manichean "utopian/dystopian" depictions of possible futures and instead focusing on ways to critically explore possible futures before realizing them. If we were better at thought experiments, perhaps we would invent better worlds. The metaverse is a large-scale collection of diegetic prototypes and of design fictions, of speculative design objects that are not only imagined but even effectively deployed in the (virtual) world. Their simulated "materialization" can tell us something about the social impact of their future real materialization.

In fact, it will be interesting to capture the evolution of a metaverse in some equivalent of the photograph. A nascent branch of history is the one that deals with interpreting an era's society by studying the photographs of that era. A famous example of photographs that "tell" the story of an epoch is Zhensheng Li's photos of Mao's Cultural Revolution, published in "Red-Color News Soldier" (2003). The world knew and knows very little of what happened during that fateful decade (1966-76), but his photographs "immerse" us in ordinary Chinese life. In that case it was just one photographer depicting an era, but in general it is a whole generation of photographers who, indirectly, write the history of an era with their photographs of ordinary (as well as extraordinary) life. At the same time those photographs influence the way people perceive themselves. Photography has been a powerful force in shaping national, generational and cultural identities. The "photohistorian" (stealing the term from a journal published since 1989 by the Royal Photographic Society) can pick up aspects of history that elude the traditional historian. Ditto for the age of television and for the age of the Internet: just by looking at old TV shows and old web pages one can get a feeling of an era, and a "photohistorian" can organize and rationalize aspects of that era. Hopefully some technology will emerge that will allow us to take "screenshots" of the metaverse as it evolves, to document its evolution.

Plato's dialogue (or, better, monologue) "Timaeus" (4th century BC) opens with the memorable question: "What is that which always is and never becomes?" It is not the universe. Unlike the creation stories of most religions, in which a god (like Yahweh in the younger sections of the Bible) or gods (like the elohim in the oldest section of the Bible) create the universe from nothing, Plato thinks that a "demiourgos" imposed order on a preexistent chaos to generate our beautifully ordered universe, the "kosmos". What is and always will be is the model (the "paradeigma") that the demiurge "copied" to shape the universe. Each player of a metaverse is such a demiurge, trying to shape the metaverse according to an ideal model. The difference is that there are potentially millions of demiurges in the same metaverse, each trying to realize a different model. The demiurges must coexist and collaborate. Creating a universe is just the beginning. The real challenge is to co-create its future.


Blockchain Technology

(See my introduction to Blockchain Technology if you are not familiar with it)

Humans of the real world have invented governments, laws, tribunals, banks and various agencies to make transactions trustworthy. When you pay for a home, your country has set up a sophisticated procedure that makes you comfortable with the idea that someone will take your money and you will indeed get their home. Ditto when you buy a car or even if you just buy intangible assets like stocks. The individuals of a real-world society rely on a "centralized" system of legal procedures. Because it is not governed by a centralized government, a metaverse needs an alternative method to make people believe that the transactions performed by their avatars are safe; a metaverse needs a different way to establish "trust" between individuals. Luckily in 2008 someone invented blockchain technology, a technology that provides that kind of service in a decentralized community. That's why, today, virtually all metaverses use a cryptocurrency based on blockchain technology.

The blockchain secures the virtual assets of avatars (including the identity of such avatars), i.e. ownership, and even enforces the proper execution of rules because the blockchain contains the very instructions for a deterministic implementation of a transaction (no need for a police force!). Cryptocurrencies built on the blockchain are programmable payment systems. The programming makes the network "trustless" and the absence of a central authority makes it "permissionless". From a financial point of view, blockchain technology acts as a clearing and settlement platform, while at the same time being the very infrastructure for the transfer/circulation of assets.
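To see what "programmable payment" means in practice, here is a minimal sketch of an escrow-like trade whose release rule is deterministic code. This is a hypothetical toy in Python, not the smart-contract language or API of any actual blockchain; all the names are invented for illustration.

    # A toy "programmable payment": the rule for moving funds is code, so every
    # node that executes it reaches the same result and no arbiter is needed.
    # Balances are in an imaginary token.
    class Escrow:
        def __init__(self, seller, buyer, price):
            self.seller, self.buyer, self.price = seller, buyer, price
            self.balances = {seller: 0, buyer: 0}
            self.item_delivered = False
            self.settled = False

        def deposit(self, who, amount):
            self.balances[who] = self.balances.get(who, 0) + amount

        def confirm_delivery(self, who):
            if who == self.buyer:          # only the buyer can confirm delivery
                self.item_delivered = True

        def settle(self):
            # funds move only if the coded conditions hold
            if self.item_delivered and not self.settled and self.balances[self.buyer] >= self.price:
                self.balances[self.buyer] -= self.price
                self.balances[self.seller] += self.price
                self.settled = True

    deal = Escrow(seller="alice", buyer="bob", price=10)
    deal.deposit("bob", 10)
    deal.confirm_delivery("bob")
    deal.settle()
    assert deal.balances == {"alice": 10, "bob": 0}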

In 2018 Bektur Ryskeldiev at the University of Aizu proposed a blockchain-based system for archiving, recycling, and sharing virtual spaces in mixed reality.

One thing to emphasize is that, in general, human societies foster collaboration. Human civilization progressed so quickly and dramatically thanks to the ability of humans to collaborate, sometimes on very large scales, from the pyramids to the computers. When we think of economic systems, we tend to think of competition: corporations ferociously competing with each other for supremacy, nations competing for resources and domination, individuals competing for higher salaries and positions. But the key to progress has always been collaboration, sometimes indirect, and rarely as publicized as competition.

Similarly, the blockchain fosters collaboration in the metaverse. The lack of a centralized authority makes collaboration virtually boundless. Alas, it can also foster collaboration of the undesired kind (criminal kind), but the positive side is that it fosters collaboration among complete strangers. The blockchain establishes trust between individuals regardless of who they are, where they live, what job they have, how much money they have, whether they are Christian or Muslim, men or women, elderly or teenagers. Collaboration on the blockchain is not geographically constrained. This doesn't sound too different from the existing Web 2.0 that has given us social media and, in general, systems of user-generated content. However, in the metaverse there is a cryptocurrency and there is a clear proof of ownership, and that makes the difference on the way to Web 3.0: users can automatically and systematically monetize what they create. (See my introduction to Web 3.0).

A fundamental phenomenon that is supercharging growth in the metaverse is the advent of "non-fungible tokens" (NFTs), which are making it a lot easier to sell all sorts of digital content. The trade of NFTs happens on the blockchain. (See my introduction to Blockchain Technology and in particular Cryptoart).
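What makes such a token "non-fungible" is simply that each token id is unique and owned by exactly one address, unlike interchangeable coins. Here is a minimal, hypothetical sketch of such an ownership registry; it illustrates the idea only and is not the actual ERC-721 interface.

    # A toy non-fungible token registry: unique token ids mapped to single owners.
    class NFTRegistry:
        def __init__(self):
            self.owner_of = {}     # token_id -> owner address
            self.metadata = {}     # token_id -> e.g. URI of the digital asset

        def mint(self, token_id, creator, uri):
            assert token_id not in self.owner_of, "token ids must be unique"
            self.owner_of[token_id] = creator
            self.metadata[token_id] = uri

        def transfer(self, token_id, sender, recipient):
            # only the current owner can transfer the token
            assert self.owner_of.get(token_id) == sender, "not the owner"
            self.owner_of[token_id] = recipient

    art = NFTRegistry()
    art.mint("artwork-001", creator="alice", uri="ipfs://example-hash")
    art.transfer("artwork-001", sender="alice", recipient="bob")
    assert art.owner_of["artwork-001"] == "bob"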


A brief History of Metaverses

The medieval fantasy world of JRR Tolkien's novels "The Hobbit" (1937) and "The Lord of the Rings" (1949) exerted a huge influence on popular culture of the 1960s and 1970s, on everything from rock music (early Genesis and Led Zeppelin) to cinema (George Lucas' "Star Wars" itself can be read as a sci-fi paraphrase of Tolkien's novels). Its influence on games was first visible in Gary Gygax's and Dave Arneson's tabletop game "Dungeons & Dragons" (1974), and computer games, notably the so-called MUDs (multi-user dungeon games), soon began to imitate the spirit of Tolkien's novels, in which the stories are just excuses to explore an imaginary world. The first MUD was created in 1980 by Roy Trubshaw and Richard Bartle at Essex University. MUDs were originally text-based, just like a novel, but generated by multiple users on their computers. In 1986 Lucasfilm launched Habitat, a graphical MUD created by Randy Farmer and Chip Morningstar and inspired by Gibson's "Neuromancer", running on Commodore 64 computers connected via dial-up lines. Habitat was a social virtual world in which each user was represented by an "avatar", a term borrowed from Hindu scriptures that refers to a human manifestation of a deity. In 1990 Pavel Curtis at Xerox PARC launched his computer game LambdaMOO. Technically speaking, it was still a text-based MUD, but it also worked as a chain of transmission between the era of MUDs and the era of "virtual worlds".

Meanwhile, there were places on the Internet where one could meet others. "The WELL" ("Whole Earth Lectronic Link"), created by Stewart Brand in 1985 and tied to the counterculture of the San Francisco Bay Area, was such a place: a text-based world (preceding the World Wide Web by almost a decade), a virtual community of computer users structured in bulletin boards for online discussions. Annette Markham (1998) identified three ways in which users can perceive and use an online community: as tool, as place, and as way of being. These three categories are not exclusive, and instead belong to a continuum: the same user, depending on the day, might think of the virtual world as a way of being, a place to be visited or a tool for engagement. The WELL was precisely such a "place" as well as a way of being and a tool for engagement.

Neal Stephenson's "Snow Crash" came out in 1992 and coined the word "metaverse".

Ron Britvich's WebWorld (developed in 1994 while he was working for a Boston company), later renamed AlphaWorld, in which users could build their own structures, can be considered the first serious attempt at a metaverse because instead of a fixed environment it allowed participants to shape the environment in real time, even create entire cityscapes. Jim Bumgardner's virtual world The Palace (California, 1995) was influenced by virtual reality but confined to a two-dimensional space, and inspired also by comic books. Fujitsu's virtual world WorldsAway was created by a team led by Randall Farmer and launched in 1995 by Fujitsu Cultural Technologies on CompuServe. Also in 1995 San Francisco's Knowledge Adventure Worlds (later renamed Worlds Inc) launched Worlds Chat, a simulated 3D virtual world in which users could socialize (Britvich was hired by Worlds Inc and that's where WebWorld became AlphaWorld and then Active Worlds, a more direct adaptation of Neal Stephenson's novel "Snow Crash", competing internally with Worlds Chat). At this point there was enough momentum that Bruce Damer organized the conference "Earth to Avatars", held in San Francisco in 1996. The following year Damer published the book "Avatars! Exploring and Building Virtual Worlds on the Internet" (1997). The metaverse became popular while thinkers were focusing on "collective intelligence", especially after Pierre Levy's book "Collective Intelligence" (1994). Howard Rheingold's book "Smart Mobs" (2002) explored how technology could augment such collective intelligence. Those who remembered it also found analogies with Herbert Wells's concept of the "world brain" (from a lecture of 1936 at the Royal Institution).

Despite these early conceptual attempts, Stephenson's metaverse remained science fiction until Web 2.0 happened and made it possible: the same technology that enabled social networks also enabled metaverses. Sampo Karjalainen's and Aapo Kyrola's virtual hotel Habbo Hotel (Finland, 2001), where players could design rooms, play games and trade goods, and Derek Liu's virtual society Gaia Online (Silicon Valley, 2003), influenced by Japanese manga and by the MMORPG Ragnarok, where players were represented by avatars and congregated in fora, laid the foundation for social networks because they enabled strangers to become online friends.

Will Harvey's and Jeffrey Ventrella's virtual world There.com (Silicon Valley, 2003) and Philip Rosedale's virtual world Second Life (Silicon Valley, 2003) came out at the same time that a philosopher, Nick Bostrom, was hinting that we may live in a simulation in his essay "Are You Living in a Computer Simulation?" (2003); and despite being dismissed by all reputable physicists, to this day plenty of philosophers discuss that hypothesis. At this point the idea of the simulation was competing with the idea of the singularity for popularity among futurists. Another thing that was becoming popular was Wikipedia (launched by Jimmy Wales in 2001), which created a whole new awareness about the "collective brain": in June 2003 the Wikimedia Foundation was founded, in October 2003 the first workshop of Wikipedians took place (in Germany) and by the end of that year Wikipedia boasted hundreds of thousands of articles in multiple languages.

However, a threat was looming large on virtual worlds like Second Life: the combined rise of social networking software (Friendster launched in 2002, MySpace in 2003, Facebook in 2004), of texting on mobile devices, and of voice and video over IP (Skype in 2003, YouTube in 2005). Social networks introduced a competing model, which seemed to exert a strong appeal on the generation of the 2000s: a curated version of their real life in this universe instead of an anonymous vicarious life in an alternate universe. Vanity over imagination. The social network became a game to get as many "likes" as possible. The model of social networking sites was a disembodied world of information that remains tightly coupled with the real embodied world. Social networking sites shared with the metaverse vision the property of grass-roots, bottom-up self-organization rather than the traditional top-down organization.

At the same time the success of MMORPGs (massively multiplayer online role-playing games), which were basically a graphical version of the MUD, from Richard Garriott's & Raph Koster's Ultima Online (1997) to Brad McQuaid's & Steve Clover's EverQuest (1999), culminating with Rob Pardo's World of Warcraft (2004), which tapped into Blizzard Entertainment's massive audience, marked another paradigm shift: a sort of social networking but decoupled from the real world, and with the goal of producing a story, a story emerging from the interactions among players. While the stories posted on social networks like Facebook were individual, "vanity" stories, the stories produced by MMORPG players were communal stories. The MMORPG game was not about winning but about exploring. It had no end point, no actual "winning" state. Will Wright's life-simulation game The Sims (2000), a descendant of his old city-building game SimCity (1989), was in particular a step towards non-competitive, community-centric entertainment. The emphasis was not so much on extraordinary skills as on ordinary behavior. However, the generation of the 2000s seemed more interested in vanity and competition than in imagination and collaboration. And virtual worlds like Second Life paled in comparison with the fantasy worlds of videogames, notably Azeroth, introduced by Warcraft (1994) and refined in World of Warcraft, and Hyrule, introduced by Nintendo's The Legend of Zelda (1986) and transitioned to 3D in 1998. An MMORPG like The Sims Online (2002), a multiplayer version of The Sims that simulated a world economy, came very close to being a metaverse: it emphasized coexistence over competition, and was set in the real world instead of a fantasy world. David Baszucki's and Erik Cassel's Roblox (2006) and Markus Persson's Minecraft (2011) were particularly revolutionary because they enabled players to create their own games. The players of Minecraft could interact with and modify a 3D environment, thereby creating complex architectures.

Thus came the "winter" of virtual worlds, but during this winter some important ideas were put forward. For example, Solipsis, designed by Joaquin Keller and Gwendal Simon at France Telecom, was ahead of its time: a few months before Bitcoin was born, it envisioned a metaverse distributed via peer-to-peer technology. Solipsis was meant to be an open-source architecture for creating a network of virtual worlds over P2P technology. Surveys such as Tamiko Thiel's "Cyber-Animism and Augmented Dreams" (2011) and John-David Dionisio's "3D Virtual Worlds and the Metaverse" (2013) photographed the nascent discipline, its rise and fall. Some tried to merge the two paradigms of virtual world and social network by creating platforms where players could hang out with real people in virtual reality, like Altspace (Bay Area, 2013) and High Fidelity (San Francisco, 2013), the latter conceived by Philip Rosedale.

If "Snow Crash" had launched the vogue of the metaverse in the 1990s, one could argue that Ernest Cline's novel "Ready Player One" (2011) restarted it in the 2010s.

The "winter" lasted until Sebastien Borget launched The Sandbox (Britain, 2014) and Ari Meilich and Esteban Ordano launched Decentraland (Argentina, 2017), worlds in which avatars could purchase parcels of virtual land (using Ethereum-based crypto tokens such as "mana" for Decentraland and "sand" for The Sandbox) and build on them. Decentraland and The Sandbox empowered gamers to create virtual casinos, theme parks, concert venues, shopping malls and other revenue-generating virtual spaces in the metaverse. Every plot of land in the metaverse and every item in a plot of land was a non-fungible token (NFT). These blockchain metaverses allowed players to create worlds and games within them, and to monetize their creations. The Sandbox, for example, had three main components: an NFT editor for creating assets, a marketplace for trading assets, and a platform to develop interactive games. Other virtual worlds grounded on the Ethereum blockchain were Artur Sychov's Somnium Space (Britain, 2017) and Ben Nolan's CryptoVoxels (New Zealand, 2018). This generation introduced blockchain technology and cryptocurrencies in the metaverse so that players could buy, sell and trade. Upland (Silicon Valley, 2018), developed by Dirk Lueth, Idan Zuckerman and Mani Honigstein, based on the EOS blockchain (supposedly more energy-efficient than Ethereum), allowed players to trade virtual properties linked to real-world properties, basically transporting the concept of NFTs into videogames. SecondLive (2021) was built on Binance Smart Chain. A new generation built on more modern blockchains included Charles Hoskinson's Cardano-based metaverse Pavia (2021) and Christian Zhang's Solana-based metaverse Solice (2022).

A blockchain metaverse is ultimately a game but, unlike traditional games, it comes with a gameplay that lets players vote on it, i.e. shape its philosophy and future direction.
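In practice this voting is typically token-weighted governance. Here is a minimal, hypothetical sketch of the mechanism (an illustration of the general idea, not the governance contract of any particular DAO or metaverse):

    # A toy token-weighted governance vote: voting power equals tokens held.
    from collections import defaultdict

    class Governance:
        def __init__(self, token_balances):
            self.balances = token_balances                       # address -> tokens held
            self.votes = defaultdict(lambda: defaultdict(int))   # proposal -> option -> weight

        def vote(self, proposal, voter, option):
            self.votes[proposal][option] += self.balances.get(voter, 0)

        def outcome(self, proposal):
            tally = self.votes[proposal]
            return max(tally, key=tally.get)

    gov = Governance({"alice": 150, "bob": 80, "carol": 40})
    gov.vote("lower-land-fees", "alice", "yes")
    gov.vote("lower-land-fees", "bob", "no")
    gov.vote("lower-land-fees", "carol", "no")
    assert gov.outcome("lower-land-fees") == "yes"   # 150 "yes" vs 120 "no"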

The problem of any metaverse based on a blockchain was simple: as of 2021 blockchain technology wasn't mature enough to handle a potential user base of tens of millions of gamers. A metaverse could be seen as a universe of intertwined games, each one with its own economy. Hence a metaverse also needed a cryptocurrency that could unify and harmonize multiple in-game economies.

Inevitably, game developers started aiming for their own metaverse. First and foremost was Epic Games, a company founded by Tim Sweeney in 1991 in North Carolina as Potomac Computer Systems, previously known mainly as the maker of one of the most popular game engines, the Unreal Engine (which debuted in 1998), and which in 2017 had introduced the game Fortnite. This game was revolutionary in many ways. It was a free game, designed as a social experience, updated about twice a week to keep it open ended, and could be played on all major gaming devices (a rarity for mainstream videogames). By the end of 2018 it had become the most popular videogame outside China, generating more revenues per month than any previous videogame in history. In "battle" mode, Fortnite was just a regular videogame, but in "party" mode it was a platform for non-gaming activities, ranging from pop-star concerts to social meetups, and in "creative" mode it even allowed players to invent their own islands. In 2021 Epic raised $1 billion from investors to fund its long-term vision of the metaverse.

Roblox was a platform, not a game: games, worlds, and experiences were created by the community. Most game creators on Roblox were under 18. Its virtual world was always evolving, permanently under construction by the community of game creators.

In both Roblox and Fortnite the "gamer" was spending a lot of time exploring the world. Fortnite even became a platform for live events: in February 2019 DJ Christopher "Marshmello" Comstock pioneered virtual concerts in Fortnite's game mode Battle Royale, and in April 2020 rapper Travis Scott performed his concert "Astronomical" in Battle Royale to an audience of 45 million viewers. Swedish pop star Zara Larsson promoted her new album in May 2021 with a virtual "dance party" in Roblox and then got rich by selling merchandise to her fans to dress up their Roblox avatars. In 2020 Fortnite started hosting entertainment on a public island, separate from the main Battle Royale island, and at the same time Roblox introduced "Party Place" for users to organize their own private social events, with the difference that the "place" was determined by the players themselves. Fortnite (250 million active users in 2019) and Roblox (100 million active users in 2019) looked increasingly more like social networks than games. In 2019 kids under 13 were spending more time in Roblox than on Facebook, YouTube and Netflix combined. Games were becoming the medium for kids to communicate with their friends. The "game" was increasingly a virtual gathering place, a place to hang out, whether playing or not. The covid pandemic of 2020, which forced kids to stay home from school and other activities, only accelerated that trend.

Facebook, Epic Games and Roblox were building centralized metaverses, metaverses that were controlled by a centralized developer, metaverses in which they were getting a kickback from user-created content (the old revenue-sharing business model). Decentralized metaverses, instead, built on a blockchain and therefore run by the users themselves, were moving towards a business model in which 100% of the revenue was going to the user/creator/player. It was in these blockchain metaverses that the ordinary player was truly able to participate in building and governing the metaverse.

The metaverse was happening also thanks to universal avatar platforms that created avatars accepted by multiple apps (as opposed to having to create an avatar in each app). Wolf3D, founded in 2014 by Timmu Toke and others in Estonia, originally to develop selfie-based digital avatars using 3D laser scanning technology, became the most popular avatar-creation software in 2020 when it introduced the ReadyPlayerMe kit for developers. ReadyPlayerMe allowed users to "travel" between apps such as video games using a single virtual identity.

In 2018 South Korea's Camp Mobile (a subsidiary of Internet conglomerate Naver) introduced Zepeto, a social platform where users interact, play and create content as 3D avatars. By the end of 2021, Zepeto boasted almost a quarter of a billion users, 70% of whom were women. Zepeto was particularly popular for virtual fashion items. De facto, Zepeto had become the largest virtual fashion marketplace in the world. In 2021 Zepeto also introduced game-creation features similar to the ones on Roblox (a platform mostly popular with men).

At the end of 2020 Nvidia unveiled Omniverse, a collaboration tool for designers of 3D applications, but publicized as a metaverse. In 2021 Mark Zuckerberg publicized Facebook's transition towards a "metaverse company" and even changed the company's name to Meta, but the virtual world Horizon (announced in 2019) was still not released and Horizon Workrooms, which launched in 2021, was another collaboration tool a` la Omniverse.

After Facebook announced its name change to Meta (in October 2021), the cryptocurrencies associated with metaverses skyrocketed in value, their market capitalization reaching $13.4 billion. That included the token of Ethverse, a brand new metaverse built on the Minecraft gaming engine and the Ethereum blockchain, and the token of Starlink, a metaverse DAO that was still in beta.


Alternative Worlds in Literature and Cinema

The progenitors of the metaverse are the fictional universes devised by writers. Some novels and some films are "metaverses" in the sense that what happens is less important than the world that they create. Ideally the reader or viewer should be able to jump into the novel or into the film and explore the world with the fictional characters. We don't know how many readers of novels like Tolkien's "Fellowship" and how many viewers of films like Lucas' "Star Wars" started dreaming of themselves living in those worlds and interacting with those characters. MMORPGs, virtual worlds and metaverses unlocked that imagination by turning that kind of daydreaming into physical action.

One can start with Homer's poems as proto-metaverses, followed two thousand years later by Chretien de Troyes' five romances of the Arthurian cycle in the 12th century.

Fantasy worlds have been the settings for many novels, from the islands of Jonathan Swift's "Gulliver's Travels" (1726) to the children's worlds of Lewis Carroll's "Alice's Adventures in Wonderland" (1865) and Frank Baum's "The Wonderful Wizard of Oz" (1900), from the sci-fi visions pioneered by Edwin Abbott's "Flatland" (1884) and Clive Lewis' "Out of the Silent Planet" (1938) to the modern mythological settings of Mervyn Peake's "Gormenghast" series (1946-59) and JRR Tolkien's "The Lord of the Rings" (1937-49). Note that Frank Baum conceived of see-through glasses in his novel "Master Key" (1901), predating augmented reality by almost a century. Stanley Weinbaum's story "Pygmalion's Spectacles" (1935), in which the protagonist is transported into a fictional world by a pair of goggles, predated virtual reality by half a century.

The most popular worlds in the age of the Internet were the ones introduced by George Martin's "A Game of Thrones" (1996), adapted into a TV series (2011-19) and a videogame (2012), and JK Rowling's Harry Potter series (1997-2007), adapted into several movies (2001-11). At the same time several novels featured someone living in a simulation, starting with Frederik Pohl's "The Tunnel under the World" (1955), Philip Dick's "Time Out of Joint" (1959), Stanislaw Lem's "Professor Corcoran" (1961) and Daniel Galouye's "Simulacron-3" (1964). Fast forward to the age of the metaverse, and the virtual worlds of literature had become much more sophisticated, for example the world of Aincrad in Reki Kawahara's web-only novels "Sword Art Online" (2002-08), later adapted into a print novel (2009), a manga (2010-12), an anime (2012) and a videogame (2013), and the world of OASIS (or the Ontologically Anthropocentric Sensory Immersive Simulation) in Ernest Cline's novel "Ready Player One" (2011). Tad Williams' tetralogy "Otherland" (1996-2001) felt like a metaverse version of Tolkien's "The Lord of the Rings".

Television and cinema had presented several imaginary worlds. The sci-fi ones, like Gene Roddenberry's TV series "Star Trek" (1966-69), were descendants of comic books such as "Buck Rogers" (1929, by Phil Nowlan and Dick Calkins) and "Flash Gordon" (1934, by Alex Raymond). The characters of Rainer Werner Fassbinder's film "World on a Wire" (1973), based on Galouye's "Simulacron-3", lived in a simulation, and the protagonist of Steven Lisberger's film "Tron" (1982) was a hacker trapped inside a computer.

Life is a reality television show in Peter Weir's film "The Truman Show" (1998). Clearly the most influential film was "The Matrix" (1999), the Hollywood remake of Fassbinder's "World on a Wire". Of all the elaborate sci-fi worlds of cinema one of the most impressive in the Internet age was created for Mamoru Oshii's film "Avalon" (2001). Virtual reality and many other futuristic technologies were ubiquitous in TV series such as Charlie Brooker's "Black Mirror" (2011) in Britain and Greg Daniels' "Upload" (2020) in the USA.


Utopia/ Take 1

The concept of utopia (a Greek word that means "no place") has to do with creating or founding a "perfect" society, not only with achieving individual happiness or liberation. Hence, becoming very rich or being very healthy are not utopias, just material states. And Buddhist nirvana is not a utopia, but more like a state of mind. Utopia is for this world, not the afterworld, and therefore the Elysian Fields of the ancient Greeks, or the Heaven of Christianity and Islam, or the Pure Land of Daoism and Mahayana Buddhism are not utopias. Nor are utopias the descriptions of fantastic lands, whether Atlantis or the valley of Yuanming Tao's fairy tale "Peach Blossom Spring" (421 CE). There are however utopias located in the past, like the Arcadia of the ancient Greeks and the Garden of Eden of the Jewish Tanakh (the "Old Testament"). Those were utopias in the sense that humans were told they were possible, they had existed, and it was just a matter of reconquering them. For at least two thousand years, Westerners thought that the best era of all time had existed in the past. Religion instructed them on how to behave towards the gods in order to regain those mythical harmonies of the past.

Today most people think that they live in the best era that ever was, although far from perfect. Very few people dream of returning to the bucolic world of Arcadia.

In between these two stages, the mythological stage and the current materialistic stage, there was an intermediary stage in which some humans believed that they could manufacture the perfect society. One can start with "The Analects of Qufu" (Confucius), which prescribe how to create a harmonious society: the Chinese have been trying to build their utopia for more than 2,000 years. Plato's "Republic" (4th c BC) was another pioneering work, this time for Western civilization, but hardly as influential as Qufu's work. Zeno's "Republic" (3rd century BC) was also a famous work but it is lost. The New Jerusalem of the Book of Ezekiel was influential for both the Jews throughout their history and for the Puritans who in the 17th century colonized what is now the USA. Thomas More's "Utopia" too was a prescription on how to create the best society (a society with no poverty), but this time based on egalitarian principles (e.g. everybody farms), unlike Qufu and Plato, who envisioned societies in which people had different roles and duties (Qufu privileged old men, because they are the wisest, and Plato privileged philosophers, because they are the wisest, and of course one can suspect that both simply privileged their own class). The egalitarian spirit, which was the essence of Buddha's and Jesus' messages but conveniently overlooked for centuries by the ruling classes, fueled the American and French revolutions at the end of the 18th century. These humans were more concerned with creating utopia in the future than with re-creating it from the past: instead of a creation myth, they represented a "destination myth" (as Gregory Claeys called it). So far the ultimate expression of this concept of human-made utopia is Marx's communism. Capitalism has traditionally been presented as based on greed and competition, although capitalism too aims at creating an ideal society in which wealth is maximized. And capitalism couldn't exist without widespread cooperation and collaboration: there is virtually no major scientific-economic progress that didn't involve the cooperation of many people. The scientific revolution was not the outcome of the competitive process of a few maniacs but the outcome of a cooperative process among dozens of curious thinkers. Silicon Valley was driven more by (direct or indirect) cooperation than by competition. A better definition of capitalism is as "liberalism", the utopia of a society that maximizes individual liberty, which at closer analysis depends less on selfishness and greed than on cooperation and collaboration.

The "destination myths" represented a new stage in the way humans saw their destiny on Earth. For millennia humans had been at the mercy of the elements, of the forces of nature, and the only hope for humans was that some merciful supernatural being would come to help them. Paradise was not as appealing as divine intervention in this life. Thomas More lived before the scientific revolution but already in an age in which engineers were taming nature.

In fact, the zeitgeist had changed dramatically at the beginning of the Italian Rinascimento (Renaissance), when Italians became fascinated with the concept of the "ideal city". The movement was perhaps started by Leon Battista Alberti's treatise "On Architecture" (1452), which largely interpreted architecture according to the values of Plato's "Republic". Alberti's pupil Antonio di Pietro Averlino, aka Filarete, designed a new city for Francesco Sforza, then Duke of Milan, known as "Sforzinda", according to principles laid out in his "Trattato di Architettura" (1465). Several Italian paintings of an "ideal city" survive, all of them tentatively and temporarily titled "The Ideal City": one in Baltimore, attributed to Fra' Carnevale and painted in the 1480s; one in Urbino, formerly attributed to Piero della Francesca but most likely by Luciano Laurana, the principal architect of the Palazzo Ducale of Urbino; and one in Berlin, painted in the 1490s, formerly attributed to Paolo Uccello, but more likely by Francesco di Giorgio Martini.

The discovery of America, widely publicized by the man after whom it is named, Amerigo Vespucci, in the letter "Mundus Novus" (1503), stoked the imagination of his contemporaries, and a little more than a decade later Thomas More published his book "Utopia" (1516). Coincidence or not, Columbus' home country of Italy produced several utopian works: Antonio Francesco Doni's "I Mondi" (1552), Francesco Patrizi's "La Citta` Felice" (1553) and Tommaso Campanella's "La Citta` del Sole" (1602), plus the monetary utopia of Gasparo Scaruffi's "L'Alitinonfo" (1582). In 1593 the Republic of Venice created Palmanova, a fortress designed as a nine-pointed star, a self-sustaining community where everyone was equal.

Utopias after More's were born when science and engineering had created the impression that humans were now masters of their own destiny. Humans now had the tools to control and shape their natural environment. Utopias were more possible than ever. One could plot the precise route from the current society to utopia, which is precisely what Karl Marx did.

The 17th century witnessed an acceleration in utopian thinking, as demonstrated by Johann Valentin Andreae's book "Christianopolis" (1619) in Germany, by Gabriel Foigny's novel "A new Discovery of Terra Incognita Australis" (1676) in France, and in England by Francis Bacon's book "New Atlantis" (1627), the archetype of utopias based on scientific and technological thinking, by Gabriel Plattes's "A Description of the Famous Kingdome of Macaria" (1641), another text that emphasized scientific knowledge, by Samuel Gott's Puritan utopia "Nova Solyma" (1648), by the commune of the Diggers (1649), which inspired Gerrard Winstanley's "The Law of Freedom" (1652), and by James Harrington's book "The Commonwealth of Oceana" (1656), loosely based on the republic of Venezia/Venice.

The age of the Lumieres (Enlightenment) brought about a general optimism about the human mission, grounded in science and reason. Julien LaMettrie's "L'Homme Machine" (1748) presented the human mind as a machine. Jacques de Vaucanson and Pierre Jaquet-Droz built automata. Georges Buffon's "Histoire naturelle" (1749) told a story of the Earth and life on it that was not based on the Bible. Thomas Wright's "New Hypothesis of the Universe" (1750) speculated that there might be "ten thousand times ten thousand worlds... peopled with myriads of intelligent beings". Paul-Henri Holbach's "Systeme de la Nature" (1770) argued that the universe is matter in continuous transformation, with no beginning and no ending. Charles Messier compiled a catalog of comets and nebulae (1771), and in 1781 William Herschel discovered a new planet, Uranus (besides arguing that the Sun and the Moon must be inhabited). James Hutton's "Theory of the Earth" (1785) proved that the Earth is very ancient and Georges Cuvier's "Memoir on the Species of Elephants, Both Living and Fossil" (1796) proved that fossils are animals that have become extinct. James Cook's voyages to Oceania (1768-79) inflamed popular imagination for mythical lands, as did Edward Gibbon's "Decline and Fall of the Roman Empire" (1776) for the most famous land of the past. Abraham Anquetil-Duperron's translation of the "Zend Avesta" (1771) and Charles Wilkins' translation of the "Bhagavad Gita" (1784) began the "Oriental Renaissance" in Europe. At the same time there were conflicting forces: on one hand the rather cynical economic and social analyses of Adam Smith (1776), Thomas Malthus (1798) and David Ricardo (1817); on the other hand the spiritual communes of Ann Lee's Shakers, who emigrated from England in 1787 and founded Mount Lebanon near New York, and of Johann-Georg Rapp's Rappites, who emigrated from Germany in 1803 and founded Harmony in Pennsylvania.

Since More's "Utopia", all utopias were set in a distant, imaginary, present place. If Jean-Jacques Rousseau "Discourse on the Arts and Sciences" (1750) set his utopia in the ancient state of nature (basically a secular version of the religious Eden), Louis-Sebastien Mercier's "The Year 2440" (1771) was the first utopian work to set utopia in a distant future.

The American and French revolutions, with their aspiration to rewrite the rules of society, probably increased the motivation to think of alternatives to the existing society. Their influence is visible in William Hodgson's "The Commonwealth of Reason" (1795); John Lithgow's "Equality" (1802), the first utopia published in the USA; and G. A. Ellis' "New Britain" (1820).

Meanwhile, the British philosopher William Godwin anticipated both anarchism and communism in his treatise "An Enquiry Concerning Political Justice and Its Influence on General Virtue and Happiness" (1793). Henri de Saint-Simon's "The New Christianity" (1825) argued that scientists should lead society, and his pupil Auguste Comte started lecturing in 1826 about the new doctrine, "positivism". It was the age of nationalist revolutions, largely inspired by the American and French ideals: France had multiple revolutions (1824, 1830, 1848 and 1870); Latin American countries fought for independence from Spain and Portugal; Serbia and Greece fought for independence from the Ottoman Empire, Poland from Russia (1831), the Magyars and Czechs from Austria (1848); etc.

Robert Owen's experiment New Lanark (1800) in Scotland and Charles Fourier's book "The New Industrial World" (1829) in France created the two most popular paradigms of the proto-socialist commune. In particular, Owen and Fourier inspired several utopian communities in the USA, starting with Owen's own New Harmony (1825), located on the land originally settled by the Rappites. Further impulse towards abandoning the modern city came from "transcendentalism", an anti-materialist philosophy that encouraged a return to nature (the Transcendental Club was founded in 1836 in Boston by the likes of Waldo Emerson and Henry Thoreau). Albert Brisbane popularized Fourier's thought in the USA with his book "Social Destiny of Man" (1840). Fourierist experiments included George and Sofia Ripley's Brook Farm near Boston (1841-47), Charles Sears' and Nathan Starks' Colts Neck in New Jersey (1843-54), and Humphrey Noyes' Oneida near New York (1848-81). Other socialist utopias were John Minter Morgan's "The Revolt of the Bees" (1826), John Francis Bray's "A Voyage from Utopia" (1842), and Etienne Cabet's "The Voyage to Icaria" (1839), the manifesto of the proto-socialist communes that he later (1848) founded in Texas.

Meanwhile in Europe the anarchist Pierre-Joseph Proudhon was distributing his pamphlet "What is Property?" (1840), which answered that "property is theft", and Karl Marx and Friedrich Engels were publishing the "Communist Manifesto" (1848). The First International was formed in 1864 and the ephemeral Paris Commune was created in 1871. The forces of anarchism and communism converged towards a whole new category of more or less scientific utopias, all the way to the Russian anarchist Pyotr Kropotkin and his treatise "Fields, Factories, and Workshops" (1899).

These thinkers were criticizing the materialist and industrial society, although from different perspectives. They were joined in Britain by John Ruskin, particularly with the chapter "The Nature of Gothic" in the second volume of his "Stones of Venice" (1853), by Edward Bulwer-Lytton, who published the novel "The Coming Race" (1871), by Edward Carpenter, who distributed the pamphlet "Civilisation" (1889), and by William Morris, who published the novel "News from Nowhere" (1890). Similar views were discussed elsewhere in utopian novels such as Edward Bellamy's "Looking Backward" (1888) in the USA, a novel set in the year 2000 in which money and private property have been replaced by a system of wealth distribution, and Theodor Hertzka's "Freiland" (1890) in Austria. Ruskin and Morris didn't view technological progress as leading to a better world and instead preached a society of "gardens" in which simple values prevailed, along the lines of Henry Thoreau's transcendentalist memoir "Walden" (1854) and consistent with Samuel Butler's satirical allegory "Erewhon" (1872). Hence the bucolic settings of utopian works such as William-Henry Hudson's "A Crystal Age" (1887) and William-Dean Howells' "Traveler from Altruria" (1894).

Henri de Saint-Simon and Pyotr Kropotkin had already viewed electricity as a revolutionary technology. Michael Angelo Garvey's "The Silent Revolution - The Future Effects of Steam and Electricity upon the Condition of Mankind" (1852) envisioned an "electrical" utopia.

There were also twisted utopias that pretended to be based on science, but the science was Francis Galton's eugenics, outlined in his influential book "Hereditary Genius" (1869), or Malthus' fear of overpopulation, and so these utopias, like Ellis-James Davis' "Pyrna A Commune" (1875), John Petzler's "Life in Utopia" (1890), Kenneth Follingsby's "Meda" (1892), Andrew Acworth's "A New Eden" (1896) and John-William Saunders' "Kalomera" (1911), discriminated against "inferior" beings (unhealthy children, sick people, large families, etc).

Ebenezer Howard's book "To-morrow" (1898), better known as "Garden Cities of Tomorrow", initiated the "garden city" movement in urban planning: he was influenced by Henry George's study "Progress and Poverty" (1879), by Bellamy's "Looking Backward", and by transcendentalists (he spent five years in the USA), anarchists and communists. All those critiques of the political and economic status quo converged in his vision of the "garden city".

That's when the architect as a visionary was born, or reborn. Tony Garnier's "La Cite' Industrielle" (exhibited in 1904, published in 1917), the Italian futurists (Antonio Sant'Elia) and the Russian constructivists contributed to reimagining the city (if not the whole state), culminating perhaps in Bruno Taut's "The Dissolution of Cities" (1920), an indirect product of German expressionism. A direct line can be drawn between that book's title and Frank Lloyd Wright's book "The Disappearing City" (1932), in which the Chicago architect discussed his own ideal city, Broadacre City. Another pinnacle of utopian architecture was the Ville Contemporaine conceived by Charles-Edouard Jeanneret, better known as Le Corbusier, a concept presented in 1922 at the Salon d'Automne in Paris and described in the book "The Radiant City" (1935). The communists of Eastern Europe, in the era of "socialist realism", had their own vision of the utopian city, built for steel workers around giant steelworks. The best example was Nowa Huta, created in 1949 in Poland.

Herbert Wells, perhaps the first intellectual who deserved to be called a "futurist", penned the programmatic novel "A Modern Utopia" (1905) and then the sci-fi utopia "Men Like Gods" (1923), set in a parallel universe. There was also the scientifically-engineered utopia of "Walden Two" (1948) by the psychologist Burrhus Skinner, a novel meant to promote the tenets of behaviorism.

While utopia was being at least conceived, if not implemented, in Western Europe, in Russia it was dying: Tolstoy died in 1910 and Kropotkin in 1921, and Lenin had seized power in 1917 with his own version of utopia.

Utopian experiments continued in the West, above all with the hippy communities (for example, "Hog Farm", founded in 1965 in Berkeley by activist and clown Hugh "Wavy Gravy" Romney and his wife Jean "Jahanara Romney" Beecher, "The Farm", established in 1971 by San Francisco's Zen and LSD guru Stephen Gaskin in central Tennessee, and the libertarian city within a city of Christiania, created in 1972 by Danish hippies and anarchists in an abandoned military base located on the island of Amager in Copenhagen), and spiritual enclaves such as the Buddhist community of "Plum Village" at Loubes-Bernac, 40 kms east of Bordeaux in the southwest of France, established in 1982 by Vietnamese master Thich Nhat Hanh.

In 1959 the visual artist Constant Nieuwenhuys conceived the utopian city "New Babylon". Ron Herron, a member of the avantgarde architecture group Archigram, envisioned the "Walking City" in 1966, a city that can "walk" (i.e. move) from ocean to land so as to maximize its resources. "Plug-in City", designed chiefly in 1964 by Peter Cook, another member of Archigram, imagined a city literally hosted inside a giant machine.

Texts about utopia were rare in the post-modernist world. Bernadette Mayer's "Utopia" (1984), a collection of poems, stands out.

In 1968 Roger Anger designed Auroville near Pondicherry in India, a city inspired by the philosophy of yoga guru Aurobindo Ghose who had died in 1950, and in 1970 Paolo Soleri designed Arcosanti in Arizona according to principles of "arcology", a combination of architecture and ecology. The weirdest utopian project of the time was perhaps the rock garden built in secret and illegally for many years by Nek Chand in Chandigarh, the Indian city where Le Corbusier was building his masterpiece, the High Court. Nek Chand used waste from demolition sites around the city to create a divine kingdom that he called Sukrani. The authorities found out only in 1976, more than 20 years after he had started it.

Then came the environmental utopias, the cities built from scratch to minimize the "carbon footprint". Utopian visions of sustainability have popped up in the most unlikely places. Songdo City near Seoul in South Korea (started in 2002) was the first highly publicized "smart city". Masdar City (2008) in Abu Dhabi was marketed as the world's first zero-carbon city and was followed by The Sustainable City (2015) in Dubai. In 2006 Nigeria began building Eko Atlantic on land near Lagos reclaimed from the Atlantic Ocean, and in 2009 Britain approved construction of its first eco-town, North West Bicester. There are cities designed for a return to a rural life, like The Cannery in California near Davis (opened in 2015) and Agritopia in Gilbert near Phoenix (2004). In 2016 ReGen Villages (a Silicon Valley startup) began construction of an eco-city in Almere near Amsterdam. In 2017 Toronto partnered with Google to develop the "smart city" Quayside. In 2017 Saudi Arabia announced the plan to build Neom, an eco-city bigger than Israel in the northern desert. Critics of eco-cities point out that they tend to become "green" enclaves for the very rich, while nothing is being done to confront the issues of poverty, pollution, homelessness and crime of existing megacities in the developing world. And most of them tend to become "smart cities" that collect an infinity of data, an obvious threat to the privacy of citizens.

Modern utopias are scientific utopias: they are based on existing technology and a predictable path to achieving them. Perhaps the most visible utopia is the techno-utopia of globalization, of world integration, of the abolition of borders via cyberspace. The Internet has been hailed by many utopians as the ultimate free world, with the utopia of unlimited communication replacing the futurist utopia of endless progress. Artificial Intelligence and other software technologies are sometimes presented as vehicles to a utopia of omniscience and even immortality and sometimes as apocalyptic dystopias. It is not clear even to their prophets if the "singularity" will be good or bad for its creator, humankind.

The reason most people today think that they live in the best era yet is that today we calculate collective happiness based on gross national product, median income, life expectancy, health, peace, safety, and other material, measurable factors. Past eras cannot compete with our current era in any of these dimensions.

Clearly that was not the whole truth because the 20th century was mostly the century of anti-utopias, of dystopias, both because of the rise of fascism and communism (both invented in Western Europe), and because of the industrial system invented by Frederick Winslow Taylor (summarized in 1911 in his book "The Principles of Scientific Management"). While Eastern European intellectuals were being silenced by their own dystopia, Western intellectuals were no less terrified by social and political trends in their own societies, a phenomenon heralded by Fritz Lang's film "Metropolis" (1927) and by novels such as Jack London's "The Iron Heel" (1907), Franz Kafka's "The Trial" (1915), Yevgeny Zamyatin's "We" (1921), in which an all-powerful government (the "One State") is the problem, Aldous Huxley's "Brave New World" (1932), in which people themselves are the problem, and Ayn Rand's "Anthem" (1938). During and after World War II there came an avalanche of dystopian novels: Rex Warner's "The Aerodrome" (1941), George Orwell's "1984" (1949), Kurt Vonnegut's "Player Piano" (1952), Ray Bradbury's "Fahrenheit 451" (1953), Anthony Burgess' "A Clockwork Orange" (1962), Arkady and Boris Strugatsky's "Hard to be a God" (1964), James Ballard's "Crash" (1973), Doris Lessing's "Memoirs of a Survivor" (1974), Margaret Atwood's "The Handmaid's Tale" (1985), Phyllis James' "The Children of Men" (1992), Lois Lowry's "The Giver" (1993), David Foster Wallace's "Infinite Jest" (1996), Kazuo Ishiguro's "Never Let Me Go" (2005), Cormac McCarthy's "The Road" (2006), Vernor Vinge's "Rainbows End" (2006), Naomi Alderman's "The Power" (2016), etc. And some even more disturbing dystopias popped up in cinema: Chris Marker's "La Jetee" (1962), Jean-Luc Godard's "Alphaville" (1965), John Frankenheimer's "Seconds" (1966), Franklin Schaffner's "Planet of the Apes" (1968), George Lucas's "THX-1138" (1971), Richard Fleischer's "Soylent Green" (1973), John Boorman's "Zardoz" (1973), Norman Jewison's "Rollerball" (1975), Michael Anderson's "Logan's Run" (1976), Andrei Tarkovsky's "Stalker" (1979), George Miller's "Mad Max" (1979), Ridley Scott's "Blade Runner" (1982), David Cronenberg's "Videodrome" (1983), Terry Gilliam's "Brazil" (1985), Paul Verhoeven's "Robocop" (1987), Kevin Reynolds's "Waterworld" (1995), Kathryn Bigelow's "Strange Days" (1995), Jean-Pierre Jeunet's "City Of Lost Children" (1995), Mamoru Oshii's "Ghost in the Shell" (1996), Andrew Niccol's "Gattaca" (1997), Peter Weir's "The Truman Show" (1998), continuing in the 21st century with Steven Spielberg's "Minority Report" (2002), Kar-wai Wong's "2046" (2004), Andrew Stanton's "WALL-E" (2008), Alex Garland's "Ex Machina" (2015), Yorgos Lanthimos's "The Lobster" (2015), etc.


The Zeitgeist from Cyborgs to Cybernauts

The 1980s were very much the decade of the cyborg (the body augmented with electronic organs or limbs), not of the avatar. Futurists were more intrigued by the potentialities of extending the human body than by the potentialities of living in an alternative universe. It was the decade of Stelarc's robotic prosthesis "Third Hand" (1980) and James Cameron's film "The Terminator" (1984). That intellectual mood was perhaps best represented by Donna Haraway's "A Cyborg Manifesto" (1985), in which she correctly pointed out that Darwin in the 19th century had blurred the distinction between human and animal and then in the 20th century computers had blurred the distinction between natural and artificial. That trend culminated with cyborgs that blurred the distinction between body and nonbody. However, these intellectuals didn't think of avatars that blur the distinction between reality and fantasy. (To be fair, Lynn Hershman created an avatar in the real world: between 1972 and 1979 she lived an ordinary life as "Roberta Breitmore", a fictitious person). The focus was on the individual (being extended into a cyborg), not on the whole society (being increasingly extended into a computer-controlled smart city).

Because they were focusing on the individual instead of the society as a whole, many thinkers missed the decline of socializing that started at least as far back as the invention of email (1972) on Unix. Email allowed people to keep in touch much more frequently but marked the decline (and eventual death) of the handwritten letter. If it is debatable how much linguistic skills declined because of email (after all, people wrote more often, so they practiced more often, even though the emails were typically shorter and full of abbreviations), there is consensus that the "depth" of communication was reduced. Emails went to more people and more often, but tended to be more superficial. Quantity generally comes at the expense of quality. Precisely because they were sending many more emails, users were being less careful about what they were writing, and this generated a pandemic of anxiety related to rude emails and misunderstandings. Furthermore, email and then texting, and then videochats, implemented the transition from the physical movement of people to the virtual movement of information, thereby dramatically reducing the opportunities for physical encounters. Then came videogames, which turned the computer into a combination of hermit-like reclusion and opiate-like addiction, although they created their own subcultures and communities. (Multi-user online gaming became a social lifeline). During the 20th century, the radio set and then the television set had taken over the fireplace's role as the magnetic core of domestic sociality, but the television set became obsolete in the 2000s, at least for the younger generations who obtain their entertainment and news from personal devices like the laptop and the smartphone. Then in the 2000s came social networking and chat systems like Facebook and WeChat with their shallow social life. Facebook had a "Like" button but not a "dislike" button, implying an almost drug-like quality of digital friendship. Friendship became a commodity. Platforms like Facebook started making money out of the user's social life: a user's social life became someone else's business model. At the same time the social networking platform became a vanity platform on which the most common activity was to post "selfies" that glamorized the user's own life: a platform born for social networking became a platform for a solitary cult of personality. (Don't blame this phenomenon on the platform: the tendency of tourists to take pictures of themselves in front of just about anything predates Facebook). The ultimate form of large-scale vanity and self-cult of personality was live video streaming, the equivalent of a reality TV show but with the protagonist being the creator, at the same time making and being the show. Digital social networking became an interesting experiment in self-representation and self-perception.

The metaverse is a place in which to socialize all the time, but it is not in the real world. The rise of the metaverse signals a need that videogames and social networks could not satisfy. It is all vicarious, but the avatar is neither lonely nor vain. While in live streaming (and to some extent also in social networks) you are naked in front of everybody, in a virtual world you hide yourself, you become someone else. Your condition is the same as the condition of the player of a multi-user game, except that there is an actual "life" to talk about.

The neuroscience of sleep seemed to imply that "we" routinely live alternative existences. For example, Allan Hobson in "The Chemistry of Conscious States" (1994) argued that two chemical systems inside the brain regulate the waking and the dreaming experiences: respectively, the "aminergic" and the "cholinergic" systems. Our conscious and unconscious identity swings between these two end points. There are universes that we all involuntarily experience, many times in our life: dreams. During sleep our "avatar" is thrown into these universes that are often fantastical and sometimes terrifying.

Perhaps the popularity of Buddhist meditation among high-tech Silicon Valley visionaries (from Steve Jobs of Apple to Jack Dorsey of Twitter), too, contributed to demystifying the notion of entering a metaverse. Vipassana meditation in particular, the oldest Buddhist meditation practice, popularized by Joseph Goldstein and Jack Kornfield's book "The Path of Insight Meditation" (1995), trains the "user" to achieve an alternative state of mind, like a metaverse of sensations and no reactions. In 2007 Google launched a meditation program called "Search Inside Yourself" which then became the Search Inside Yourself Leadership Institute. In 2009 Soren Gordhamer started the annual Wisdom 2.0 conference in San Francisco. Books like Daniel Ingram's "Mastering the Core Teachings of the Buddha" (2008) and Jay Michaelson's "Evolving Dharma" (2013) became bestsellers among software engineers, who flocked to Jack Kornfield's Spirit Rock meditation retreat north of San Francisco. Polls showed a phenomenal increase in the number of people who meditate in the USA. Meditation apps on smartphones were downloaded millions of times.

During the covid pandemic, many activities (from work to study) moved online thanks to videoconferencing platforms like "Zoom", and their users kept moving in and out of a metaverse of sorts, the metaverse where they met coworkers, customers, teachers, classmates and so on, a metaverse juxtaposed to the physical universe that was reduced to an apartment or a home.

One reason why the metaverse resonates with the public is that we already live in a metaverse of sorts. Sci-fi writer Philip Dick asked in 1978: "What is real? Because unceasingly we are bombarded with pseudo-realities manufactured by very sophisticated people using very sophisticated electronic mechanisms". But he had not seen anything yet. What came after the opening of the Internet to the public was astronomically more invasive than anything that Dick had seen on television in the 1970s. We already live in a metaverse, but it's a metaverse made of ads, banners, pop-up windows, and all sorts of distracting (and often brainwashing) experiences, a metaverse controlled by corporations that want to hijack our attention span. (Personally, i also find very annoying the videos that start automatically, whether they are commercials or not, and, honestly, even most pictures, especially when i'm just searching for simple information). The whole apparatus of redundant decoration around a piece of information is part of this unwanted metaverse forced on Internet users. Radio and television entertainment got flooded very quickly with commercials, and the Internet has simply provided an even more efficient platform to reach consumers in every corner of the world and reach them multiple times a day. Marshall McLuhan's "global village" turned out to be more like a global marketplace (and sometimes a marketplace of conspiracy theories) than a homely village. Platforms like YouTube exploded the amount of time that one has to spend watching commercials: if television was showing a commercial every 20-30 minutes, virtually any video on YouTube starts with a commercial, even if the video is only a few seconds long. (The creator of the video, of course, gets absolutely no money from the commercials that are displayed before and during the video). We live in the age (foreseen by Jean Baudrillard) in which the advertisement has become longer than the show. This artificial world of commercials has also invented a fantastic way to spy on us: the "cookie", which websites can deploy on any device that connects to the Internet. In 1994 Lou Montulli, working at Netscape, invented the Internet "cookie", a piece of information deployed by the browser on the user's computer, initially to find out whether visitors to the Netscape website had already visited the site before. The "cookie" has become the accepted method for websites to record the user's browsing activity, i.e. to "spy" on the users. Tracking cookies map a user's online life with increased accuracy. A website can then even drop cookies that belong to its advertisers, the "third-party cookies", so that the websites spying on your online life are not the ones you voluntarily accessed but the ones that paid for advertising space on those websites. These cookies are used to customize the adverts based on your online life, so that the marketing campaign follows you, the unsuspecting cybernaut, as you visit different websites. Radio and television never had the power to customize advertisements for each individual viewer. It is a no-brainer on the web. This greedy metaverse views the cybernaut only as a consumer. The cybernaut travels cyberspace shackled to a billboard that keeps posting adverts continuously refined based on the stations of the journey and continuously suggesting new destinations. This metaverse is constructed by corporations for the purpose of extracting the maximum amount of money from the cybernauts who venture into it.
It is a shopping universe superimposed on the universe of digital information, and powered by a vast apparatus of surveillance.
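
To make the mechanism concrete, here is a minimal sketch of the cookie loop, using only Python's standard library (the cookie name "visitor_id", the port and the one-year lifetime are arbitrary illustrative choices, not the code of any real website or ad network): the server hands the browser an identifier in a Set-Cookie header, the browser returns it on every later visit, and that loop is all a website, or a third-party ad server embedded in many websites, needs to recognize and follow the same user.

# A minimal sketch (not any real tracker's code) of how a site can set and
# read a browser cookie over plain HTTP, using only Python's standard library.
from http import cookies
from http.server import BaseHTTPRequestHandler, HTTPServer
import uuid

class CookieDemo(BaseHTTPRequestHandler):
    def do_GET(self):
        jar = cookies.SimpleCookie(self.headers.get("Cookie", ""))
        if "visitor_id" in jar:
            # Returning visitor: the browser sent back the identifier it was
            # given, which is enough to link this visit to all previous ones.
            visitor = jar["visitor_id"].value
            body = f"Welcome back, visitor {visitor}\n"
        else:
            # First visit: assign a random identifier and ask the browser to keep it.
            visitor = uuid.uuid4().hex
            body = "First visit: a cookie has been set\n"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        # A third-party tracker works the same way, except its Set-Cookie header
        # comes from an ad server embedded in many different websites.
        self.send_header("Set-Cookie", f"visitor_id={visitor}; Max-Age=31536000; Path=/")
        self.end_headers()
        self.wfile.write(body.encode())

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), CookieDemo).serve_forever()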

Maybe people are ready for a metaverse where they can feel free and not surveilled.


Cyberspace as Migration and Dematerialization

Humans have been searching for new worlds since their beginnings. For unknown reasons, and perhaps for no reason at all, they migrated out of the plains of Africa, spreading north and east, towards unknown and frequently unfriendly territories. Wherever they went, humans invented new architectures, new languages, new mythologies, new technologies and new societies.

One can argue that the migration has led humans away from primal matter. Their migrations have involved not only the colonization of new physical spaces but also the colonization of new virtual spaces: Greek philosophy, Buddhism, Confucianism, Scholastic philosophy, mathematics, medieval music, Renaissance art, science, German idealism, romantic poetry, cubism, surrealism and so on.

Over the millennia, the hunters and gatherers of the Paleolithic have become intellectuals through a sequence of migrations that carried them far away from the material world of prehistory. Human migration has been as much about dematerializing the environment as it has been about farming, trading, state building, warmongering, exploiting and enslaving. Cyberspace is the latest stop of humanity's endless migration towards dematerialization of its natural environment.

Being populated with software and data, and powered by hardware, cyberspace is a much more concrete space than, say, Pierre Teilhard's noosphere (1922), the evolving sphere of thought enveloping the biosphere and destined to culminate in a transcendent "omega point"; but cyberspace could have its own "singularity" as the migration continues undeterred towards more and more virtual environments.

It is difficult to answer the question "why do humans need to migrate to cyberspace?" the same way that it is difficult to answer the question "why did humans need to migrate from Africa to Scandinavia?". One wonders if the appeal of migration came from new avenues for leisure rather than from new avenues for wealth. After all, how could humans get "richer" by moving towards the arid lands of Greece and the cold winters of the steppes? And yet they did. One wonders if archeologists are missing the importance of leisure time in their descriptions of the primal motivations underlying migration patterns. Is the "promised land" a land of plenty or a land of entertainment? Was Columbus searching for gold or for fun? Were the millions of illegal immigrants in the USA and Europe attracted by the horrid conditions in which they now live and work or by Disneyland and Las Vegas?

Leisure is a kind of dematerialization: the material affairs of production and trade (or, if you prefer, of work and shopping) are replaced for a few hours by the immaterial affairs of reading a book, of watching a TV show, of playing a game.

Cyberspace was originally (in the early days of computer networks) a space for work, study and commerce, a transnational virtual world, but leisure percolated into it from the beginning and expanded dramatically when the personal computer and the smartphone vastly expanded ease of access to it (like veritable prostheses for our bodies to enter cyberspace). Leisure in cyberspace now encompasses everything from socialization to play, from pornography to dating, from world news to entertainment.

As they have always done in their migrations, humans have developed new architectures, new languages, new mythologies, new technologies and new societies for this new space, cyberspace.

Cyberspace is a descendant of mathematical space. Its languages are descendants of symbolic languages such as the "characteristica universalis" invented by Gottfried Leibniz in "De Arte Combinatoria" (1666). Its technologies are descendants of George Boole's binary algebra (1854) and of Alan Turing's "universal machine" (1936), to name just two.

However, when it comes to architecture, mythology and society, cyberspace has expanded into realms that mathematical space never considered.

Cyberspace doesn't have a creator god nor an origin myth because we can prove that it was created by humans and we can write its history from the first programmable computer to the metaverse, but it has its own mythology: there are hackers, cypherpunks (the inventors of cryptocurrencies), transhumans/cyborgs (technologically enhanced humans) and many other legendary characters; and there is a general mythology of algorithms, which are sometimes (especially in Artificial Intelligence) treated like divinities. Society in cyberspace was established with the first "bulletin boards", like CommuniTree (1978), the first newsgroups of the Usenet (1980) and virtual communities like the WELL (1985), and with the multi-user games, like Habitat (1986) and especially the MMORPGs of the 1990s, such as Ultima Online (1997) and Everquest (1999), and truly blossomed with the social networking platforms GeoCities (1994), MySpace (2003) and Facebook (2004). And finally cyberspace began to grow its own primitive architecture with city-building games like Utopia (1982) and SimCity (1989).


The Cognitive Duality of Stories and Games (and Simulations)

Life is a stories-generating machine.

The ultimate meaning of life is the stories it generates.

The stories remain and constitute the real contribution of life to the universe.

The stories evolve as life evolves. The human mind is programmed to construct stories out of what happens around us and even to construct fake/imaginary variations on the real stories that we witness and learn about. The ultimate purpose of our cognitive life is to tell stories.

Everything is a story. For example, a home is the story of how it was built, in which circumstances, why and by whom and for whom, and who has lived in it since it was built and even what happened in that neighborhood.

The world is more than just a landscape: it is a story-teller that is telling us stories all the time, stories that we need to interpret in order to survive and thrive.

Story-telling is the main way we communicate with the world and understand what the world tells us.

At the same time, life is a game, and the world is a player that is playing that game with and against us.

Life is a game at multiple levels. Evolutionarily speaking, the survival of the fittest is a game. Genes play a game to create the phenotype of our body. Every day's life is a multi-user game in which we play with and against other players, a MUD of sorts. Bureaucracies are games. Capitalism is a game. Wars are games. Work is a game.

At the same time, life is a simulation: our life is never what we pretend it is, it simulates what we would like our life to be. We really live in a simulation, except that we personally build that simulation.

We tell stories, we play games and we simulate. This is particularly visible in children, who spend most of their cognitive life talking, playing and daydreaming. That's how children become part of our collective simulation. These three dimensions of cognitive life not only coexist but animate each other. There is a straight line, or, better, a circle, connecting storytelling, gaming and simulating.

Aarseth Espen in "Cybertext" (1997) tried to separate storytelling from gaming, viewing them as the two extremes in the transition from linear media to nonlinear media. Somewhere in between the two there is what he called "ergodic literature", i.e. dynamic texts for which the reader must perform specific actions to generate a meaningful story and in which every reading (by the same or different reader) may yield a different story. Espen was reacting to the brief vogue for the hypertext novel: Michael Joyce's "Afternoon" (1991), Stuart Moulthrop's "Victory Garden" (1991), John McDaid's "Uncle Buddy's Phantom Funhouse" (1992), Bill Bly's "We Descend" (1997), Stephanie Strickland's "True North" (1997), Nick Montfort's "Winchester's Nightmare" (1999), Talan Memmott's "Lexia to Perplexia" (2000), Wes Chapman's "Turning In" (2001), etc

Aarseth argued that the dominant user function in literature is the interpretative one, while in videogames it is the configurative one.

But one can argue that all narratives are "ergodic" in the sense that each of us reads a slightly different version of a novel and sees a slightly different version of a movie; that the interpretative function is simply a kind of configurative function. As Frederic Bartlett showed in the 1930s, human memory is "reconstructive": we unconsciously retain a very subjective version of a story. That's after all the main difference between human memory and computer memory: the computer memorizes every single bit of a sentence or of a scene and can replay it exactly as it was, whereas human memory memorizes events in a highly manipulated form and, when replaying them, customizes them to the context. We never read the same text the same way, and we never watch a movie the same way, especially if we are talking to someone else who also read that text or watched that movie. When we re-tell a story, we "simulate" what was in the original story, and we play a game with the listener, sometimes a collaborative game (we reconstruct the story and its context together). The protagonist of the stories that we tell is always us. The effort we have to make when reading a novel or watching a movie is precisely the effort to encode the story in a subjective form. In a sense, life is easier for a computer that simply stores every bit/pixel of the story. In computer terms, the user has to do non-trivial work to absorb a story, and that "work" is similar to navigating in a space. Therefore there is an element of gaming and simulation even in the simple action of re-telling a story, because we adjust the story to the occasion and, in any case, we humans just don't have the cognitive skills to retain every detail of the story, so at best we "simulate" what was in it. The main difference between storytelling and games (think of old-fashioned table games like Monopoly and chess or even just outdoor games like Hide & Seek) is that games come with clear rules whereas storytelling has no explicit rules: there is no rule for how we are supposed to summarize a car accident or a political speech.

On the other hand, the computer as a cultural medium has been viewed as extending old forms of cultural expression: Brenda Laurel's "Computers as Theatre" (1991) saw continuity with the drama, Janet Murray's "Hamlet on the Holodeck" (1997) found continuity with the novel, and Lev Manovich's "The Language of New Media" (2001) with the film.

However, the computer (the digital medium) introduced the possibility of programming a (digital) text, i.e. the possibility of making it explicitly interactive and even collaborative. And so, again, a straight line leads from texts to games, from digital texts to videogames.

Games don't tell stories, but, conversely, one can argue that games are a medium to construct stories, imaginary stories that don't exist in the real world but that are often more cognitively challenging, cognitively interesting and cognitively addictive than real-world stories. A game is a system for players to construct more interesting stories than the ones that compose our ordinary lives.

In real life the world determines which stories can be constructed and told in it. In literature and cinema the story generally dictates the world in which it takes place. A game is more similar to life because it is its rules (its world) that dictate which stories can take place in it.

Every story and every game is a simulation to some degree, but the full-fledged simulation is a game with enough rules to truly match the complexity of the real world, a vast landscape of possible stories. Videogames have been steadily evolving away from the purely competitive mode towards the role-playing and exploration mode, i.e. towards simulations that are more cognitively "real". The player of a simulation is not an explorer but a writer, a poet, a filmmaker, a creator looking to construct amazing stories within the simulation's landscape. A metaverse, as the pinnacle of simulation, satisfies our genetic propensity to generate stories.

Ludwig Wittgenstein in his "Philosophical Investigations" (unfinished when he died in 1951) was puzzled that we know exactly what a "game" is but it is so difficult to find what games (such as solitaire and chess) have in common. He tentatively submitted that a game has both rules and "a point". On one hand, one can argue that that's obviously not enough because many things, from bureaucracies to driving, have both rules and a point. There must be a difference in the "kind" of point that makes a game a game as opposed to dealing with the tax authority or the department of motor vehicles. On the other hand, one can argue that Wittgenstein was right on target and indirectly showed that dealing with a bureaucracy, driving and many other activities are games, that games are pervasive and pretty much the essence of living. All interactive media are games. In fact, all devices that we interact with are games. It took me a full day to figure out how to disable all the automatic settings of my new smartphone and i often struggle to figure out how to perform even simple operations. The designers who designed that smartphone designed a game, and apparently a fairly difficult one. Every user interface is a game. A metaverse is a game that simulates even the "boring" aspects of obeying rules and aiming for a "point".


A Critique of Immersion

This is an essay in three parts: 1. A critique of immersion based on Brecht's estrangement; 2. A critique of immersion based on the power of imagination; 3. A critique of immersion based on the kinaesthetic demands of gaming.

The current trend in computer entertainment (and videogames in particular) is towards more realistic graphics, an "Aristotelian" trend in the sense that it aims for full immersion of the spectator in the "drama", as Aristoteles/Aristotle preached in his "Poetics". In the 1930s Bertolt Brecht reacted against Aristotle's principles by instead wishing the audience to be fully aware of being in the presence of a fiction, and he invented the technique of "estrangement" ("verfremdungseffekt" or "v-effekt"). While Aristotle wanted a theater so powerful that the spectator subconsciously identifies with the protagonist, Brecht desired a theater in which the spectator consciously assimilates the protagonist's actions and words. The leftist Brecht, whose mission was an agit-prop theater, wanted the spectator to analyze causes and effects of actions carried out by the characters rather than to get thrown emotionally into the lives of the characters. This "estrangement" leads the spectators to question their own dogmas. The current trend in videogames moves in the opposite direction, towards total immersion, total identification with the "character". Augusto Boal's "Theatre of the Oppressed" (1979) went as far as to claim that Aristotelian immersion is a tool for oppressing the masses. As it turns the workers into spectators, it solidifies the dictatorship of the state.

Scott McCloud's "Understanding Comics" (1993) argued that low-resolution characters (corresponding to weak-immersion games) force the reader to use her/his imagination to "fill the blanks": you compensate the lack of passive immersion with a dose of active imagination. One wonders if making avatars too photorealistic deprives the players of some degree of imagination. The original MUDs were text-based, the exact opposite of photorealistic: their success was due to the appeal of two things: world-building and community-interaction. They were unrealistic the same way that the pieces of chess are not photorealistic replicas of real-world queens, bishops, knights and rooks. This does not mean that there is less "immersion" when playing chess of text-based MUDs: the "immersion" comes from the gameplay itself, not from the (non-existent) graphics.

Visual communication is a two-way process. The viewer is as engaged as the designer, and the viewer's engagement depends on how much freedom of imagination the designer allows. If i present you with a picture of your mother, you are unlikely to go on a flight of imagination. If i present you with a sketch of a person, you are likely to. It is commonly accepted that a picture represents something, delivers a message (more than a thousand words), a message that somehow the viewer will consciously or unconsciously "decode", but it is less often appreciated that the way the picture is drawn, its degree of realism, also has an effect on the viewer, triggering a cognitive process no less important than the process of "decoding": the process of "imagining" what is not explicitly represented. The lousier the drawing, the more "imagining" it may trigger. Two dashes above a line are viewed as a human face although there is absolutely no human face there: that is an amazing flight of imagination. The more you remove from an image, the more you focus the viewer's attention on what is left. Just tilt the dashes a little bit and the viewer will perceive a change in emotion in that imaginary face. For an actor to cause the same change in perception it would take a complex facial movement. Emojis are popular not because they are simple but precisely because their simplicity conveys a strong message that would take a while to deliver with facial expressions.

James Newman in "Reconfiguring the Videogame Player" (2001) argued that the appeal of videogames is not primarily visual but kinaesthetic and even cognitive. The story embedded in a videogame is generally trivial about and the game's endpoint is often also trivial; but getting "there" requires physical and mental skills.

The psychologist Mihaly Csikszentmihalyi, who spent his life studying the sources of happiness, claimed that the happiest moments in our lives are those during which we are concentrated in an effort to accomplish a difficult goal. We are so focused that we no longer realize where we are, who is around us and even who we are. We lose the sense of time. We are fully immersed in what we are doing. Those are the states of mind that he called "flow" in his book "Flow" (1990). He called "autotelic self" a person who is so good at "flow" that s/he actually enjoys any challenge in life, the opposite of those who get depressed or panic when something goes wrong. These people are the happiest people. They never get anxious or bored. It is not surprising that Sherry Turkle in "The Second Self" (1984) describes the state of playing a videogame as very similar to Csikszentmihalyi's flow. Simply saying that videogames are addictive is reductive: videogames are addictive because they create precisely this state of "flow", of maximum concentration, which is almost the exact opposite of the state of "immersion", of drifting unconsciously in an artificial world, which in turn is similar to the state created by psychedelic drugs.

While Csikszentmihalyi focused on how to train the mind for maximum flow, it is interesting to consider, conversely, what kind of training is achieved via flow, what skill gets trained during the state of flow. The first application of computer simulation was for training. Training the body is a way to train the mind. That's why the military trains soldiers in simulation. That's why airlines train pilots in simulation. What soldiers and pilots learn in simulation they transfer to the real world: they have trained not their body but their mind. It is tempting to say that, by the same token, a kid who plays a first-person shooter game is being trained to kill, but there is no evidence that this is the case: there has never been a serial killer who started killing after becoming addicted to such games. In reality, the game does not train the kid to kill but to navigate a phase space of possible actions and to "win" the game. The fact that the actual gestures have to do with guns and the actual effects of the gestures are killings is not the real core of the gaming experience. In fact, games that are not photorealistic at all may be more engaging than photorealistic ones precisely because the player can be less aware of what the gun and the dead are. The gun is simply a set of pixels on the screen that is used to affect some other groups of pixels which happen to be people. The real training that takes place during a game is training in navigation, in finding an exit to the labyrinth, in finding a way to "survive". There is a difference between special-purpose simulation, in which the person is being trained to operate something like an airplane, and general-purpose simulation, in which the person is being trained to improve general-purpose skills like navigation.

Estrangement is a tool to maximize the social usefulness of videogames, whereas immersion tends to hypnotize. Boosting imagination is one of the benefits of playing videogames, as opposed to photorealistic immersive environments which leave little room for imagination. Flow is a form of training for the unpredictable difficulties that can emerge in life whereas immersion acts as a drug fix, a retreat from reality and a surrender to fate.


Postmodernism, Cybertime, Utopia

Chemistry changed the world in many more ways than the ones that we superficially perceive. The videogame is the end point of the enormous pop-cultural revolution caused in the 20th century by the mnemonic technologies of phonography (the storage of sound), photography (the storage of image), and cinematography (the storage of motion). Before those three "chemical" inventions there were codes to represent sound, image and action but they all depended on writing and drawing on paper: there was a notation for music, there were lengthy reports of events, there were lengthy descriptions of artworks. After those "mnemonic" inventions, visual art left the museum and the gallery, music left the concert hall and the dance hall, reportage left the newspaper. And, soon, telephony, radio and television spread them to every corner of the world. Sigmund Freud wasn't much of a psychologist but perhaps was a good sociologist: in "Civilization and its Discontents" (1930) he commented on the fact that photographs and phonographs extended human memory, like auxiliary organs, and this turned man into a "prosthetic god". Phonography, photography and cinematography did more: they "broadcast" the memory of something to the whole world. They were broadcasting technologies, typically from one producer to many consumers. They helped neatly separate producers from consumers.

In 1936 Walter Benjamin called "aura" the intangible value of a work of art, which could now easily be replicated all over the world. Its "aura" was clearly not the same once it was possible to make an unlimited number of copies (of "mechanical reproductions", as he called them).

The computer arrived a few years later (1948, the "Manchester Baby") and almost immediately it proved its power to turn any piece of information into a digital file of zeroes and ones. Hence sound (the culture of sound, like music), image (the culture of light, like painting) and motion (the culture of time, like theater) were stored and manipulated by the same device, the computer. Sound, image and motion literally became the same substance, managed by the same device. Because they were now digital files, for the first time in history it was also relatively easy to "edit" them, not only store and reproduce them. Today's world of "multimedia" is really the world of audiovisual digital files (we still haven't had much success in recording the other three senses). The "editable" feature is important for "fictional" purposes: the original gramophone and the original camera were simply recording devices (they recorded the real world). In order to create fictional audiovisual experiences (films) one needed expensive studios like the ones in Hollywood. The "editable" feature of digital audiovisual files exponentially increased the potential for creating imaginary sounds, images and scenes, i.e. for fictional experiences, and it exponentially increased the opportunities for human-machine interaction. Videogames combine fiction and interaction, and are therefore a somewhat natural step in the evolution of audiovisual digital files.

The videogame is possibly the most demanding technology that deals with editable audiovisual files, which is the reason why the superpowerful computer chips known as GPUs were born for gaming. Videogames have pushed the technology of the audiovisual digital file in all directions: faster response to inputs, greater three-dimensional graphic accuracy, large-scale distributed computing, more parallelized multitasking, high-capacity mobile computing, tighter integration with analog devices, etc. Videogames stretch the simulations of both time and space. They aim for real-time interaction with the machine ("real time" in the time dimension) and for photorealistic rendering by the machine ("false space" in the space dimension).

Phonography, photography and cinematography transported sound and images, and cinematography transported "false space" (imaginary, constructed places), but didn't aim for real-time interaction. "Real time" belonged to the realm of communication media (telegraph, radio and television) but these were not interactive. Videogames are unique in the way they depend on both false space and real time.

The psychological impact of phonography and cinematography was enormous because for the first time people could hear the voice of a dead person and see a train approaching in a theater. The psychological impact of editable, digital, audiovisual files is to blur the boundary between the real and the fictional. The psychological impact of combining false space and real time, of reordering space and time, is to create a "real fictional", a fictional that feels as real as the real. Any utopia has to do with a reordering of space and time, and of the social relations built around them. The step from a videogame to a utopia is very short: it is just a matter of purpose.

David Harvey in "The Condition of Postmodernity" (1989) argues that postmodern societies experience space and time in a novel way. The history of capitalist societies has been characterized by an acceleration in the pace of life, in a "time-space compression": the time required for travel has been reduced from months to hours and the time required for information to travel has been reduced from days to milliseconds, so that the whole planet is becoming the "global village" envisioned by Marshall McLuhan in his book "The Gutenberg Galaxy" (1962). For example, Harvey sees Picasso and DeChirico (or Joyce and Proust in literature) as reacting to the spatial and temporal restructuring caused by the rapidly transforming capitalist world before World War I. He writes that such a restructuring consisted in "the annihilation of space through time" or "spatialization of time", an acceleration in speed which is physically and symbolically represented by the diffusion of the railway and of the telegraph in the second half of the 19th century. Time is the real protagonist of the postmodern condition.

Steven Jones in "The Internet and its Social Landscape" (1997) noticed that cyberspace is more about time than about space. In fact, humans could physically travel to the end of the universe if only they had "time". What stops us from traveling to the nearest star, Proxima Centauri (which is a mere four light-years away) is the fact we wouldn't arrive alive (the fastest rocket ever, NASA's Parker Solar Probe, reached the amazing speed of 540,000 kms per hour, but that is still only about 0.05% of the speed of light, i.e. such a journey would take more than 8 thousand years). Therefore what we can do in videogames hosted in cyberspace that we cannot do in real life is mostly about time. Maybe cyberspace should really be called "cybertime".

As for space, Michel Foucault in "Discipline and Punish" (1975) only focused on institutions of social control like prisons, hospitals and schools, but his view of modern life as an "imprisonment" within spaces of social control is quite general. The "false spaces" created by videogames elude that control, although the rules of the videogame might indirectly introduce some other kind of control.

And so the mnemonic technologies of phonography, photography and cinematography, recast as editable digital files in the interactive world of the computer, led to the reordering of time and space that is the prerequisite for the construction of utopias. Walter Benjamin's "aura" is zero in the age of "copy and paste", but a different kind of aura has emerged that has to do with the potential for a work of art to evolve into a utopia.


Death of the Author and of the Reader

As Bernard Stiegler noticed in "Technics and Time" (1994), the new mnemonic technologies of the Internet age are similar to writing: by learning how to read and write, ordinary people became producers as well as consumers of literature; and today, by learning how to use tools like YouTube, ordinary consumers can broadcast their own music and videos. They are prosthetic tools to become prosumers. This modifies the course of our coevolution with mnemonic technology, a coevolution which is about the changing ecology of anamnesis and hypomnesis. Plato/Platon coined these terms in the "Phaedrus" for natural memory and artificial memory (such as writing). Plato viewed writing as a "pharmakon", both a remedy and a poison, and ultimately saw its influence as deleterious: not as effective as its legendary inventor Thoth contended, addictive like a drug, and weakening the mind in general; but perhaps he neglected the fact that writing enabled ordinary people to become philosophers and poets, i.e. prosumers. Today's "hypomnemata" are different from the "broadcasting" hypomnemata of photography and phonography: rather than dissociating production from consumption, these new hypomnemata facilitate the growth of the consumer into prosumer.

French post-structuralist philosophers of the 1960s expressed doubts about the role of the artist in essays such as Roland Barthes' "The Death of the Author" (1967) and Michel Foucault's "What is an Author?" (1969). The rise in participation both in the arts (Allan Kaprow's happenings in the 1960s, Mary Jane Jacob's events in the 1990s) and in technology (the personal computer was invented by hobbyists for personal use) blurred the line between producer and consumer and gave practical meaning to that debate, popularizing the idea of the "prosumer", a term coined by Alvin Toffler in his book "The Third Wave" (1980). Web 2.0 definitely tore down the wall between producer and consumer: anybody with a computer or even just a phone can now be a producer of text, images, and videos. In the 2000s MMORPGs, virtual worlds and metaverses were also examples of the "death of the author": they produced stories but the author was the whole community of players. They too blurred the line between author and audience.

Paraphrasing the three aesthetic categories proposed by Janet Murray in "Hamlet on the Holodeck - The Future of Narrative in Cyberspace" (1997), an interactive experience consists of three processes: immersion (Coleridge's "willing suspension of disbelief"), agency (the power to change the course of the universe), and transformation (the effect of being someone else for the duration of the experience); transformation can be viewed as the indirect effect that agency and immersion have on the player.

The metaverse is a space in which the player is the co-author and is transformed while playing as an avatar.

Murray argues that closure occurs when the "reader" understands the "text". The reader of a book knows that the rule to read a novel is simple: turn the pages of the book, one at a time, in order. The reader reaches closure by reading one page at a time until the story ends. The player of a game is faced with a more open environment in which the number of rules is much higher, and therefore his possible actions are combinatorially more numerous. The player can decide how to explore the space of possible states, i.e. which moves to make. Here the closure does not consist in the pleasure of reading the ending of a story but in the pleasure of making moves based on the rules. Murray in fact accepts that the pleasure of playing lies in the exploration (i.e. in the many ways that the player can lose), not in the winning. A game that is easy to win is not much of a game. This mirrors what happens in real life: if you are born rich, you may be depressed for your whole life; if you are born poor and get out of poverty, you are more likely to feel satisfied, even if your wealth will never match the wealth of the one born rich. It is the journey, as they say, that matters. It is the story that you can tell.

It is not that you have to understand the game but that the course of the game is shaped by your evolving understanding of the effects of your moves. In a metaverse, which is the ultimate open-ended multi-user game, you understand the game very quickly but the real transformation comes from playing something that resembles real life as a different person.

The player of a metaverse is transformed both as an "author" and as a "reader".


The Interface

The philosophy of modern computing has focused on the network but maybe it should have focused on the interface. Computing is, by definition, interactive: a human user needs some calculations to be performed by the machine and therefore there's an input (a request) and an output (a result). The interface is the way this process of input/output is carried out. Originally, there was little interest in shaping the interface to reflect the application: the user interface was designed to maximize the user's ability to express the desiderata and to minimize the time required to do so (later also to minimize the chances of mistakes). At first, the command line was good enough: the user interacted with the machine by writing "commands" in a cryptic language whose grammar was limited to the "verb + object" construct (e.g., "delete filename"), as in the sketch below. However, the interface is really the counterpart of the interaction: the human user was talking to the interface, not to the machine (whose workings have become more and more obscure with each new generation of interfaces).
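
To make the "verb + object" grammar concrete, here is a minimal, hypothetical sketch of the kind of interpreter that sat behind an early command line (the command names and the toy file table are invented for illustration, not taken from any real shell):

# A minimal, hypothetical sketch of a "verb + object" command interpreter (illustration only).

files = {"report.txt": "quarterly numbers", "notes.txt": "ideas"}   # a toy "disk"

def execute(line: str) -> str:
    # Parse the "verb + object" construct: first word is the verb, the rest are objects.
    parts = line.strip().split()
    if not parts:
        return ""
    verb, args = parts[0], parts[1:]
    if verb == "list":                                   # verb with no object
        return "\n".join(files)
    if verb == "print" and args:                         # verb + object
        return files.get(args[0], "no such file")
    if verb == "delete" and args:                        # verb + object
        return "deleted " + args[0] if files.pop(args[0], None) is not None else "no such file"
    return "unknown command"

if __name__ == "__main__":
    for command in ["list", "print report.txt", "delete notes.txt", "list"]:
        print("> " + command)
        print(execute(command))

Early command languages were, in essence, richer versions of this loop: parse a verb, find its object, execute, and print the result.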

As Don Norman said: "The real problem with interface is that it is an interface. Interfaces get in the way. I don't want to focus my energies on interface" (2002). Nicholas Negroponte similarly wrote that the goal of interface design should be to "make it go away".

The role of the interface is actually bigger than that: the metaverse depends on its interface just like the real world depends on looking and feeling the way it does.

Neal Stephenson's essay "In the Beginning Was the Command Line" (1999) points out that the graphical user interface has created a layer (or even multiple layers) of abstraction between the human user and the actual functioning of the computer, between what the user wants and what the computer does. The "interface" has become increasingly sophisticated, in theory mimicking the way humans interact, but in practice it has moved the user farther away from the machine. The original interface was simply a set of switches on the panel of the computer's console. Then came the punched cards, which separated the programmer from the computer being programmed. Then came the command line and a language for communicating orders to the computer, and of course the computer cannot "read" such command lines: there's a layer of software that translates them into on and off "switches". Then came the "GUI", which created the metaphor of the virtual "desktop". Anything you do with the GUI gets translated into commands, which means that the GUI adds another inscrutable layer between you and the machine. And the GUI kept expanding, moving us further and further away from the command line and from the actual "switches" that still exist somewhere, buried under layers of software.

Steven Jones pointed out that the interface has become a highly recursive phenomenon (2008): the user who browses the web has to deal with the interface provided by the operating system (MacOS, Windows, iOS, Android, Linux...), then with the interface provided by the browser, then with the interface provided by the specific website that the user accesses (and each website has a different interface). And that's not to mention the physical device: the experience is different if one uses a smartphone, a smartwatch, a tablet, a laptop or a desktop.

There was no obvious line connecting functionality, interface and aesthetic. The funny thing is that a straight line has emerged that works in the opposite direction, from aesthetic to interface to functionality.

Lev Manovich's "The Language of New Media" (2001) relates the so-called "new media" (which really means "computer-based media") to the visual cultures of the past (to the "old media", the pre-computer media), and shows how new media fit in the lineage of visual aesthetic that begins with the invention of perspective by the Italian Rinascimento. A straight line connects Rinascimento painting, photographic camera, TV monitor and computer screen, the same way that a straight line connects Johannes Gutenberg's printing press, Francesco Rampazetto's Scrittura Tattile (1575), Christopher Sholes' QWERTY typewriter (1874) and the computer keyboard. New media mostly adapt conventions of old media to computer-based technology. What is truly unique about "new media" are the interface and the database. The "user interface" was from the beginning the prosthetic extension that allowed computer users to access cyberspace (a cyberspace that for five decades consisted of independent, separated databases).

The interface was, first and foremost, a language, a language to interact with a machine, and then, starting with Ivan Sutherland's Sketchpad (1963), it became a visual language in a virtually infinite coordinate system. The Graphical User Interface (GUI), first demonstrated in 1968 by Douglas Engelbart in San Francisco and first implemented in 1973 by Xerox PARC in the Alto desktop computer, turned the computer into a virtual world because it made it possible to conceptualize an environment and ways to explore it. The GUI "simulated" our interaction with the natural environment. By clicking on an "icon", the user was transported into a program, and many such programs used a GUI themselves, therefore launching their own representational space, their own virtual world. The GUI turned the screen into a complex representational space. Matthew Kirschenbaum in "Interface, Aesthetics, and Usability" (2004) noticed another lineage, connecting the GUI with its overlapping "windows" to the artistic collages of dadaism, futurism and cubism at the beginning of the 20th century. The art critic Clement Greenberg wrote in 1959 that "collage was a major turning point in the whole evolution of modernist art in this century". By analogy, the GUI was a major turning point in the evolution of cyberspace. The digital scanner (1957) and later the digital camera (1990) created a bridge between the real world and the virtual world, opening the channel over which we can transport the real world into the virtual world. As digital content grew more complex in the age of the personal computer, it became important to find ways to "search" and to "navigate" cyberspace.

Brenda Laurel in "Computers as Theatre" (1991) suggested that interface design should learn from drama theory: make the experience as "dramatic" and emotional as a theatrical drama. A user should "feel" the interface, not just use it. The exact opposite happened in that year.

In 1991 Tim Berners-Lee came up with the World Wide Web, building it on top of the largest network in existence, the Internet. Surprisingly, the early browsers and the early search engines (notably Google) did not fully take advantage of the GUI, as if they didn't know how to represent visually the enormity of data that they were assigned to interface with; but that fact also highlighted a transformation in the function of the user interface, a slow but constant alteration in the balance of power between user and computer. The interface, which was born as a language for the user to deliver commands to the computer, was increasingly becoming a channel for the computer to deliver content to the user. It was as if the exploration of and interaction with cyberspace, now organized in a recursive web of webs, could no longer rely on a visual representation squeezed into the small monitor of the user.

Around 1996 Silicon Valley startups like Pointcast and Marimba popularized "push" technology: Pointcast gathered information from the Web and then displayed it on personal computers, which conceptually was the exact opposite of "surfing" the Web. Marimba pioneered the model of subscription-based software distribution, so that a computer user could automatically get updates to the applications running on that computer. Push technology created a new way for technology to communicate with humans: via "notifications", which became pervasive and invasive starting in 2009 when Apple introduced them on the iPhone. Notifications further changed the dynamics of the user's interaction with the device because the user was no longer the one deciding when to interact with the device: it was the device deciding when the user was supposed to interact, just like it was the "newsfeed" deciding which news the user was supposed to read.
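
The reversal can be made concrete with a minimal, hypothetical publish/subscribe sketch (the class and method names below are invented for illustration, not taken from Pointcast or Marimba): in the "pull" model the user asks for content, while in the "push" model the channel decides when the user hears about it.

from typing import Callable, List

class NewsChannel:
    """A toy channel supporting both pull (user asks) and push (channel notifies)."""
    def __init__(self) -> None:
        self.items: List[str] = []
        self.subscribers: List[Callable[[str], None]] = []

    def latest(self) -> List[str]:
        # Pull: the user decides when to look ("surfing").
        return list(self.items)

    def subscribe(self, callback: Callable[[str], None]) -> None:
        # Push: the user registers once, then the channel decides when to interrupt.
        self.subscribers.append(callback)

    def publish(self, item: str) -> None:
        self.items.append(item)
        for notify in self.subscribers:
            notify(item)          # delivered whether or not the user asked ("notification")

if __name__ == "__main__":
    channel = NewsChannel()
    channel.subscribe(lambda item: print("NOTIFICATION:", item))
    channel.publish("software update available")    # push
    print("user checks later:", channel.latest())   # pull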

Pushed to the extreme, the science of human-machine interfaces becomes the quest for creating artificial life forms: it is tempting to think of computational artifacts as "naturally" interactive, and therefore similar to pets, if not humans. The personification of machines happened independently of the experiments of artificial intelligence: users routinely curse the computer as if the computer were a stupid or stubborn or evil person. The MIT Media Lab researcher Pattie Maes gave a talk in 1995 titled "Interacting with Virtual Pets and other Software Agents" that described a future in which society is made of both real and virtual life forms. The virtual ones, today better known as "bots", will actively interact with the real ones. She argued that there was a real need for these artificial life forms because "the digital world is too overwhelming for people to deal with, no matter how good the interfaces we design". In other words, the interface to the digital world must evolve towards artificial life forms or humans won't be able anymore to interface with the digital world created by machines. The proliferation of interfaces is visible in the myriad different kinds that the user has to deal with. Each application presents a different interface, and often a mindboggling one: knowing how to find a statement on your bank's website doesn't help you find a statement on another bank's website, because the path, the screen layout and the titles can be completely different. At the same time her boss Nicholas Negroponte was writing in "Being Digital" (1995) that the goal of the interface should be to "know you, learn about your needs": again, an artificial life form, an artificial person as the interface. And this artificial "person" will be increasingly in control of the interaction.

If that's the "mainstream" story of the interface, there's another, parallel story, which begins with "Pong" (1972), the first major arcade videogame, and with the Magnavox Odyssey of 1972, the first videogame console. Arcade games, as innocent as they looked, operated an important reversal of trends: they brought the player closer to the machine because there was a bodily component to playing the game. There was still an interface, but the interface was meant to challenge the physical skills of the player. The videogame arcade of the 1980s spawned a new kind of athlete, who was both the equivalent of a sport athlete and the equivalent of a chess player, both body and mind, immersed like the sport athlete in the embodied theater of the physical space of the game (e.g. the stadium) while immersed like the chess player in the disembodied theater of the virtual space of the game (the combinatorial space of chess moves). These machines were powered by computers but, because they were self-contained machines, they were perceived as a completely different artifact compared with multipurpose computers. Except for being coin-operated and for requiring a power outlet, an arcade machine or a videogame console belonged to the same functional category as a deck of cards or a tennis racquet: its function was directly accessed by the player. Last but not least, the player was in control of the interaction. A parallel history of human-computer interaction (and of simulation) originated with videogames, and this history progressed while the big story of human-computer interaction continued to steal the limelight.

These two modes of interacting with the machine both collide and complement each other in the metaverse: the user is confronted by the usual multi-layer interface but the user's avatar interacts directly and bodily with the world.

Readings on the Interface:
Johnson, Steven: "Interface Culture: How New Technology Transforms the Way We Create and Communicate" (1997)
Kirschenbaum, Matthew: "Interface, Aesthetics, and Usability" in The Oxford Companion to Digital Humanities (2004)
Jones, Steven: "The Meaning of Video Games" (2008)
Laurel, Brenda: "Computers as Theatre" (1991)
Manovich, Lev: "The Language of New Media" (2001)
Negroponte, Nicholas: "Being Digital" (1995)
Norman, Don: "The Design of Everyday Things" (2002)
Stephenson, Neal: "In the Beginning Was the Command Line" (1999)


The Future of Writing

The function of a novelist has always been to create a world and then escort the reader into that world. The future of "storytellers" in the age of metaverses could be to create a digital world and then escort people into that world; and then socialize with them. The "readers" will "experience" the story in the metaverse by becoming part of the story, by joining the characters in the story. This will create a different kind of bond between "writer" and "reader" (between producer and consumer of the experience).

The word "my" can be ambiguous and misleading. For example, when we describe a place as "my hometown", we use "my" in the sense of "where i was born" or "where i ended up spending my formative years", not in the sense of "i created it". My bicycle is "mine" because i bought it; but my website is "mine" because i made it. "My home", "my life", "my world" will mean something different in the metaverse.

Every parent knows that her or his children are not the most beautiful or the most intelligent in the world, but they are her/his children, children that she/he raised. The creators/demiurges will have a similar feeling for the virtual worlds that they will have created.

Everything that happens in one's life is a story, and everything in the metaverse will be a story. The difference is the place where "everything" happens: in real life it generally happens in a place that we had limited freedom to choose and shape. "Life" in the metaverse will not be about going to a place that preexists but about going to the places that we design.


Intelligent Metaverses

Artificial Intelligence creates "artificial beings", and Virtual Reality creates an "artificial world". The beings that inhabit a virtual world are avatars of real people, but could also be independent beings that exist only in that world, robots that exist not in hardware but in software. You could create a virtual world and populate it with artificial people that interact with your avatar and with the avatars of your friends. Artificial Intelligence can code and shape the personality of an artificial person: a young salesman who in the evening plays in a rock band; a Buddhist girl who dropped out of university and memorizes Chinese classics that she recites at the neighborhood park; a retired airline pilot who paints landscapes from the window of his apartment; etc. Artificial Intelligence will "power" such an artificial person so that s/he behaves just like a real person. You will not know which are avatars of real people and which are artificial people.

Secondly, Artificial Intelligence could be used to "power" your own avatar. My avatar in a metaverse disappears when i am not playing. A.I. could keep it "alive" even when i am not playing. This new generation of avatars could be "autonomous" avatars who learn your personality and then continue living in the metaverse even when you "switch off" and return to real life. You could wear augmented-reality glasses to watch what your avatar is doing in the metaverse or you could simply ignore it and catch up days later. Your avatar will live in the virtual world while you live in the real world. Every time that you plug into the virtual world again, you will regain control over your avatar. The avatar will learn from you how to behave (what kind of person you want it to be), and you will learn from your avatar's progress what that behavior leads to (what that kind of person does). Maybe at some point the roles will get reversed: the avatar will "teach" you what to do in the real world. That's assuming that Artificial Intelligence gets intelligent enough to simulate the human mind.
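
As a thought experiment only, here is a minimal, hypothetical sketch of such an "autonomous" avatar (all names are invented and the "learning" is reduced to frequency counting of the owner's observed actions; a psychologically believable avatar would obviously require something far closer to a full model of personality):

import random
from collections import Counter
from typing import List

class AutonomousAvatar:
    """A toy avatar that imitates its owner's habits while the owner is offline."""
    def __init__(self, name: str) -> None:
        self.name = name
        self.habits: Counter = Counter()   # how often the owner was seen doing each action
        self.diary: List[str] = []         # what the avatar did while the owner was away

    def observe(self, action: str) -> None:
        # Called while the owner is online: record the owner's behavior.
        self.habits[action] += 1

    def live_one_step(self) -> None:
        # Called while the owner is offline: imitate the owner's observed habits.
        if not self.habits:
            return
        actions, weights = zip(*self.habits.items())
        self.diary.append(random.choices(actions, weights=weights, k=1)[0])

    def catch_up(self) -> List[str]:
        # When the owner logs back in, report what happened and clear the diary.
        report, self.diary = self.diary, []
        return report

if __name__ == "__main__":
    avatar = AutonomousAvatar("demo-avatar")
    for action in ["paint landscape", "paint landscape", "chat with neighbors"]:
        avatar.observe(action)            # owner online
    for _ in range(5):
        avatar.live_one_step()            # owner offline
    print(avatar.catch_up())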

David Hanson (Hanson Robotics) has been building robots that simulate a real person's personality since 2003, most famously Sophia (2016), which was granted citizenship by Saudi Arabia, and the (failed) android/avatar of the Russian billionaire Dmitry Itskov (founder of the 2045 Initiative, which aims at immortality). Sophia is mostly a testament to how far A.I. still is from creating any form of intelligence (let alone a human one). In 2020 OpenAI's system GPT-3, capable of answering ordinary questions in ordinary language, represented a major improvement in A.I. and renewed speculations that some day a system like GPT-3 will power psychologically realistic avatars.

When a person dies, the avatar of that person may continue living forever in the metaverse. In fact, the avatar of a person can live in multiple worlds. Grandchildren can interact with the avatar of a grandparent they never met. The dead person will be dead but the avatar will keep evolving. In 2014 there was already a startup selling this kind of service (Marius Ursache's Eterni.me).


The Metaverse as Homotopy

A homotopy is a continuous deformation of a map into another map.
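
In standard topological notation (a textbook definition, added here only as a reminder, not specific to this essay), a homotopy between two continuous maps $f, g : X \to Y$ is a continuous map

  $H : X \times [0,1] \to Y$, with $H(x,0) = f(x)$ and $H(x,1) = g(x)$ for every $x \in X$,

i.e. a one-parameter family of maps that continuously deforms $f$ into $g$ as the parameter slides from 0 to 1.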

In 1908 Bertrand Russell published the paper "Mathematical Logic as Based on the Theory of Types" to repair a flaw in classical logic, i.e. to outlaw some antinomies that occur in set theory (like the famous 1901 paradox of the set of all sets that are not members of themselves). In the Theory of Types, objects are classified based on types, and types express properties, which makes it possible to reason about their objects. Alonzo Church in "A Set of Postulates for the Foundation of Logic" (1933) expanded the Theory of Types into a rigorous formal system. Several decades later, Per Martin-Löf in "An Intuitionistic Theory of Types" (1972) proposed a constructive, intensional type theory as an alternative to traditional logic, which makes no distinction about how one has reached a conclusion.

Steve Awodey and Michael Warren invented Homotopy Type Theory with their paper "Homotopy Theoretic Models of Identity Types" (2007). Homotopy Type Theory, which imports the viewpoint of algebraic topology into type theory, views types as spaces rather than sets, tokens as points of those spaces rather than elements of sets, and equalities as paths. For example, the identity of two objects of the same type can be understood as the existence of a path from one point to the other point within the space of their type. Types are treated intensionally rather than extensionally: two expressions with the same content are not identical if they are defined differently (e.g. "the number that follows 2" and "the number that precedes 4"). There is a type associated with each mathematical proposition, and the tokens of a type are "certificates" (or "witnesses" or "proofs") of the truth of that proposition. The theory comes with a proof technique called "path induction" (which is the elimination rule for the identity type) that facilitates automatic proof verification. A proof consists of a sequence of applications of rules that transform tokens, beginning with the tokens of the premises and ending with a token of the conclusion. In 2009 Vladimir Voevodsky added "univalence" to Homotopy Type Theory: the "univalence axiom" states that "identity is equivalent to equivalence", i.e. that isomorphic things can be identified. All mathematical entities can be described in the language of tokens and types, which means that Homotopy Type Theory can be used as the foundation for the whole of mathematics.
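
To make the vocabulary of tokens, types and path induction concrete, here is a minimal sketch in Lean 4, using Lean's built-in identity type Eq as a stand-in for the identity type of Homotopy Type Theory (the definitions afterTwo and beforeFour are invented for illustration):

-- Two intensionally different definitions with the same content:
def afterTwo   : Nat := 2 + 1   -- "the number that follows 2"
def beforeFour : Nat := 4 - 1   -- "the number that precedes 4"

-- A token (a "certificate", or proof) of the proposition `afterTwo = beforeFour`:
theorem same : afterTwo = beforeFour := rfl

-- Path induction, the elimination rule of the identity type: to prove something
-- about an arbitrary path `p : a = b`, it suffices to prove it for the
-- reflexivity path, to which `cases` reduces the goal.
theorem mySymm {α : Type} {a b : α} (p : a = b) : b = a := by
  cases p
  rfl

In the homotopical reading, same is a path between two points of the space Nat, and mySymm says that every path can be reversed.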

To be continued...


The Singularity will Happen in the Metaverse

Forthcoming


The Algorithmic Society

Forthcoming


Peace Tech

Forthcoming


Readings (in chronological order)


Frank Lloyd Wright: "The Disappearing City" (1932)
Le Corbusier: "The Radiant City" (1935)
Bertolt Brecht: "Alienation Effects in Chinese Acting" (1936)
Walter Benjamin: "The Work of Art in the Age of Mechanical Reproduction" (1936)
Martin Heidegger: "Building Dwelling Thinking" (1951)
Marshall McLuhan: "Understanding Media" (1964)
Jack Good: "Speculations Concerning the First Ultraintelligent Machine" (1965)
Jean Baudrillard: "Simulacra and Simulation" (1981)
Vernor Vinge: "True Names" (1981)
William Gibson: "Neuromancer" (1984)
Sherry Turkle: "The Second Self" (1984)
Donna Haraway: "A Cyborg Manifesto" (1985)
Friedrich Kittler: "Gramophone Film Typewriter" (1986)
Bill Nichols: "The Work of Culture in the Age of Cybernetic Systems" (1988)
David Harvey: "The Condition of Postmodernity" (1989)
Mihaly Csikszentmihalyi: "Flow" (1990)
Ruth Levitas: "The Concept of Utopia" (1990)
Brenda Laurel: "Computers as Theatre" (1991)
Neal Stephenson: "Snow Crash" (1992)
Scott McCloud: "Understanding Comics" (1993)
Bernard Stiegler: "Technics and Time" (1994)
Frank Tipler: "The Physics of Immortality" (1994)
Pierre Levy: "Collective Intelligence" (1994)
Janet Murray: "Hamlet on the Holodeck - The Future of Narrative in Cyberspace" (1997)
Bruce Damer: "Avatars" (1997)
Espen Aarseth: "Cybertext" (1997)
Edward Casey: "The Fate of Place" (1997)
Steven Jones: "The Internet and its Social Landscape" (1997)
Malcolm McCullough: "Abstracting Craft" (1998)
Annette Markham: "Life Online" (1998)
Katherine Hayles: "How We Became Posthuman" (1999)
Peter Lunenfeld: "The Digital Dialectic" (1999)
James Newman: "Reconfiguring the Videogame Player" (2001)
Howard Rheingold: "Smart Mobs" (2002)
Rachel Falconer: "Hell in Contemporary Literature" (2008)
Julian Bleecker: "Design Fiction" (2009)
David Kirby: "The Future is Now" (2010)
Ernest Cline: "Ready Player One" (2011)
Gregory Claeys: "Searching for Utopia" (2011)
Tamiko Thiel: "Cyber-Animism and Augmented Dreams" (2011)
John-David Dionisio: "3D Virtual Worlds and the Metaverse" (2013)
Richard Gilbert & Andrew Forney: "The Distributed Self - Virtual Worlds and the Future of Human Identity" (2013)
Laurence Scott: "The Four-Dimensional Human" (2015)


See also my History of Virtual and Augmented Reality, my Thoughts on Virtual Reality, and my Timelines of VR/AR.
A bibliography is included at the end of Thoughts on Virtual Reality.