An Introduction to Web 3.0

by piero scaruffi


There is much confusion about Web 3.0 because the term was originally associated with the "semantic web" proposed by Tim Berners-Lee in 1999; that proposal, however, was about adding "meaning" to the data and building intelligent agents that could operate based on that meaning.

The Web 3.0 that we discuss below is a different beast. It is fundamentally a libertarian evolution of the world-wide web that largely depends on the advent and rise of blockchain technology.

Web 1.0, as introduced by Tim Berners-Lee in 1991, was a network of static pages written in a markup language called HTML. The revolutionary aspect of Web 1.0 was that a page had "hyperlinks" to other pages. Browsers were developed so that anyone with a desktop computer could view those pages and "navigate" from one page to another. Web 1.0 was revolutionary also because it was "decentralized": it ran on top of open protocols such as HTTP for browsing, SMTP for email, SMS for messaging, FTP for file transfer, etc. But Web 1.0 was also still limited to the traditional producer-consumer paradigm, designed for information production by a producer (the HTML coder) and information consumption by a consumer (the user of the browser).

Note that in the meantime the computer had transformed. The early computers of the 1950s were room-size monsters of cables, switches and lights. The computers of the 1980s were mostly the size of a desk or even smaller. Thanks to Moore's Law of exponential improvement in silicon chips, the personal computer revolution had redesigned the relationship between humans and computers, turning the computer into little more than a house appliance.

Web 2.0 eliminated the distinction between producer and consumer: it facilitated user-generated content and greatly increased the interactive experience. This was the age of social media. Mobile phones and cloud computing supercharged Web 2.0. However, Web 2.0 greatly increased the importance of the intermediary: the interaction between users (whether a comment or a purchase) depended on the existence of a trusted intermediary, either an old well-established brand of the brick-and-mortar economy or a new reliable brand of the digital economy. This created and empowered centralizing tech giants (such as Facebook in social networking and Amazon in ecommerce) that profited from “rents” charged to use their services. Content was generated for free by users on the platforms of these intermediaries. The behavior of users was available for free to these intermediaries (because they controlled all interactions and transactions between the parties), which de facto instituted a sophisticated system of silent software surveillance. User-generated content and user behavior constituted valuable data that could be monetized in several ways. These intermediaries owned the data, and data turned out to be a valuable asset. Web 2.0 redesigned the Internet as a participatory medium but required intermediaries between users. The original P2P technology (already available in Web 1.0) failed because it couldn't compete economically with the intermediary-based Web 2.0.

Web 1.0 was "stateless": it didn't capture the state of the client, i.e. user data. The owner of a website didn't have a way to find out if a user had visited before, and therefore didn't have a way to "tailor the experience" for each user, i.e. to try and sell you all sort of garbage. In fact, Web 1.0 didn't even have protocols for e-commerce. The "stateful" Web 2.0, indirectly, created a "data economy". It was an exploitative economy in which big tech owned and monetized data about the users (centralized data repositories) while users were only asked to provide content for free (for example, home-made videos), pay for many services (for example, a streaming service to watch videos) and meekly accept the implicit surveillance system. The user was willing to accept this unfair deal because the intermediary implemented a trust algorithm that defined how trustworthy each participant was, resulting in an explicit measurement of trust, similar to the credit score that bank loan officers use to assess risk.

The business model of Web 2.0 became the "attract to extract" business model: attract users and then extract their data to "customize" their experience (typically to market products to them) and then maybe also sell those data to other vendors. And so Web 2.0 became a sociological nightmare, a dystopia (but an economic boon for trillion-dollar corporations). The Web became a market, designed for corporations to capture and sell data.

This data economy came with many aberrations. The whole point of collecting users' data was to feed advertising to the user. Soon the industry realized that one could multiply advertising opportunities by retaining the user's "engagement", and so each advertising-driven channel looked for more effective ways to keep people engaged; and, guess what, the most effective way to keep people engaged is to make them angry, very angry. The most engaging content is the content that makes people angry. Inevitably, the advertising-driven model led to Web 2.0 platforms that preferred and fostered angry behavior.

Web 1.0 had created digital communities that could potentially spread throughout the world. Web 2.0 made digital communities more abstract by increasing the freedom of users to "choose" the people in their social circle. This is clearly different from the real world, in which we cannot choose our family, our neighbors, our schoolmates or even our friends (they are often the kids we grew up with). In the real world we have to learn to coexist with the people who occupy the same space as us. In theory it is great to be able to choose the people we socialize with. In practice something is lost when we don't have to learn to coexist with our social surroundings. The social bonds are weaker and in general social cohesion is weaker. It's hard to build any form of collaboration when social bonds are weak.

The biggest aberration was perhaps how Web 2.0 ended up being a perfect tool for propaganda. User-generated content is a key feature of Web 2.0. Our planet too is an example of user-generated content: we humans littered the planet with buildings, roads, plastic, etc. Alas, this has resulted in non-biodegradable garbage, depletion of natural resources, and an unhealthy lifestyle in an unnatural environment. Human nature did not change in Web 2.0: the effect of user-generated content on Web 2.0 websites was a lot of permanent and unhealthy "garbage", misinformation, "fake news". Alas, the "natural resource" of Web 2.0 was unlimited and free: storage. Therefore there was no limit to how much "garbage" the users of Facebook, YouTube, Twitter, etc could generate. Far from being primitive and backward, it was Islamic terrorist groups ("jihadists") like Al Qaeda that first understood the power of Web 2.0 as a propaganda medium and used it to radicalize and recruit Muslims (the same way that, a century earlier, Lenin was perhaps the first to understand the power of cinema for propaganda purposes and Mussolini was perhaps the first to understand the power of radio for propaganda purposes). In one of the many twists of modern technology, Al Qaeda's propaganda branch As-Sahab created professional-quality videos with software developed by US companies like Adobe, Microsoft and Apple, and then spread them via the Internet, originally created as the ARPAnet by a military agency of the US government; and all of this to wage war against the USA. Al Qaeda maximized the benefits of home content production and the Internet, ironically, against the nation that had invented those technologies. The Islamic world had been very slow to adopt the printing press in Gutenberg's time, but already in the 1970s Ruhollah Khomeini used cassette tapes made in exile to stir up opposition to the Shah of Iran, eventually leading to the successful Islamic Revolution of 1979. Web 2.0 allowed the terrorist to become an influencer (just like any TV celebrity), a producer (just like any TV producer), in some cases a reality personality (a protagonist of videos), a tutor (just like any military instructor), and an ideologue (writing and distributing pamphlets and manifestos). Soon, white supremacists in the USA and Europe adopted the same methods and Web 2.0 was flooded with all sorts of demented conspiracy theories. User-generated content generated a constant and endless flood of uncensored and non-hierarchical information. Extremists exploited Web 2.0, which involuntarily exposed many more vulnerable people to unchecked propaganda, turned the Internet into an "echo chamber", allowed radicalization to occur even without physical contact, increased the ability to self-radicalize, and generally created new opportunities to become radicalized and accelerated the process of radicalization. The avalanche of extremist content even "normalized" opinions, behaviors and attitudes that would be considered inappropriate and even unacceptable by the same people in their physical world. Extremists conquered the Internet. Web 2.0 indirectly fostered an explosion of subcultures, each one similar to the "underground" of the 1960s that was anti-war and anti-capitalist. And soon the governments of Russia and China adopted the methods of "propaganda 2.0" to destabilize Western democracies using the democracies' own media on their own soil (what Larry Diamond of Stanford's Hoover Institution called "China's global sharp power"). Russia even managed to decisively influence the 2016 presidential election in the USA by crafting misinformation on US social media.

Worse: user-generated content can act as propaganda even when it is intended to support the opposite views. Nicholas O'Shaughnessy and Paul Baines in "Selling Terror" (2009) conceived propaganda as a form of advertising in which the good being sold is an ideology, rather than a product. For example, during the 2003-08 invasion and occupation of Iraq, the TV networks of the USA indirectly helped "advertise" the jihadist product with their 24-hour coverage of jihadist atrocities, and therefore worked as pro-jihadist propaganda just like the anti-US propaganda of TV networks like Al Jazeera. Brigitte Nacos of Columbia University in her essay "Terrorism as Breaking News" (2003) compared the coverage of Al Qaeda's terrorist attacks of 2001 to a Hollywood blockbuster. Indirectly, the coverage on television and social media of Al Qaeda's terrorist attacks provided more effective propaganda than anything Al Qaeda itself ever did. Ditto in 2016, when most TV networks vilified Donald Trump but, by covering his outrageous behavior nonstop, ended up publicizing him not only to the small audience of white supremacists but also to a broader middle-class audience that would have normally ignored or despised him. Again, the anti-Trump camp ended up helping Trump as much as Trump's cheerleaders at Fox News, just like CNN had helped advertise the jihadist movement as much as Al Jazeera. And so the sheer amount of user-generated content on Web 2.0 constitutes propaganda for the worst forms of behavior, the ones that attract nonstop coverage and discussion.

This misuse of "user-generated content" also reveals a fallacy in the statement that Web 2.0 enabled "user-generated content": it is obviously not true that user-generated content was impossible in Web 1.0, as any website was user-generated content. What changed with Web 2.0 is that users were able to post content on "somebody else's" website. Users began to "gift" content to others instead of posting it only on their own websites. Web 1.0 was a network of user-generated content, whereas Web 2.0 became a network of platforms hosting (and exploiting and monetizing) user-generated content. In Web 1.0 you were publishing your writing or your pictures or your videos on your own website (your website, your content); in Web 2.0 you were publishing your writing, pictures and videos on social media. This was considered "progress", but it was "progress" mostly for the platforms, such as social media, that were hosting (and exploiting and monetizing) other people's content. From the point of view of the user, the appeal (the "progress") was the ability to be heard/viewed by many more people, by all the other users active on the same platform. It was the difference between writing something on the walls of your apartment (and inviting friends over to see it) and writing something on the walls of the main square (visible to anyone passing by).

Web 2.0 is today's world (the world of 2021).

The Web 2.0 revolution paralleled the smartphone revolution: a smartphone of the 2010s was millions of times more powerful than the room-size computers of the 1950s. The landscape of computing had changed so much that the distinction between "smart devices" (devices powered by computers) and appliances like the TV set had become blurred. This computing revolution was nonetheless facilitated by the intermediaries, without whom the smartphone would have remained essentially the same device that the old telephones used to be, just wireless instead of wired.

At the same time, societies in general were increasingly dominated by centralizing forces. In 2021 power is more concentrated than ever, a fact that is especially disturbing in democracies, where the opposite should be true, with economic power often conditioning political power. The decision processes employed by those in power, including corporations, are opaque. There is little or no accountability. And this has generated a demand for a fairer distribution not only of wealth but also of power.

Web 3.0 allows participants to interact peer-to-peer with strangers without a trusted third-party middleman, because trust is built into the network and doesn't need to be measured (or enforced): the network is a decentralized network that controls itself. Blockchain technology is the enabling technology of this decentralized web. First, this introduces a new level of control over personal data, a new degree of ownership: there is no need for an intermediary, therefore you own your data and the content that you create. Second, privacy is automatic. Third, content becomes a financial asset: content can be compensated, for example via non-fungible tokens (NFTs). Fourth, decision processes can be made inherently fair. Web 3.0 is a "fair" web in which users own the content they create and in which their behavior cannot be monetized without their permission. The goal of Web 3.0 is to do to every other service what bitcoin (the original blockchain) did to money: remove the intermediary while providing the same function.
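
To illustrate how trust can come from the protocol rather than from a middleman, here is a minimal Python sketch (it assumes the third-party "cryptography" package is installed; the names and the toy message are invented for illustration): two strangers can verify a signed message directly, with nobody vouching for anybody.

```python
# Illustrative sketch (requires the third-party "cryptography" package):
# two strangers verify a message directly, with no intermediary vouching for it.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

alice_key = Ed25519PrivateKey.generate()
alice_public = alice_key.public_key()     # Alice publishes this openly

message = b"transfer 1 token to Bob"      # toy message, purely illustrative
signature = alice_key.sign(message)       # only the holder of the private key can sign

# Bob (or any node on the network) checks the signature himself:
try:
    alice_public.verify(signature, message)
    print("valid: the message really comes from the holder of Alice's key")
except InvalidSignature:
    print("invalid: reject the message")
```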

Verifiability is the key property of blockchain technology: applications are automatically verifiable. This property alone changes the way people collaborate, businesses operate and governments legislate. Trust is built into the system. Blockchain is a "zero-trust" (or "trustless") system.
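
A toy model in Python may clarify why the chain is verifiable by anyone (it is only a sketch: real blockchains add consensus, digital signatures and much more): each block commits to the hash of the previous block, so tampering with history is detectable by simply re-computing the hashes.

```python
# Toy model of blockchain verifiability (real blockchains add consensus,
# signatures, timestamps, etc.): each block commits to the previous block's
# hash, so anyone can re-check the whole chain without trusting its source.
import hashlib
import json

def block_hash(block):
    payload = json.dumps(block, sort_keys=True).encode()  # deterministic encoding
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, data):
    prev = block_hash(chain[-1]) if chain else "0" * 64    # genesis uses a dummy hash
    chain.append({"prev_hash": prev, "data": data})

def verify_chain(chain):
    """True only if every block still matches the hash recorded by its successor."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
append_block(chain, "alice pays bob 5")
append_block(chain, "bob pays carol 2")
print(verify_chain(chain))               # True
chain[0]["data"] = "alice pays bob 500"  # tamper with history...
print(verify_chain(chain))               # False: the tampering is detectable
```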

Note that Web 3.0 resurrects P2P technology (the technology that failed in Web 1.0), but in the form of blockchain technology, which overcomes the earlier economic limitations and in fact removes the economic advantage of the intermediary.

Blockchain technology provides the secure, decentralized infrastructure of Web 3.0. Blockchain introduces a whole new way to build applications and deliver services: both computation and storage are fully decentralized and secure, while the application becomes fully autonomous, residing on no one's privileged computer.

A simple example of what can be done in Web 3.0 with blockchain technology is "credentials". In the traditional pre-digital society, one's credentials were one's achievements in society and work, the contributions to the community and to the workplace: for example, the degree obtained from a university or the resume detailing a life's work. These are top-down credentials. Then there are bottom-up credentials that are left to the community to determine: your actions determine how the community views you. Of course, both kinds can be tampered with, especially the latter, which is vulnerable to vicious gossip. In the digital world one's credentials are often listed on LinkedIn or can be found by bots searching through the debris of social media. In the world of Web 3.0, credentials could live on the blockchain. For example, POAP (Proof of Attendance Protocol) issues NFTs that reflect provable participation in important community events. Every participation in an (online) community project could be reflected in credentials stored on the blockchain, i.e. proven and unaltered, and any decentralized application could define its own scoring system and add its ratings to that blockchain.
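
As a purely hypothetical sketch in Python (this is not POAP's actual protocol or API; the event name and addresses are invented for illustration), a credential could be reduced to a deterministic fingerprint that is anchored on a ledger and later re-checked by anyone:

```python
# Hypothetical sketch (not POAP's actual protocol or smart contracts): an
# attendance credential reduced to a fingerprint that could be anchored on a
# ledger and later re-checked by anyone. Addresses and event name are invented.
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class AttendanceCredential:
    event: str      # e.g. "community hackathon 2021"
    attendee: str   # attendee's public address (illustrative placeholder)
    issuer: str     # organizer's public address (illustrative placeholder)

def credential_id(cred: AttendanceCredential) -> str:
    """Deterministic fingerprint of the credential, suitable for a ledger entry."""
    return hashlib.sha256(json.dumps(asdict(cred), sort_keys=True).encode()).hexdigest()

ledger = set()  # stand-in for credential fingerprints anchored on a blockchain

cred = AttendanceCredential("community hackathon 2021", "0xATTENDEE", "0xORGANIZER")
ledger.add(credential_id(cred))

# Anyone can later check that exactly this credential was recorded:
print(credential_id(cred) in ledger)    # True
forged = AttendanceCredential("community hackathon 2021", "0xIMPOSTOR", "0xORGANIZER")
print(credential_id(forged) in ledger)  # False: a forged claim does not verify
```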

See also my introduction to Blockchain technology and my introduction to the Metaverse.

