Excerpts from the book

A History of Silicon Valley

(Copyright © 2010 Piero Scaruffi)

 

32. The Downsizers: Facebook, YouTube, Web 2.0, Tesla (2003-06)

by Piero Scaruffi


 

Distributed and Small

The early 2000s were an age of downsizing. Silicon Valley companies had to learn the art of cost cutting. Startups had to learn the art of actually developing a product and selling it. Once again, the beneficiary was India. Creating a lab in India (where software engineers earned a fraction of what their Silicon Valley counterparts earned) was a relatively painless way to dramatically cut costs. By 2005 more than 50% of all jobs outsourced by Silicon Valley companies went to India.

There were other lessons to be learned too. In July 2003 AOL spun off Mozilla. The Mozilla project had originally been created by Netscape to foster third-party development of its browser under a free open-source license, and it quickly built a reputation as a new browser in its own right. The first chair of the Mozilla Foundation was Lotus' founder Mitch Kapor. The lesson learned by Netscape through Mozilla was that the open-source model works, but it is a Darwinian process, and, just like in nature, it works very slowly. The Mozilla community hated Microsoft's Internet Explorer and therefore loved the Netscape browser. Unfortunately, this meant that dozens of people added features to Mozilla to the point that it became famously bloated and slow.

Mozilla needed a rebirth. This came in 2002 when a new batch of developers (notably Stanford student Blake Ross, who had started working on Mozilla at the age of 14, and Dave Hyatt) produced a “lighter” version of the Mozilla browser, eventually named Firefox. Firefox was indeed a state-of-the-art browser that could match Microsoft's Internet Explorer (IE), whose development team Microsoft had disbanded anyway in 2001. Yet precious time had been lost: in 2003 IE owned 95% of the browser market.

Computing devices had been getting smaller since the ENIAC was unveiled. That trend had never really stopped; it just proceeded by discontinuous jumps: the minicomputer was a significant downsizing from the mainframe, and so was the personal computer from the minicomputer. The laptop/notebook, however, was just a variation on the personal computer, the only major difference being the screen. In 2005 sales of notebook computers accounted for 53% of the computer market: the traditional desktop computer was on the way out, and IBM pulled out of the personal-computer market altogether. There was a clear trend towards a portable computing device, but the laptop per se did not represent a quantum leap forward, just a way to stretch personal-computer technology to serve that trend.

At the same time, sales of smartphones were booming too, but there was a lesson there as well. In 2004 Motorola introduced the Razr mobile phone, an elegant-looking device that by July 2006 had been bought by over 50 million people, propelling Motorola to second position after Nokia. Yet sales started dropping dramatically in 2006. Motorola learned the hard way an important rule of the cell-phone market: phones went in and out of fashion very quickly. There was room for more players, and Silicon Valley had largely been on the sidelines until then.

The positive note for the dotcoms was that the Web was spreading like a virus all over the world. By 2006 Google had indexed more than eight billion pages, coming from the 100 million websites registered on the Web. In March 2006, the English version of Wikipedia passed one million articles. The Internet was being accessed by 1.25 billion people in the world. The dotcom bubble had not been completely senseless: one just had to figure out how to capitalize on that massive audience.

By 2005 Yahoo!, Google, America OnLine (AOL) and MSN (Microsoft’s Network) were the four big Internet “portals,” with a combined audience of over one billion people. Never in history had such a large audience existed. Never in history had Silicon Valley companies controlled such a large audience (most of that billion used Google and Yahoo!). There were only two threats to the Internet: spam (undesired marketing emails) and viruses (malicious software that spread via email or downloads and harmed computers).

Mobile television, already available in South Korea since 2005, spread worldwide in a few years, finding millions of customers in Asia, Africa, and Latin America. Ironically, the West lagged behind, and in 2010 mobile TV was still a rarity in the US. But even in this case Silicon Valley was actually at the vanguard: the leading mobile TV chip maker, Telegent Systems, a fabless company founded in 2004 by LSI Logic veteran Samuel Sheng, was based in Sunnyvale.

 

Google and its Inventions

Google’s growth had been particularly stunning, dwarfing even the excesses of the dotcom bubble. In 2003 Google had 10,000 servers working nonstop to index the Web (14 times more servers than employees). That year Google acquired Blogger, and in 2004 it acquired Keyhole (a CIA-funded startup), the basis for its application Google Earth. More than a search engine, Google was expanding in all directions, becoming a global knowledge provider. In early 2004 Google handled about 85% of all search requests on the Web. In fact, a new verb was coined in the English language: to “google” something (search for something on the Web). Google’s IPO in August 2004 turned Google’s founders, Sergey Brin and Larry Page, into billionaires. In 2004 an ever more ambitious Google launched a project to digitize all the books ever printed.

In 2004 Google hired German-born natural-language expert Franz-Josef Och, whose machine-translation system, developed at the University of Southern California, had been selected by DARPA. In 2005 Google introduced its own automatic-translation system to translate webpages written in foreign languages. In October 2004 Google acquired Where2, the company of Danish-born, Australian-based Berkeley alumnus Lars Rasmussen, along with its mapping software; in 2005 Google introduced Google Maps. MapQuest, the pioneering Web-based mapping service acquired by AOL in 2000, lost to Google Maps because the latter allowed third-party developers to add information to the map and to embed the map in their own software. The time-consuming process of scaling a web application was more easily done by “exploiting” the Internet community of software developers.

Much was being said of Google’s ethos, which allowed employees vast freedom to be creative. However, almost all of Google’s business was driven by the acquisition of other people’s ideas. Gmail, developed internally by former Intel employee Paul Buchheit and launched by invitation only in April 2004, was not much more than Google’s version of Hotmail. What made it popular was the hype caused by the “invitation-only” theatrics, plus the large amount of storage space offered. Google Checkout, introduced in June 2006, was a poor man’s version of PayPal. Google Street View, introduced in 2007, was so similar to Vederi’s ScoutTool, launched in 2000 and later renamed the StreetBrowser, that Vederi sued.

Google replicated Microsoft’s model. Its own research labs were incredibly inept at inventing anything original, despite the huge amount of cash poured into them. That cash mostly bought them patents on countless trivial features of their products, a tactic meant to prevent innovation by the competition. What drove Google’s astronomical growth was (just like in Microsoft’s case) the business strategy, not the inventions.

Yahoo! had lost part of its sheen, but still generated yearly revenues of $1.6 billion in 2003 (up from $953 million in 2002), with astronomical yearly growth and market value. In 2003 it acquired Overture/GoTo, the company nurtured by Los Angeles-based incubator Idealab that had introduced the “pay per click” business model for advertisers. In 2006 revenues would reach $6.4 billion. Note that these dotcom companies were mainly selling ads. The initial dotcom business plan of simply becoming popular had eventually worked out: all one needed was a large audience, and then the advertisers would flock to one's website. What had been missing in the 1990s was the advertisers.

 

Social Networking

Initially, instead, the idea behind the dotcoms had been to transfer commerce to the Web; hence e-commerce. This was a more than viable business, but, in hindsight, it lacked imagination. It soon proved to be viable mostly for the already established “brick and mortar” corporations.

It took a while for the dotcoms to imagine what one could “sell” to one billion people spread all over the world: social networking. For the first time in history it was possible for one billion strangers to assemble, organize themselves, discuss issues, and act together.

Social networking was another practical implementation of Metcalfe’s law: the value of a network grows roughly with the square of the number of its users (a back-of-the-envelope illustration follows the profiles below). Three important companies were Facebook, Ning, and Twitter:

Facebook and Ning overlapped a little. In February 2004 Harvard student Mark Zuckerberg launched the social-networking service Facebook. It soon spread from college to college. Weeks later Zuckerberg and friends relocated to Silicon Valley and obtained funding from Peter Thiel of PayPal. Somehow this one took off the way that previous ones had not. Facebook started growing at a ridiculously fast pace, signing up 100 million users by August 2008 on its way to becoming the second website by traffic (after Google) by the end of the decade. In 2005 Gina Bianchini and Netscape’s founder Marc Andreessen launched Ning, a meta social-networking service that allowed people to create and customize their own social networks. Inktomi veterans Brian Totty and Paul Gauthier formed Ludic Labs in San Mateo in 2006, a venture devoted to social-media software for consumers and businesses that launched offerfoundry.com, talkfilter.com and diddit.com.

Last but not least, in 2006 Jack Dorsey created the social-networking service Twitter, where people could post short live updates of what was going on in their lives. A “tweet” was limited to 140 characters. That limit reflected the way people wanted to communicate in the age of smart phones: very brief messages. Twitter soon became popular for current events the way CNN had become popular during the first Gulf War.
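As a back-of-the-envelope illustration of Metcalfe's arithmetic (a sketch of the law itself, not a valuation of any particular service), the value V of a network of n users is taken to grow with the number of possible pairwise connections among them:

```latex
V(n) \;\propto\; \binom{n}{2} \;=\; \frac{n(n-1)}{2} \;\approx\; \frac{n^2}{2}
```

Thus a service with one million users offers about 500 billion possible connections, while one with one hundred million users offers about 5,000 trillion: a hundred times the audience yields ten thousand times the connections, which is why signing up "everybody" mattered more than charging anybody.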

The Unix (and in particular Linux) world had been the first example of a social-networking platform, one used to refine the platform itself. Facebook and the likes simply adopted the concept and transferred it to the sphere of private life.

Facebook’s sociological impact was colossal. For example, Facebook offered a “Like” button for people to applaud a friend’s statement or picture, but did not offer a “Dislike” button. Facebook was creating a society in which it was not only rude but physically impossible to be negative. The profile picture of the Facebook user was supposed to be a smiling face. The whole Facebook society was just one big collective smile. The Web’s libertarian society was turning into a global exercise in faking happiness. After all, the French historian Alexis de Tocqueville had warned in 1840 (in his study “Democracy in America”) that absolute freedom would make people lonely and desperate. In a sense, social networking universes like Facebook were testing the possibility of introducing a metalevel of behavioral control to limit the absolute freedom enabled by the Web.

Google, eBay, Facebook and Twitter shared one feature that made them such incredible success stories: simplicity. Initially, they all had a humble, text-only “look and feel” in the age of graphic design, banners, chat rooms, etc. All that Twitter needed to change the world was 140 characters.

 

Facebook’s Original Building

 

Your Life Online

In November 2005 a group of former PayPal employees, all still in their twenties, got together to launch a new website, YouTube. The founders were Steve Chen (a Taiwanese-born software engineer), Chad Hurley (an art designer) and Jawed Karim (a German-born Stanford student working part-time). Based in San Mateo, they were funded by Roelof Botha of Sequoia Capital, another PayPal alumnus. The concept sounded innocent enough: it was just a way for ordinary people with an ordinary digital videocamera to upload their videos to the Web. It turned out to be the perfect Internet video application. By July 2006 more than 65,000 new videos were being uploaded every day, and more than 100 million videos were viewed by users worldwide every day. In October 2006 Google bought YouTube for $1.65 billion.

YouTube did more than simply help people distribute their videos worldwide: it ushered in the age of “streaming” media. “Streaming” means watching a video or listening to a recording in real time directly from its Web location, as opposed to first downloading it from the Web to one's computer. YouTube’s videos were “streamed” to the browser of the viewer. YouTube did not invent streaming, but it demonstrated its power over cable television, movie theaters, and any previous form of broadcasting videos to the masses.
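The difference can be sketched in a few lines of TypeScript (a hypothetical illustration using the browser's standard fetch API; the URL and the playChunk helper are made up): a downloader must wait for the last byte before playback can begin, whereas a streaming client hands each chunk to the player as soon as it arrives.

```typescript
// Hypothetical illustration of "download first" vs "stream as it arrives".
// The URL is made up; playChunk() stands in for whatever consumes the media.

const VIDEO_URL = "https://example.com/video.mp4"; // placeholder

// Download: the entire file must arrive before playback can start.
async function downloadThenPlay(url: string): Promise<void> {
  const response = await fetch(url);
  const wholeFile = await response.arrayBuffer(); // waits for every byte
  playChunk(new Uint8Array(wholeFile));
}

// Streaming: each chunk is handed to the player as soon as it arrives.
async function streamAndPlay(url: string): Promise<void> {
  const response = await fetch(url);
  if (!response.body) throw new Error("streaming not supported");
  const reader = response.body.getReader();
  while (true) {
    const { done, value } = await reader.read();
    if (done || !value) break;
    playChunk(value); // playback begins after the first chunk, not the last
  }
}

// Stand-in for a real decoder/player.
function playChunk(bytes: Uint8Array): void {
  console.log(`playing ${bytes.length} bytes`);
}
```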

Meanwhile, the first truly successful social-networking site was MySpace. It was launched in 2003 in Los Angeles and purchased in July 2005 for $580 million by Rupert Murdoch’s News Corp. In Asia the most successful of these sites was still Friendster.

Another idea that matured in the 2000s was Internet-based telephony. Skype was founded in Europe in 2003 by Niklas Zennstroem and Janus Friis to market a system built by Kazaa’s developers Ahti Heinla, Priit Kasesalu, and Jaan Tallinn. Internet users were now able to make free phone calls to any other Internet user, as long as both parties had a microphone and a loudspeaker attached to their computers. The lesson learned in this case was that telephony over the Internet was a major innovation for ordinary consumers, not companies, but ordinary consumers could not afford suitable computers until the 2000s. Skype did not charge anything for the service, so, again, the business model was simply to become very popular all over the world.

 

YouTube’s Original Building

 

E-commerce

The net economy was, however, recovering from the dotcom bust. For example, Amazon lost a staggering $2.8 billion between 1995 and 2001. Its first profit was posted at the end of 2001, and it was a mere $5 million. But in 2005 it posted revenues of $8.5 billion and a hefty profit, placing it inside the exclusive club of the “Fortune 500.” In 2006 its revenues would top $10.7 billion, and in 2007 sales would increase a stunning 34.5% over the previous year. eBay’s revenues for 2006 reached $6 billion. Netflix’s revenues were up 48% from the previous year, just short of one billion dollars, and it had almost six million subscribers.

 

Digital Entertainment

A lesson was creeping underneath the massive amount of music downloaded both legally and illegally from the Internet. In 2003 the file-sharing system Rapidshare was founded in Germany, the file-sharing system TorrentSpy went live in the US, and a BitTorrent-based website named “The Pirate Bay” opened in Sweden. In 2005 Megaupload was founded in Hong Kong. In 2006 Mediafire was founded in the US. These websites allowed people to upload the music that they had ripped from CDs, and allowed the entire Internet population to download it for free. The “fraud” was so extensive that in 2006 the music industry in the US (represented by the RIAA) filed a lawsuit against the Russia-based Internet download service AllOfMP3.com for $1.65 trillion. Needless to say, it proved impossible to stop half a billion people from using free services that were so easy to use. Music downloading became a pervasive phenomenon.

Apple’s iTunes store, opened in April 2003, was the legal way to go for those who were afraid of the law, and by the end of 2006 a hefty 48% of Apple’s revenues was coming from sales of the iPod, one of the most successful devices in history. Digital videos came next, although the sheer size of the video files discouraged many from storing them on their home computers.

The lesson to be learned was twofold. One lesson was for the media companies: it was virtually impossible to enforce copyrights on digital files. The other lesson was for the consumers: it was wishful thinking that one could digitize a huge library of songs and films, because they required just too much storage. A different system was needed, namely streaming.

The phenomenon of digital music downloading was another premonition of an important transformation in computing. From the viewpoint of the “downloader,” the whole Web was becoming just one huge repository of music, whose geographical location was irrelevant: it was in the “cloud” created by multiple distributed servers around the world.

The situation was quite different in the field of books. In the late 1990s, companies such as SoftBook Press and NuvoMedia had pioneered the concept of the e-book reader. Microsoft and Amazon had introduced software to read ebooks on personal computers (Amazon simply purchased the technology in 2005 from the French company Mobipocket, which had introduced it in 2000). That was at a time when there were virtually no ebooks to read. This changed in 2002 when two major publishers, Random House and HarperCollins, started selling digital versions of their titles. Amazon became and remained the main selling point for ebooks, but “ebookstores” began to appear elsewhere too, notably BooksOnBoard in Austin (Texas), which opened in 2006. In October 2004 Amazon had hired two former Apple and Palm executives, Gregg Zehr (hardware) and Thomas Ryan (software), who in turn hired mostly Apple and Palm engineers, and had started a company in Cupertino called Lab126 to develop a proprietary $400 hand-held e-book reader, the Kindle, eventually introduced in November 2007. The Kindle was not just a software application but a custom device for reading books. That device, conceptually a descendant of the Palm Pilot, was what tilted the balance towards the ebook.

 

Serving the Old Economy

Some sectors, like the business software sold by Oracle, had little to learn from the dotcom revolution. In the 2000s Oracle represented an old-fashioned business model, the one that targeted “brick and mortar” companies. However, the Web had not slowed down the growth of software demand by the traditional companies that manufactured real products: it had increased it. They all needed to offer online shops backed by the fastest and most reliable database servers.

The escalating transaction volumes for e-business were good news for Oracle. Oracle was the undisputed leader in providing database-management solutions, but these companies also demanded ERP (enterprise resource planning) and CRM (customer relationship management) systems. Oracle proceeded to acquire two Bay Area companies that had been successful in those fields: PeopleSoft (2004) and Siebel (2005). Now Oracle could connect the plant of a company to its corner offices and even to its traveling salesmen. In 2005 the total revenues of ERP software were $25.5 billion, with SAP making $10.5 billion and Oracle $5.1 billion. Oracle’s founder and CEO, Larry Ellison, was estimated to be worth $18.7 billion in 2004, one of the richest people in the world.

 

Oracle’s Campus in 2010

 

Robots and Avatars

Recovering from the dotcom crash, Silicon Valley was more awash in futuristic ideas than ever. In 1999 Philip Rosedale had founded Linden Lab to develop virtual-reality hardware. In 2003 Linden Lab launched “Second Life,” a virtual world accessible via the Internet in which a user could adopt a new identity and live a second life. In 2005 Andrew Ng at Stanford launched the STAIR (Stanford Artificial Intelligence Robot) project to build robots for home and office automation by integrating decades-old research in several different fields. In 2006 early Google architect Scott Hassan founded Willow Garage to manufacture robots for domestic use.

The emphasis on virtual worlds had a positive effect on the US videogame industry. After losing the leadership to Japan in the mid-1980s, the US recovered it in the 2000s because Japanese videogames were not as “immersive” as the ones made by their US competitors. For example, the simulation game “The Sims,” created by SimCity’s creator Will Wright for Maxis in February 2000, had become the best-selling PC game of all time within two years of its release. With the exception of Nintendo, which successfully introduced the Wii home console in 2006, Japanese game manufacturers were losing market share for the first time ever. The Wii popularized hand-held motion-sensitive controllers, which led to a new generation of videogame consoles controlled by gestures and spoken commands.

However, the next big thing in videogames was online virtual worlds in which users created “second-life” avatars and interacted with each other. Hotelli Kultakala (later renamed Habbo Hotel), launched in August 2000 by Aapo Kyrola and Sampo Karjalainen in Finland, and Go-Gaia (later renamed Gaia Online), launched in February 2003 by Derek Liu in San Jose, were among the pioneers. They became extremely popular, involving millions of users spread all over the world. In February 2006 Alyssa Picariello even established a website to chronicle life in Gaia Online: the Gaiapedia.

 

Engineering the Future

The Web was only a decade old but high-profile critics were already complaining that it was inadequate. Berners-Lee himself had written an article in 2001 explaining the need for a “Semantic Web” in which a webpage would be able to declare the meaning of its content. In 2004 the first “Web 2.0” conference was held in San Francisco to promote the idea that the Web had to become an open platform for application development, with such development increasingly decentralized and delegated to the users themselves. The term had originally been coined in 1999 by San Francisco-based writer Darcy DiNucci. In the beginning of the Web one could only be either a producer or a consumer of webpages: the user of a browser was a passive viewer of webpages. Web 2.0 aimed for “active” viewers. A Web 2.0 webpage was a collaborative effort in which the viewers of the page could modify it and interact with each other. Wikipedia was an example of a Web 2.0 application; and, indirectly, so was Google’s search, since it relied on a “page ranking” algorithm based on the links created by millions of webpages all over the world.
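The intuition behind “page ranking” (a page matters if pages that matter link to it) can be conveyed with a toy sketch: a simplified power iteration over a made-up three-page link graph, not Google's actual algorithm or data.

```typescript
// Toy PageRank: a page is important if important pages link to it.
// The three-page link graph below is made up purely for illustration.

type LinkGraph = { [page: string]: string[] }; // page -> pages it links to

const graph: LinkGraph = {
  "a.com": ["b.com", "c.com"],
  "b.com": ["c.com"],
  "c.com": ["a.com"],
};

function pageRank(g: LinkGraph, damping = 0.85, iterations = 50): { [page: string]: number } {
  const pages = Object.keys(g);
  const n = pages.length;

  // Start with every page equally ranked.
  let rank: { [page: string]: number } = {};
  for (const p of pages) rank[p] = 1 / n;

  for (let i = 0; i < iterations; i++) {
    const next: { [page: string]: number } = {};
    for (const p of pages) next[p] = (1 - damping) / n; // baseline score for every page

    // Each page distributes its current rank equally among the pages it links to.
    for (const p of pages) {
      const links = g[p];
      for (const target of links) {
        next[target] += (damping * rank[p]) / links.length;
      }
    }
    rank = next;
  }
  return rank;
}

// c.com, which both other pages link to, ends up with the highest score.
console.log(pageRank(graph));
```

In the toy graph the iteration converges after a few dozen rounds; at Web scale the same idea runs over billions of pages and links.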

The first widely publicized example of Web 2.0 was Flickr, a photo-sharing service that allowed users to “tag” photos (both their own and other people’s), founded in February 2004 by game-industry veterans Caterina Fake (based in San Francisco) and Stewart Butterfield (based in Vancouver). Unlike Ofoto and Snapfish, whose websites were just ways to encourage people to print photos, Flickr understood that in the age of ubiquitous camera phones and social networking the real value was in sharing photos across the community. Soon people started taking pictures precisely for the purpose of posting them on Flickr, pictures that they would not otherwise have taken.

Yahoo! was the first major dotcom to invest in Web 2.0. It acquired Flickr in March 2005 and introduced in June its own My Web service, which allowed webpage viewers to tag and share bookmarks. Then in December it bought the most popular website for social bookmarking and tagging, Del.icio.us (originally started by Wall Street financial analyst Joshua Schachter in 2003). Basically, Yahoo! wanted to present itself as a “social search” that was fine-tuned by humans as they browsed the web, as opposed to Google’s impersonal algorithmic search.

The technical underpinning of Web 2.0 consisted of free tools such as Ajax (Asynchronous JavaScript and XML), a concept invented in 2003 by Greg Aldridge in Indiana (but the term was coined only in 2005 by San Francisco-based writer Jesse James Garrett). Ajax was a combination of free technologies (essentially HTML, XML and JavaScript) that let website developers create interactive web-based applications.

In a nutshell, the goal was simple: to allow the viewer to make changes to a webpage in the browser without reloading the whole page. This, obviously, had been done before: JavaScript, among other tools, had been available in Netscape’s browser since 1996, and web-based applications were already pervasive during the first dotcom boom (though most of them went out of business quickly). Amazon had allowed users to post reviews of books since the very beginning. However, Web 2.0 had a more ambitious vision: that the Web could be viewed as a platform for creating applications, a platform that would eventually replace the individual computer.
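A minimal sketch of that pattern in TypeScript (the endpoint /api/comments and the element id comment-list are hypothetical; 2005-era code would have used XMLHttpRequest rather than the later fetch API):

```typescript
// Ajax in a nutshell: fetch data in the background and update one part of
// the page, instead of asking the server for a whole new page.
// "/api/comments" and "comment-list" are made-up names for illustration.

async function refreshComments(): Promise<void> {
  const response = await fetch("/api/comments"); // background request, no page reload
  const comments: string[] = await response.json();

  const list = document.getElementById("comment-list");
  if (!list) return;

  // Only this element changes; the rest of the page stays as it is.
  list.innerHTML = comments.map(c => `<li>${c}</li>`).join("");
}

// Re-check the server every ten seconds without ever reloading the page.
setInterval(refreshComments, 10_000);
```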

Blogging was being democratized too. In 2003 Matt Mullenweg in San Francisco introduced WordPress, a new popular platform for people to create their own website or blog. The reason it spread like wildfire was that it was maintained as open source by a growing community of volunteers. In June 2003 Mark Fletcher, already the founder of ONElist (acquired by Yahoo! in 2000), launched Bloglines, the first Web-based news aggregator. TechCrunch was founded in June 2005 by Michael Arrington out of his home in Atherton to publish high-tech news and gossip about Internet startups.

 

Yahoo!’s Campus in 2010

 

Biotech

Biotechnology was becoming mainstream and synthetic biology was the new frontier. The first synthetic-biology conference was held at MIT in 2003. A year later MIT professor Drew Endy founded Codon Devices, the first company to commercialize synthetic biology. In 2006 Jay Keasling inaugurated the world’s first synthetic-biology department at the Lawrence Berkeley Laboratory. UCSF also became a major center of biological research: in 2003 Christopher Voigt founded a lab there to program cells like robots to perform complex tasks, and in 2005 UCSF opened an Institute for Human Genetics.

The goal of synthetic biology was not entirely clear. Yet the business behind it envisioned the possibility of building new living species (initially just bacteria) that would perform useful industrial or domestic functions, just as electronics had led to devices that performed useful industrial and domestic functions (the word “military” was carefully omitted). Therefore synthetic biology was actually not too interested in cloning existing species: why not just use the existing species then? It was interested in modifying existing organisms to create organisms that do not exist in nature. Genetic engineering is about replacing one gene, whereas synthetic biology is about replacing entire genomes to generate “reprogrammed organisms” whose functions are different from the original ones (because the DNA instructions have changed).

Synthetic biology exploited the power of microbes to catalyze a sequence of biological reactions that transform one chemical compound into another. One example that captured the attention of the media in April 2005 was the announcement that Keasling had successfully converted yeast into a chemical factory by mixing bacterial, yeast, and wormwood genes. This “factory” was capable of turning simple sugar into artemisinic acid, the preliminary step to making an extremely expensive anti-malarial drug, artemisinin, which was commonly extracted from a plant. The goal of synthetic biology was now to create “designer microbes” by selecting genes based on which proteins they encode and the path they follow. Some day synthetic biology could even replace industrial chemistry, which relies on a sequence of chemical reactions to manufacture materials.

Venter’s saga continued. After disagreements with Celera’s main investor Tony White, in January 2002 Venter left Celera taking Hamilton Smith with him. In 2003 they synthesized the genome of a virus (just eleven genes) which, unlike the artificial polio virus at Stony Brook, truly behaved like a virus. With a keen sixth sense for money and publicity, in September 2004 Venter started his own non-profit institute in both Maryland and California (San Diego) to conduct research in synthetic biology and biofuels. In particular, he worked on building the genome of a bacterium from scratch and on inserting the genome of one bacterium into another. Bacteria are the simplest living organisms, made of just one cell. 

Bioinformatics continued to thrive. Two former Silicon Genetics executives, Saeid Akhtari and Ilya Kupershmidt, started NextBio in Cupertino in 2004 to create a platform to perform data mining on public and private genomic data.

 

Nanotech

Nanotechnology was still a mystery. While returns were low and nano startups routinely switched to more traditional manufacturing processes, during both 2006 and 2007 venture-capital firms invested more than $700 million in nanotechnology startups.

A promising avenue was to wed “nano” and “green,” a mission particularly nurtured in Berkeley. NanoSolar engineers formed the core of Solexant, founded in 2006 in San Jose by Indian-born chemist Damoder Reddy and by Paul Alivisatos, professor of chemistry and materials science at UC Berkeley (and, from 2009, director of the Lawrence Berkeley Laboratory). Solexant set out to manufacture printable thin-film “quantum dot” photovoltaic cells using a technology developed at the Lawrence Berkeley Laboratory. This was held to be the next generation of solar technology: flexible, low-cost, and high-yield.

Other solar research was promising too. Michael Crommie, a scientist at the Materials Sciences Division of the Lawrence Berkeley Laboratory and a professor of physics at UC Berkeley, was working on solar cells the size of a single molecule. The Canadian nanotechnology specialist Ted Sargent of the University of Toronto developed a “quantum film” capable of a light-capturing efficiency of 90%, as opposed to 25% for the CMOS image sensors employed in digital cameras. In October 2006 he founded InVisage in Menlo Park to make quantum film for camera phones.

 

Greentech

Skyrocketing oil prices and concerns about climate change opened a whole new range of opportunities for environmentally friendly energy generation, nicknamed “greentech” or “cleantech.” Of the traditional kinds of renewable energy (wind power, solar power, biomass, hydropower, biofuels), solar and biofuels emerged as the most promising. At the same time, the US started investing in fuel-cell companies in 2005 with the goal of fostering commercial fuel-cell vehicles by 2020; by 2008 it had spent $1 billion. California embarked on a project to set up a chain of stations to refuel hydrogen-powered vehicles (despite the fact that the state had only 179 fuel-cell vehicles in service in 2007).

Silicon Valley entrepreneurs and investors delved into projects to produce clean, reliable, and affordable energy. A startup that focused on renewable fuels was LS9, founded in 2005 in South San Francisco by Harvard professor George Church and Chris Somerville, the director of UC Berkeley’s Energy Biosciences Institute, to create alkanes (constituents of gasoline) from sugar; it was financed by Vinod Khosla and Boston-based Flagship Ventures.

Cars were another interesting sector. After selling their e-book company NuvoMedia, Martin Eberhard and Marc Tarpenning founded Tesla Motors in Palo Alto in 2003 to build electric cars. In 2006 they introduced the Tesla Roadster, the first production automobile to use lithium-ion battery cells. In 2004 SUN co-founder Vinod Khosla, who had joined the venture-capital firm Kleiner Perkins Caufield & Byers, left it to found Khosla Ventures and invest in green-technology companies. One year later another SUN co-founder, Bill Joy, replaced him at Kleiner Perkins Caufield & Byers to invest in green technology.

Another legendary “serial entrepreneur” of Silicon Valley, Marc Porat of General Magic fame, turned to building materials for the “green” economy, focusing on reducing energy consumption and carbon emissions. His startups included: Serious Materials (2002) in Sunnyvale for eco-friendly building materials; CalStar Cement (2007), a spin-off of the University of Missouri based in the East Bay (Newark) that manufactured eco-friendly bricks; and Zeta Communities (2007) in San Francisco for pre-assembled homes that operate at net-zero energy.

Meanwhile, UC Berkeley and the Lawrence Berkeley Laboratory launched a joint “Helios Project” for artificial photosynthesis, i.e. to convert sunlight into fuel.

 

The Tesla Roadster in 2010

 

Culture and Society

The arts mirrored progress in the high-tech industry. The 2000s were the decade of interactive digital art, pioneered by the likes of Camille Utterback. In 2005 the Letterman Digital Arts Center opened in San Francisco to house Lucasfilm’s lab. The first Zer01 Festival for “art and technology in the digital age” was held in San Jose in 2006, sponsored by San Jose State University’s CADRE. Stephanie Syjuco’s counterfeit sculptures, Lee Walton’s web happenings, and Amy Balkin’s ecological projects referenced the issues of the era. In 2000 Fecalface.com was launched to support the alternative art scene (later also a physical gallery, the Fecal Face Dot Gallery). The Adobe Books Backroom Gallery, another epicenter of new art, opened in 2001. The Mission School’s tradition of mural painting and found-object sculpture was being continued by Andrew Schoultz and Sirron Norris, while Dave Warnke focused on stickers and hand-painted posters, Sandro “Misk” Tchikovani specialized in three-dimensional letters, and Damon Soule explored mixed media on found wood.

 

 

 

The Anthropology of Pan-ethnic Materialism

The cultural diversity of the Bay Area continued to erode religious certainties. A person’s loyalty to her religious group was undermined by the proximity of so many other religious groups (in the workplace, at shared homes, in sport activities). This led to an ever higher degree of flexibility in choosing one’s faith. The new-age movement, with its syncretic non-dogmatic view of spirituality, had left its own influence on the region, even though its message was now being interpreted in a more materialistic manner. For many people religion was to be shaped by how one wanted to behave. For example, greed and promiscuity were definitely “in” for the vast majority of independently religious people. Religious axioms that constrained one’s lifestyle were not particularly popular. Religious practices that were perceived as beneficial to one’s mind and body were. Thus Zen retreats and yoga classes were popular even among people who did not believe in Buddhism.

Santa Clara Valley had traditionally been a Catholic region. It had become a unique experiment within the Catholic world: a Catholic region with sizeable minorities of other religious groups that were not poor, segregated immigrants (as they were in Italy or France), but lived on an equal footing with the original Catholic families. Both the percentage and the level of integration were unique among Catholic regions.

The time and attention usually devoted to religious functions were transferred to the high-tech world. The public rituals of religion were replaced by the public rituals of lectures and high-tech symposia. The mass in a church was replaced by a business or technology forum. The technology being produced downplayed cultural differences. People tended to identify more strongly as workers of a company than as members of a religious or ethnic group.

 

Re-inflating Cities

Those who had predicted the demise of Silicon Valley had completely missed the point. In 2005 Silicon Valley accounted for 14% of the world’s venture capital. San Jose’s population of 912,332 had just surpassed San Francisco’s, making San Jose the tenth largest city in the US. The Bay Area as a whole was the largest high-tech center in the world, with 386,000 high-tech jobs in 2006.

 


