Intelligence is not Artificial

Why the Singularity is not Coming any Time Soon And Other Meditations on the Post-Human Condition and the Future of Intelligence

by piero scaruffi
Cognitive Science and Artificial Intelligence | Bibliography and book reviews | My book on consciousness | Contact/feedback/email
(Copyright © 2018 Piero Scaruffi | Terms of use )


(These are excerpts from my book "Intelligence is not Artificial")

The Robots are Coming - A Brief History of A.I./ Part 9

The story of robots is similar to the story of neural networks. Collapsing prices and increased speeds have enabled a generation of robots based on relatively old theory.

The first computer-controlled robotic arm was probably the "Case arm", developed at Case Institute of Technology in 1965, with five degrees of freedom. In 1968 Jerry Feldman's team at Stanford published the "Hand-eye System", a combination of television camera and "Rancho arm", both connected to a PDP-6 computer. The arm was capable of sorting cubes by size and of stacking them on top of each other. The computer-controlled "Stanford arm", with six degrees of freedom, was designed in 1969 by Stanford student Victor Scheinman, formerly a rocket scientist who had worked on NASA's Apollo missions and was now in charge of maintaining the "Rancho arm". Attaching a computer (i.e. software) to a robotic arm meant that the arm was capable of performing more than just one repetitive action. Another computer-controlled industrial robotic arm was the Milacron T3 built in Cincinnati by Richard Hohn in 1973. Despite the primitive state of the field, the first International Symposium on Industrial Robots was already held in Chicago in April 1970. In September 1973 luminaries of robotics met in Udine (Italy) for a symposium titled "Theory and Practice of Robots and Manipulators" which became the most important conference on robotics, later abbreviated as RoManSy (RObots and MANipulators SYmposium). Between these two events a lot had happened.
Several hardware "firsts" were achieved in the early 1970s: the marvel of the Hitachi Technology Fair of 1970 was Masakazu Ejiri's robot that assembled objects based on drawings; Freddy (1971), built by Donald Michie's group at the University of Edinburgh, was the first robot using a videocamera to guide its behavior, followed by the SIRCH assembly robot developed by Alan Pugh and others at the University of Nottingham (1972) whose goal was precise gripping driven by visual feedback; Kuka's Famulus (1973), from Germany, was the first robotic arm utilizing electric instead of hydraulic drives; David Silver's arm at MIT (1974) was the first robotic arm with touch sensors; and ASEA's IRB6 (1974), in Sweden, was the first robot controlled by a microprocessor (using Intel's 8008 microprocessor). Nachi had built Japan's first industrial robotic arms in 1969, and Toshio Kono had founded the robotics startup Dainichi Kiko in 1971, but the Japanese wave began in earnest in 1974 when Seiuemon Inaba's firm FANUC (Factory Automation Numerical Control), a Fujitsu spinoff at the foot of Mount Fuji, installed robotic arms for assembly in its factory. Hitachi introduced its first commercial robot, the arc-welding Mr Aros, in 1975, one of the first controlled by a microprocessor. Both Yaskawa and FANUC introduced their first robotic arms in 1977, respectively the Motoman L10 and the Model-1. In 1973 Scheinman founded his own company, Vicarm, and designed robotic arms for SRI, Jet Propulsion Laboratories (JPL) and MIT. Vicarm was acquired by Joseph Engelberger's Unimation and remained their California laboratory. Scheinman designed for them the Programmable Universal Machine for Assembly (PUMA) commissioned by General Motors: delivered to Lothar Rossol, the man who had founded the A.I. lab at General Motors, it was deployed in 1978. The success of PUMA encouraged many to start firms to make small electric robotic arms.

Assembly robots (robotic arms for assembly) spread in every industrialized country. In Italy, for example, Olivetti introduced its Sigma in 1975 and Digital Electronic Automation (DEA) its Pragma in 1979. Then in 1978 Hiroshi Makino at the University of Yamanashi invented a new design for robots, called SCARA (Selective Compliance Assembly Robot Arm), that in 1981 a coalition of Japanese manufacturers backed as the standard for assembly robots that were to be simpler, smaller and faster. Sankyo Seiki (ironically a producer of music boxes) released the first commercial SCARA, Skilam (1981). IBM's first commercial robotic arm, the 7535 (1982), was simply the Sankyo Seiki robot. After FANUC demonstrated robots making parts for other robots (in 1981), in 1982 General Motors decided to set up a joint venture called GMFANUC (General Electric would follow suit in 1986).

Hobbyists were already building all sorts of anthropomorphic life-size domestic robots, and, if you believe the media, they were far ahead of academia and of the industry, from Claus Scholz's MM7 (1958), a Viennese robot that the press portrayed serving food and vacuuming, to Ben Skora's Arok (1975), a Chicago robot that, according to InfoWorld of 24 September 1984 (page 22), could take out the trash, walk the dog, serve drinks, and vacuum. Of course, the press exaggerated a bit back then, just like today.

The stimulus to develop mobile robots came from space exploration. In fact, a company called Space General Corporation had already built in 1961 a remotely-controlled vehicle for lunar exploration on behalf of the Jet Propulsion Laboratories (JPL). Jim Fletcher, the co-founder, would become the chief of NASA in 1971 and in 1972 would begin the development of the Space Shuttle. NASA's Surveyor of 1966 and the Soviet Union's Lunokhod of 1970, which both explored the Moon's surface (the Lunokhod drove about 39 kilometers in about five months), and NASA's Viking of 1976 that explored Mars showed the need for robotic explorers (controlled remotely from Earth). One simple problem: it often took days simply to move a rock. In 1973 the Hungarian-born Antal Bejczy at Jet Propulsion Laboratories (JPL) augmented the Stanford arm with several sensors (ranging sensors, tactile sensors and proximity sensors besides television cameras) and with an on-board minicomputer (General Automation SPC-16) connected with a remote PDP-10 over the Arpanet. Bejczy also published the equations for the kinematics of such an arm ("Robot Arm Dynamics and Control", 1974). Trivia: Bejczy in the 1990s would oversee NASA's project for robotic telesurgery, published in 1996, the predecessor of Intuitive Surgical's DaVinci, and then help design in 1997 NASA's Sojourner, the first rover to land on Mars.

Shakey had set in motion the history of mobile robots and inspired researchers both at home and far away. Shakey was capable of moving only indoors, in a well-structured environment of straight edges. Eventually, Stanford students built a Shakey for the outdoors. The "Stanford cart" was originally built in 1961 by Jim Adams, a student coming from the Jet Propulsion Laboratory (JPL) interested in finding a way for scientists to maneuver a Moon rover equipped with a television camera; but this was not a robot, rather just a remote-controlled battery-propelled cart on four bicycle wheels. However, resurrected in 1966 by John McCarthy's student Rodney Schmidt, by 1971 it became an outdoors autonomous vehicle, albeit only able to follow a curving white line, and at a speed of about 10 meters per hour (0.01 km/h). In 1973 John McCarthy's new student Hans Moravec took over the project and eventually developed the first three-dimensional vision for the cart. After graduating from Stanford in 1980, Moravec moved to Carnegie Mellon University where he worked on a robot that came to be known as the "CMU Rover", first presented in 1982. The Hilare robot, built in 1977 by Georges Giralt's team at the Laboratory of Analysis and Architecture of Systems, was the French Shakey. In 1981 Ruzena Bajcsy's student Russell Andersson at Carnegie Mellon University built SCIMR (Self Contained Independent Mobile Robot). Incidentally, the Czech-born Bajcsy had studied with John McCarthy at Stanford. In 1982 Bart Everett completed Robart I at the Naval Postgraduate School in Monterey (his master's thesis). Wheeled robots were obviously easier to build than legged robots.

The first robotic hand with three fingers (each with multiple joints) was designed in 1979 by Tokuji Okada at Niigata University in Japan. In 1981 the French surgeon Raoul Tubiana published a study ("The Architecture and Function of the Hand", 1981) showing that the human hand has 22 degrees of freedom that allow it to grasp objects. In 1982 Kenneth Salisbury at Stanford University built a robotic hand, a collaboration with the Jet Propulsion Laboratory (JPL), and proved the minimum number of degrees of freedom required for grasping an object: nine, which he implemented as three fingers with a total of nine joints ("Kinematic and Force Analysis of Articulated Mechanical Hands", 1982). That started a race for more and more degrees of freedom. The first robotic hand actually capable of grasping an object was developed in 1984 at the University of Utah by Stephen Jacobsen (a professor and entrepreneur whose firm was building mechanized dinosaurs for theme parks), a joint project with MIT.

The first robot-assisted surgery was performed in 1985 at the Memorial Medical Center in Long Beach (near Los Angeles) using a PUMA 560 programmed by Yik-san Kwoh. In 1988 Brian Davies at London's Imperial College designed the Probot and in 1992 Senthil Nathan used it to perform the first fully robotic surgery in history.

The robotic hand designed at the National Taiwan University (NTU) by Han-Pang Huang and Li-ren Lin in 1995 had 17 degrees of freedom. In 1997 the DIST hand designed at the University of Genova by Andrea Caffaz had 16 degrees of freedom. The Robonaut hand for the International Space Station, designed in 1999 by Chris Lovchik of NASA in Texas under the supervision of Myron Diftler of Lockheed (who then became the leader of the project at NASA) had five fingers with 14 degrees of freedom. In 2011 the R2 would become the first human-like robot to become a permanent resident of the International Space Station.

If you polled ordinary people about what makes a robot a robot, most of them would probably reply that it has to walk like us. Walking is not trivial, so much so that it takes months for babies to learn it. Footstep planning for humanoid biped robots has become a separate discipline. For a long time the fundamental technique was the "semi-inverse method", or, better, the zero-moment-point (ZMP) method, invented by Miomir Vukobratovic and Davor Juricic in Serbia in 1969. In 1966 Ichiro Kato's laboratory at Waseda University had already started work on the robot series WL (the Waseda Leg) that in 1973, incorporating the ZMP method, provided the foundation for the Wabot (WAseda roBOT), the first real-size anthropomorphic walking robot. Wabot-2 of 1984 had 10 fingers and two feet, could read a normal musical score, and could play it on a keyboard. Renamed "Wasubot", in March 1985 it performed live with a symphony orchestra at the opening ceremony of the International Science and Technology Exposition in Tsukuba.

Kato's student Atsuo Takanishi at Waseda University was involved in the development of the biped WL-5 (1971), the foundation of the Wabot, and led the development of the WL-10R (1983) and of the WL-12RIII that was able to walk up a staircase (1990), and eventually, in 1996, of the family dubbed Wabian (WAseda BIpedal humANoid) with 35 degrees of freedom (the Wabian-RIV of 2007 would boast 43 degrees of freedom, of which six in each leg, seven in each arm, three in each hand, three in the waist and four in the neck). The WL-10R was the first successful implementation of dynamic walking. Until then most bipeds had used static walking, which keeps the robot always balanced but is more similar to the turning of a wheel than to the way animals walk.

Kato and Takanishi generated the gait by feedback. Hirofumi Miura and Isao Shimoyama at the University of Tokyo, instead, generated the gait by feedforward for their series Biper that led to Biper-3 (1981) and Biper-4 (1983).

Two-legged robots were still too difficult to realize. Scientists quickly realized what every zoologist and pediatrician knows: that it is easier to crawl on all fours than to walk on two legs. A four-legged robot, dubbed "Phony Pony", tethered to a remote minicomputer, was walking in 1966 at the University of Southern California, built by Robert McGhee (before he moved to Ohio) and his student Andrew Frank. Ten years later, Yoji Umetani's student Shigeo Hirose at the Tokyo Institute of Technology built the four-legged robots Kumo-I (1976) and PV-II (1978) that began the Titan series of robots. Hirose had previously built the many-wheeled snake-like Active Cord Mechanism III or ACM III (1972), the first snake robot. It would be even easier to move around if one had six legs: four legs have "dynamic stability", six legs have "static stability" (there are always three on the ground that provide stability). Hence a few of the earliest legged robots had six legs, starting with the computer-controlled hexapod robot built in 1972 at the University of Rome by Massimiliano Petternella and Serenella Salinari and with the Masha hexapod robot designed in 1976 at Moscow State University by a team including the Soviet neurophysiologist Victor Gurfinkel, a pupil of Nikolai Bernstein whose book "The Coordination and Regulation of Movement" (1940) had presented a scientific theory of locomotion. Hexapod robots were shown in 1977 by Robert McGhee at Ohio State University (the "Bionic Bug") and in 1983 by Marc Raibert at Carnegie Mellon University (who in 1980 had founded its Leg Lab and who in 1992 would start Boston Dynamics). The first commercial hexapod was Odetics' Odex (1983).

The Japanese public (as well as the workforce) seemed more willing than that of any other country to accept robots. While Japan had started by importing robots from the USA, by 1981 it had many more industrial robots than the USA (14,000 versus 4,200). Despite the widely publicized death of a factory worker named Kenji Urada, the second human killed by a robot (in July 1981), the gap kept widening. (The first human to be killed by a robot was Robert Williams in January 1979 at a Ford plant near Detroit). Japanese scientists were already experimenting with "domestic" robots such as Eiji Nakano's "nursebot" called Melkong at Tohoku University (first presented in 1981) and Susumu Tachi's robotic dog for blind people called Meldog at the University of Tokyo (first presented in 1981). Dainichi Kiko was even manufacturing a waiter robot for restaurants (1983), deployed by restaurants both in Tokyo and in Pasadena. In July 1983 the Seibu chain of department stores became the first to sell robots to the public (Dainichi Kiko models) and introduced robots that could follow customers and carry their purchases. In 1982 the inventor Shunichi Mizuno unveiled a life-size New Monroe, the first of his sexy robots (or "cybots"). Japan is a country that had never seen a clock until the Spanish missionary Francis Xavier brought one as a gift in 1551 (four centuries later it would become the world's main exporter of watches), and a country that remained isolated from the West between 1639 (when the shogunate expelled all foreigners) and 1854 (when the USA forced Japan to sign the treaty of Kanagawa). In between those dates the only automata built in Japan were the karakuri mechanical dolls and puppets, especially the ones used in the Bunraku puppet theater, perhaps descendants of that Spanish clock. The Takayama festival of karakuri that still takes place today was started in 1652.
Nonetheless, Japan adopted robots at lightning speed: first the industrial robots (which are mostly arms), then anthropomorphic robots, then robots for entertainment (culminating in Oriza Hirata's robot theater of the 2000s). Far from demonizing robots, Japan's culture adopted them. The USA had robots in cinema, whereas Japan had them in children's entertainment. Robots appeared in comic books (manga) such as Gajo Sakamoto's "Tanku Tankuro" (1934), Osamu Tezuka's "Tetsuwan Atomu/ Astro Boy" (1951), Mitsuteru Yokoyama's "Tetsujin 28go/ Iron Man No 28" (1956) and "Jaianto Robo/ Giant Robot" (1967). There were even Hiroshi Fujimoto's and Motoo Abiko's cat robot "Doraemon" (1969), Go Nagai's drivable robot "Mazinger Z" (1972), and Akira Toriyama's girl robot "Dokuta Suranpu/ Dr Slump" (1980). There were manga about cyborgs, such as Kazumasa Hirai's and Jiro Kuwata's "Eitoman/ 8 Man" (1963) and Shotaro Ishinomori's "Cyborg 009" (1964). There were robots on television, both in cartoons (anime), such as Yoshiyuki Tomino's "Yusha Raidin/ Brave Raideen" (1975), "Voltes V" (1977) and "Gandamu" (1979), and in special-effects live-action shows (tokusatsu), such as Shotaro Ishinomori's "Kamen Rider" (1971), "Jinzo Ningen Kikaida/ Android Kikaider" (1972), "Robotto Keiji/ Robot Detective" (1973) and "Ganbare Robocon" (1974). Whereas inventors in the USA aimed for a (programmable) hybrid of personal computer and home appliance, such as Joseph Bosworth's RB5X (1982) and Mike Forino's Hubot (1984), Japanese companies launched the concept of robots as toys: Bandai, Takara, Namco, Tomy (the programmable Omnibot 2000 of 1984 that spoke, moved and carried objects), etc. Japanese attitudes towards robots may be influenced by their polytheistic, animistic religion Shinto (as Ichiro Kato said in interviews) or by Buddhist philosophy (as Masahiro Mori argued in his book "The Buddha in the Robot").

Some conceptual arguments helped to reshape the field. The Italian cyberneticist Valentino Braitenberg, in his book "Vehicles" (1984), showed that no intelligence is required for producing "intelligent" behavior: all that is needed is a set of sensors and actuators. As the complexity of the "vehicle" increases, the vehicle seems to display an increasingly intelligent behavior. In 1985 Rodney Brooks, a young Australian-born MIT scientist who had cut his teeth on the "Stanford cart" project with Hans Moravec, presented at a robotics symposium in France a robot nicknamed Allen that used little or no representation of the world. He showed that one can know nothing, and have absolutely no common sense, but still be able to do interesting things if equipped with the appropriate set of sensors and actuators. This was proven in a more convincing manner in 1988 by the hexapod Genghis, driven by a network of 57 finite-state machines. In 1991 Brooks published an influential paper, "Intelligence without representation", that argued against the knowledge-based approach of Shakey and STRIPS, and in 1993 he launched the project Cog, a robot with 21 degrees of freedom and a variety of visual, auditory, vestibular, kinesthetic, and tactile sensors that simulated how a child acquires skills by trial and error.
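Braitenberg's point is easy to verify in simulation. The sketch below is my own minimal reconstruction (all names and parameters are invented, not code from "Vehicles"): two light sensors, two motors, and two wires. Cross-wiring the sensors to the motors makes the vehicle steer toward a light source, which an observer might describe as "seeking" behavior, even though there is no representation or reasoning anywhere in the loop.

```python
import math

def braitenberg_step(x, y, heading, light_x, light_y,
                     cross_wired=True, dt=0.1, base_speed=1.0, gain=2.0):
    """One update of a two-sensor, two-motor Braitenberg vehicle.

    Each sensor reads light intensity (~ 1/distance^2); each motor's
    speed is just a sensor reading scaled by a gain. Cross-wiring
    (left sensor -> right motor) turns the vehicle toward the light;
    straight wiring turns it away. No world model, no planning.
    """
    # Sensor positions: offset to the left and right of the heading.
    offset = 0.5
    lx = x + offset * math.cos(heading + math.pi / 2)
    ly = y + offset * math.sin(heading + math.pi / 2)
    rx = x + offset * math.cos(heading - math.pi / 2)
    ry = y + offset * math.sin(heading - math.pi / 2)

    def intensity(sx, sy):
        d2 = (sx - light_x) ** 2 + (sy - light_y) ** 2
        return 1.0 / (1.0 + d2)

    left_reading, right_reading = intensity(lx, ly), intensity(rx, ry)
    if cross_wired:
        left_motor = base_speed + gain * right_reading
        right_motor = base_speed + gain * left_reading
    else:
        left_motor = base_speed + gain * left_reading
        right_motor = base_speed + gain * right_reading

    # Differential drive: speed is the mean of the two motors,
    # turning rate their difference (wheel separation = 1).
    v = (left_motor + right_motor) / 2.0
    omega = right_motor - left_motor
    return (x + v * math.cos(heading) * dt,
            y + v * math.sin(heading) * dt,
            heading + omega * dt)
```

Iterating this step function from any starting pose makes the cross-wired vehicle home in on the light, exactly the kind of "intelligent" behavior that emerges from a trivially simple sensor-actuator loop.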

Nonetheless, in 1986 Jaime Carbonell (a former student of Roger Schank at Yale but now at Carnegie Mellon University) began work on a planner called Prodigy, the first major improvement over the venerable STRIPS and a ten-year project that would involve students such as Steve Minton and Manuela Veloso. Just like STRIPS, Prodigy was applied to robot navigation. In 1994 Veloso even attempted to build soccer-playing robots.

Upon graduating in 1984, Moravec's student Chuck Thorpe was tasked with launching Carnegie Mellon University's center for autonomous vehicles. He teamed up with Red Whittaker (the brain behind the mechanics) and in 1986 they produced the semi-autonomous NavLab 1 (a Chevrolet van fitted with several computers, a camera and a GPS unit), that drove around campus at a speed of 2 km/h. Whittaker also founded RedZone whose robots helped to clean up the nuclear disasters at Three Mile Island and Chernobyl. The breakthrough came when, in 1988, Dean Pomerleau at Carnegie Mellon University pioneered the use of neural networks for road navigation, i.e. self-driving vehicles, and demonstrated ALVINN (which stands for "Autonomous Land Vehicle in a Neural Network").

In 1991 Shuuji Kajita and Kazuo Tani at the National Institute of Advanced Industrial Science and Technology (AIST) introduced the "linear inverted pendulum" method (LIPM) for planning the steps of a biped robot.
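The appeal of the LIPM is that it reduces balance to a linear equation with a closed-form solution: if the center of mass is held at a constant height z_c above a support foot at position p, its horizontal motion obeys x'' = (g / z_c)(x - p). The following is an illustrative sketch of that trajectory computation (the function name and parameters are mine, not AIST's code):

```python
import math

G = 9.81  # gravity, m/s^2

def lipm_trajectory(x0, v0, foot, z_c=0.8, t=0.4):
    """Closed-form CoM motion of the linear inverted pendulum.

    With the center of mass at constant height z_c, the horizontal
    dynamics reduce to x'' = (g / z_c) * (x - foot), whose solution
    is a combination of cosh and sinh. Returns the CoM position and
    velocity after time t (e.g. the duration of one step).
    """
    w = math.sqrt(G / z_c)   # natural frequency of the pendulum
    dx = x0 - foot           # CoM offset from the support foot
    x = foot + dx * math.cosh(w * t) + (v0 / w) * math.sinh(w * t)
    v = dx * w * math.sinh(w * t) + v0 * math.cosh(w * t)
    return x, v
```

A footstep planner uses such predictions in reverse: it chooses the next foot placement so that the center of mass arrives at the end of each step with the desired position and velocity.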

In 1986 the Japanese firm Honda had already decided to develop a walking robot. In 1993 Honda produced its first humanoid robot (the P1) and in 1996 demonstrated the P2 publicly. This was the biped robot that galvanized the community of roboticists.

It wasn't clear yet what was the best architecture for robots. The 1992 robot competition run by the American Association for Artificial Intelligence (AAAI) was won by a University of Michigan robot called CARMEL (which stands for Computer-Aided Robotics for Maintenance, Emergency, and Life support), developed by Terry Weymouth's students, followed by SRI's robot Flakey (developed by Kurt Konolige's team since 1985): the former's navigation employed a traditional hierarchical architecture whereas the latter adopted elements of Brooks' distributed architecture.

In 1993 Tom Mitchell's team at Carnegie Mellon University (featuring Joseph O'Sullivan, Sebastian Thrun and Reid Simmons) started work on Xavier, that looked like a water heater but whose tiered architecture (integrated with Manuela Veloso's Prodigy planning system in 1995) represented a major advance. Thrun went on to design the museum tour-guide robot Minerva (1998) and Pearl (1998), a "nursebot" for elderly care facilities (a collaboration with the University of Michigan and the University of Pittsburgh), before becoming famous for his self-driving car.

The old methods of footstep planning were revolutionized by the incorporation of Monte Carlo techniques that randomly explore the space of possibilities while keeping track of progress. This led to the proliferation of "probabilistic roadmap methods" (or "sampling-based robot motion planning"), starting with the Randomized Path Planner designed by Jean-Claude Latombe in 1991 at Stanford, later improved by students such as Lydia Kavraki (her Probabilistic Roadmap Planner of 1996) and David Hsu (his Expansive-Spaces Tree planner of 1997). Meanwhile, in 1998 Steven LaValle at Iowa State University came up with the Random Tree Planner, improved in collaboration with Stanford student James Kuffner as the Rapidly-exploring Random Trees (RRT) planner. Sampling-based planners ruled in the 2000s.
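The core of an RRT planner fits in a page: sample a random configuration, extend the tree a small step from its nearest node toward the sample, discard extensions that collide, and stop when the tree reaches the goal. The sketch below is a deliberately minimal 2D illustration of the idea (it checks collisions only at the new point, whereas a real planner validates the whole edge):

```python
import math
import random

def rrt(start, goal, is_free, bounds, step=0.5, goal_tol=0.5,
        max_iters=5000, seed=0):
    """Minimal Rapidly-exploring Random Tree in the plane.

    bounds is ((xmin, xmax), (ymin, ymax)); is_free(point) returns
    True if the point is collision-free. Returns the path from start
    to a node within goal_tol of the goal, or None on failure.
    """
    rng = random.Random(seed)
    (xmin, xmax), (ymin, ymax) = bounds
    nodes = [start]
    parent = {0: None}
    for _ in range(max_iters):
        sample = (rng.uniform(xmin, xmax), rng.uniform(ymin, ymax))
        # Nearest existing tree node to the random sample.
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        nx, ny = nodes[i]
        d = math.dist((nx, ny), sample)
        if d == 0:
            continue
        if d <= step:
            new = sample
        else:  # move a fixed step from the nearest node toward the sample
            new = (nx + step * (sample[0] - nx) / d,
                   ny + step * (sample[1] - ny) / d)
        if not is_free(new):
            continue
        nodes.append(new)
        parent[len(nodes) - 1] = i
        if math.dist(new, goal) <= goal_tol:
            # Walk back up the tree to recover the path.
            path, k = [], len(nodes) - 1
            while k is not None:
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None
```

Because the tree expands preferentially into unexplored regions of the space, this scheme finds paths quickly even in high-dimensional configuration spaces, which is why sampling-based planners displaced exact methods.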

In 1997 NASA's Mars Pathfinder deployed on Mars the first roving robot, Sojourner. While not a biped robot at all, it injected confidence in the field of autonomous vehicles.

Others, building on Rodney Brooks' theories and Valentino Braitenberg's theories of embodied intelligence (the brain is embedded in a body, and the body is embedded in the environment), aimed at building "developmental robots", robots that grow up just like humans, acquiring knowledge as they interact with the world; notably: SAIL, built since 1998 by Juyang Weng at Michigan State University, and Darwin V, built in 1998 at UC San Diego around the neuroscientist Gerald Edelman's theory of neuronal group selection. Meanwhile, Dario Floreano at AREA Science Park in Italy, Francesco Mondada at EPFL in Switzerland and Stefano Nolfi at the National Research Council of Italy had established the field of evolutionary robotics. In 2000 Floreano and Nolfi published the book that gave the field its name.


It took almost 15 years of research at Honda, but in 2000 Toru Takenaka (a student of Masahiro Mori, the first scientist to predict that robots would some day become conscious) could at last unveil the humanoid robot ASIMO (Advanced Step in Innovative Mobility). At the same time, Yoshihiro Kuroki's team at Sony unveiled the third prototype of the Sony Dream Robot project, SDR-3X, which evolved into the SDR-4X of 2002 and into the SDR-4XII of 2003, later renamed QRIO (pronounced "curio"). And Hirochika Inoue's team at Tokyo University (that featured James Kuffner and Masayuki Inaba) developed the humanoid robots H6 (1999) and H7 (2000). In 2001 Japan launched the Humanoid Robotics Project (HRP) under the direction of the same Hirochika Inoue. For example, the remote-controlled HRP-1S was designed to operate with construction workers.

Cynthia Breazeal's emotional robot Kismet (1999) at MIT and Hiroshi Ishiguro's Actroid (2003) at Osaka University in Japan were little more than interesting psychological experiments. NEC's PaPeRo (2001) and Mark Tilden's biomorphic robot Robosapien (2004) were toys and probably discovered the largest market for robots: children. Aldebaran Robotics' cute Nao was in fact conceived for education.

Meanwhile, progress in footstep planning was coming from James Kuffner, first at the University of Tokyo ("Footstep Planning Among Obstacles for Biped Robots", 2001) and then at Carnegie Mellon University with Joel Chestnutt ("A Three-tiered Planner for Biped Navigation over Large Distances", 2004). Having moved to Carnegie Mellon University in 2002, Kuffner remained the link with the University of Tokyo, of which the Japanese Institute of Advanced Industrial Science and Technology (AIST) was an emanation.

In 2003 Klaus Loeffler demonstrated the robot Johnnie built at the Technical University of Munich in Germany. In 2005 Jun-Ho Oh showed Hubo at the Korea Advanced Institute of Science and Technology (KAIST). Quadrupeds such as Boston Dynamics' BigDog of 2005 were far less interesting because four legs make them more similar to four-wheeled vehicles, which we've had since the first wagon was built in prehistory.

In 2005 Toyota launched the Partner Robots project. The idea was to build robots for medical and elderly care. Unfortunately, they got a bad reputation because they debuted playing drums and trumpets at the 2005 World Expo, i.e. they were just toys; but in 2007 the next humanoid of the series, Robina (Robot as Intelligent Assistant), guided visitors through a museum and was being designed to become a medical nurse, a "nursebot". The first in-home trials took place in 2011, assisting people with limb disabilities. In 2012 Toyota renamed the project Human Support Robot (HSR) to produce robots that can assist people in their everyday activities.

This was the era of the first "domestic" robots, like Luna, conceived in 2011 by RoboDynamics in Santa Monica, and Jibo, designed again by Cynthia Breazeal.

Osamu Hasegawa's robot that learned functions it was not programmed to do (2011) and Rodney Brooks' hand programmable robot "Baxter" (2012), i.e. the first "collaborative robots" or "cobots", look good on video but still look as primitive as Shakey in person (in 2018 Brooks' company Rethink Robotics stopped operations). In 2005 the driver-less car Stanley developed by Sebastian Thrun at Stanford won DARPA's Grand Challenge, but that was in the middle of the Nevada desert.

Lola Canamero's Nao (2010) at the University of Hertfordshire in Britain, a robot that can show its emotions, was followed by David Hanson's Sophia (2015) in Hong Kong, a robot that can display more than 60 facial expressions and that in 2017 was granted Saudi Arabian citizenship.

A branch of robotics is preoccupied with the self-reconfigurable modular robot, a concept introduced by Toshio Fukuda in Japan with his CEBOT (short for "cellular robot") that was capable of reconfiguring itself ("Self Organizing Robots Based On Cell Structures", 1988). The leadership remained in Japan (for example, Satoshi Murata's modular robotic system M-TRAN of 1999) until Daniela Rus at MIT, inspired by the art of origami and a math theory by Erik Demaine, invented a robot that folds automatically ("Programmable Matter by Folding", 2010) which led to the self-configuring "M-blocks". Rus is also working on the Robot Compiler: someday we will be able to order a robot for a specific function and the Robot Compiler will 3D-print a custom robot for us.

Manufacturing plants have certainly progressed dramatically and can build, at a fraction of the cost, the tiny sensors and assorted devices that used to be unfeasible and that can make a huge difference in the movements of the robot; but there has been little conceptual breakthrough since Richard Fikes' and Nils Nilsson's STRIPS of 1969 (the "problem solver" used by Shakey). What is truly new are the techniques of advanced manufacturing and the speed of GPUs.

In fact, nothing puts the progress in A.I. (or lack thereof) better in perspective than the progress in robots. The first car was built in 1886. 47 years later (1933) there were 25 million cars in the USA, probably 40 million in the world, and those cars were much better than the first one. The first airplane took off in 1903. 47 years later (1950) 31 million people flew in airplanes, and those airplanes were much better than the first one. The first public radio broadcast took place in 1906. 47 years later, in 1953, there were more than 100 million radios in the world. The first television set was built in 1927. 47 years later (1974) 95% of households in the USA owned a TV set, and mostly a color TV set. The first commercial computer was delivered in 1951. 47 years later (1998) more than 40 million households in the USA had a computer, and those personal computers were more powerful than the first computer. The first (mobile) general-purpose robot was demonstrated in 1969 (Shakey). In 2016 (47 years later) how many people own a general-purpose robot? How many robots have you seen today in the streets or in your office?

In June 2016 the MIT Technology Review had an article about robots that announced: "They're invading consumer spaces including retail stores, hotels, and sidewalks". Look around you: how many robots do you see in the grocery shop and how many robots do you see taking a stroll on the sidewalk? I'll take a wild guess: zero. That's the great robot invasion of 2016, which competes with Orson Welles' famous Martian invasion of 1938 (total number of Martians in the streets of the USA: zero).

Most of the robots that accounted for the $28 billion market of 2015 (Tractica's estimate) were industrial robots, robots for the assembly line, not intelligent at all. Then there are more than ten million Roombas (the robotic vacuum cleaner introduced by Rodney Brooks' iRobot in 2002), but those only vacuum floors. Those robots will never march in the streets to conquer Washington or Beijing. They are as intelligent as your washing machine, and not much more mobile.

Willow Garage, founded in 2006 by early Google architect Scott Hassan, has probably been the most influential laboratory of the last decade. They popularized the Robot Operating System (ROS), developed at Stanford in 2007, and they built the PR2 robot in 2010. ROS and PR2 have created a vast open-source community of robot developers that has greatly increased the speed at which a new robot can be designed. Willow Garage shut down in 2014, and its scientists founded a plethora of startups in the San Francisco Bay Area committed to developing "personal" robots.

The field of "genetic algorithms", or, better, evolutionary computing, has witnessed progress that mirrors the progress in neural-network algorithms; notably, in 2001 Nikolaus Hansen introduced the evolution strategy called "Covariance Matrix Adaptation" (CMA) for numerical optimization of non-linear problems. This has been widely applied to robotic applications and certainly helped better calibrate the movements of robots.
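Hansen's CMA involves updating a full covariance matrix and is beyond a short sketch, but the family of methods it belongs to can be illustrated with the much older and simpler (1+1) evolution strategy with the 1/5th success rule (the constants below are illustrative): mutate the current solution with Gaussian noise, keep improvements, and adapt the step size from the success rate. CMA-ES additionally learns the shape of the mutation distribution, so that mutations align with the local geometry of the objective, e.g. a robot's motion-calibration error.

```python
import random

def one_plus_one_es(f, x0, sigma=1.0, iters=500, seed=0):
    """Minimal (1+1) evolution strategy with a 1/5th-success-rule flavor.

    Mutates the current solution with isotropic Gaussian noise and keeps
    the child only if it improves f (minimization). The step size sigma
    expands after successful mutations and shrinks after failures. Full
    CMA-ES additionally adapts a covariance matrix; this sketch keeps
    the mutation isotropic for brevity.
    """
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    for _ in range(iters):
        child = [xi + sigma * rng.gauss(0, 1) for xi in x]
        fc = f(child)
        if fc < fx:
            x, fx = child, fc
            sigma *= 1.22   # expand after a success
        else:
            sigma *= 0.95   # shrink after a failure
    return x, fx
```

For example, minimizing the squared error of a simulated gait with respect to a few joint-controller parameters is exactly the kind of non-linear, gradient-free calibration problem to which CMA has been applied.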

There are more than 3,000 DaVinci robots in the hospitals of the world, and they have performed about two million surgeries since 2000, the year when Intuitive Surgical of Sunnyvale was allowed to start deploying it. But DaVinci is only an assistant: it is physically operated by a human surgeon. In 2016, however, Peter Kim of the Children's National Health System in Washington unveiled a robot surgeon, the Smart Tissue Autonomous Robot (STAR), capable of performing an operation largely by itself (although it took about ten times longer than a human surgeon). In 2015 Google and Johnson & Johnson formed Verb Surgical to build robot surgeons.

The most sophisticated robots are actually airplanes. People rarely think of an airplane as a robot, but that's what it is: it mostly flies itself, from take-off to landing. In 2014 the world's airplanes carried 838.4 million passengers on more than 8.5 million flights. In 2015 a survey of Boeing 777 pilots reported that, in a typical flight, they spent just seven minutes manually piloting the airplane; and pilots operating Airbus planes spent half that time.

Therefore robots as "co-pilots" (as augmentation, not replacement, of human intelligence) have been very successful.

The most popular robot of 2016 is, instead, Google's self-driving car (designed by Sebastian Thrun), but this technology is at least 30 years old: Ernst Dickmanns demonstrated the robot car "VaMoRs" in 1986 and in October 1994 his modified Mercedes drove on the Autoroute 1 near Paris in heavy traffic at speeds up to 130 km/h. In 2012 Google's co-founder Sergey Brin said that Google would have autonomous cars available for the general public within five years, i.e. by 2017. This is what happens when you think you know the future while in reality you don't even know the past. (Incidentally, Google engineers still use the "miles" of the ancient imperial system instead of the kilometers of the metric system, a fact that hardly qualifies as "progress" to me).
