Intelligence is not Artificial

Why the Singularity is not Coming any Time Soon And Other Meditations on the Post-Human Condition and the Future of Intelligence

by piero scaruffi

(These are excerpts from my book "Intelligence is not Artificial")

The Robots are Coming - A Brief History of A.I. / Part 9

The story of robots is similar to the story of neural networks. Collapsing prices and increased speeds have enabled a generation of robots based on relatively old theory.

The first robotic hand with three fingers (each with multiple joints) was designed in 1979 by Tokuji Okada at Niigata University in Japan. In 1981 the French surgeon Raoul Tubiana published a study ("The Architecture and Function of the Hand", 1981) showing that the human hand has 22 degrees of freedom that allow it to grasp objects. In 1982 Kenneth Salisbury at Stanford University built a robotic hand, a collaboration with the Jet Propulsion Laboratory (JPL), and proved the minimum number of degrees of freedom required for grasping an object: nine, which he implemented as three fingers with nine joints ("Kinematic and Force Analysis of Articulated Mechanical Hands", 1982). That started a race for more and more degrees of freedom. The first robotic hand actually capable of grasping an object was developed in 1984 at the University of Utah by Stephen Jacobsen (a professor and entrepreneur whose firm was building mechanized dinosaurs for theme parks), a joint project with MIT.

The robotic hand designed at the National Taiwan University (NTU) by Han-Pang Huang and Li-ren Lin in 1995 had 17 degrees of freedom. In 1997 the DIST hand designed at the University of Genova by Andrea Caffaz had 16 degrees of freedom. The Robonaut hand for the International Space Station, designed in 1999 by Chris Lovchik of NASA in Texas under the supervision of Myron Diftler of Lockheed (who then became the leader of the project at NASA), had five fingers with 14 degrees of freedom. In 2011 Robonaut 2 (R2) would become the first human-like robot to take up permanent residence on the International Space Station.

If you polled ordinary people about what makes a robot a robot, most of them would probably reply that it has to walk like us. Walking is not trivial, so much so that it takes months for babies to learn it. Footstep planning for humanoid biped robots has become a separate discipline. For a long time the fundamental technique was the "semi-inverse method", or, better, the zero-moment-point (ZMP) method, invented by Miomir Vukobratovic and Davor Juricic in Serbia in 1969. In 1966 Ichiro Kato's laboratory at Waseda University had already started work on the WL series of walking machines, which in 1973, incorporating the ZMP method, provided the foundation for the Wabot (WAseda roBOT), the first full-scale anthropomorphic walking robot. Wabot-2 of 1984 had 10 fingers and two feet and could play musical scores on an electronic keyboard.

Some conceptual arguments helped to reshape the field. The Italian cyberneticist Valentino Braitenberg, in his book "Vehicles" (1984), showed that no intelligence is required to produce "intelligent" behavior: all that is needed is a set of sensors and actuators. As the complexity of the "vehicle" increases, the vehicle seems to display increasingly intelligent behavior. Starting in about 1986, Rodney Brooks at MIT began to design robots that use little or no representation of the world. One can know nothing, and have absolutely no common sense, but still be able to do interesting things if equipped with the appropriate set of sensors and actuators. In 1991 Brooks published an influential paper, "Intelligence without representation", which argued against the knowledge-based approach of Shakey and STRIPS.
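
To appreciate Braitenberg's point, here is a toy Python sketch (mine, not Braitenberg's or Brooks'; the constants and helper names are invented for illustration) of one of his "vehicles": two light sensors cross-wired to two wheel motors. There is no model of the world and no reasoning whatsoever, yet the vehicle reliably steers toward a light source and would strike an observer as "seeking" it.

    # A minimal Braitenberg-style vehicle: two light sensors cross-wired to two wheels.
    import math

    def sense(x, y, heading, offset, light):
        """Light intensity at a sensor mounted at +/- offset radians from the heading."""
        sx = x + math.cos(heading + offset)
        sy = y + math.sin(heading + offset)
        d2 = (sx - light[0]) ** 2 + (sy - light[1]) ** 2
        return 1.0 / (1.0 + d2)

    def step(x, y, heading, light, dt=0.1, gain=2.0, base=0.2, width=0.5):
        left = sense(x, y, heading, +0.5, light)
        right = sense(x, y, heading, -0.5, light)
        # cross-coupling: the left sensor drives the right wheel and vice versa,
        # so the vehicle turns toward whichever side sees more light
        v_left = base + gain * right
        v_right = base + gain * left
        v = (v_left + v_right) / 2
        omega = (v_right - v_left) / width
        return (x + v * math.cos(heading) * dt,
                y + v * math.sin(heading) * dt,
                heading + omega * dt)

    x, y, heading = 0.0, 0.0, 0.0
    for _ in range(200):
        x, y, heading = step(x, y, heading, light=(5.0, 3.0))
    print(f"final position: ({x:.2f}, {y:.2f})")   # the vehicle heads for the light at (5, 3)

Wire the sensors to the motors without crossing them and the same vehicle turns away from the light instead: the "behavior" lives entirely in the wiring, not in any representation of the world.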

Nonetheless, in 1986 Jaime Carbonell (a former student of Roger Schank at Yale, by then at Carnegie Mellon University) began work on a planner called Prodigy, the first major improvement over the venerable STRIPS and a ten-year project that would involve students such as Steve Minton and Manuela Veloso. Just like STRIPS, Prodigy was applied to robot navigation. In 1994 Veloso even attempted to build soccer-playing robots.

In 1988 Dean Pomerleau at Carnegie Mellon University pioneered the use of neural networks for road navigation, i.e. self-driving vehicles, and demonstrated ALVINN (which stands for "Autonomous Land Vehicle in a Neural Network").

In 1991 Shuuji Kajita and Kazuo Tani at the National Institute of Advanced Industrial Science and Technology (AIST) introduced the "linear inverted pendulum" method (LIPM) for planning the steps of a biped robot.
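
The idea, in a nutshell: if the robot keeps its center of mass (CoM) at a constant height z_c, the horizontal motion of the CoM obeys the simple linear equation x'' = (g / z_c)(x - p), where p is the point of support (the foot). Here is a toy Python simulation (the numbers and the foot-placement rule are illustrative, not taken from Kajita's papers) showing the CoM toppling forward over the stance foot and the robot catching itself by placing the next foot at the so-called capture point:

    # A toy Linear Inverted Pendulum Model (LIPM): the CoM obeys x'' = (g/z_c)(x - p).
    import math

    g, z_c, dt = 9.81, 0.8, 0.005          # gravity, CoM height (m), integration step (s)
    omega = math.sqrt(g / z_c)

    def simulate_phase(x, v, p, duration):
        """Integrate the CoM forward while the support point p stays fixed."""
        t = 0.0
        while t < duration:
            a = (g / z_c) * (x - p)        # the CoM accelerates away from the support point
            v += a * dt
            x += v * dt
            t += dt
        return x, v

    x, v = 0.0, 0.3                        # initial CoM position (m) and forward speed (m/s)
    foot = 0.0
    for phase in range(4):
        x, v = simulate_phase(x, v, foot, duration=0.5)
        foot = x + v / omega               # "capture point": placing the foot here halts the CoM
        print(f"after phase {phase}: CoM at {x:.2f} m, speed {v:.2f} m/s, next foot at {foot:.2f} m")

Running it shows the forward speed dying out once the foot lands at the capture point: the whole art of footstep planning is deciding where (and when) to place that next foot.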

In 1986 the Japanese firm Honda had already decided to develop a walking robot. In 1993 Honda produced its first humanoid robot (the P1) and in 1996 demonstrated the P2 publicly.

In 1993 Tom Mitchell's team at Carnegie Mellon University (featuring Joseph O'Sullivan, Sebastian Thrun and Reid Simmons) started work on Xavier, a robot that looked like a water heater but whose tiered architecture (integrated with Manuela Veloso's Prodigy planning system in 1995) represented a major advance. Thrun went on to design the museum tour-guide robot Minerva (1998) and Pearl (1998), a "nursebot" for elderly care facilities (a collaboration with the University of Michigan and the University of Pittsburgh), before becoming famous for his self-driving car.

The old methods of footstep planning were revolutionized by the incorporation of Monte Carlo techniques that randomly explore the space of possibilities while keeping track of progress. This led to the proliferation of "probabilistic roadmap methods" (or "sampling-based robot motion planning"): the Randomized Path Planner, designed by Jean-Claude Latombe at Stanford in 1991, was later improved by students such as Lydia Kavraki (her Probabilistic Roadmap Planner of 1996) and David Hsu (his Expansive-Spaces Tree planner of 1997). Meanwhile, in 1998 Steven LaValle at Iowa State University came up with a randomized tree planner, refined in collaboration with Stanford student James Kuffner into the Rapidly-exploring Random Trees (RRT) planner. Sampling-based planners ruled in the 2000s.
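
For the curious reader, here is a minimal Python sketch of the RRT idea (the toy world, the single circular obstacle and all the parameters are mine, purely illustrative): repeatedly sample a random point, find the nearest node already in the tree, take a small step toward the sample, and keep the new node only if it does not collide; stop when the tree reaches the goal.

    # A minimal Rapidly-exploring Random Tree (RRT) in a 2D unit square with one obstacle.
    import math, random

    START, GOAL = (0.1, 0.1), (0.9, 0.9)
    OBSTACLE, RADIUS = (0.5, 0.5), 0.2      # one circular obstacle in the middle
    STEP = 0.05

    def collision_free(p):
        return math.dist(p, OBSTACLE) > RADIUS

    def steer(src, dst, step=STEP):
        """Move from src toward dst by at most `step`."""
        d = math.dist(src, dst)
        if d <= step:
            return dst
        t = step / d
        return (src[0] + t * (dst[0] - src[0]), src[1] + t * (dst[1] - src[1]))

    def rrt(max_iters=5000, goal_tol=0.05, seed=1):
        random.seed(seed)
        parent = {START: None}                           # tree stored as child -> parent
        for _ in range(max_iters):
            sample = (random.random(), random.random())  # random point in the unit square
            nearest = min(parent, key=lambda n: math.dist(n, sample))
            new = steer(nearest, sample)
            if not collision_free(new):
                continue
            parent[new] = nearest                        # grow the tree toward the sample
            if math.dist(new, GOAL) < goal_tol:          # close enough: read the path back
                path, node = [], new
                while node is not None:
                    path.append(node)
                    node = parent[node]
                return path[::-1]
        return None

    path = rrt()
    print(f"found a path with {len(path)} waypoints" if path else "no path found")

The random sampling is what makes these planners scale: the tree quickly fills the free space without ever having to model the obstacles explicitly, which is why the same recipe works in the high-dimensional configuration spaces of legged robots.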

In 1997 NASA's Mars Pathfinder deployed the first roving robot, Sojourner, on Mars. While not a biped robot at all, it injected confidence into the field of autonomous vehicles.

It took almost 15 years of research at Honda, but in 2000 Toru Takenaka (a student of Masahiro Mori, the first scientist to predict that robots would some day become conscious) could at last unveil the humanoid robot ASIMO (Advanced Step in Innovative Mobility). At the same time, Yoshihiro Kuroki's team at Sony unveiled QRIO (pronounced "curio"), and Hirochika Inoue's team at Tokyo University (which featured James Kuffner and Masayuki Inaba) developed the humanoid robots H6 and H7. In 2001 Japan launched the Humanoid Robotics Project (HRP) under the direction of the same Hirochika Inoue. For example, the remote-controlled HRP-1S was designed to operate alongside construction workers.

Cynthia Breazeal's emotional robot Kismet (1999) at MIT and Hiroshi Ishiguro's Actroid (2003) at Osaka University in Japan were little more than interesting psychological experiments. NEC's PaPeRo (2001) and Mark Tilden's biomorphic robot Robosapien (2004) were toys and probably discovered the largest market for robots: children. Aldebaran Robotics' cute Nao was in fact conceived for education.

Meanwhile, progress in footstep planning was coming from James Kuffner, first at the University of Tokyo ("Footstep Planning Among Obstacles for Biped Robots", 2001) and then, after moving to Carnegie Mellon University in 2002, with Joel Chestnutt ("A Three-tiered Planner for Biped Navigation over Large Distances", 2004). Kuffner remained the link with the University of Tokyo, of which the National Institute of Advanced Industrial Science and Technology (AIST) was an emanation.

In 2003 Klaus Loeffler demonstrated the robot Johnnie built at the Technical University of Munich in Germany. In 2005 Jun-Ho Oh showed Hubo at the Korea Advanced Institute of Science and Technology (KAIST). Quadrupeds such as Boston Dynamics' BigDog of 2005 were far less interesting, because four legs make them more similar to four-wheeled vehicles, which we've had since the first wagon was built in prehistory.

In 2005 Toyota launched the Partner Robots project. The idea was to build robots for medical and elderly care. Unfortunately, they got a bad reputation because they debuted playing drums and trumpets at the 2005 World Expo, i.e. they were just toys; but in 2007 the next humanoid of the series, Robina (Robot as Intelligent Assistant), guided visitors through a museum and was being designed to become a medical nurse, a "nursebot". The first in-home trials took place in 2011, assisting people with limb disabilities. In 2012 Toyota renamed the project Human Support Robot (HSR) to produce robots that can assist people in their everyday activities.

This was the era of the first "domestic" robots, like Luna, conceived in 2011 by RoboDynamics in Santa Monica, and Jibo, again designed by Cynthia Breazeal.

Osamu Hasegawa's robot that learned functions it was not programmed to do (2011) and Rodney Brooks' hand-programmable robot "Baxter" (2012) look good on video but in person still seem as primitive as Shakey. In 2005 the driverless car Stanley, developed by Sebastian Thrun at Stanford, won DARPA's Grand Challenge, but that was in the middle of the Nevada desert.

Lola Canamero's Nao (2010) at the University of Hertfordshire in Britain, a robot that can show its emotions, was followed by David Hanson's Sophia (2015) in Hong Kong, a robot that can display more than 60 facial expressions and that in 2017 was granted Saudi Arabian citizenship.

A branch of robotics is preoccupied with the self-reconfigurable modular robot, a concept introduced by Toshio Fukuda in Japan with his CEBOT (short for "cellular robot"), which was capable of reconfiguring itself ("Self Organizing Robots Based On Cell Structures", 1988). The leadership remained in Japan (for example, Satoshi Murata's modular robotic system M-TRAN of 1999) until Daniela Rus at MIT, inspired by the art of origami and a math theory by Erik Demaine, invented a robot that folds automatically ("Programmable Matter by Folding", 2010), which led to the self-reconfiguring "M-blocks". Rus is also working on the Robot Compiler: someday we will be able to order a robot for a specific function and the Robot Compiler will 3D-print a custom robot for us.

Manufacturing plants have certainly progressed dramatically and can build, at a fraction of the cost, the tiny sensors and assorted devices that used to be unfeasible and that can make a huge difference in a robot's movements; but there has been little conceptual breakthrough since Richard Fikes' and Nils Nilsson's STRIPS of 1969 (the "problem solver" used by Shakey). What is truly new are the techniques of advanced manufacturing and the speed of GPUs.

In fact, nothing puts the progress in A.I. (or lack thereof) better in perspective than the progress in robots. The first car was built in 1886. 47 years later (1933) there were 25 million cars in the USA, probably 40 million in the world, and those cars were much better than the first one. The first airplane took off in 1903. 47 years later (1950) 31 million people flew in airplanes, and those airplanes were much better than the first one. The first public radio broadcast took place in 1906. 47 years later, in 1953, there were more than 100 million radios in the world. The first television set was built in 1927. 47 years later (1974) 95% of households in the USA owned a TV set, and mostly a color TV set. The first commercial computer was delivered in 1951. 47 years later (1998) more than 40 million households in the USA had a computer, and those personal computers were more powerful than the first computer. The first (mobile) general-purpose robot was demonstrated in 1969 (Shakey). In 2016 (47 years later) how many people own a general-purpose robot? How many robots have you seen today in the streets or in your office?

In June 2016 the MIT Technology Review had an article about robots that announced: "They're invading consumer spaces including retail stores, hotels, and sidewalks". Look around you: how many robots do you see in the grocery shop and how many robots do you see taking a stroll on the sidewalk? I'll take a wild guess: zero. That's the great robot invasion of 2016, which competes with Orson Welles' famous Martian invasion of 1938 (total number of Martians in the streets of the USA: zero).

Most of the robots that accounted for the $28 billion market of 2015 (Tractica's estimate) were industrial robots, robots for the assembly line, not intelligent at all. Then there are the more than ten million Roombas (the home vacuuming robot introduced by Rodney Brooks' iRobot in 2002), but those only vacuum floors. Those robots will never march in the streets to conquer Washington or Beijing. They are as intelligent as your washing machine, and not much more mobile.

Willow Garage, founded in 2006 by early Google architect Scott Hassan, has probably been the most influential laboratory of the last decade. It popularized the Robot Operating System (ROS), originally developed at Stanford in 2007, and built the PR2 robot in 2010. ROS and the PR2 have created a vast open-source community of robot developers that has greatly increased the speed at which a new robot can be designed. Willow Garage shut down in 2014, and its scientists founded a plethora of startups in the San Francisco Bay Area committed to developing "personal" robots.

The field of "genetic algorithms", or, better, evolutionary computing, has witnessed progress that mirrors the progress in neural-network algorithms; notably, in 2001 Nikolaus Hansen introduced the evolution strategy called "Covariance Matrix Adaptation" (CMA) for the numerical optimization of non-linear problems. It has been widely applied in robotics and has certainly helped calibrate the movements of robots.
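
To give an idea of how an evolution strategy tunes, say, the parameters of a robot's gait, here is a drastically simplified Python sketch (the fitness function and all constants are invented, and Hansen's actual CMA-ES also adapts the step size and uses "evolution paths", which are omitted here): sample candidate parameter vectors from a Gaussian, keep the best half, move the mean toward them, and stretch the sampling covariance along the directions that worked.

    # A toy evolution strategy with a rank-mu style covariance update (not full CMA-ES).
    import numpy as np

    def gait_badness(params):
        """Stand-in fitness: pretend the best gait parameters are all 0.5."""
        return np.sum((params - 0.5) ** 2)

    def simplified_es(f, dim=5, popsize=20, iters=100, sigma=0.3, seed=0):
        rng = np.random.default_rng(seed)
        mean = rng.uniform(0.0, 1.0, dim)      # current guess of the parameter vector
        C = np.eye(dim)                        # covariance of the search distribution
        mu = popsize // 2                      # number of parents kept each generation
        weights = np.log(mu + 0.5) - np.log(np.arange(1, mu + 1))
        weights /= weights.sum()
        for _ in range(iters):
            A = np.linalg.cholesky(C + 1e-10 * np.eye(dim))
            steps = rng.standard_normal((popsize, dim)) @ A.T
            candidates = mean + sigma * steps
            order = np.argsort([f(c) for c in candidates])
            best = steps[order[:mu]]
            mean = mean + sigma * (weights @ best)     # move toward the best samples
            # stretch the sampling distribution along the directions that succeeded
            C = 0.8 * C + 0.2 * sum(w * np.outer(s, s) for w, s in zip(weights, best))
        return mean

    print(np.round(simplified_es(gait_badness), 2))    # should approach [0.5 0.5 0.5 0.5 0.5]

Nothing in the loop needs gradients or a model of the robot, which is exactly why such strategies are handy for calibrating physical machines: you only need to measure how good each candidate setting is.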

There are more than 3,000 DaVinci robots in the hospitals of the world, and they have performed about two million surgeries since 2000, the year when Intuitive Surgical of Sunnyvale was allowed to start deploying it. But DaVinci is only an assistant: it is physically operated by a human surgeon. In 2016, however, Peter Kim of the Children's National Health System in Washington unveiled a robot surgeon, the Smart Tissue Autonomous Robot (STAR), capable of performing an operation largely by itself (although it took about ten times longer than a human surgeon). In 2015 Google and Johnson & Johnson formed Verb Surgical to build robot surgeons.

The most sophisticated robots are actually airplanes. People rarely think of an airplane as a robot, but that's what it is: it mostly flies itself, from take-off to landing. In 2014 the world's airplanes carried 838.4 million passengers on more than 8.5 million flights. In 2015 a survey of Boeing 777 pilots reported that, in a typical flight, they spent just seven minutes manually piloting the airplane; and pilots operating Airbus planes spent half that time.

Therefore robots as "co-pilots" (as augmentation, not replacement, of human intelligence) have been very successful.

The most popular robot of 2016 is, instead, Google's self-driving car (designed by Sebastian Thrun), but this technology is at least 30 years old: Ernst Dickmanns demonstrated the robot car "VaMoRs" in 1986, and in October 1994 his modified Mercedes drove the Autoroute 1 near Paris in heavy traffic at speeds of up to 130 km/h. In 2012 Google's co-founder Sergey Brin said that Google would have autonomous cars available for the general public within five years, i.e. by 2017. This is what happens when you think you know the future while in reality you don't even know the past. (Incidentally, Google engineers still use the "miles" of the ancient imperial system instead of the kilometers of the metric system, a fact that hardly qualifies as "progress" to me).
