Robots That Jump

Robot Bodies Needed Before Robot Minds

Robots That Jump – Historical Oct 15-17, 2003

Friday, October 17, 2003
The Piltdown PDA fallacy and Robots That Jump
In the early part of the 20th century, most physical anthropologists believed a version of human evolution that could be called “man-ape” rather than “ape-man.” In this “Piltdown Man” theory, the first stage of human evolution was growth of the brain, until a near-human brain existed in an otherwise unchanged apelike body. Because of the advanced brain, the man-apes began using tools, and their bodies slowly changed to the modern form.

In other words, the mind came first, the body second. The “proof” was Piltdown Man – a fossil found in England in 1912 that appeared to show a large-brained ancestor with an apelike jaw.

Current knowledge of our ancestors has blown this theory out of the water. Piltdown Man was exposed as a forgery in 1953 – somebody (Sir Arthur Conan Doyle?) carefully matched an orangutan jaw with a human skull and salted the fossil dig. In retrospect, the forgery is obvious – there are signs of alteration all over the hoax fossil. However, scientists at the time ignored this, so wedded were they to the idea of brain first, body second.

A similar line of thinking, mostly in the U.S., pervades thinking about robots. U.S. research has always emphasized building an A.I. – an “artificial intelligence” distinct from a body interacting with its environment. In the robotics field, work at MIT and elsewhere has traditionally focused on building a robot brain – the body will follow.

With the rise of humanoid robots in Asia, the true scope of our “Piltdown blindness” in the U.S. is becoming apparent. Every day, Japanese and other Asian robots in humanoid and animal form become more capable, flexible, and dexterous. And every day, U.S. media and computer programmers discount this work as unimportant.

With some notable exceptions, the U.S. computer industry thinks in pure Piltdown mode. Recent articles in U.S. tech publications discount Honda’s Asimo – after all, it can’t think, can it? One article in Embedded Systems magazine shows how even programmers working on embedded systems (which frequently process environmental data) still cannot see any relevance in the existence of the Asimo.

This “Piltdown blindness” is particularly glaring, since these programmers work with chips that interact with the environment, and should understand robotics better than, say, a Microsoft programmer or web developer. Apparently not. All through the article you can sense the genuine confusion of the participants. They don’t know what to say.

Of course, part of it is that these programmers are “out of the loop” on advanced robotics. They aren’t leading the trend; they’re just finding out about it three years after Asimo debuted.

The last part of the Embedded article mentioned above is particularly interesting. After stumbling around trying to figure out the “meaning” of the Asimo, the participants gratefully plunge back into the familiar territory of cyberspace. One speaker discusses “smart PDAs” (sort of like a Piltdown man with his limbs cut off). Other speakers talk about robotic devices for the disabled – more noble, but certainly not “more achievable” than robots. For a robotic limb or hand to work for humans, it first has to work for robots. Robots are not the hard case.

In reality, the problem is that the US is working with an outdated “evolutionary theory” of robotics. To many US researchers, the idea of a highly capable, environmentally sensitive, and dexterous robot body seems like putting the cart before the horse. With no brain, the robot body is useless. Following Piltdown thinking, we need “smart software” before we even begin to consider robots.

But the only example we have – human evolution – does not support a Piltdown-style approach to robotics. Of course, robotics and human evolution do not have to evolve the same way, but the body-first people have one data point, the brain-first people have zero.

I suggest that the focus on brains before brawn even applies to those roboticists trying to emulate human emotions – or rather, the facial expressions humans use to express emotions. These systems are like the “living heads” cut from their bodies in old monster movies – they’re trapped and can’t do much. Come on, Kismet was working years ago – why the continued effort? A robot doesn’t have to smile to pick things up off the floor. If you have a maid service cleaning your house, do you expect big smiles from the human workers?

Finally, I should note that the Asian robotics researchers themselves constantly downplay their accomplishments in the Robots That Jump arena. I don’t think this is simply cultural. A good example was the recent CEATEC 2003 (Combined Exhibition of Advanced Technologies) tech fair, described in a recent article in New Scientist. Visitors to the convention were greeted by two robots that can jump – the HOAP-2 by Fujitsu, and the Morph 3 by the Chiba Institute of Technology in Japan. These systems are capable because they have a huge number of sensors. The Morph 3 can actually do backflips! That is because it has 138 pressure sensors, 30 onboard motors and 14 computer processors – a sensitive, dexterous robot body packed into an 18-inch-tall system. However, the researchers repeatedly caution that more work is needed.

But I wonder whether a HOAP-2 or Morph 3 could even be manufactured in the U.S.? My point: the reason these systems can execute sumo stomps and karate moves is not an advanced brain – it is a highly capable and sensory-rich body, the result of extensive research and development. Any work on the robot brain can use this as a starting point.

Robot bodies are coming fast, not slow. Recall that just a couple of years ago there were no walking robots except the Honda P3, and none that could jump. Now we’ve got jumpers and back-flippers. Sensor counts are doubling every year (a sort of “Moore’s Law” for sensation), and robot movements are ever more natural and environmentally relevant. Robots that jump are on the fast track. We’re just ignoring the oncoming bullet train over in the U.S.
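To make the “Moore’s Law for sensation” claim concrete, here is a back-of-the-envelope sketch. Assuming annual doubling from the Morph 3’s 138 pressure sensors (the only hard number above – the doubling rate and starting point are the article’s claim and my assumption, not a real forecast):

```python
def sensors_after(years: int, base: int = 138) -> int:
    """Sensor count after `years` of hypothetical annual doubling from `base`.

    base=138 is the Morph 3's pressure-sensor count mentioned above;
    the doubling rate is the article's "Moore's Law for sensation" claim.
    """
    return base * 2 ** years

# Project a few years out from 2003 (illustrative arithmetic only).
for y in range(6):
    print(2003 + y, sensors_after(y))
```

Even under this rough assumption, a Morph 3-class body would pass a thousand sensors within three years – which is the point: the body-side curve is steep.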

U.S. computer researchers – particularly those in Silicon Valley about to be blindsided by the robot revolution – should take note: these are not toys or mis-allocated “skunk works” research. You guys are still blabbing about making your PDA smart via the Internet. It will be very difficult to make a PDA smart about what I, a person, actually want, because a PDA has no frame of reference. A PDA has no sense of touch, smell, sight, or sound. Those that have cameras and microphones use them for “rich media” communication rather than intelligence. The only information the large-brained Piltdown PDA has is what people have tapped into it.

The Piltdown PDA sits even below young Helen Keller on the evolutionary scale. Keller, with a fully human brain, couldn’t communicate without heroic training by her teacher, Annie Sullivan. In fact, she never would have communicated at all if she hadn’t experienced a short period of vision and hearing before losing those senses.

If it was that hard for her, it will be impossible for your Piltdown PDA.

In response, some might say that the Piltdown PDA only has to know about, say, my business appointments – those comprise its sensory data. But to be smart, the Piltdown PDA has to “know” me: when I want information and when I don’t. That means it has to observe me and understand, to some extent, what I’m feeling. Fat chance. Lacking any real connection to the world, it has to rely on rules programmed into it about what people want – in other words, its knowledge is all a matter of faith in its programming. It has no reality check – just what people type in.

And so the Piltdown PDA will swing through the trees while, elsewhere, robots walk upright. The Piltdown PDA is simply another supercharged, super-annoying Clippy. I’ll take sumo robots any day!

Wednesday, October 15, 2003
Forces leading to robotics – and away from cyberspace

1. Intellectual property – A CD, game or computer utility can be easily copied. A robot can’t – it will be many decades before “Santa Claus machines” allow people to “print” a robot. Robots are “hard copy.” Even though their electronic brains contain much IP value, that value can be recovered through hardware sales – just as selling the plastic disc used to pay for the intellectual property on music CDs.

2. Entertainment value – The moment a robot can dance well, it will be a sensation. Imagine the surprise – and relief – when the first robot people see is grooving to music instead of blowing up buildings.

3. Student excitement – I’ve already seen this at the Art Institute of California, Los Angeles. When I mentioned that the school was considering adding an entertainment robot program, the excitement among animation and multimedia students was amazing. Once real robots are walking in the halls of design schools, you’ll have an army of students jumping into this area.

4. Security concerns – People will be worried about robots on the Internet being hacked, so robots will be taken off the Internet. If they interact with the network at all, they will process information through humanlike senses rather than by direct delivery of data – in other words, they’ll look at a screen with analog-circuit eyes. Hackers may be able to mount the equivalent of a denial-of-service attack this way (much as flashing lights can trigger seizures in epileptics), but they won’t be able to plant detailed programs.

5. Desire for archetypes – People may respond better to robots than believed, and the “human touch” may not be as desirable as imagined. For example, who wouldn’t want an idealized day-care nurse instead of a minimum-wage human? As Marshall Brain points out on his Robotic Nation website, students might prefer being taught by an “Albert Einstein” robot instead of a real person. After all, the Einstein robot will give a predictable result, unlike variable humans. In addition, people may prefer interacting with archetypes – cartoon characters, if you see this negatively – rather than imperfect people.

