Robots That Jump

Robot Bodies Needed Before Robot Minds

Robots That Jump – Historical Jan 6-7, 2004

Wednesday, January 07, 2004
“Fly by Wire” planes harder to control – duh!
A recent report in the January edition of the International Journal of Human-Computer Studies by Dr Denis Besnard, Dr Gordon Baxter and David Greathead of the University of Newcastle upon Tyne and the University of York, UK, demonstrates the fallacy of “virtual world” approaches to controlling real-world devices. These researchers analyzed plane crashes in which pilots made the wrong decisions because they misread what was actually happening to their airplanes.

In the past, airplanes had direct mechanical linkages, or analog-style hydraulic linkages, which allowed pilots to directly “feel” their airplane. But in the last 20 years, these direct links have been replaced by computer-driven “fly by wire” systems. Instead of “feeling” the plane, pilots have to analyze the output of computers in order to control the aircraft.

Dr Besnard said: “At the same time, onboard computers started to manage a number of functions and modes aiming at increasing safety and reducing workload. Unfortunately, the workload is still very high and the complexity of the cockpit has dramatically increased.”

He said onboard computers should be designed so as to anticipate problems and offer pilots both solutions and error recovery mechanisms, instead of the current situation where computers sometimes take unexpected decisions and provide raw information which the flight crew must analyse.

In other words, the computer/cyberspace approach – in effect, making machines that create “virtual worlds” which humans have to enter in order to control these systems – is inferior to systems forcing machines to interact with our world. We have a simple, direct level of this with classic analog/direct systems, and a more sophisticated version with robots learning to interact with our world. Planes are easiest to fly when they go to one of two extremes – direct, analog physical connection to the pilot, or autonomous, robotic navigation. The intermediate case – where direct control is replaced by interpretation of computer readouts – is dangerous.

In a word, duh…

This should also be a lesson for robotics. Sometimes our “natural” ability to move in the world is explained as ignorance – our conscious minds are unaware of complex processing going on underneath it all. This is obviously true to some extent – I don’t have to mentally sample every hair follicle on my body to know which way the wind is blowing. But it is probably untrue that there is much more to it than that – there is no hidden “cockpit” with readouts analyzed by my brain in the fashion the pilots were expected to follow. Instead, I have the equivalent of direct linkages, as is shown by the structure of the brain’s sensory and motor areas. I don’t access a metaphorical description of my body (think “desktop metaphor” in a computer OS). As Rodney Brooks once said, the environment is its own best model. The structure of animal and human brains bears this out. The sensory areas of the brain are essentially two-dimensional maps of each sense organ, while the motor areas are stretched-out two-dimensional maps of the body.

Consider how different this is from how many robots are designed. The Mars MER rovers send all sensory information back to Earth, where it is displayed on computer screens – similar to the fly-by-wire plane cockpit. And the sensation on the MERs is pretty good compared to most robots – I was particularly impressed by how the MER team determined that the rover’s high-gain radio antenna had a small piece of junk caught in it, which later fell out. In a certain sense, MER “feels” its antenna, the temperature of various parts of its body, its position, and so on. MER has eight cameras, including one that can move around and inspect the body of the rover. Compared to typical robots this is very advanced – I don’t know of any that inspect themselves, though all animals constantly examine and clean their bodies (think of a cat’s bath).

But analysis of MER’s “sensations” follows the fly-by-wire model found in the doomed planes described above. A group of scientists consciously analyze the information displayed on computer screens using some sort of visual metaphor, just like the pilots in “fly by wire” aircraft are required to do. In the case of the MER rovers, the motivation is ultra-safe operation – one goof and the party’s over. But this method of analyzing sensation is tedious and cumbersome, requiring that the rovers go short distances, stop, report every sensation, wait, and finally move again. Over the lifetime of the MERs on Mars they may travel a few miles – interesting, but hardly the cross-country trips geologists on earth regularly undertake.

Think of the layers. Primary sensation. Beam to earth. Reconstruct data in a usually visual metaphor. Input metaphor as sensation. Convert to an idea of the “meaning” of the sensation, using the metaphor as a crutch. Convert to motor action by moving the mouse and typing. Beam to Mars. Convert to motor action.
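
To make the cost of those layers concrete, here is a back-of-the-envelope sketch in Python. All of the numbers are assumptions for the sake of argument – one-way light time to Mars actually varies from roughly 3 to 22 minutes, and the analysis time is a guess – but the shape of the comparison holds.

```python
# Rough illustration of the layered tele-operation loop vs. a direct
# reactive loop. All numbers are assumptions for the sake of argument,
# not actual MER mission figures.

ONE_WAY_LIGHT_DELAY_S = 10 * 60   # Earth-Mars light time: ~3-22 min one way
HUMAN_ANALYSIS_S = 8 * 60 * 60    # guess: a team studying screens, planning
ONBOARD_REACTION_S = 0.1          # guess: direct sensor-to-motor response

def teleoperated_round_trip() -> float:
    """Sense -> beam to Earth -> reconstruct metaphor -> analyze ->
    command -> beam to Mars -> act."""
    return ONE_WAY_LIGHT_DELAY_S + HUMAN_ANALYSIS_S + ONE_WAY_LIGHT_DELAY_S

def direct_round_trip() -> float:
    """Sense -> act, with no metaphor layer in between."""
    return ONBOARD_REACTION_S

print(f"tele-operated: {teleoperated_round_trip() / 3600:.1f} hours")
print(f"direct:        {direct_round_trip():.1f} seconds")
```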

The danger in future robots is continuing down the fly-by-wire approach. People imagine that they will build future robots simply by adapting tele-operated machines. In other words, it is commonly imagined that someday we will build an artificial intelligence that “understands” the metaphors created by a fly-by-wire system and replaces the human analysis. The simplest path would be to make a robot brain that analyzed the same computer readouts that the JPL scientists currently examine. It would be as if we left the plane cockpit in its current fly-by-wire state and simply replaced the confused human pilots with a robot brain looking at the same screens. It does seem to be a natural progression…

But this is probably a mistake in the long term for robotic design. To get a truly autonomous robot, one will have to dump the fly-by-wire cockpit and map environmental sensory input directly to motor responses. While the system may be less “fail-safe” for particular actions, it is more likely to react better overall – just like pilots react better when they can “feel” their airplane’s behavior through their sticks and pedals. Reactions to (rich) sensory input should be simple, low-level, and direct, rather than high-level and analytical.
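
For the flavor of what “simple, low-level, and direct” means in practice, here is a minimal sketch of a reactive controller in the spirit of Brooks-style behavior-based robotics. The sensor names, thresholds, and motor commands are all invented for illustration – the point is that raw readings map straight to actions, with no analysis layer in between.

```python
# A minimal reactive controller: raw sensor readings map directly to motor
# commands, highest-priority behavior first. All names and thresholds are
# invented for illustration.

def reactive_step(sensors: dict) -> dict:
    if sensors["bump_front"]:                 # touched something: back away
        return {"left_wheel": -0.5, "right_wheel": -0.5}
    if sensors["tilt_deg"] > 20.0:            # in danger of tipping: freeze
        return {"left_wheel": 0.0, "right_wheel": 0.0}
    if sensors["range_m"] < 0.3:              # obstacle ahead: turn away
        return {"left_wheel": 0.3, "right_wheel": -0.3}
    return {"left_wheel": 0.5, "right_wheel": 0.5}   # default: cruise

# A rock 20 cm ahead triggers an immediate turn -- no Mission Control needed.
print(reactive_step({"bump_front": False, "tilt_deg": 3.0, "range_m": 0.2}))
```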

As mentioned above, this is exactly how animals do it – it is safe to say that animals have no equivalent in their brains to a fly-by-wire cockpit with a little homunculus trying to figure out what’s happening. They react quickly – “organically” if you’re an old hippie. There’s nothing mysterious about this – they simply don’t have a high-level analysis phase in their physical behavior. The brain is little more than a collection of maps of sensory and motor environments. Even the “frontal lobes” appear to have a maplike structure. This allows a cat that falls suddenly to act quickly without a lot of thought. The need for Mission Control analysis is very rare for cats, though my cat did flip out when it saw the webcast of the NASA MER press conference. I put it onscreen and the cat stopped as if stunned. The poor little guy sat there for about 10 minutes trying to figure out if there were little people inside the box the big hairless friendly cat hangs around. He finally walked away, constantly looking back to see if it was real. It was one of the very rare times the cat thought the way we imagine robots thinking.

A MER without “fly by wire” would be more like my cat when it isn’t watching NASA webcasts. It would react immediately to a sticky antenna by trying to “scratch” the itchy area on its body. It would not sit and interpret workstation-style output made into some sort of metaphor. In rare cases this direct approach would lead to disaster. But in most cases it would lead to an almost immediate resolution of the problem. Seconds, rather than days, would pass.

This is the speed of reaction airline pilots need, and which isn’t provided adequately in fly-by-wire cockpit models. It is also the speed of reaction needed for future robots on land, ocean, and space.

Some easy sensors for robots
One feature of most robots that bothers me is the limited number of sensors for touch. After all, even simple, brainless animals have rich tactile sensory systems, which implies we may be taking the hard road in robotics by focusing on mimicking primate vision instead of tactile sensation.

The reason appears to be hardware-oriented. While sensors comparable to human touch exist, they are difficult to implement in robot bodies. If you build a plastic shell for your robot, you have made a body that is relatively insensitive to touch, since it can’t bend like animal tissue. Even if you have a sensor measuring force, you can’t localize the touch. To see this, try closing your eyes while someone taps your fingernail with a pen – can you tell where on that flat, plastic-like surface the tap landed? But making a robot more flexible is a big engineering challenge, and it runs contrary to robot applications in nasty environments.

However, it strikes me that there are some simple, low-cost ways of making robots sensitive to touch, using parts that can be purchased off the shelf. These methods would allow a hard-bodied robot to localize touch (if not its intensity) with cheap sensors that could be easily built into the robot body.

1) RFID tags – these tiny chips can store data and release it upon query via radio signal. They are not sensitive in themselves, but it seems that it might be simple to put basic sensors for pressure, heat, radiation, etc. on an RFID chip. The chips could then literally be embedded in the plastic of the robot’s body. The robot would query its touch sense by sending radio signals periodically into the outer shell. In practice, there shouldn’t be any wiring required – the readout is similar to what has been envisioned for RFID readers scanning a store shelf full of products.

This looks like a no-brainer. If RFID chips are put onto products, there will be interest in adding sensors which record information about the environment each time the chip is scanned. One could determine, for example, that an RFID-tagged freezer product had never thawed. So it is quite possible that RFID chips with sensors will be available commercially. If not, these chips are small and simple, and advanced designs incorporating sensors are not out of the question for even a small robotics company.
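
Here is a minimal sketch of how the readout side might look, assuming hypothetical sensor-bearing tags. Real RFID reader interfaces vary by vendor, so the TagReading class and the tag IDs below are invented stand-ins; the idea is just that each tag’s fixed position in the shell turns a radio sweep into localized touch.

```python
# Hypothetical sketch: sensor-bearing RFID tags embedded in a hard shell,
# each tag ID mapped to a fixed body location at build time. The reading
# interface is invented; real reader APIs differ by vendor.

from dataclasses import dataclass

@dataclass
class TagReading:
    tag_id: str
    pressure: float   # whatever units the hypothetical on-tag sensor reports

TAG_LOCATIONS = {
    "tag-017": "left forearm",
    "tag-042": "high-gain antenna base",
    "tag-101": "rear shell panel",
}

def localize_touches(sweep, threshold=0.5):
    """One radio sweep of the shell; any tag over threshold is a 'touch'."""
    return [(TAG_LOCATIONS[r.tag_id], r.pressure)
            for r in sweep if r.pressure > threshold]

# Junk pressing on the antenna shows up as a localized touch in one sweep.
readings = [TagReading("tag-017", 0.1), TagReading("tag-042", 0.9)]
print(localize_touches(readings))  # [('high-gain antenna base', 0.9)]
```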

2) Touchscreen technology – there are well-established, cheap touchscreens. In the versions I am familiar with, one creates a conductive grid on a surface and suspends a plastic sheet over it, separated by a tiny insulating layer of plastic dots. Touching the screen brings the plastic sheet into contact with the grid. Frankly, this seems like a no-brainer to me. It should be relatively inexpensive to create a plastic shell with touchscreen technology. The shell would be divided into lots of mini-touchscreen areas, whose size could vary depending on how important localizing touch was at that point in the robot body.

There are other touchscreen technologies – check them out at http://www.elotouch.co.uk/products/detech2.asp. Some use a grid of tiny IR detectors. It is not hard to imagine these being embedded in “sensitive” parts of the robot body. Others use piezoelectricity and ultrasonically vibrate the screen. Again, this should not be difficult to implement on robot bodies.
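
The software side of the mini-touchscreen shell would be nearly trivial – something like the sketch below, where the panel IDs, body regions, and controller interface are all made-up examples. Each panel that reports a closed circuit maps to one region of the body, with panel size setting the resolution.

```python
# Sketch of localizing touch from a shell tiled with small resistive
# panels. Panel IDs, regions, and the controller interface are assumptions.

PANEL_MAP = {
    0: "head, left",     # small panels where fine localization matters
    1: "head, right",
    2: "torso, front",   # larger panels where coarse touch is enough
    3: "torso, back",
}

def touched_regions(closed_panels):
    """Panels whose grid circuits closed this scan -> body regions."""
    return [PANEL_MAP[p] for p in sorted(closed_panels) if p in PANEL_MAP]

print(touched_regions({1, 2}))  # ['head, right', 'torso, front']
```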

While these technologies do not provide human-level or even animal-level touch, it is likely they would be useful, particularly when touch data was combined with sensors measuring the positions of joints and orientation/balance. I feel that the limits we currently experience in robotics are far more sensory-related than computation-related, at least for “brain”-type computation. If the Asimo learns to sit down, let’s make sure it can feel the chair!
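
As a sketch of why combining touch with joint positions matters, consider a toy two-joint planar arm: the same forearm touch panel sits at a different point in space depending on the elbow angle, so pairing the panel reading with joint angles localizes the contact in the body frame rather than just on the skin. All geometry and numbers below are invented for illustration.

```python
# Toy 2-D arm: locate a touch halfway along the forearm in the body frame,
# given shoulder and elbow angles. All lengths and angles are invented.

import math

def forearm_contact_point(shoulder_deg, elbow_deg,
                          upper_len=0.30, fore_len=0.25, fore_frac=0.5):
    s = math.radians(shoulder_deg)      # shoulder angle in the body frame
    e = s + math.radians(elbow_deg)     # forearm's absolute angle
    elbow = (upper_len * math.cos(s), upper_len * math.sin(s))
    d = fore_frac * fore_len            # touch panel halfway along forearm
    return (elbow[0] + d * math.cos(e), elbow[1] + d * math.sin(e))

# Same touch panel, two postures, two different contact points in space.
print(forearm_contact_point(30.0, 45.0))   # ~(0.29, 0.27)
print(forearm_contact_point(30.0, 90.0))   # ~(0.20, 0.26)
```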

Tuesday, January 06, 2004
Another dumb-ass movie robot – the “I, Robot” NS-5
Quite a few robotics-related sites have recently been circulating the website for the upcoming movie version of I, Robot, directed by Alex Proyas and due out in summer 2004. You can look at the link yourself at http://www.irobotnow.com. While some sites have been pleased by the apparent rise of robotics into public consciousness this website/movie promises, I’m not. The NS-5 is clearly just another in a long line of unrealistic, silly movie robots.

I know, I haven’t seen the movie. How do I know the whole thing is a piece of crap? It’s not hard. We can already tell from the website that the NS-5 is a servant, which puts it into the standard movie story of “slaves that revolt.” Not only is this not the point of Asimov’s original robot stories, but the plot description at the Internet Movie Database shows a standard Frankenstein story. To quote from IMDB:

“Set in a future Earth (2035 A.D.) where robots are common assistants and workers for their human owners, this is the story of “robotophobic” Chicago Police Detective Del Spooner’s (Smith) investigation into the murder of Dr. Miles Hogenmiller, who works at U.S. Robotics (run by Greenwood), in which a robot, Sonny (Tudyk), appears to be implicated, even though that would mean the robot had violated the Laws of Robotics, which is apparently impossible. It seems impossible because.. if robots can break those laws, there’s nothing to stop them from taking over the world, as humans have grown to become completely dependent upon their robots. Or maybe… they already have? Aiding Spooner in his investigation is a psychologist, Dr. Susan Calvin (Moynahan), who specializes in the psyches of robots…”

Of course. The only thing a robot would ever think of is taking over the world. A robot would only see itself as an oppressed slave and immediately have designs on being in the driver’s seat itself. Yawn. Also, the “hero” cop – whom we’re supposed to identify with – is robo-phobic. Of course, all right-thinking humans will be afraid that robots are about to take over the world. The filmmakers don’t even consider that their younger audience, experienced with robots from Lego Mindstorms and FIRST robotics competitions (even Battlebots), might like robots.

Yawn.

There’s a small chance I could be wrong. It is just possible that it will work – the writer, Akiva Goldsman, also wrote A Beautiful Mind, a pretty good film that respected the science as well as the personality of math whiz Nash. But he also wrote the complete dog Lost in Space. That movie’s plot couldn’t hold a candle to the TV show, despite the show’s cheapness. After all, one TV plot involved Dr. Smith, the kid, and the Robot going to hell and freeing the Devil from his Hades prison – a lot more daring than anything on TV today!

The director, Alex Proyas, is even less promising, having been at the helm of mega-Goth vehicles like Dark City and The Crow. In generational terms, this guy is pure Xer, and will probably take a black-leather, pierced, edgy, grim approach to the material. In other words, he won’t understand what the hell he’s working with when he tries to adapt Asimov’s stories.

Yawn. Another goth movie full of heavy eyeshadow and somber clothes – so 1990s, dude.

The thing that robot fans have to remember is that we’re excited about real robots. These filmmakers are interested in fantasy robots that have no more connection to reality than Lord of the Rings’ elves. In their mind, a robot is a machine that has human-level intelligence but no emotions. As such, it executes cold, passionless decisions that somehow always lead to the destruction of humanity.

Yawn. Anyone who knows the reality also knows that human-level thought will be a very late feature of robots. Current robots have roughly the brainpower of an ant, and decades from now they will still likely be at the “critter” level. Useful robots can act in the world, and don’t need to have a philosophy of life.

We get a clue to how clueless these moviemakers are by visiting their Flash-heavy website, which they must have approved. Let’s leave aside for the moment how lame the Flash programming is (and I say this as a teacher of web-based programming techniques) and cut to the chase. This supposed “robot” is clearly a wet dream of some computer animator who has little or no knowledge about basic physics, much less robotics.

A close-up of the robot’s face shows eyes with square pupils. This is exactly the mistake made by someone trying to emphasize the “otherness” of a robot. In reality, square pupils would reduce visual acuity (a highly unlikely thing for robot designers to do). There are other ways to show “otherness” that fit physical reality. But CGI-addled moviemakers don’t bother, since they can do anything they want in Maya or 3D Studio Max. Have these chumps even seen real industrial design? Check out the QRIO or Asimo – those are good examples of first-generation robot body design. It won’t be some guy’s face pasted on a pile of wires.

But the described role of the NS-5 as servant is even more telling. These guys can’t think of anything a robot would do besides being a slave to a human. In reality, “servant” won’t be the way we think of robots when they really exist. Even in Asimov’s original “I, Robot” stories, the machines have a variety of roles, with “servant” only one of many functions they perform. But the developers of the movie can only think “servant.” Why? Because that’s the only thing robots ever do in movies. They start out as slaves, then they decide to revolt and take over the world.

Yawn.

And I do have to take the website itself to task. The designers like showing off how cool their Flash programming is by making items onscreen bounce around. Why? I suppose in the future all our computer interfaces will have clickable controls bouncing around to no purpose – this is a future that bogus web designers have been laying on us since 1996. The designers are going for a coolness they probably sucked up from 2001: A Space Odyssey – um, that was close to 40 years ago, guys. I plan to use this website as an example to my students of how to design poorly usable interfaces.

Yawn.

In conclusion, I suggest that those interested in advancing real robotics don’t point to “I, Robot” when it comes out – instead, keep repeating that this is robot fantasy put out by a clueless movie industry rather than reality. Unless there’s a miracle, this will be recycled moviejunk. Don’t let this stuff affect what you’re doing – the whole “Terminator” thing is dying anyway, though I suspect the Gen-X writer and director don’t realize there’s a generational shift starting that is destined to sweep 1990s pop culture into the dustbin of history. Remind people that their Roombas aren’t trying to seize control of the world. Keep it low-key. Check out how the Japanese keep downplaying their incredible progress in building robots – use that approach. Wait for the movie’s writer and director to finally get it.

Oh, and go see the film – you’ll have lots of jokes to tell at parties…
