Robots That Jump

Robot Bodies Needed Before Robot Minds

Robots That Jump – Historical, July 27-August 8

Thursday, July 31, 2003
What’s so bad about Asimov’s Three Laws of Robotics?
In many recent discussions of robotics, the “three laws” from Isaac Asimov’s classic books (e.g. “I, Robot”) sometimes come up. For those who don’t know what Honda named the Asimo for, here they are:

1. A robot may not harm a human being, or through inaction, allow a human being to come to harm.

2. A robot must obey orders given by a human being, except when such orders come into conflict with the First law.

3. A robot must protect its own existence, except when this comes into conflict with the First or Second law.

In most cases the “three laws” are quickly dismissed as not being applicable to real-world robots. There are several reasons given:

First, some groups say they aren’t adequate for good robot behavior. This is difficult to answer, since it presupposes one knows what is adequate, and the extra aspects of robot morality aren’t named. However, adding extra laws (e.g. “a robot must respect Nature, except when this conflicts with the First or Second law”) doesn’t seem to be a show-stopper.

Others feel that the three laws contain loopholes. In other words, a robot could pull something sneaky that satisfied the letter of the laws but violated them in principle. This is something that would need to be carefully analyzed. The “through inaction” clause in the First law would seem to close most loopholes. Asimov in fact wrote a story about robots built with the “through inaction” clause removed from the First law, which led to megalomania in a particularly smart robot.

Finally, many complain that it would be impossible to implement the “three laws” as described by Asimov in real robots. For example, covering all possibilities of “a robot may not harm a human being, or through inaction, allow a human being to come to harm” would take too much programming – millions of if/then statements covering every possible case in which the laws might apply. It would be all too easy for a programmer to “forget” to add a case, and in an unanticipated situation (e.g. a giant falling pizza) the robot would not apply the three laws. This is true, assuming that the robot’s “mind” consists of a large number of if/then statements coded using standard programming techniques. A similar problem bedevils the massive “Cyc” common-sense project of Doug Lenat. Over the last couple of decades, millions of assertions about reality have been entered into Cyc. But what if some of them have changed? (E.g. statements about the status of women in our culture, encoded in Cyc in the 1980s, may now be obsolete.) Cyc, when prompted, can request additional information to resolve contradictions. But an unanticipated situation (e.g. a deadly falling pizza) wouldn’t allow time for a Cyc-style program to determine whether it was relevant to the three laws.
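To make the problem concrete, here is a minimal sketch in Python (every fact and name below is invented for illustration, not taken from Cyc) of the assertion-chasing approach: harm to a human is deduced only from links someone remembered to enter, so a novel hazard simply isn’t known to be dangerous.

# Hypothetical Cyc-style store of hand-entered assertions (subject, relation, object).
ASSERTIONS = {
    ("falling_safe", "is_a", "heavy_object"),
    ("heavy_object", "can_cause", "crush_injury"),
    ("crush_injury", "harms", "human"),
    ("open_flame", "can_cause", "burn"),
    ("burn", "harms", "human"),
    # ...millions more entries would be needed, each typed in by hand...
}

def harms_human(thing: str, depth: int = 5) -> bool:
    """Chase is_a / can_cause links to see whether any chain ends in harm to a human."""
    if depth == 0:
        return False
    for subject, relation, obj in ASSERTIONS:
        if subject != thing:
            continue
        if relation == "harms" and obj == "human":
            return True
        if relation in ("is_a", "can_cause") and harms_human(obj, depth - 1):
            return True
    return False

print(harms_human("falling_safe"))   # True: a chain of assertions happens to exist
print(harms_human("falling_pizza"))  # False: nobody ever asserted anything about it

The safe is caught only because each link in its chain was typed in; the giant pizza falls outside the database no matter how obviously dangerous it is.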

One has to wonder why there is such a big discussion about the three laws. This has the feel of a “straw man” attacked to further a larger agenda. Usually, the author of a robot article brings them up and dismisses them, implying we don’t know how to make robots safe. Apart from the fact that entertainment shouldn’t be used to evaluate reality, there are several possibilities for the larger agenda behind the “three laws” question:

1. Some individuals may want to dissociate their robotics work from anything connected with the entertainment industry – meaning science fiction in books and film. Given Hollywood’s stupid treatment of robots, it is understandable that they don’t want to be associated with these low-grade moron screenwriters, directors, and producers. It is rare that a movie with “robots” actually has anything to do with robots – instead the filmmaker bends a non-existent magical creature, on a par with elves and gnomes, to the purposes of the story. Robots, like dinosaurs, are used hypocritically – after telling us that we must not tamper with nature, the film proceeds to let us in on the action-packed fun of doing exactly that. Real roboticists may not want their real products to be contaminated with the bogus speculations of a tech-ignorant entertainment industry.

2. Other individuals may simply not see beyond the current limitations of our programming languages and operating systems. Granted, it would be difficult to make a Linux-controlled robot conform perfectly to the ‘three laws.’ One would do this by writing a program in C++ or a similar language, presumably with a huge number of rules expressed as conditionals. Completely capturing all possible cases of potential harm to humans seems impossible even with a massive programming effort – as the Cyc experience shows.
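As a sketch of that traditional approach (in Python rather than C++, and with every predicate and action name made up), the laws become an ordered chain of conditionals over candidate actions. Note that each predicate is itself just a hand-enumerated list of cases, which is exactly where the coverage problem lives:

def would_harm_human(action: str) -> bool:
    # Hand-listed cases, including harm through inaction.
    return action in {"swing_arm_near_person", "ignore_falling_safe"}

def ordered_by_human(action: str) -> bool:
    return action == "fetch_toolbox"

def endangers_self(action: str) -> bool:
    return action in {"drive_off_ledge"}

def choose_action(candidates):
    """Apply the laws as a strict priority ordering: First, then Second, then Third."""
    allowed = [a for a in candidates if not would_harm_human(a)]   # First law
    ordered = [a for a in allowed if ordered_by_human(a)]          # Second law
    if ordered:
        return ordered[0]
    safe = [a for a in allowed if not endangers_self(a)]           # Third law
    return safe[0] if safe else None

print(choose_action(["swing_arm_near_person", "fetch_toolbox", "drive_off_ledge"]))
# -> "fetch_toolbox", but only because each predicate happened to list the right cases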

However, it is clear that Nature has managed to do exactly this. All organisms show a desire to protect themselves and survive, which is basically the Third law. They seem to show a very general ability to do this – in other words, animals typically don’t show unexpected gaps in their desire to protect themselves. Instead, even simple animals seem to be able to recognize threats to their existence that they have never encountered before – provided they can sense them. A fish may not respond to a poison that it can’t detect, but if the poison can be detected, the fish will react. A trapped animal will try to escape in lots of ways – it may not hit on the right one, but within its struggles one can detect creativity and inventiveness. Clearly the Third law operates first and the animal tries to implement it – exactly the order seen in Asimov’s stories.

In contrast, robots really don’t seem to “get it.” They react appropriately in specific situations but seem unable to carry a basic rule over to a specific instance they haven’t seen. Part of this, I think, is sensory – we’ve worked too long with disembodied intelligences which can’t feel pain and therefore don’t feel a need to preserve themselves. Once we build robots that jump (meaning they have elaborate, multimodal sensory input) the problem may largely disappear. The programmer won’t have to figure out everything in advance – a robot “feeling” damage will know that its existence is threatened without any help.

If animals were programmed like a Linux-operated robot, they would show gaps – not trying to survive when a particular condition wasn’t covered in their programming. A cat would run when a safe falls, but fail to run if the safe were replaced with a huge falling pizza. Real cats run in either case. Some might object that the cat simply has better programming than current Linux boxes, but it seems unlikely that evolution has loaded its brain with millions of specific rules, many of them concerning situations never experienced by any cat. Evolution can only select in the current environment, not in future hypothetical environments; it can’t put rules in place that might be needed in the future. But this is exactly what programmers try to do in traditional robot programming.
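One way to picture the difference, as a sketch only (the numbers and names are invented): instead of looking the object up by identity, a self-preservation check can key on sensed physical properties, so an object the animal or robot has never encountered still triggers escape.

from dataclasses import dataclass

@dataclass
class SensedObject:
    label: str              # identity may be unknown or wrong; it isn't used below
    mass_kg: float
    closing_speed_m_s: float
    distance_m: float

def should_flee(obj: SensedObject) -> bool:
    """Generic threat heuristic: big, fast, and close means run, whatever it is."""
    kinetic_threat = obj.mass_kg * obj.closing_speed_m_s ** 2
    return kinetic_threat > 100.0 and obj.distance_m < 5.0

print(should_flee(SensedObject("falling_safe", 300.0, 6.0, 2.0)))   # True
print(should_flee(SensedObject("falling_pizza", 40.0, 6.0, 2.0)))   # True: no pizza rule needed
print(should_flee(SensedObject("drifting_leaf", 0.01, 1.0, 1.0)))   # False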

The natural instinct for self-preservation in animals is proof that general laws like Asimov’s Three Laws can be implemented much as they are in the original stories. They won’t be a C++ program. Instead, they will arise from alternative, fuzzy computational systems – neural nets, spin glasses, and the like – which don’t fit the von Neumann machine definition.
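As a toy illustration of that direction (a single trained neuron, the simplest possible “neural net,” with invented feature names and training data), the threat response below is learned from a handful of examples rather than enumerated, and it carries over to an object that was never in the training set:

import math, random

# Features: (size, approach_speed, proximity), each scaled to the range 0..1.
training_data = [
    ((0.9, 0.8, 0.9), 1),  # big, fast, close     -> threat
    ((0.8, 0.9, 0.8), 1),
    ((0.7, 0.7, 0.9), 1),
    ((0.1, 0.1, 0.2), 0),  # small, slow, distant -> harmless
    ((0.2, 0.1, 0.1), 0),
    ((0.1, 0.2, 0.3), 0),
]

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def predict(weights, bias, features) -> float:
    return sigmoid(sum(w * x for w, x in zip(weights, features)) + bias)

random.seed(0)
weights = [random.uniform(-0.1, 0.1) for _ in range(3)]
bias = 0.0
for _ in range(2000):                      # plain gradient descent on logistic loss
    for features, label in training_data:
        error = predict(weights, bias, features) - label
        weights = [w - 0.5 * error * x for w, x in zip(weights, features)]
        bias -= 0.5 * error

# A "falling pizza" never appeared in training, but its sensed features
# (fairly large, fast, very close) land it on the learned threat side.
print(round(predict(weights, bias, (0.6, 0.9, 0.95)), 2))  # well above 0.9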

And if we can implement the Third law, the First and Second should be equally possible.
