DARPA announces 2005 Grand Challenge, Slashdot whines
DARPA has just announced the 2005 Grand Challenge to develop driverless cars that can cross a challenging desert course without human help. Check http://www.grandchallenge.org
for details. There is a preliminary meeting for teams making robo-cars (the “Essential Informational Meeting”) set for August 14 in Anaheim, Calif., and the actual race will be run October 8, 2005. Having followed the teams before and after the 2004 GC, it seems very likely that we’ll see significant performance improvements. Most of the failures in the 2004 race stemmed from minor problems (e.g. a stuck cruise control, not revving the motor enough to get past a small rock) that are being addressed by the teams.
If the robots perform well, the bang for the buck for robotics will be double. Not only is the prize money doubled to $2 million, but the largely negative publicity surrounding the 2004 event will serve as a contrast. “Robots do a hell of a lot better in desert race” seems a likely headline. If someone wins, or completes the course but misses the time deadline, it will be a sensation.
The 2005 GC announcement once again highlights the huge gap between “PC” (personal computer, not politically correct) and robotic thinking. Over the past 20 years we have become used to the idea that our computers are utterly dumb, and can do little more than draw pictures we can use to control them. Reacting to the more or less complete failure of ‘classic’ artificial intelligence, personal computers don’t try to think – instead they act like virtual lumps of clay (drawing programs), serve as communication channels (digital music, smartphones), or run games (the entire game industry). Search engines look for keywords on the web without attempting to find the meaning of the text they’re searching. “User friendly” doesn’t mean the computer knows what it is doing and helps the user – it just means the task itself is made simple.
This complete lack of intelligence has several effects. First, PC operating systems are designed around a ‘kernel’ of essential functions. This ‘kernel’ defines the operating system as essentially a disembodied brain that ‘knows’ nothing but itself. Only later is the brain connected to the environment via software ‘drivers.’ When a home computer boots it spends some time floating in nothingness and only belatedly discovers its connection to peripheral devices.
Second, these devices do not convey sensory information the way our senses do. The first connection for a Wintel computer is its keyboard. A keyboard does not provide sensory input; instead it provides a stream of discrete, pre-made symbols – the meaning of each keypress is perfectly defined. Compare that to the complexity of visual pattern recognition by a robot and you’ll see how different a PC is.
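The contrast can be sketched in a few lines of code. A keypress arrives as a finished symbol, while a camera frame is just raw numbers that a program has to interpret before they mean anything (the tiny frame below is made up for illustration):

```python
# A keypress arrives as a ready-made symbol: its meaning IS the symbol.
keypress = "A"  # nothing to interpret; 'A' simply means 'A'

# A camera frame arrives as raw numbers; any "meaning" must be extracted.
# Hypothetical 4x4 grayscale frame (0 = dark, 255 = bright).
frame = [
    [0,   0,   0,   0],
    [0, 255, 255,   0],
    [0, 255, 255,   0],
    [0,   0,   0,   0],
]

# Even answering "is anything bright in view?" takes explicit processing.
bright_pixels = sum(1 for row in frame for px in row if px > 128)
something_there = bright_pixels > 0
print(something_there)  # True
```

Even this trivial question about the frame required a pass over every pixel; a keypress required no processing at all.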
Third, even when more sense-like data is input into a PC, it is not interpreted. A scanner pulls in images, but the computer simply dumps them into a bitmap for display onscreen. Webcams dump images onto the Internet, with the same failure to process them for “meaning.” Of course, some would say that extracting “meaning” is hard. However, even simple animals extract “meaning” from their environment. The point is not that PCs are stupid, but that they ignore sensory data in favor of pre-cut symbols.
To appreciate this, consider the computer you’re reading this on. Does it know you’re there? Nope.
Fourth, the passive lump of clay behavior of PC operating systems leads naturally to hacking. Since they have no smarts about the world, and receive information in the form of purified symbols (sort of like telepathy for people) it is easy to do brain surgery on them and mess them up. By its nonintelligent, passive nature a PC operating system literally invites hacks.
Finally, the closed-in world of PC programming leads people to create an artificial world inside the computer instead of connecting it to the real world. Graphic user interfaces and games both require that we in some sense ‘enter’ the world of the computer – a little logical, toy environment which even in the case of “realistic” computer games is millions of times simpler than reality. You can do things in these “toy” worlds you can’t do in the real one – which in turn leads programmers and hackers to prefer it to the real world. Some even proclaim these toy environments as just as real as the real world.
In contrast, robotic technology begins with sensory processing. On “boot” a robot connects its core code to peripherals, just as a PC does. But its subsequent actions are very different. Instead of restricting itself to a diet of occasional keypresses, it samples the environment and slowly and painfully extracts symbols from it. Instead of creating a little world inside itself that we are forced to enter under its rules, a robot plays by our rules and enters our world.
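That sample-then-extract cycle can be sketched as a minimal sense-act loop. This is a toy illustration, not any team's actual control code; the sensor function and thresholds are invented stand-ins:

```python
import random

def read_range_sensor():
    """Stand-in for a real range sensor: returns distance (meters)
    to the nearest obstacle. A real robot would read hardware here."""
    return random.uniform(0.0, 10.0)

def sense_act_step(safe_distance=2.0):
    """One pass of a minimal sense-act loop: sample the world,
    extract a symbol ('blocked'/'clear') from the raw reading,
    then pick an action based on that symbol."""
    distance = read_range_sensor()                              # raw, continuous data
    state = "blocked" if distance < safe_distance else "clear"  # extracted symbol
    action = "steer_around" if state == "blocked" else "drive_forward"
    return state, action

# A robot runs this loop continuously; the symbols come *from* the
# world, not from a user typing them in.
state, action = sense_act_step()
print(state, action)
```

The key difference from a PC program is where the symbol comes from: the keyboard hands the PC a finished symbol, while the robot has to manufacture its own from noisy measurements, over and over.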
This difference is enormous, and many computer experts don’t see it. However, they do sense it intuitively, which explains why many tech-geeks continue to laugh at robots.
Case in point: When DARPA announced the new 2005 Grand Challenge, a discussion ensued on that hangout of computer users (many of whose jobs are going to India), Slashdot. While a few tried to discuss the 2005 Challenge and the success/failure of the 2004 challenge, most jumped in for another sarcastic bout of gleeful robot bashing. One poster betrayed their focus on the internal toy worlds of the PC by suggesting senses weren’t necessary at all for a robot. Just program the thing to follow GPS coordinates and smash through anything it encounters. Houses, trees, who cares? This approach was actually tried by one of the 2004 teams (Golem) and didn’t place well.
However, I suspect the Slashdot poster was moved to suggest this method because of their PC bias. Since the internal toy worlds of PCs are great (and robots sensing the real world are lame) why not just have your “robot” ignore that blasted real world and stay comfortably in the Matrix? The idea of a robot car smashing through things without noticing was very much like what happens in game worlds. In game worlds (console and online) you have fantasy powers impossible in the real world. It makes you feel powerful out of proportion to your puny real-world self. If you enjoy that sort of thing, a robot that smashes through the real world in self-righteous ignorant contempt seems cool. This is PC thinking (Linux is the future) rather than robotic thinking (machines that sense and act in the real world are the future).
Note that this doesn’t knock PC software or technology per se. One can use Linux or Windows to create a robot, though it isn’t very efficient. “Real time” hardware and software designed to sense and act on a constant stream of environmental data has been around for decades in “embedded” systems. It is a matter of focus. When the 2004 Challenge was announced, many wondered whether DARPA wanted a real robot, or just a car that could follow GPS signals. It is now clear that following GPS waypoints is not enough, and if someone wins the 2005 Challenge they will also need good sensory systems. DARPA wants car-bots, not car-PCs.
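The "car-bot, not car-PC" point can be made concrete with a toy decision step. The coordinates and the obstacle flag below are made up; the point is simply that the sensory check has veto power over the GPS heading:

```python
import math

def drive_leg(position, waypoint, obstacle_ahead):
    """One decision step for a car-bot: head toward the GPS waypoint,
    but only if the sensors say the path is clear."""
    if obstacle_ahead:  # sensory veto -- the 'car-PC' approach skips this check
        return "avoid_obstacle"
    # Pure waypoint math: bearing from current position to the waypoint.
    dx = waypoint[0] - position[0]
    dy = waypoint[1] - position[1]
    heading = math.degrees(math.atan2(dy, dx))
    return f"steer_to_{heading:.0f}"

# Waypoint chasing alone would smash straight into the boulder;
# the sensor reading overrides the GPS heading.
print(drive_leg((0.0, 0.0), (100.0, 0.0), obstacle_ahead=True))   # avoid_obstacle
print(drive_leg((0.0, 0.0), (100.0, 0.0), obstacle_ahead=False))  # steer_to_0
```

The GPS arithmetic is the easy half; everything hard about the Grand Challenge lives inside that `obstacle_ahead` flag, which a real vehicle must compute from lidar, radar, or vision.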
It will be interesting to watch the rise of robotic-style computing as it replaces personal computer-style computing. Twenty years from now, kids will marvel that any computers existed that had nothing but a keyboard. They will be even more amazed to discover that people valued the toy worlds “inside” a computer more than the real world. It will seem like a strange 1990s religious practice…