Robots That Jump – Historical Mar 7-13, 2004

Saturday, March 13, 2004

Grand Challenge High School Team finds out that it isn’t a video game
The DARPA Grand Challenge was run today, and none of the cars got farther than 10 miles. There may be more to the story, since three cars made it to about the same place – roughly 7 miles in. The CMU Sandstorm car-robot broke down and apparently caused the other cars (the Sciautonics II Elbit Avidor dune buggy and surprise mini-team Digital Auto Drive, or DAD) to be temporarily halted. These cars were later disabled as well.

It is possible that if Sandstorm had not been in the way, the other cars could have kept going and the race would have run much longer. I bet lots of people will be wondering exactly what managed to confuse the robots at the 7-mile point.

The short run of the 2004 race points out how incredibly hard it is to create a robot that functions in the real world, as opposed to a virtual anything. A particularly interesting illustration was provided by the crowd favorite, the Palos Verdes High School Road Warriors team. This team (helped by teachers and parents) managed to get a functioning car that could steer itself and use GPS for navigation. These kids rule. However, the kids may have been fooled by their favorite pastime – video games – into thinking that the virtual world of a racing game is somehow comparable to real-world racing.

In particular, during one television interview with the PVHS team, we saw two team members working on computers. One was running a driving simulation program. This student explained that driving in the video game was just like driving a car in the Grand Challenge – in his mind, the simulation was as good as the real thing. The student is too young to have invented this idea, so it came from our general cyber-Matrix-worshipping culture, which holds that virtual worlds in computers are as good as, or even better than, the real world.

Well, the PVHS students now know that video games for road racing have nothing to do with the real world. They aren’t even a pale imitation of it – they are a completely different beast. In the race on Saturday, the PVHS robot car drove about a hundred feet and “gently” crashed into the press area.

Imagine the frustration – some legitimate, some not. The legitimate frustration comes from how hard the kids worked to create their robot car. The non-legitimate kind comes from feeling that the real world failed to measure up to the virtual reality of computer simulation.

Hopefully, the computer whizzes at PVHS have learned a lesson: compared to hacking the Matrix inside their game consoles and PCs, hacking the real world is much, much harder – and much more rewarding. The very ease of creating virtual worlds is one of the things going against them.

Remember the scene in the first “Matrix” movie where Neo says “we need guns” and a giant rack of them zooms out of nowhere? It is easy because the guns are simulated – they don’t exist. We enjoy the scene because it sends a confusing emotional signal that the guns were really created. If a thousand guns suddenly materialized in the real world, it would be quite impressive. In a computer simulation, it is trivial.

So…no matter how great the virtual driving world the PVHS students played in before entering the Grand Challenge, it pales before the few hundred feet their real driving machine actually traveled. That slow drive into the press stand was cooler than all the Matrixes in the world.

One other exciting feature of the race results – money wasn’t the deciding factor. Red Team went the farthest, and had the most money. But the other high-scoring teams (Sciautonics and DAD) are grassroots startups. Age didn’t matter either. Some of the well-funded teams did as badly as the PV high school students – apparently the teens and their teacher/parent mentors matched the performance of skilled specialists with advanced degrees. This has the makings of a real revolution. Think about the birth of the personal computer – guys in garages making something new. It looks (contrary to what I’ve thought in the past) like we may repeat history.

The fact that nobody won, but some of the car-bots worked, will be irresistible to a new generation of computer-using grease monkeys who never wear Matrix sunglasses. Most of the teams at the Grand Challenge are already registered for the International Robot Racing Federation race in September 2004. In addition, teams that didn’t make the DARPA cut or weren’t ready in time will be there. I would bet the IRRF will look at the DARPA results and figure out how to make the race more visually exciting to spectators, while easing some requirements so there’s a greater chance of a robot finishing the race. Once we see robot cars moving at high speed on TV and passing each other, the floodgates are likely to open. Kids who made simple robots in school FIRST competitions will be itching to turn the family car into a robot. This could be the US answer to Japan – they can have humanoid robots, while we will have intelligent cars.

DARPA will probably run the race again in 2005 or 2006, and odds are we’ll see a winner, or something close to it. After all, the teams completed their robot cars in less than a year. Just by getting off the starting line they made a quantum leap over earlier wheeled robots. Only a year ago, the best robo-cars were driving at 2 miles an hour – today, some were doing 15 times that speed. If we are really at the start of a revolution, we will see progress as fast as the beginning of the PC and Internet revolutions.

One comment, though…I still feel that a robot car will need to have more sensors. Most of the teams talked about the difficulty of integrating various sensory inputs into the system. However, even the most elaborate cars had only a few dozen sensors. The robo-cars couldn’t “feel” their bodies the way an animal would, and I suspect this led to problems in several cases. I wouldn’t be surprised to find that Sandstorm broke down because it was driving too hard. An animal would feel the strains reaching the breaking point and slow down. The reason I suspect this is that Sandstorm had an earlier problem where it rolled over during a turn. I suspect that low-level senses akin to “pain” would be useful. I don’t think they would slow down the computers. After all, pain in animals is a low data-rate signal – you just know something is really wrong without knowing what it is. Animals have automatic and easily duplicated reflexes they use to react to pain. One could probably build a “pain” system using a bunch of sensors connected via low-level BEAM technology and interface it to the digital systems. A challenge, but I suspect that the more sensors there are, the better.
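
For the curious, here’s a minimal sketch of how such a “pain” reflex might look in software – the sensor names, thresholds, and backoff rule are my own invented placeholders, not anything a Grand Challenge team actually built:

```python
# A minimal sketch of a low-level "pain" reflex, assuming hypothetical
# strain-gauge sensors scattered over the chassis. Sensor names, thresholds,
# and the 50% backoff are invented placeholders, not any team's design.

PAIN_THRESHOLD = 0.8   # normalized strain beyond which a sensor "hurts"
ALARM_FRACTION = 0.05  # fraction of hurting sensors that triggers the reflex

def pain_level(strain_readings):
    """Collapse many raw strain readings into one low-data-rate signal:
    the fraction of sensors currently past their pain threshold."""
    in_pain = sum(1 for s in strain_readings if s > PAIN_THRESHOLD)
    return in_pain / len(strain_readings)

def reflex_speed_limit(current_limit, strain_readings):
    """Like an animal easing up when something hurts: if enough sensors
    report pain, cut the speed limit before the planner ever sees why."""
    if pain_level(strain_readings) > ALARM_FRACTION:
        return current_limit * 0.5  # back off hard; recover later
    return current_limit
```

The point is that the planning computers never see the individual strain readings – just a lowered speed limit, the way an animal’s gait changes before it consciously knows what hurt.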

I think that even being able to feel the wind would improve performance – not because the computers would calculate optimum behavior in the wind, but because the wind would send a “you’re moving” signal to the car, reinforcing its other calculations. The cars all had odometers measuring their wheels turning – why not more sensors to feel the warmth of the road on the tires? Again, you wouldn’t use this data at the high level; it would just “reassure” the software that everything was O.K. and remove some of the decision uncertainty.

I hope that some team in the future hooks up a huge number of sensors to a bunch of dumb computers or BEAM networks and places this low-level “kinesthetic” sense underneath the AI computers managing vision – this “sensitive” car might be the one that could win a race. The same circuits would give the car a bit of “attitude” in its driving, as various neural nets or analog networks tugged a little on the driving plan – critical if we want people to begin rooting for robots in the races as entities rather than driverless cars.
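
To make this “reassurance” layer concrete, here’s a toy sketch of vote-style fusion under my own invented assumptions – a real system would weight its senses far more carefully:

```python
# A toy sketch of the "reassurance" fusion described above. The extra senses
# (wind, tire warmth) never steer the car -- they just raise or lower the
# software's confidence that it really is moving. All sensors and thresholds
# here are hypothetical.

def motion_confidence(odometer_speed, wind_speed, tire_temp_rise):
    """Vote-style fusion: each cheap sense that agrees the car is moving
    adds confidence to what the odometer already claims."""
    votes = 0
    if odometer_speed > 1.0:   # wheels say we're moving (meters/sec)
        votes += 1
    if wind_speed > 2.0:       # airflow over the body agrees (meters/sec)
        votes += 1
    if tire_temp_rise > 0.5:   # tires warmer than ambient agree (deg C)
        votes += 1
    return votes / 3.0         # 1.0 = every sense agrees

# The planner can treat low confidence as "something is off" without ever
# knowing which sense disagreed -- trimming decision uncertainty.
print(motion_confidence(odometer_speed=8.0, wind_speed=7.5, tire_temp_rise=1.2))  # 1.0
```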

// posted by Pete @ 4:21 PM

Wednesday, March 10, 2004

How to make a Grand Challenge robot car a Robot That Jumps
Lots of excitement has built over the DARPA Grand Challenge. This Saturday, March 13, around 20 teams will attempt to cover more than 140 miles of desert between Barstow, CA, and Primm, NV. DARPA just added a new, real-time website at http://www.grandchallenge.org with regularly updated status reports and a great image gallery of the contestants.

As the race approaches, we’re really seeing a groundswell of public interest. I’ve been thinking about why this race is so interesting, and came up with a few points:

First off, this is an example of real robots, not lab-rat machines used to prove a graduate thesis. No more guy in a lab talking about the future of intelligent machines, what our philosophy toward them should be, etc., etc. zzzzzzz….

Second, looking at pictures from the various teams competing, one sees a wonderful, grease-monkey feel to the whole thing – worlds away from the effortless high-tech of computer-generated worlds like those seen in “The Matrix” movies. Comparing a short video of the CMU Red Team’s Sandstorm to the robots in the Matrix, I just have to say…”welcome to the desert of the virtual.” The Matrix world is lame compared to this robot. The reason? The Matrix world is effortless. Want some heavy guns? Just snap your fingers and they’ll be simulated. Want a Grand Challenge robot? Work your butt off for a year. More challenging, but more interesting and rewarding.

Some day, kids won’t understand why anyone would want to “hack cyberspace.” After all, hacking the real world via robots is lots more fun!

Third, the Grand Challenge robots allow us to explore a range of machine intelligence unavailable in other mobile robots. The reason? Power. Hobby robots are restricted to low-power, low-speed processors by their limited battery capacity. Even high-end robots like Honda’s Asimo run down in less than an hour. Doubling the computing would double the energy drain, so computing tends to be limited in most mobile robot designs.

In contrast, autos have power to spare. Using a Hummer for a robot allows plenty of juice to be supplied to the computers – enough for a whole bank of multi-kilowatt computers in CMU’s case. Even the smaller vehicles have power available for their computers that dwarfs any other robotics project. One thing the Grand Challenge people don’t have to worry about is restraining themselves based on power – just fill up the car with a rack of computers. This allows high-speed computing to be applied to mobile robotics for the first time.
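
Some back-of-envelope arithmetic (my own rough guesses, not any team’s specs) shows just how lopsided the comparison is:

```python
# Back-of-envelope power budget (my own illustrative guesses, not team specs).

battery_wh = 50        # a hobby robot's battery pack, in watt-hours
hobby_compute_w = 25   # a modest onboard processor's draw, in watts
print(battery_wh / hobby_compute_w, "hours before the hobby bot runs down")  # 2.0

alternator_w = 1500    # rough continuous output of a car alternator
rack_w = 1000          # a rack of fast computers in the back seat
print("rack runs as long as there's gas:", alternator_w > rack_w)  # True
```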

That said, there are limits. Most systems are running laser rangefinders – what the robotics community calls “2.5D” worlds. The robots aren’t perceiving in full 3D yet. What will happen when true 3D perception systems like SEEGRID’s software are used?
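
For readers who haven’t run into the term, here’s a toy sketch of what a “2.5D” world is – a height map with one surface per grid cell, built from laser returns (the numbers and resolution are made up for illustration):

```python
# What "2.5D" means in practice: a grid where each (x, y) cell stores a single
# height, built from laser rangefinder returns. The robot sees terrain
# elevation but only one surface per cell -- no overhangs or tunnels -- which
# is why this falls short of true 3D perception. Numbers are made up.

CELL = 0.5  # grid resolution in meters

def build_height_map(points):
    """points: (x, y, z) laser returns in the vehicle frame.
    Keep the highest z seen per (x, y) cell -- one surface, hence '2.5D'."""
    grid = {}
    for x, y, z in points:
        key = (int(x // CELL), int(y // CELL))
        grid[key] = max(grid.get(key, z), z)
    return grid

scan = [(1.2, 0.3, 0.1), (1.3, 0.4, 0.9), (4.0, -2.0, 0.0)]
print(build_height_map(scan))  # {(2, 0): 0.9, (8, -4): 0.0}
```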

Are these robots robots that jump? Not entirely. Unlike an animal, a car uses mostly passive stabilization. The shock absorbers aren’t powered, and the few cars that can recover from flipping over do it by passive body design. However, this doesn’t have to be true in the future. In addition to lots of raw power, cars have the ability, using their engines, to create high-pressure air and/or vacuums. And this plays into the area that Shadow Robotics has been working on for a long time – air-actuated artificial muscles. The problem with building a robot based on Shadow technology has always (again) been the power to create the air pressures needed for their artificial muscles to work. On a “traditional” mobile robot, big problem. On a big SUV, no problem.

In fact, cars of a certain ilk already have artificial muscles – we call them “air shocks.” A car with air shocks can jump around, as seen in numerous lowrider competitions.

So, I’m hoping that future robot car races will increasingly feature cars using dynamic stabilization – air shocks at the major suspension points, Shadow pneumatic muscles in the smaller areas. Sensors could be put on robotic mounts acting very much like animal appendages. Imagine radar or ultrasound sensors shifting around via pneumatic muscles like a dog’s ears! The same pneumatic muscles could be used to cradle visual systems and even the robot “brains” module placed in the car. There’s plenty of air pressure to do it.
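
As a rough sketch of what the “dog’s ears” idea means in control terms – assuming a hypothetical antagonist muscle pair and made-up gains – a simple proportional-derivative loop could keep a sensor head level while the chassis bounces:

```python
# Sketch of dynamic stabilization for a sensor mount -- the "dog's ears" idea.
# A proportional-derivative loop drives a hypothetical antagonist pair of
# pneumatic muscles to keep a radar head level as the chassis pitches.
# Gains and sensor names are invented for illustration.

KP, KD = 4.0, 0.8  # proportional and derivative gains; tuning is vehicle-specific

def muscle_command(pitch_error, pitch_rate):
    """Return a differential pressure command for the muscle pair:
    positive tilts the mount up, negative down, zero holds position."""
    return -(KP * pitch_error + KD * pitch_rate)

# Each control tick: read the mount's tilt from a hypothetical IMU, subtract
# the desired tilt, and let the muscles fight the chassis motion.
pitch_error = 0.12   # radians of unwanted pitch
pitch_rate = -0.30   # radians per second
print(muscle_command(pitch_error, pitch_rate))  # -0.24: small corrective pressure
```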

And dynamic control of the flexible parts of the car would up the entertainment value. Anyone wanting to increase the ‘brand identity’ of their car could make the arms, shocks, etc. able to bounce for reasons other than road contact. Why not make the car do a little victory dance? Why not let it play music to itself while it drives, and groove to the sound? What would be cooler than a robot car roaring down the road, using a song blasting from its speakers to echolocate, rocking and rolling as it goes? It would blow away the appeal of a human-driven car. More programming, yes – but at least there’s enough power to spend on entertainment value.

One advantage of this approach – it would give robot developers a running start on more muscle-like ways of moving robot appendages, until other artificial muscle systems (e.g. electroactive polymers) become strong and durable enough to use.

Boy, lots of fun! Even more fun when you see the “sour grapes” idiots on Slashdot discussing the Grand Challenge. These dingbats all probably have pictures of Neo and Trinity in their cloth-lined Linux cubicles and wear sunglasses pretending they’re VR systems while they daydream about ever more “real” virtual environments programmed into their computers. Suddenly reality – with computers attempting to enter our world rather than making a virtual one for us to enter – has upstaged them. It has got to hurt for those poor slobs.

Sunday, March 07, 2004

Robots as the next computing interface
First, some good news for those who want the new mobile/autonomous robots to be a U.S. industry and not completely overtaken by other countries. While Japan has the most industrial and service robots in use, the U.S. has become the top market for robots. According to the Robotic Industries Association, in 2003 North American manufacturing companies shelled out $877 million for (industrial) robots – up 19 percent over 2002, and the industry’s best mark since 2000. I wonder how much of the upturn in the use of simple robots is behind the near-flatline job growth in the U.S.?

Second, WowWee Toys has introduced RoboSapien, a BEAM-design toy robot. Compared to earlier toys, the RoboSapien is a true robot – it senses and reacts to its environment. Because it uses BEAM electronics developed by Mark Tilden (rather than computer programs directing motion), the RoboSapien manages to be pretty good at walking and moving without being very expensive. It’s nowhere near as good as Sony’s QRIO at dance moves, but this may not be a problem considering it costs a tenth as much. In the future, one expects lots of what correspond to simple reflexes in animals to be managed with BEAM technology – another reason I think walking robots are about to become much more common than many within the robotics community expect. As a home security system, a walker, particularly an elf-sized one, can crawl around and really inspect things better than even a small wheeled system.

Another great site belongs to the UC Berkeley robotics group, which recently showcased a very lifelike CALibot robotic fish and a working BLEEX exoskeleton. Both of these systems look like more than one-off “grad student” projects and may actually amount to something in the future. To make it happen, perhaps we need a robotic fish race paralleling the DARPA Grand Challenge, as well as an Exoskeleton Olympics for robot-augmented humans.

Interestingly, there was just a story about an exoskeleton comparable to the Berkeley system being created in Japan – but there, the exoskeleton is designed to be used by elderly people to enhance their mobility. A remarkable article entitled Aging Japan Prefers Robots to Human Nurses, with the subtitle Elderly turn to machines instead of foreign caretakers, describes the trend. This is a great example of what Marshall Brain has been talking about on his Robot Nation website – we do not want a “human touch” if a machine allows us to do it ourselves. Anyone worried about replacing human nurses with machines in a nursing home should think of this: would you rather have a human nurse wiping your butt, or a button that starts a machine that does the same? My feeling is that if you can get greater autonomy using the machine (meaning you control the butt-wipe), you will prefer it to a human. From the article linked above:

“Futuristic images of elderly Japanese going through rinse and dry cycles in rows of washing machines may evoke chills. But they also point to where the world’s most rapidly aging nation is heading.

This spring, Japanese companies plan to start marketing a “robot suit,” a motorized, battery-operated pair of pants designed to help the aged and infirm move around on their own. Then there is the Wakamaru, a mobile, 3-foot-high speaking robot equipped with two camera eyes. It is used largely by working children to keep an eye on their elderly parents at home.

These devices and others in the works will push Japanese sales of domestic robots to $14 billion in 2010 and $40 billion in 2025 from nearly $4 billion currently, according to the Japan Robot Association.”

All this is adding up to something – robots are the next “user interface” for computers. If you read the tech magazines, you occasionally come across articles describing the next user interface for computers. While these articles tout various visual interfaces, they are painfully unimaginative. In all of the proposed systems, you are still directing a dumb machine and accessing filesystems and programs. Imagine if your car worked like this. You would have to constantly “configure” it to drive down the block, it would break down randomly and illogically, and only “power users” could drive at all. This was exactly the point that the late MIT professor Michael Dertouzos made in his great book The Unfinished Revolution: Human-Centered Computers and What They Can Do For Us. Dertouzos imagined smaller, handheld devices that could understand human language well enough to “think” through requests without being explicitly controlled. While I liked the idea, I feel it won’t work. The reason is that this is asking machines to fly before they can walk.

The reason it is too much to ask is that these “smart” devices would still be working in cyberspace, unconnected to the real world. The smart devices described in the Dertouzos book process language, a high-level symbol collection. They may hear about someone’s “hair appointment,” but they are unlikely to understand it, since they don’t have any connection to the world in which hair appointments are made.

On the other hand, an environmentally sensitive, dexterous mobile robot would have a much better chance. Since its interactions are mostly with environmental data and the real world it comes from, it doesn’t have to understand “hair appointment” at a high level – it can understand the lower level of getting in a car and driving somewhere. Grounded in reality, it can have an “animal” understanding of the request, which is good enough in many cases.

My guess is that people will try to make “smart” devices the next operating system, and fail miserably. The groups that try this will come from the current Silicon Valley PC/networking world, and they will simply try to put “intelligence” into the PC. But “user friendly” has never worked in the PC paradigm, and it won’t now. This is because, even in the “smart device” world, the system creates a little artificial world that people have to enter in order to manipulate it. The device doesn’t have to understand our world. In contrast, robot makers are forced to make their devices understand the real world, even if it is at the primitive level of a robo-car following a road. With further development, these devices will be smart enough to do things without “understanding” what they are doing – in exactly the same way that a dog can perform in a movie without understanding that it is an actor. The “smart device” would try to process language and understand “actor.” The robot would sit, beg, and fetch.

So, the future “user interface” of computers will be robots. No screen, in most cases. The cyberspace model in vogue today will be rare, difficult, and strange to the new generation of kids. The idea of entering “cyberspace” to do work in virtual environments will seem like some sort of creepy religious experience, complete with zany rituals for getting anything done – what we now call “learning the software.”

Check out the Grand Challenge robots – one of those companies is the next Microsoft.

Published by pindiespace

See http://www.plyojump.com for more
