Robots That Jump

Robot Bodies Needed Before Robot Minds


Robots That Jump – Historical Jun 10 2004

Thursday, June 10, 2004

DARPA announces 2005 Grand Challenge, Slashdot whines
DARPA has just announced the 2005 Grand Challenge to develop driverless cars that can cross a challenging desert environment on their own, without human help. Check http://www.grandchallenge.org for details. There is a preliminary meeting for teams making robo-cars (the “Essential Informational Meeting”) set for August 14 in Anaheim, Calif., and the actual race will be run October 8, 2005. Having followed the teams before and after the 2004 GC, it seems very likely that we’ll see significant performance improvements. Most of the failures in the 2004 race stemmed from minor problems (e.g. a stuck cruise control, not revving the motor enough to get past a small rock) and are being addressed by the teams.

If there is a good performance, robotics will get double the bang for the buck. Not only is the prize money doubled to $2 million, but the largely negative publicity surrounding the 2004 event will serve as a contrast. “Robots do a hell of a lot better in desert race” seems a likely headline. If someone wins, or completes the course but misses the time deadline, it will be a sensation.

The 2005 GC announcement once again highlights the huge gap between “PC” (personal computer, not politically correct) and robotic thinking. Over the past 20 years we have become used to the idea that our computers are utterly dumb, and can do little more than draw pictures we can use to control them. Reacting to the more or less complete failure of ‘classic’ artificial intelligence, personal computers don’t try to think – instead they act as virtual lumps of clay (drawing programs), communication devices (digital music, smartphones), or game machines (the entire game industry). Search engines look for keywords on the web without attempting to find the meaning of the text they’re searching. “User friendly” doesn’t mean the computer knows what it is doing and helps the user – it just means the task itself is made simple.

This complete lack of intelligence has several effects. First, PC operating systems are designed around a ‘kernel’ of essential functions. This ‘kernel’ defines the operating system as essentially a disembodied brain that ‘knows’ nothing but itself. Later, the brain is connected to the environment via software ‘drivers.’ When a home computer boots, it spends some time floating in nothingness and only belatedly discovers its connection to peripheral devices.

Second, these devices do not convey sensory information in the sense that our senses do. The first connection for a Wintel computer is its keyboard. A keyboard does not provide sensory input; instead it provides a stream of discrete, pre-made symbols – the meaning of each keypress is perfectly defined. Compare that to the complexity of visual pattern recognition by a robot and you’ll see how different a PC is.

Third, even when more sense-like data is input into a PC, it is not interpreted. A scanner pulls in images, but the computer simply dumps them into a bitmap for display onscreen. Webcams dump images onto the Internet, with the same failure to process them for “meaning.” Of course, some would say that extracting “meaning” is hard. However, even simple animals extract “meaning” from their environment. The point is not that PCs are stupid, but that they ignore sensory data in favor of pre-cut symbols.

To appreciate this, consider the computer you’re reading this on. Does it know you’re there? Nope.

Fourth, the passive lump-of-clay behavior of PC operating systems leads naturally to hacking. Since they have no smarts about the world, and receive information in the form of purified symbols (sort of like telepathy for people), it is easy to do brain surgery on them and mess them up. By its nonintelligent, passive nature, a PC operating system literally invites hacks.

Finally, the closed-in world of PC programming leads people to create an artificial world inside the computer instead of connecting it to the real world. Graphic user interfaces and games both require that we in some sense ‘enter’ the world of the computer – a little logical, toy environment which, even in the case of “realistic” computer games, is millions of times simpler than reality. You can do things in these “toy” worlds that you can’t do in the real one – which in turn leads programmers and hackers to prefer the toy world to reality. Some even proclaim these toy environments to be just as real as the real world.

In contrast, robotic technology begins with sensory processing. On “boot” a robot connects its core code to peripherals. But its subsequent actions are very different. Instead of restricting itself to a diet of occasional keypresses, it samples the environment and slowly and painfully extracts symbols from it. Instead of creating a little world inside itself that we are forced to enter under its rules, a robot plays by our rules and enters our world.

This difference is enormous, and many computer experts don’t see it. However, they do feel it intuitively, which explains why many tech-geeks continue to laugh at robots.

Case in point: When DARPA announced the new 2005 Grand Challenge, a discussion ensued on that hangout of computer users (many of whose jobs are going to India), Slashdot. While a few tried to discuss the 2005 Challenge and the success/failure of the 2004 challenge, most jumped in for another sarcastic bout of gleeful robot bashing. One poster betrayed their focus on the internal toy worlds of the PC by suggesting senses weren’t necessary at all for a robot. Just program the thing to follow GPS coordinates and smash through anything it encounters. Houses, trees, who cares? This approach was actually tried by one of the 2004 teams (Golem) and didn’t place very high.

However, I suspect the Slashdot poster was moved to suggest this method because of their PC bias. Since the internal toy worlds of PCs are great (and robots sensing the real world are lame) why not just have your “robot” ignore that blasted real world and stay comfortably in the Matrix? The idea of a robot car smashing through things without noticing was very much like what happens in game worlds. In game worlds (console and online) you have fantasy powers impossible in the real world. It makes you feel powerful out of proportion to your puny real-world self. If you enjoy that sort of thing, a robot that smashes through the real world in self-righteous ignorant contempt seems cool. This is PC thinking (Linux is the future) rather than robotic thinking (machines that sense and act in the real world are the future).

Note that this doesn’t knock PC software or technology per se. One can use Linux or Windows to create a robot, though it isn’t very efficient. “Real time” hardware and software designed to sense and act on a constant stream of environmental data has been around for decades in “embedded” systems. It is a matter of focus. When the 2004 Challenge was announced, many wondered whether DARPA wanted a real robot, or just a car that could follow GPS signals. It is now clear that following GPS waypoints is not enough, and if someone wins the 2005 Challenge they will also need good sensory systems. DARPA wants car-bots, not car-PCs.

It will be interesting to watch the rise of robotic-style computing as it replaces personal computer-style computing. 20 years from now, kids will marvel that any computers existed that had nothing but a keyboard. They will be even more amazed to discover that people valued the toy worlds “inside” a computer more than the real world. It will seem like a strange 1990s religious practice…

Robots That Jump – Historical May 17, 2004

Monday, May 17, 2004

Why cellphones are hastening the arrival of robots that jump
One of the so-called ‘next big things’ that keeps coming up is wireless technology – computers and phones. Cellphones have indeed become smaller, lighter, and more capable – some Japanese systems now have 2-megapixel cameras attached. Tech pundits celebrate the arrival of 24/7 communication via this wireless technology.

But these celebrations miss the growing movement against cellphones. Several forces are currently conspiring to outlaw cellphones practically everywhere you look:

1. School – Students are increasingly being caught using cellphones to cheat, for example by photographing a test and sending the pictures to their friends. This adds to the general concern about cellphones distracting from class time.

2. Driving – The movement to ban cellphones while driving is continuing to gain steam. Most likely, in a few short years it will be illegal everywhere to use a cellphone while driving unless ‘hands free’, and even ‘hands free’ use is likely to be restricted. Think it can’t be enforced? Think again. Cameras at intersections are taking pictures of drivers all over the US. It won’t be long until a camera will take a picture of you and your license plate and auto-generate a moving-violation ticket. With most states in the US running serious budget shortfalls, this is a no-brainer.

3. Near train and plane stations – Fear of terrorist activity (the bombs in the Madrid train attack were set off by cellphone) is causing authorities to jam cellphones at airports, train stations, and anywhere else there is potential for attack. Such areas are practically everywhere.

4. Gyms and public facilities – Use of cellphone cameras to take pictures in the potty and locker rooms is likely to cause either a ban on cellphones or jamming in gyms and other public areas. You won’t be able to call someone from a restroom, for example.

5. Hacking – Wireless systems are still hack-fests and protocols like Bluetooth and Wi-Fi are like swiss cheese when it comes to security. One major hack attempt through these systems (or an IM or text-messaging based attack on wireless doggers in the UK) will stop the spread of this technology.

The end result is that cellphones are going to be restricted – right after their use became widespread. There will be strong pressure to find a way to let people keep connecting wirelessly. What can be done?

The answer is robots. Consider a cellphone user in their car. The reason you can’t use a cellphone is that it distracts you from driving. But what if you could switch on a robotic cruise control and have your car drive itself for a few minutes? Robot cars have been driving just fine since about 1998 in specialized environments like freeways. At some point, automakers will introduce systems that allow a robot to take over and let the driver use their precious cellphone. It might even be an either/or system – turn on the cellphone and the robot takes over your steering.
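
To make the either/or idea concrete, here is a minimal sketch of such an interlock as a tiny state machine. This is pure speculation on my part – the states and events are invented for illustration, not drawn from any automaker’s system:

```python
# Hypothetical cellphone/steering interlock: the phone and human steering
# are never active at the same time; the robot drives during calls.
def next_driver(current, event):
    transitions = {
        ("human", "phone_on"):  "robot",   # call starts: robot takes the wheel
        ("robot", "phone_off"): "human",   # call ends: human resumes steering
    }
    # Any unlisted event leaves control where it is.
    return transitions.get((current, event), current)

state = "human"
for event in ["phone_on", "phone_off"]:
    state = next_driver(state, event)
    print(event, "->", state)   # phone_on -> robot, phone_off -> human
```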

The perfect solution…robots that jump!

Robots That Jump – Historical May 13, 2004

Thursday, May 13, 2004

Multicores and Sensors – which will make robots that jump?

There are two ways to look at solving the problems in robotics. One way is to imagine that robots need really good processing of data. In this form, a robot has a few sensors and a huge back-end processing system to crunch the sensor data down to a particular understanding of its environment. Due to the high cost of sensors, this has been the main approach to robots since the 1960s. A typical research robot has a single sensor (often vision via a camera) and nothing else. Elegant software and fast processors develop models of the world and a plan for action. In this case, there’s room for hope in the recent announcement that Intel has abandoned increasing processor speeds and is now concentrating on ‘multicore’ processors – single chips with several CPUs on them. By 2010 Intel might be fitting dozens of Pentium IVs into a single chip, creating an ideal system for massive number-crunching.

The other approach – which I believe is more likely to lead to robots that jump – is a sensor-rich approach. In this form, a robot has a huge number of sensors. Sensors are redundant (there are many touch sensors) and sensors are of different types (vision, hearing, smell, touch, inertial, magnetic, pressure, temperature, GPS, you name it). The robot does not have a huge back-end processing system for the data, but does organize all the data into a combined map.

Perception in each kind of robot is different. In the case of few sensors, the robot creates a single “sensor world” with defined objects out of data from that sensor. Thus, a robot only using visual data would create a map of objects perceived by light. An ultrasound system might create an auditory map in the same way.

In the second kind, there is a more complex map. Instead of identifying objects within lidar-world or ultrasound-world, the system overlays all the sensor data along the primary categories of perception (according to Kant, that is) of space and time. Objects may not be well-defined in this map; however, the set of attributes (values for a particular sensor at that point) is well-defined at a particular point in space and time.
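
To make this concrete, here is a minimal sketch of what such a combined map might look like in code. This is my illustration only – the class name, grid resolution, and attribute names are invented for the example, not taken from any real system:

```python
# A shared space/time grid: every sensor, whatever its type, writes its
# reading into the cell for (x, y, t) instead of into its own sensor-world.
from collections import defaultdict

class AttributeMap:
    """Cells keyed by discretized (x, y, t); each holds named sensor attributes."""
    def __init__(self, cell_size=0.5, time_step=0.1):
        self.cell_size = cell_size      # meters per grid cell
        self.time_step = time_step      # seconds per time slice
        self.cells = defaultdict(dict)  # (ix, iy, it) -> {attribute: value}

    def _key(self, x, y, t):
        return (int(x / self.cell_size), int(y / self.cell_size),
                int(t / self.time_step))

    def record(self, x, y, t, attribute, value):
        self.cells[self._key(x, y, t)][attribute] = value

    def attributes_at(self, x, y, t):
        # No object segmentation needed: the attribute bundle is well
        # defined at each point in space and time.
        return dict(self.cells.get(self._key(x, y, t), {}))

# Lidar, vision, and a thermometer all overlay onto one world model.
m = AttributeMap()
m.record(3.2, 1.1, 0.4, "lidar_range", 3.4)
m.record(3.2, 1.1, 0.4, "rockness", 0.6)      # vision-derived shape score
m.record(3.2, 1.1, 0.4, "temperature", 41.0)
print(m.attributes_at(3.2, 1.1, 0.4))
```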

Traditionally, robot designers have been leery of systems using a lot of sensor data, since it appears to require exponentially-greater computing. In this model, defining an object in terms of multiple attributes derived from many sensors is just too hard. It is much easier (according to this line of thought) to restrict sensation and coax as much as possible out of smaller amounts of sensory data.

While there may be situations where a low-sensor robot is useful, in most cases it seems unlikely. The reason is that biological systems never take this approach. No animal – or plant (and plants do a surprising amount of computing) – has ever evolved with a tiny number of sensors and a large brain. The opposite is always true – animals with tiny brains always have comparably rich sensation. Even a bacterial cell has thousands of sensors, and hundreds of unique types. An animal with a brain the size of a thumbtack, e.g., a squid, has an eye comparable to our own and additional senses for temperature, pressure, touch, electric fields, and more. Since evolution is a random process, it would be expected to pick the simpler of two solutions – and the rich sensor/small brain model wins every time.

What does this mean for robots? As mentioned before, there may be cases where a robot can get by with limited sensation. Tasks performed in restricted, simple environments should probably emphasize processing. The extreme example of this is a virtual robot, where (unfortunately) much robot research occurs. In a virtual robot, say a game character with “artificial intelligence,” sensory input is incredibly limited. There may be a list of objects nearby, and a primitive “physics” governing motion in the environment. Since the world is simple, robots can compete based on their smarts.

In contrast, the real world – even relatively simple environments like a highway or hospital corridor – is hugely more complex. The environment varies along a huge number of parameters, and the root cause of variation is buried at the atomic level. There’s no escape from crunching huge amounts of sensory data to navigate in the real world.

However, most of this crunching is not high-level. What a sensor-rich robot needs instead is a huge number of low-level, parallel processors converting primary sensor data into a useful form. The form needed is a map of the data in space and time. This form of low-level processing might be performed by DSPs and other analog/digital chips. For example, researchers at U. Penn have created a mostly analog artificial retina which does the basic processing of images at incredible rates. Instead of dedicating a single computer to crunching visual data, you use one of these sensory chips to make things work.

At higher levels, the sensory data does require more elaborate computing, but I would maintain that the increase in power needed is linear rather than exponential. The low-level processing extracts space/time information for the particular sensory data. A little higher up, additional parallel processing extracts a few useful and elementary “features” from the data. The goal of the high-level routines is simply to overlay the mix of sensory features into a common space/time world.

Such a system can react to its virtual world model in two ways. Similar to a low-sensor system, it can extract “objects” based on sensory data and place them in the space/time model. This kind of processing might continue to use one or a few sensors. However, a second kind of processing would be to measure unusual combinations of information from each sensor in the space/time model. For example, focusing on visual appearance would tend to result in a 3D model of rocks near the robot. However, this would produce numerous false positives, as is seen in current-day robots. Attribute analysis using many sensor types could catch these problems. For example, a medium “rockness” value in the 3D map would mean something unique if it was paired with high temperature or rapid movement. In a shape-detecting map a bush might appear similar to a rock – but pair that shape with color and internal versus translational movement, and one could find “tree-like” regions without having to perfectly extract the shape of the tree object. I suspect that with a huge number of unconventional sensors (e.g. a Theremin-like electrical sense) most objects could be recognized by attribute superposition.

One problem with this is determining what unique combination of attributes signals the presence of a particular object. It might be very hard to figure this out a priori, so training would be required. One can imagine taking the robot out for a stroll in the park, and “telling” it when it was near a particular object. A neural-net type system might be enough to create a high-level feature detector using this method. This contrasts with single-sensor programming, where one might try to figure out a priori sensory data for particular objects and hard-code them in.
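
As a sketch of how that training might work, here is a single perceptron – about the simplest stand-in for a “neural-net type system” – learning which attribute combinations mean “rock” from labeled strolls. The attribute names and example values are invented for illustration:

```python
# Learn an object detector from labeled attribute bundles instead of
# hand-coding sensor rules a priori.
ATTRS = ["rockness", "temperature", "motion", "hardness"]

def predict(weights, bias, sample):
    s = bias + sum(weights[a] * sample.get(a, 0.0) for a in ATTRS)
    return 1 if s > 0 else 0

def train(samples, labels, epochs=50, lr=0.1):
    weights, bias = {a: 0.0 for a in ATTRS}, 0.0
    for _ in range(epochs):
        for sample, label in zip(samples, labels):
            err = label - predict(weights, bias, sample)   # perceptron rule
            for a in ATTRS:
                weights[a] += lr * err * sample.get(a, 0.0)
            bias += lr * err
    return weights, bias

# "Strolls in the park": the trainer labels each attribute bundle 1 (rock) or 0.
strolls = [
    ({"rockness": 0.9,  "temperature": 0.2, "motion": 0.0, "hardness": 0.9}, 1),  # rock
    ({"rockness": 0.6,  "temperature": 0.2, "motion": 0.3, "hardness": 0.1}, 0),  # bush
    ({"rockness": 0.8,  "temperature": 0.9, "motion": 0.8, "hardness": 0.5}, 0),  # animal
    ({"rockness": 0.95, "temperature": 0.1, "motion": 0.0, "hardness": 0.8}, 1),  # rock
]
w, b = train([s for s, _ in strolls], [l for _, l in strolls])
print(predict(w, b, {"rockness": 0.85, "temperature": 0.15,
                     "motion": 0.0, "hardness": 0.7}))     # expect 1: rock
```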

To date, the company that seems to be thinking this way most closely is SEEGRID, a company co-founded by Hans Moravec to commercialize long-standing research. SEEGRID software is supposed to allow information from multiple sensors to be fused into a single space-time map. This map in turn can be used to reliably control navigation. At present, SEEGRID software is just hitting the range in which several commercial PCs can run it in near real-time on a robot. This is too big for hobby robots or humanoids, but fine for that other class of robots that jump – cars. A robo-truck using hundreds of sensors, dozens of DSP processors, and several high-end PC systems might be enough to try attribute-based perception.

Robots That Jump – Historical May 3-5, 2004

Wednesday, May 05, 2004

Robots and the unbearable blindness of the computer industry
A few interesting articles, and then on to the continued, almost comical blindness of the computer/internet industry regarding robots.

First, from Government Computer News, a comment that DOD is expanding its investment in robotics:

http://gcn.com/vol1_no1/daily-updates/25774-1.html

This article describes recent acquisitions of UGVs (Unmanned Ground Vehicles) by DOD. At present, these UGVs, like pilotless aircraft (e.g. the Predator), are remote-controlled and have little autonomy. But with steady progress engendered by the DARPA Grand Challenge and the International Robotic Racing Federation this will change over time. Current UGVs are created by companies with sweetheart deals with the military – including:

“The $18 million procurement will buy Remotec’s mini-Andros II; the Packbot from iRobot Corp. of Burlington, Mass.; the Vanguard MK1 from EOD Performance Inc. of Ottawa; the Talon from Foster-Miller Inc. of Boston; and the Mesa Associates Tactical Integrated Light-Force Deployment Assembly (Matilda) from Mesa Associates Inc. of Madison, Ala.”

My guess is that these groups will be swiftly eclipsed by the numerous startups creating driverless cars for the Grand Challenge. After all, despite the supposed failure of robots in the 2004 race, several beat any previous autonomous UGV effort by these companies. One, Digital Auto Drive, did it on a shoestring $50,000 budget. We’ll probably witness a grassroots robot revolution comparable to the PC revolution of the 1980s, putting unknown startups (Apple, Microsoft) ahead of old-school companies (IBM, DEC).

Which brings us to Microsoft. The company is turning 30, and like all those Gen-Xers hitting the big 3-0 these days, Microsoft is finding that it no longer represents cutting-edge youthful thinking. A major article in Time magazine spells it out.

Is Microsoft a Slowpoke?

On the plus side, Microsoft has huge amounts of money – over $50 billion in the bank – and it saw revenues grow 17% last year. On the minus side, Microsoft has delayed its next-gen operating system Longhorn (again) until 2006 and has said it will be minus many of the “cool” features originally promised. Despite this, computers will need to be far more powerful to run it. According to Microsoft Watch, in an article entitled “Longhorn to steal limelight at WinHEC,” you’ll need the following:

“…the ‘average’ Longhorn PC (should) feature a dual-core CPU running at 4 to 6GHz; a minimum of 2 gigs of RAM; up to a terabyte of storage; a 1 Gbit, built-in, Ethernet-wired port and an 802.11g wireless link; and a graphics processor that runs three times faster than those on the market today.”

So how come? What exactly will Longhorn do better? I suspect it still won’t know I’m there when I sit down at my PC, which even a simple robotic device in a bathroom urinal can do.

However, one thing is sure: Microsoft isn’t getting the future. In 1995, Microsoft managed to reverse course and quickly adapt to the Internet, after being blindsided by it while absorbed in work on Windows 95. In doing so it avoided being swept into the dustbin of history by the Internet. But it doesn’t look so good for the future of robots that jump. Microsoft is still chasing the last revolution:

1. Microsoft is expanding into console gaming with its Xbox selling at a loss. Gaming is growing, but it is not a startup, next-gen industry. Game titles are not “new” anymore, and typical titles have a “III” or “IV” after them. The cost of creating game titles is rising and the number of game companies is dropping. This sounds like a maturing industry unlikely to show growth in 2010.

2. Cellphone technology – Microsoft has tried several times to create cellphones using a version of Windows (just the brand, really) and has not become a major player. The Time article implies this is due to Microsoft software being overkill, but I doubt that is the reason. The real reason is that Microsoft thinks of PCs when it sees cellphones, whereas the real purpose of cellphones is communication. When I use a cellphone I’m not “watching” it or running “productivity software” – I’m trying to communicate. Microsoft’s instincts are just wrong in this area. And by the way, cellphones are another maturing industry, not the exciting startup they were in the 1990s.

3. SPOT – this is one-way digital information sent on FM signals to watches and refrigerator magnets – yet another incarnation of the “push” technology which flopped in the 1990s. For some strange reason Microsoft thinks that we want to reduce our interactivity and get passive one-way communication. In this area the company is making the same mistake as the Hollywood idiots who think we want to watch music videos on our cellphones. Dumb.

The unifying principle of these investments in the “future” by Microsoft is that they are driving by the rear-view mirror – chasing the last revolution, not the next one. Cellphones, PCs, and gaming were all developed in the 1980-2000 timeframe. They are growth industries, but growth is slowing as they reach saturation, as PCs and the Internet have.

And increased speed doesn’t do much for these maturing industries that Microsoft is running in reverse to “embrace”. Double the speed of a game console and the game characters get slightly more realistic. Who cares? Double the speed of your PC and you’ll be able to run Microsoft Office under Longhorn. It’s difficult to see the change at all. Double the speed of your cellphone and you get – well, nothing at all except a shorter battery life. It already works fine.

To summarize the effect of a 6Ghz computer:

  • Gaming – small differences
  • PCs – none
  • Internet – none
  • Cellphones – none

In contrast, robotics is about to take off, and it can use the speed. For a robot developer, a supercharged computer like that needed for Longhorn is really useful. Double the speed of the computer, and the robot gets twice as good. An Aibo robot dog running at 5GHz would be able to process visual information much better than the current versions, and make more intelligent decisions when playing robotic soccer. A robot car using Longhorn-capable PC hardware would be able to shift from “2 1/2 D” to full 3D scene processing. This would have a gigantic impact on the capabilities of mobile robots.

Anyone remember the 1980s? This was the era when each increase in computer speed brought radically new software to the consumer. When the Mac doubled its speed, it went from black and white to color. When chips got faster, digital phones got practical. Each time a game console got faster there was a huge jump in the games. Doubling computer speed in the early 1990s allowed people to browse the web efficiently. Today, there’s little effect from comparable increases in hardware.

So, I’m not going to bash Microsoft for the usual reasons. But I do think they are becoming about as interesting as the power company. After all, in the 1920s tech boom, power companies were the dotcom darlings of their day. Today, nobody thinks twice about a wall socket. We all use it, but ignore its wonders. The same will happen to Microsoft, unless it wakes up to robots as the next tech boom.

By the way, this critique extends to the whole PC industry. Linux is not the future the way some imagine – replacing Windows with Linux is like changing the wall socket from 115 to 120 volts. It may have positive effects, but it is same old, same old… Strange how the tech pundits don’t even see this. Strange how the “top 10” lists of tech trends still don’t mention robots, despite commercial products like the Roomba and a rapidly expanding hobby community. Check out Servo – the Byte magazine of the robotics revolution. It’s coming, and it is coming fast. A few companies (e.g. VIA) know this. Even Microsoft will likely send a few reps to the upcoming Robonexus conference late in 2004. But it will be too little, too late. This guy quoted from Microsoft will be wrong, quite wrong…

“We have a treasure chest of technology that allows us to be very agile,” says Rick Rashid, Microsoft’s senior vice president for research. “If the world changes, we can change with it.”

But so what – Microsoft is helping robotics by forcing hardware advances! I’ll just dump Longhorn and use that 6GHz computer to make my car drive itself home.

Monday, May 03, 2004

Robots rise in Osaka while science falls in the U.S.
An interesting pair of articles today illustrates why the Robots That Jump will come from Asia, in particular Japan. The first of these articles shows a steady decline of the U.S. in the sciences relative to the rest of the world.

U.S. losing its dominance in the sciences (registration required)

This article has a bunch of excellent charts showing a serious decline in U.S. science. Around 1995, the number of physics articles published in Western Europe and the world began to exceed the number published in the U.S., and the trend has continued, resulting in about a 30% drop in the relative U.S. share of physics publishing since 1990. Another chart shows that the U.S. share of patents has declined from about 60% in 1980 to 50% in 2003. Patents fell in Germany as well, while there was a major rise in patents from Japan and other Asian countries, especially Taiwan and South Korea. More bad news includes the U.S. moving to number 3, behind Europe and Asia, in granting engineering degrees, and a declining number of doctoral candidates from outside the U.S. electing to stay in the U.S. after graduation.

A telling quote:

“We are in a new world, and it’s increasingly going to be dominated by countries other than the United States,” Denis Simon, dean of management and technology at the Rensselaer Polytechnic Institute, recently said at a scientific meeting in Washington.

Another one:

“It’s unbelievable,” Diana Hicks, chairwoman of the school of public policy at the Georgia Institute of Technology, said of Asia’s growth in science and technical innovation. “It’s amazing to see these output numbers of papers and patents going up so fast.”

Soooo…what are the scientists and engineers in Asia and Europe doing? One answer is robotics. Here’s an article about the rise of Osaka, Japan as the “robot capital” of that country – a sort of Robo-Valley.

Osaka Emerges as Japan’s Robotic Hub

This article notes that Osaka is actively supporting the migration and/or startup of robotics related companies, and that 154 of these companies have robot-related patents. This comes from over 20,000 small/medium-sized businesses in the area which specialize in manufacturing.

Some telling commentary from the article:

“OSAKA – Mr Yohei Akazawa’s 25-man company turns out precision parts for Airbuses and rockets, but his latest passion is making robots, which he believes could be the mainstay of his business in future.

Osaka is home to not only many world-famous Japanese electronics companies but also about 20,000 small and medium-sized enterprises (SMEs) like Mr Akazawa’s that are all skilled in one or more aspect of manufacturing. Among them, 154 can boast of robot-related patents.

‘We ourselves are striving to become a company that can make any kind of robot that is put to us,’ said Mr Akazawa, whose firm is often asked to produce prototypes for bigger companies.

He is also in close touch with researchers at Osaka University, the city’s top tertiary institution and a hotbed of study into next-generation robot technology. It was recently ranked No. 1 in the nation in terms of engineering research and development by the leading Nihon Keizai Shimbun business daily.”

In other words, Osaka is putting together thousands of manufacturers with world-class university support to make mass-produced consumer and service robots a reality. Nothing remotely like this is going on in the U.S., where science is on the decline and the preferred careers are in areas like finance. The ‘high-tech’ industry in the U.S. continues to chase the last 1990s boom, promising faster computers, 24/7 computing, better Internet, and so on. In the meantime, robotics in the U.S. – though clearly growing – is not a priority.

What’s the consequence? The U.S. is not going to lead the robotic revolution. Even with money coming in from the military, most of the technology is going to come from offshore. And around 2012 we’ll see real robots walking off the container ships in Long Beach, CA, even as the dopes in Hollywood are using computer animation to create some lame virtual robots for “I, Robot VI: the Grinding”.

There is a bit of hope in two places in the U.S. The first is in robots that the public will accept. Due to Hollywood scare-mongering by idiot screenwriters (who didn’t pay attention in their science classes) the country will never accept developing humanoid robots (we will import them, however). But people in the U.S. don’t have the same entertainment industry-induced reaction to robot cars. For this reason I feel that the DARPA Grand Challenge (where robot cars that jump compete for a $2 million prize in 2005) is more than a military thing. Instead, I imagine that driverless cars could be the point of entry of robots into the U.S. Due to the DARPA challenge, there are now over 100 small groups working on robot cars – the kind of engineering innovation the U.S. is supposedly famous for. In a decade, we may be seeing driverless cars racing in NASCAR and acting as “designated drivers” ferrying drunken oafs home from their parties. See my article on The future of robotic cars here.

The second place for hope is the new, “Millennial” generation (born after 1982) that is just starting to enter college. Unlike the earlier GenX/Y cohorts, Millennials are being exposed to robots in school via programs like the FIRST robotics competition. They are being groomed for real-world science instead of the virtual-world cyberspace that was typical of the 1980s and 1990s. We could be training a new generation of innovative real-world engineers to replace the previous two generations’ focus on cyberspace fantasy.

That being said, I’m not very optimistic about the long-term prospects in the U.S. During the next several years the country is going to be preoccupied with other issues ranging from security to the housing bubble and will likely ignore robots except in B movies. Assuming we pull out of the mess by the end of the decade, we’ll arrive just in time to witness the robot revolution, courtesy of Asia and Europe.

Robots That Jump – Historical Apr 27, 2004

Tuesday, April 27, 2004

The guy making I, Robot can’t read so good
There is a video “featurette” on the website for the movie “I, Robot” with director Alex Proyas. In the video he describes the movie “I, Robot” as a “documentary of the future.” Certainly, some aspects of the movie are likely to show what the world will be like in 30+ years – robots will be everywhere. But this is an accident – it is clear that this movie is just about as wrongheaded as can be.

Here’s the key. During the interview, Proyas indicates that he read the original “I, Robot” book by Asimov. He then goes on to describe what the I, Robot stories are about, and dumps a howler that left me speechless.

According to Proyas, the Asimov stories are about these robots with “absolute, unbreakable laws” – but the “point” of the stories is that the “robots always manage to break them.”

Frankly, I don’t think this bozo (or anyone else on the movie) ever read the books. Proyas’ description of “I, Robot” is exactly opposite what the I, Robot stories were about. If you don’t believe me, read them. Hopefully, you can read better than Proyas.

The point of the Asimov books was to show that the robots were following the dictates of the three laws perfectly even when it initially appeared they were not. At no time anywhere in any of these books does a robot ever “break” or “circumvent” the laws – instead they follow them to the letter. The detective-story interest of “I, Robot” is in figuring out how a given robot’s strange behavior can always be deduced from the Three Laws.

Sorry, Proyas, the robots never break or circumvent the laws in the classic series. There is one uppity robot described in the story “Little Lost Robot” who has a weakened version of the Second Law. But even here, the human heroes find the robot by applying the Three Laws to predict its behavior. In later stories written after “I, Robot” a few robots get around the First Law – but only by creating an even more powerful Zeroth Law that forces them to guard all of humanity.

The reason Asimov took this approach was specifically to counteract the exact thinking behind “mad robot” stories like the forthcoming “I, Robot” movie. When Asimov began writing the “I, Robot” stories in the 1940s, he wanted to explore what robots would really be like if created. He assumed they would be machines obeying their programming, with behavior predictable from that programming. They would not be driven by human interests. They would not “circumvent” their programming. In creating “I, Robot” Asimov was fighting against the legions of pseudo-robots churned out by entertainment media in the 1930s, whose interests matched those of criminal human males: megalomania, sexual appetite, and a desire for violence. These pseudo-robots were just “bad boys” in metal suits, a sort of gangster in a sardine can.

Asimov is currently spinning in his grave at high speed because of this dumb-ass “I, Robot” movie. This movie is a mockery of what Asimov tried to accomplish with the “I, Robot” stories. Asimov wanted to show what robots might really work like. Instead, his anti-“mad robot” stories have been co-opted into a “mad robot” story. Sigh…

I should feel sorry for Proyas. Either he can’t read (unlikely), can’t comprehend what he is reading (unlikely), or was told by the publicity people to jack up the “mad robot” angle even if it violates the letter and spirit of “I, Robot” (most likely). I liked his other films (e.g. “Dark City” and “The Crow”) but this demolition of Asimov’s work is going to suck. Of course, the Hollywood distributors, publicists, agents, and other peabrains with their lips spraying their cellphones with saliva don’t have the slightest idea there is any kind of robot other than a mad robot. This is because robots have been fantasy for 100 years and Hollywood implicitly feels that it somehow “owns” the concept of robots. And the only Hollywood concept for a robot is a Frankenstein monster.

Contrast this drivel with the behavior of real robots, e.g., the Sony QRIO conducting a symphony orchestra, industrial robots making cars, or a robo-lawnmower. It is laughable.

One final thing: I note that the “I, Robot” movie trailer displays some of the worst aspects of computer animation (routine violation of the laws of physics in a virtual environment). This is supposed to be a “documentary of the future.” But no robot will be able to jump hundreds of feet and land in a concrete-shattering splatter – any more than robots will be able to casually knock each other through building walls a la Terminator 3. No robot will be able to jump and hang in the air for dramatic effect – the real world has gravity. These false images come from the “no rules” world of animation, not the real-world potential of robots.

You know, the “I, Robot” movie thus far looks oh so 1990s – jack up reality via “Matrix”-style special effects and “sledgehammer” the audience into submission. Throw in a lot of dark scenes with edgy Goth leather to appeal to Gen-X. Tell them there’s a conspiracy. Indicate that robots are something a nameless “they” are creating to force upon us in a dark future.

But we are in the 2000s now, and robots are becoming real at the same time that audiences are increasingly bored with no-effort, X-treme computer animation. Gen-X is giving way to Millennials, a puzzle-solving Harry Potter generation. As the tweens and teens of today grow up making robots in school via their participation in real robotics contests like the FIRST robotics competition, they will make a very different future. They have a “reality check” and see robots as something they create, rather than something the nameless “they” impose on them. The dark meanderings of “I, Robot” will seem like something the older generation was suckered with. They will reject this X-Files stuff as something only old people are dumb enough to believe.

In 35 years real robots will be everywhere, last-century fantasy will be out, and the “I, Robot” movie will seem like it was made a decade too late. Why worry about cool graphics when you can work with the real thing?

Possibly, some people in the coming robotic age will even be motivated to go back and read the original “I, Robot” book – and see how Asimov was thinking much closer to their mindset – instead of the mad robots churned out by “rebellious” Baby Boomers in the movie industry. Hopefully, somewhere, Asimov will be able to see his vision restored…

Robots That Jump – Historical Apr 20, 2004

Tuesday, April 20, 2004

The convergence patterns for robotics
During the dotcom hype of the 1990s, a frequently touted future involved “convergence” – the integration of radio, television, telephone and the Internet into one meta-network. As it stands in 2004, this integration is proceeding more slowly than expected. Cellphones using 3G technology (so we can supposedly watch music videos beamed to our phones) have stalled, and Internet traffic in 2004 has dropped below 2003 levels, a remarkable (and unremarked) trend. Home entertainment centers have proliferated, but the percentage that double as personal computers is extremely small. It appears that cyberspace convergence will happen less rapidly – and is less interesting – than once thought.

One of the reasons that cyberspace convergence hasn’t happened is that it is more useful to the producer than the consumer. Mixing a home theater with a PC has marginal value for consumers, who tend to use their PCs for work and gaming and their home theaters for movies and television. But linking these two technologies makes the entertainment industry slobber, since “networked entertainment” would allow Digital Rights Management extending to pay-per-view DVDs, spyware monitoring home users’ activities, and the like. The value proposition for convergence helps business, and restricts the consumer.

Ditto with 3G. Most people use cellphones for communication. When they send pictures and/or video they are typically exchanging pictures of themselves or their friends. In contrast, 3G’s so-called “promise” is to beam one-way video from content providers to consumers and charge them for it. High-bandwidth 3Gs would also allow higher charges on phone use, since more bits are being sent. All this for the dubious pleasure of watching MTV during a commute or at the beach (yawn). Again, convergence has a very small value for the consumer, but a big one for the entertainment company and phone company.

Finally, consider the repeated, expensive, and pathetic efforts to mix TV with a computer in the so-called “interactive television” (ITV) model. In theory, interactive television allows you to have a super-sized remote with additional buttons like “buy” and “charge credit card.” When a commercial runs on-screen, you can press the “buy” button. Also, one could have movies on demand. However, with a good TiVo-style system and recorder I get video on demand via intelligent time-shifting. And as for shopping – there are shopping networks that do quite fine without that extra “buy” button on the remote. The ITV industry has seen repeated failures wasting billions of dollars, while TiVo-style systems are taking off. Why? ITV was “convergence” that benefited business at marginal value to the consumer. TiVo-style systems maximize benefit to consumers – we don’t want to push the “buy” button during a commercial, we want to erase the commercial!

Is it really a wonder that the “convergence” touted by the cyber-pundits hasn’t happened?

The real era of “convergence” for cyberspace was the close of the 1970s. In that era, microprocessors and a few other technologies (small floppy disk drive controllers) led to the birth of the personal computer. Mixing digital and analog technology resulted in camcorders and VCRs. Mixing scanners and printers resulted in low-cost faxes. These were examples of technology convergence that benefited the consumer and spawned a revolution. In the 1990s, the real convergence was graphic computer displays and the Internet. The Internet had been around for a while (along with text-only BBS systems) but it didn’t take off until Mosaic, the web browser created by Marc Andreessen, put text and color pictures downloaded from cyberspace into the same window.

So, there are two types of technological convergence. One fuses technologies to create new products which increase consumer power and choice. The other maximizes business profit by increasing control of consumers. Which one do you think tends to happen?

Robotics is about to undergo the first, “good” convergence. This convergence is not the wishful thinking of always-on, always-paying cyberspace, but a real convergence resulting in the creation of new products. Here are several of the fields/technologies that are coming together to create the robot revolution:

1. Industrial design – new true 3D design CAD/CAM systems with “physics” modeling are making it possible to rapidly design robot bodies. An example:

“UGS PLM Solutions, the product lifecycle management (PLM) subsidiary of EDS (NYSE: EDS), today announced that SANYO Electric Co., Ltd., Japan’s leading home appliance company, used UGS PLM Solutions’ NX portfolio to reduce development cycle of the latest version of the “Banryu” (guard-dragon) utility robot by 50 percent.”

In a few years it will be simple to model out obvious mistakes in complex, multiple degree-of-freedom robots.

2. Behavioral design – This area is just beginning. Currently, most robots are programmed by hard-coding in assembly or C. However, new software appearing at the hobby and research level allows users to define robot behaviors and then watch a virtual robot execute those behaviors. Not a substitute for the real world, but like 3D CAD/CAM, it allows obvious mistakes in creating robot “personality” to be ironed out before actually trying the robot. Here are some examples:

Mobotsoft – http://www.mobotsoft.com – Behavior-authoring environment for the Khepera, Hemisson and Koala robots. The system creates a graphical environment which writes (“scripts”) the BASIC commands needed for these robots’ controllers.

Cyberbotics – http://www.cyberbotics.com – Webots provides a more sophisticated 3D modeling system for robot behavior that creates simulated robots. Quite a few robot bodies (e.g. the Aibo robot dog) are in the software already, and new ones can be defined. The system also includes a large number of robot sensors modeled for the virtual environment. Real-world physics is simulated using the Open Dynamics Engine (ODE). Multi-agent systems (e.g. a robot soccer team) can be simulated. Once tested, a completed C or Java robot program can be directly downloaded to the real robot. This system still requires programming (rather than true authoring) of robot behaviors, but the testing happens at the authoring level.

Evolution Robotics ERSP 3.0 – http://www.evolution.com/product/oem/software/ – Software aimed at the consumer robot market, it allows Evolution’s excellent vision recognition and VSLAM navigation technology to be quickly implemented in any consumer device. The software features high-level object recognition (around 85%), tele-operation, and autonomous navigation which solves the “kidnapping” problem (e.g. a kid picks up their robot toy and puts it in a different room). In addition to low-level programming APIs the system includes a high-level graphic authoring interface for recognition and authoring. A remarkable example of this is the “behavior composer,” which provides a “drag and drop” interface for creating behavior networks. Using this technology, a non-programmer could in theory create and integrate robot behaviors – truly a breakthrough product! (For a rough idea of what a behavior network boils down to, see the sketch after this list.)

Here’s a great screenshot of the Evolution Robotics behavior-authoring interface:
(image no longer available)

3. Animatronics – Traditionally, animatronics involved “puppeting” an elaborate robot body via human interaction. The contribution to robotics is a group of people used to building “bodies” who want to do more. Currently, the biggest thing in animatronics is “untethered” systems – in other words, those walking and moving on their own. These are essentially remote-controlled mobile robots.

4. Sensors – In the past, sensors were expensive, and robots typically had only a few, which greatly limited their ability to function in the real world. Now, sensors for motion, force, temperature, pressure, light, sound, radiation, and more are being produced as MEMS (Micro-Electro-Mechanical Systems), largely for the auto industry. These low-cost sensor-on-a-chip systems allow recent mobile robots like the Banryu and the Aibo to have nearly 100 sensors. Another 10-fold increase and these systems will begin to match the sensor count of simple insects. Vision is also greatly improved – cheap, solid-state webcams are now commonplace. This allows creating robots with multiple low-res eyes looking in every direction rather than one elaborate TV camera.

5. Fuel cells – For mobile robots to be useful, they have to have enough power, and batteries aren’t enough. Work on creating small-scale fuel cells will directly benefit mobile robots. Of course, some robots will puff like the Tin Man and others will have to use the little robot’s room to vent their waste water, but they’ll be able to do real work.

6. Small single-board computers – Desktop computers are bulky and power-hungry, and optimized for things not useful for robots. However, companies like VIA are developing single-board computers with speeds up to 1 GHz, allowing the construction of “PC-bots” using desktop software, Bluetooth, Wi-Fi, TCP/IP and more. This immediately allows mobile robots to do anything a cyberspace computer can do. While not ideal, it allows hobbyists to break out of the BASIC Stamp “trap” they’ve been in for two decades and create more powerful mobile robots.

7. Three-dimensional printing – Printers which can create a 3D part in plastic have dropped into the $30,000 range. The secret is creating ink-like sprays that use metal, plastic, or semiconductor particles that are later fused in an annealing step. This allows a mobile robot to be designed and have its parts created simply by pushing the “print” button. Again, very rapid prototyping of robot bodies is supported. In just a few years it may be possible to create articulated parts with these printers, and incorporate different materials into the 3D printed product, even metal.

8. New chips – New Digital Signal Processing (DSP) and digital/analog chips are allowing rapid pre-processing of raw sensory data. Instead of devoting expensive time in the robot’s “brain” to low-level visual and auditory processing, DSPs allow this to be done in massively parallel, sometimes analog circuits almost instantaneously. This approach was used by Digital Auto Drive (DAD) to create a snap-on robot module for the truck they entered in the 2004 DARPA Grand Challenge. In contrast to the traditional software-and-CPU approaches (many of which didn’t get out of the gate), DAD managed a very respectable performance at a fraction of the cost. More chips are appearing, allowing massively parallel simulations of neural net and genetic algorithm programs, certain to boost robot performance in the sensory and motor control areas.
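
As promised under item 2, here is a rough sketch of what a behavior network amounts to once the drag-and-drop authoring is stripped away. This is my own toy illustration – the class, behavior names, and priority scheme are invented, and none of it is Evolution Robotics’ actual ERSP API:

```python
# A toy behavior network: each behavior has a trigger, an action, and a
# priority; the highest-priority triggered behavior wins the actuators.
class Behavior:
    def __init__(self, name, trigger, action, priority):
        self.name, self.trigger = name, trigger
        self.action, self.priority = action, priority

def run_network(behaviors, sensors):
    active = [b for b in behaviors if b.trigger(sensors)]
    if not active:
        return ("idle", None)
    winner = max(active, key=lambda b: b.priority)
    return (winner.name, winner.action(sensors))

# "Composing" behaviors is just adding nodes to the list.
network = [
    Behavior("avoid",  lambda s: s.get("range", 9.9) < 0.3, lambda s: "turn_away", 3),
    Behavior("follow", lambda s: s.get("person_seen"),      lambda s: "approach",  2),
    Behavior("wander", lambda s: True,                      lambda s: "forward",   1),
]
print(run_network(network, {"range": 1.2, "person_seen": True}))  # ('follow', 'approach')
```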

All of these technologies allow the creation of “startup” robot companies quite literally in garage-level environments. Imagine an office combining authoring software, 3D CAD/CAM, and 3D printers. A small group can quickly devise and test mobile robots at a fraction of the cost of big military and government contractors. They can substitute pre-written software and parallel DSP chips for massive programming projects. The key will lie in figuring out the type of robotic products consumers want. But these ideas will flow from true robotic convergence rather than the profit-margin fantasy characteristic of today’s cyberspace convergence.

Robots That Jump – Historical Apr 13, 2004

Tuesday, April 13, 2004

The real vs. the unreal
Two interesting movies of humanoid robots are available on the web today. The first is of Sony’s QRIO robot conducting a symphony orchestra to Beethoven’s 5th Symphony, 1st movement. The movement and fluidity of the robot are amazing. Most importantly, it is a real robot. Currently, you can see the video via RealVideo at this link.

The second video shows a humanoid police robot “on patrol” in a dangerous-looking Third-World city, getting into gunfights. This video may be viewed at Tetra Vaal. The movement and fluidity of this robot are amazing. Most importantly, it is a fake robot.

The distinction is important. One robot is real and one is not. What is the real one doing? Conducting an orchestra – something that was never even dreamed of in 50 years of Hollywood robot films. Reality trumps fantasy. What is the fake one doing? Being a scary Frankenstein robot – like every other robot fantasy of the last 50 years.

Which of these do I believe? Amazingly, many people will take the fantasy film as “proof” that robots are dangerous, and not be impressed by the movie of the real Sony QRIO. What an incredible up-ending of logic.

In all probability, the real Sony QRIO tells us something about the future. We can look at it and predict that in a few years, little gnomelike robots will be running around entertaining us.

In contrast, the fake robot tells us nothing. For one thing, as pointed out by Marshall Brain’s weblog, a real security robot would not have a human head with two eyes – instead it would have dozens of eyes looking in every direction at once. The fake robot in the movie above moves its head to show human intent, and tells us nothing more than any other movie character. It also shows incredible intelligence, and can engage in gunfights and be repaired quickly. Real robots are very fragile, and this is unlikely to change quickly. In fact, the fake robot has delicate neck struts. Come on! This is artistic license, not reality. It would be easy to knock its head off. Also, during the movie, someone talks about how the robot never gets tired. At present, slowly walking humanoids have power supplies lasting less than an hour. How does the phony robot get its power? This, of course, is something they’ll just “figure out.” I suspect that a robot that could jump tirelessly for longer stretches than a human would need nuclear power to work. Is this likely? Finally, the robot is easily repaired, in an environment that looks like a second-rate auto body shop, after being shot. Does this mean the robot is simple?

Not only is the robot fake, it is utterly unrealistic at almost every level.

Consider the phony robot’s mission as a cop. The reason the “Third World” environment looks scary is that it is very difficult for a human-level intelligence to handle such situations. A robot would have to be at least as smart as a human, if not smarter, to move in these environments. This is a loooonnnnnggggg ways off.

The real reason the fake robot exists is to stir up our emotions. The robot is just a costumed stand-in for a bad cop, one that polices a group of downtrodden people in the service of an evil corporation or empire. One could have a caveman, evil parrot, vampire, or demon recruited from the nether depths replacing the phony robot in this movie and get the same effect. The robot is nothing but a shell for a (pseudo) political statement.

(yawn).

In fact, the politics is just secondary to the real reason for making a phony robot video – to show off the special effects of a company called The Embassy Visual Effects. They are just showing how good their virtual world generation abilities are, and how nicely they can blend a non-existent fantasy robot into video footage. It (I repeat again) has nothing to do with real robots.

I’m certain that the design staff at The Embassy doesn’t have a clue about the capabilities and limitations of real robots, much less any connection to the emerging robotics revolution. But they do have 3D Studio Max and Maya software. Sadly enough, their Flash web programming (with the “mystery meat” navigation characteristic of Web Pages That Suck) leaves something to be desired…

“Ooooooo…But couldn’t robots do this someday?”

I suppose so – but I look to real robots like the QRIO conducting a symphony orchestra to make my predictions, rather than a fantasy film designed to win design jobs for a graphics company. I would ask the people at Sony who developed the QRIO, not the ad agency people at The Embassy, about the robotic future. One has done something real, the other has re-hashed a long-standing fantasy in cyberspace. One has demonstrated a robot that jumps, the other doesn’t even know what a robot is…

Robots That Jump – Historical Apr 4-8, 2004

Thursday, April 08, 2004

Sensors, sensors
I was thinking about the fate of CMU’s Sandstorm in the 2004 DARPA Grand Challenge. After hanging up on an embankment, the robot car continued to spin its wheels, causing the rubber to burn off in a cloud of smoke and flame.

Hmmmm…for all the speed and power of that system (and it looked mostly competent while driving) it couldn’t tell it was stuck. To me, this indicates that the robot was paying too much attention to the environment without checking its own internal state. The thing at issue here is sensors. Any biologist will tell you that simple animals don’t have elaborate brains – but they have a huge load of sensors. A single bacterial cell has thousands of protein-based sensors – measuring temperature, pressure, salinity, light, and chemical composition. This complex sensor array is coupled to a few simple outputs, like whether to divide, swim, or tumble in space. This pattern is repeated in more complex organisms – very rich sensor nets coupled with simple processes at the decision-making and motor-output levels.

How would this work in a robo-car like Sandstorm? Well, I have a smoke detector above my computer which is relatively cheap and works fine. If Sandstorm had a few of these in its wheel wells, it would have sensed the problem with the spinning tire and shut off the engine.
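
To make that concrete, here is a minimal sketch of such a reflex. The read_smoke_ppm() and cut_engine() hooks are hypothetical stand-ins for whatever hardware interface the vehicle actually exposes, and the threshold is a guess that would need real-world tuning:

```python
SMOKE_THRESHOLD_PPM = 300  # assumed trip point; needs tuning on real hardware

def wheelwell_reflex(read_smoke_ppm, cut_engine):
    """If any wheel well fills with smoke, kill the engine.

    This runs as a dumb, low-level loop -- the high-level driving
    software never has to reason about burning rubber at all.
    """
    for wheel in ("front_left", "front_right", "rear_left", "rear_right"):
        if read_smoke_ppm(wheel) > SMOKE_THRESHOLD_PPM:
            cut_engine()
            return True  # reflex fired
    return False  # nothing wrong; the "brain" keeps driving
```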

I suspect that a lot of the “stupid” behavior of mobile robots is simply due to their extremely limited sensor set. An insect has tens of thousands of sensors on its body recording a variety of stimuli (light, chemical, temperature, pressure). The very advanced QRIO robot that Sony produces probably has about 100. As the number of sensors increases we’ll see a version of “Moore’s Law” for robots – double the sensors, double the performance and apparent “intelligence.”

One could argue that this can be done in software. For example, if you program a routine that detects the tires spinning without forward motion of the robot car, you have de facto “sensed” a tire spinning against the ground. Fair enough. But this approach means anticipating a huge number of environmental states in advance and working out how to measure each of them in software using a limited sensor set.
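
For this one case, the software check is indeed easy to sketch – assuming telemetry for wheel speed (from encoders) and ground speed (from GPS or odometry), both hypothetical here. The catch is that you would need hundreds of such routines, one per failure mode:

```python
def tire_spinning(wheel_speed_mps, ground_speed_mps,
                  slip_limit=0.8, min_wheel_speed_mps=1.0):
    """Return True when the wheels turn much faster than the car moves."""
    if wheel_speed_mps < min_wheel_speed_mps:
        return False  # moving too slowly to judge slip reliably
    slip = (wheel_speed_mps - ground_speed_mps) / wheel_speed_mps
    return slip > slip_limit  # illustrative limit, not a calibrated value
```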

Frankly, a smoke detector in the wheel wells seems easier.

One thing I’ve noticed about robotics work is that everyone seems to work with a small number of sensors. IR, ultrasound, laser rangefinder – we see them again and again. Nothing wrong with that. But this means that anything not easily processed by these systems in the environment has to be recognized by a software program.

The fact is, there are a huge number of sensors out there. Many are special-purpose devices designed for industrial robots and are very specific. There are sensors for heat, fire, rotary motion, proximity of metal (too close to another robo-car), wind speed using ultrasonics, temperature, humidity, strain, hydraulic pressure, ionization, low-level acoustics (detecting changes in standard sounds like an engine), magnetism, compass heading, a variety of gases (CO, O2, CO2, and biogenic gases), leaks, movement, radiation, micro cameras, sudden motion (airbags), moisture in oil, and more. To see examples, go to the Direct Industry website and search for “sensors” – you’ll get a huge amount of information. A lot of these sensors are on the large side – too big for a hobby robot, but fine for a pickup truck.

I wouldn’t say that the extreme focus on robot vision over the past 40 years is misplaced, but it is inadequate. During the 2004 Grand Challenge, Digital Auto Drive performed beautifully in terms of vision – but got stuck (lightly) on a rock. The robot sat there revving its engine, but not enough to free itself. If it could “feel” the rock, it could have determined its state and possibly gotten free. A human driver got the DAD car free in a couple of seconds because they could “feel” the rock through the vibrations of the car transmitted to their body. They didn’t have to see the rock. Vision is not always necessary to solve problems. The right kind of sensor can pick up very specific information that can be handled as a low-level reflex rather than requiring high-level computing.

My image of an “alternative” robo-car would cram the system with as many unique sensors as possible. Put in a few obvious low-level reflexes (e.g. tire smoke in the wheel well triggers an engine shutoff). Use a subsumption-style architecture to allow overrides in special cases – but do that later. Don’t worry about the complexity explosion of monitoring large numbers of sensors – tie them to simple reflex circuits. The “brains” of the robo-car would normally only get the message that nothing was wrong.
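
A toy sketch of that arbitration, assuming illustrative sensor names and command fields of my own invention: each reflex inspects the raw readings and may issue a command, and the first reflex that fires suppresses everything below it, including the route-following “brain.”

```python
def smoke_reflex(sensors):
    if sensors.get("wheelwell_smoke_ppm", 0) > 300:
        return {"throttle": 0.0, "engine": "off"}  # kill the engine
    return None

def bump_reflex(sensors):
    if sensors.get("bump_plate", False):
        return {"throttle": -0.2, "steer_deg": 0.0}  # back off slowly
    return None

REFLEXES = [smoke_reflex, bump_reflex]  # ordered, most urgent first

def arbitrate(sensors, planner_command):
    """Reflexes win; the planner only drives when nothing is wrong."""
    for reflex in REFLEXES:
        command = reflex(sensors)
        if command is not None:
            return command
    return planner_command
```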

I’m guessing a sensor-rich carbot would have a better chance of completing the course than a comparable system with superb vision processing but few sensors.

Finally, trying out some of these strange, alternative sensors that mobile robotics groups don’t use today might lead to some insights. Just possibly, there’s too much focus on a few well-known sensors at the expense of the others. So dig through a list of every sensor out there and try to think of creative ways to use them. If you’re a hobbyist, do the same – dump the standard IR or sonar sensors for something really different. By doing so you’ll be bringing us closer to robots that jump.

Sunday, April 04, 2004

Robot Cars are Important – They Can Jump!
I just had an article published on Robotics Trends (http://www.roboticstrends.com) entitled “Why Robot Cars are Important.” The exact link to the article may be found here. So why tout wheeled vehicles on a blog devoted to robots that jump? Well, if you think about a car as a robot body, you’ll see it is far better than most of the bodies constructed in one-off fashion in academic research:

  • Robot cars are sensitive – modern cars are loaded with sensors, mostly of the MEMS variety. In addition, a sensor network connects the sensors to low-level regulatory brains. So a robot brain can immediately take advantage of the car’s built-in sensors.
  • Robot cars are powerful – A huge problem with mobile robots is power. The Asimo can only run about half an hour between recharges, and the new Toyota trumpet-playing robots have similar limitations. Hobby robots run with ancient microprocessors dating from the 1980s in part because these sloooow chips consume very little power, allowing the hobby robot to run on batteries. In contrast, a robot car has power to burn. The engine can supply lots of electrical power, or a second generator may be placed in the car. A fully-fueled robot car can run for 10 hours or more while moving down the highway, at the same time powering several multi-gigahertz computers. If a network of low-power PC boards like those from VIA is used, you can put a supercomputer in an SUV.
  • Robot cars can use standard computers – The power problems on mobile robots have led to custom hardware solutions. In contrast, a robot car can use a computer you get from Best Buy. This allows hobbyists to experiment with high-powered computing for their robot at a relatively low cost.
  • Robot cars have standardized bodies – Unlike a “one off” robot body, cars are mass produced. If you design a robotic system for one car, you can use it in thousands of others like it.
  • Car races are a proving ground for social robots – In a race, both competition and cooperation are required. A car passing another must cooperate or they’ll crash, but must compete to win. This is a simplified version of real-world social systems, ideal for early robots to master.
  • Cars don’t have the “terminator” stink – Humanoid robots invoke a Hollywood-conditioned fear response, at least in the US. In contrast, we don’t have strong feelings about the good or evil of robo-cars. There are some negative images (“Christine”) but some fun positive ones (Knight Rider’s KITT, the Love Bug). And who wouldn’t want their car to have a personality?
  • People love cars – The fact that millions of people go to see car races means that there’s a market for building expensive car robots outside the military – NASCAR. We already have some early robot races going. In a decade, the cars will have electronic “personalities” like pro wrestlers. Who wouldn’t love it if a “monster truck” really wanted to crush small cars with its huge wheels? The car is a celebrity today, like the driver. Tomorrow, the car alone will be enough.
  • Robot cars can jump – Unlike other wheeled robots, cars are tough and reliable. They can drive at high speed, spin out, and – of course – jump through the air. With pneumatics (powered by the car’s own engine) one can put in air shocks, even Shadow air muscles, and give the robot car a musculature. Unlike weak metallic muscles, pneumatic muscles are strong. Imagine a robot car jumping up and down on its air shocks for the fun of it, or to “intimidate” another robot car.

Calling all robot people – get outside and start hotwiring your car into a robot!

Also check out the International Robot Racing Federation at http://www.irrf.org.

Robots That Jump – Historical Mar 24, 2004

Wednesday, March 24, 2004

Sony’s Intelligent, Remote-Control QRIO
An article on asahi.com describes Sony’s new project for its running and ball-throwing QRIO. Sony Executive Vice President Toshitada Doi (who headed the Aibo project) will run the Life Dynamics Laboratory with 10-20 researchers who will be allowed to test the latest theories in artificial intelligence.

The plan calls for getting past the current problem with robots that jump: their brains are too limited. Power requirements and historical precedent in embedded devices have limited mobile robots to processors running at 1/20 the speed of a fast consumer PC. In such a limited environment, comparable to desktop PCs in 1994, it is difficult to produce advanced behavior – just getting the QRIO to walk with such limited brainpower was a mind-boggling achievement.

To solve the problem, more than 100 high-performance PCs will be linked into a parallel computing network to analyze sensory data coming from a QRIO body and direct its actions. This is a 1000-fold increase in the processing power available to this robot. Sony hopes that by applying the latest theories of brain research, the remote-brained QRIO will show autonomous and “intellectual” behavior.

Good luck to these guys. The choice of the QRIO is inspired, since it is arguably the most advanced humanoid robot body out there. But I hope the “modifications” mentioned in the article add more sensors to the body than the 60 or so currently present – enough that the augmented QRIO has the sensation of a simple insect, which has thousands of touch sensors. Without the extra sensation, the remote brain will have the same problems controlling the QRIO body that human operators have controlling the Predator unpiloted aircraft – in other words, extreme difficulty. Predator pilots find control incredibly demanding, partly because so little sensory feedback comes back from the aircraft to their remote location.

// posted by Pete @ 6:49 PM

Tuesday, March 23, 2004

Robots – let’s take back our good name
It’s pretty clear to most people in personal, service, and entertainment (even industrial) robotics that the robotics industry is on a roll. Sales of robo-vacs and entertainment robots (e.g. Sony’s Aibo) are going great guns, and recent events like the Robo-Olympics are causing “convergence” of the battle-robot and autonomous robot worlds. Despite the weak showing of auto-robots in the DARPA Grand Challenge, it is clear that cars will be “robotic” in just a few years. Honda, Sony and now Toyota all have created and demonstrated advanced humanoid robots. The robots are truly rising. In short, robots are fast moving from fantasy characters in movie CG animations to real-world creatures we interact with in our daily lives.

Now that we’re real, it is time to take back our good name from those who have hijacked it into cyberspace.

Recall that the point of many articles in this weblog is that a good definition of a robot is a machine that uses sensors to build a perception of the real world and to interact with it. Robots must have robust sensation of the natural environment (vision, hearing, radar/lidar senses, touch, chemical senses) to be of value. Robots are uninteresting if they can only interact in virtual environments, and become more interesting as they interact in less controlled environments. A robot working in a hospital is surrounded by artificial constructs which easily translate to symbols, and doesn’t have to be too brainy. A robot in the desert is surrounded by a completely natural world without human constructs, and must deal entirely with raw sensory data instead of easily understood human symbols (e.g. stop signs). People interacting with robots do so largely via the “real world” – being seen by the robot, speaking to it using natural language, picking it up and putting it somewhere, and so on.

The alternate use of computers today is to create “cyberspace” – essentially the polar opposite of a robot. A computer creating “cyberspace” uses its power to bring a synthetic reality into being inside itself. People interacting with computers following the cyberspace model have to enter the machine’s world and abide by its rules. The cyberspace world is made up not of real “things” but of symbols for them. A computer game may create a “realistic” 3D character or a simple Space Invaders icon but at the bottom both are the same – they are a symbol for something rather than being something in themselves.

To summarize, cyberspace is a symbolic world created by a computer, while robots are computers attempting to enter our real world (Matrix notwithstanding).

For this reason, “virtual” robots are not robots. A virtual robotic car might do quite well in a videogame simulation of driving, but would fail miserably if confronted with the real world. It’s cheating to call it a robot. Virtual game characters run and jump with ease – but the same software controlling the game character could not control a real-world humanoid body; it would fall over in an instant.

The same goes for the so-called software ‘bots that are supposed to monitor search engines and report interesting content. These same ‘bots are utterly confused by natural language. They are working in cyberspace – a purely symbolic environment.

And…the same goes for software ‘bots pretending to be people in instant messaging systems. Studies show that ‘bots aimed at teens and tweens don’t fool kids. After a short conversation, the kids continue to interact with the ‘bot – but their intent is to pick it apart and expose its machine-ness. In effect, the ‘bot becomes a type of puzzle for them to solve. This is not a robot.

None of these things are robots. However, the tech media frequently calls them robots. Why? Because real robots are cool. By calling these cyberspace constructs robots, the cyberspace pundits steal some of robot coolness for their own pale simulated world.

It’s time to stop letting cyberspace rip off the excitement of robots. Real robots will be more exciting than anything cyberspace can generate. So let’s begin pointing out that when we talk about robots, we are not talking about ‘bots – we’re talking about the real thing. Insist on differentiating your robotic creations from virtual ones – there’s no comparison.

At the same time, don’t feel that you have to “justify” a robot by linking it to cyberspace. Sure, a robot that can log into the web and read your email is interesting, even useful, but this is hardly a reason for having a robot. Television does not say it is just as good as radio – it is a different medium that doesn’t need to justify itself through another one. Likewise for robots – they’re interesting for the unique things they do, not for their ability to link to an older medium.

Take back the name “robot.” Let cyberspace fend for itself – declare the true meaning of robots.

Robots That Jump – Historical Mar 16, 2004

Tuesday, March 16, 2004

“I, Robot” trailer is up – and it stinks
A recent article on Slashdot notes that the movie trailer for “I, Robot” is up at this link. Well, I’ve seen it, and it confirms my worst fears – this robot movie will be a stinker! Here are a few of the howlers:

1. Asimov used the “three laws” as a way of developing mostly nonviolent plots. In contrast, this movie says “laws are made to be broken.” This sounds like how Hollywood conducts business rather than anything to do with robots.

2. The animation is lame and second-rate. Apparently, someone reading Asimov picked up the 1950s vibe and made the animators create pale (literally), delicate plastic robots. In spite of this, they can jump hundreds of feet through the air and cause the concrete to shatter when they hit. The jumps are clearly puppeted rather than physical. What a laugh – a virtual machine (animation program) trying to imitate a real machine (the robot).

3. In one scene, a guy with a shotgun is blasting robots that are springing at him like crazed panthers. Don’t remember that scene in “I, Robot” – hmmmm.

4. Will Smith is wearing a lame hat.

All in all, it is utterly depressing to see the incredible lack of imagination in this movie. The plot of the original stories has been twisted into the one-millionth rendition of “Frankenstein.” Why is it that Hollywood only knows a single plot for robots? Compare it to Japanese animation – robots may appear as menaces, but more often are children, guardians, mentors, or innocent creatures.

One can only hope that the upcoming Millennial generation, which is fascinated with the Japanese robot vision, will increasingly reject these stupid robot notions of the last century. Let’s hope that “I, Robot” makes less money than “It’s Pat” did – and that it is quickly swept into the dustbin of history in favor of the better kids’ robot movies coming out later this year.


Monday, March 15, 2004

These robots need to think like my cat
So today we’re seeing media pooh-poohing the DARPA Grand Challenge. The really dumb media think it was a failure because nobody got the prize. The sophomoric media (‘sophomore’ means seemingly wise but incredibly stupid) like Slashdot, The Register, and other burrows of the increasingly threatened cyberspace crowd have quickly reassured themselves that robotic technology is, once again, a very long way off, and that Linux will be the next big thing in tech for decades to come.

However, one has to think about how hard this challenge was. All the teams tried to create new vehicles using technology that had never before been integrated, and to run their systems 10 times faster than the best autonomous car-bot to date. They did this in about 9 months. The results indicate that robotics is here, rather than “a long way off.”

Here’s why: if the best-funded teams, whether from established universities or heavyweight defense contractors, had posted the best performances, robotics might indeed be a long way off. But the most remarkable part of the results is that grassroots teams like Digital Auto Drive matched CMU and beat a variety of other teams with more history, contractor clout, or apparent talent. In other words, a few people in a garage can match the best. This is a huge step forward. It also means that grassroots development of the sort that created the personal computer is beginning in the robotics world.

Programmers of the virtual world should be shaking in their boots…

Analyzing the results posted by DARPA and at Mobile Robotics indicates to me that many of the non-mechanical problems appeared because the systems didn’t have any way to respond to problems. In other words, they were designed on the assumption that everything would go right. When things went wrong, there was no “Plan B.” This is something future robots will need in spades – they’ll need a “Plan Z.” These plans don’t have to be specific. If we look at animals in strange situations, they have a pretty standard set of ways of reacting. So, below I’m listing the results by vehicle from DARPA, along with how I think my cat Squeek, or Toonses the “driving cat,” would have responded had his brain been guiding the robot:

DARPA – Vehicle 22 – Red Team – At mile 7.4, on switchbacks in a mountainous section, vehicle went off course, got caught on a berm and rubber on the front wheels caught fire, which was quickly extinguished. Vehicle was command-disabled.

My comments – My guess is that Sandstorm was set to drive too fast. The roll that almost destroyed the car the week before indicates that the team was tuning it for maximum speed instead of maximum length of travel. My cat would probably have gotten scared when the turns on the switchback neared the limits and started driving more slowly. If the car had jumped the berm, Squeek and/or Toonses would have shifted into reverse and tried to pull away, stopping if they smelled rubber burning. I suspect Sandstorm just sat there spinning its wheels until it caught fire. The designers didn’t give it a way to determine that it was “caught” and try a different strategy. Also, it didn’t have the senses to feel that it was caught or to sense the “pain” of a burning tire. Low-level reflexes responding to these environmental changes might have made Sandstorm act more intelligently. If Sandstorm had had the programming of Terramax so it backed up (see below), things might have been different.

DARPA – Vehicle 21 – SciAutonics II – At mile 6.7, two-thirds of the way up Daggett Ridge, vehicle went into an embankment and became stuck. Vehicle was command-disabled.

My comments – This plucky little dune buggy had the same problem as Sandstorm, apparently. The robo-car couldn’t sense that it was caught and didn’t try to back up. Also, it had no way of raising its body (air shocks!) to clear the obstacle that it was caught on.

DARPA – Vehicle 5 – Team Caltech – At mile 1.3, vehicle veered off course, went through a fence, tried to come back on the road, but couldn’t get through the fence again. Vehicle was command-disabled.

My comments – My cat probably could have found the hole it left in the fence by vision. This reminds us that the robo-cars are driving with vision far inferior to that of an animal. However, even if Squeek couldn’t see the fence, he would have tried to retrace his movements and run the crash-through in reverse. Apparently, “Bob” knew it was off course but couldn’t figure out the fence problem. This might require more than simple reflexes – a high-level understanding of the notion of being trapped. Not easy to do.

DARPA – Vehicle 7 – Digital Auto Drive – At mile 6.0, vehicle was paused to allow a wrecker to get through, and, upon resuming motion, vehicle was hung up on a football-sized rock. Vehicle was command-disabled.

My comments – This is the saddest one. If the DAD vehicle hadn’t paused, it might have gotten a lot further than Sandstorm. Here, it is obvious that a dynamic support system (e.g. air shocks) would have helped a great deal. My cat is pretty fat and could get his stomach hung up on a rock, but he would immediately stand up higher and arch his back to clear the obstacle. What DAD needed was a “low rider” mechanic on the team to add air shocks. In any case, dynamic shocks will be a must on future robo-cars – in effect they will be part of their “musculature.”

DARPA – Vehicle 25 – Virginia Tech – Vehicle brakes locked up in the start area. Vehicle was removed from the course.

My comments – This is a great example of a machine doing a classic computer thing and acting very unlike an animal. My cat, if he had brakes, would have sensed pain from the locked brakes and released them, independently of any other control program. He would also have stopped trying to move and just sat there. After a short time, he would try to move again. In other words, these cars will perform much better if they have sensors wired for pain. This isn’t impossible, and it is much simpler than using touch to identify an object, which is a common AI goal. All you need here is a way of sensing damage appropriate to the part of the robo-car concerned. Instead of imagining the robot “thinking” about the situation, consider what would damage that part – heat, grinding, etc. – and have a sensor detect that. Then have a “cautious” response – inhibit everything. This is exactly what animals do when they feel pain.
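
A minimal sketch of that pain response, assuming hypothetical stop_all() and resume() motion hooks and an invented normalized damage signal – inhibit everything, rest, then try again cautiously:

```python
import time

PAIN_THRESHOLD = 0.5  # assumed damage level (0-1) that counts as "pain"

def pain_reflex(damage_level, stop_all, resume, rest_seconds=10):
    """On pain: inhibit all output, rest, then resume at reduced effort."""
    if damage_level > PAIN_THRESHOLD:
        stop_all()                # release brakes, drop throttle, go limp
        time.sleep(rest_seconds)  # sit still, as a hurt animal would
        resume(effort=0.3)        # try moving again, cautiously
        return True
    return False
```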

DARPA – Vehicle 23 – Axion Racing – Vehicle circled the wrong way in the start area. Vehicle was removed from the course.

My comments – Axion may have been one of the systems that wasn’t a real robot, simply trying to treat the course like cyberspace. Theoretically, one could make a robo-car follow the GPS signal without doing any real sensor work and make it tough enough to survive smashing into things. Axion and Palos Verdes High School appear to have taken this approach. When the systems couldn’t get the signal, they failed. My cat would have realized something was wrong because he would have noticed that the other cars were going the opposite direction. This is a bigger problem for robo-cars, since to duplicate my cat’s performance the robo-car would have to have some high-level concept of the race and what it was trying to do. It is fair to say that none of these cars “knew” they were trying to complete a race. In future systems, it may be necessary to “stack” a rule-based system on top that “knows” it is in a race and can evaluate performance at a very high level. The robo-cars may need a little box that understands the abstract goal of the race.
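
Even a crude version of that little box is easy to imagine – a monitor that checks gross progress against the next waypoint and flags obviously wrong behavior, like driving away from the course. The pose and waypoint representations and the tolerance below are illustrative assumptions:

```python
import math

def racing_sanely(x, y, heading_deg, waypoint_x, waypoint_y,
                  max_heading_error_deg=120.0):
    """Return False if the vehicle points grossly away from its waypoint."""
    bearing = math.degrees(math.atan2(waypoint_y - y, waypoint_x - x))
    # wrap the heading difference into [-180, 180] before taking magnitude
    error = abs((heading_deg - bearing + 180.0) % 360.0 - 180.0)
    return error <= max_heading_error_deg
```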

DARPA – Vehicle 2 – Team CajunBot – Vehicle brushed a wall on its way out of the chute. Vehicle has been removed from the course.

My comments – This sounds like a system with almost no ability to sense its environment, and no simple reflexes to respond to running into things. Robo-cars will need simple reflexes – Roomba-style bump plates – as well as high-level navigation. If the navigation appears OK but you are brushing things, the navigation is probably wrong and you should slow down. The cat response would be to go slow, adjust to prevent brushing against things (reducing pain), and see if the earlier goal of following the GPS path could be maintained.

DARPA – Vehicle 13 – Team ENSCO – Vehicle moved out smartly, but, at mile 0.2, when making its first 90-degree turn, the vehicle flipped. Vehicle was removed from the course.

My comments – Sounds like hubris laid low at a big government contractor. The design of this vehicle shows that a lot of effort was put into it – but it may have lacked the testing carried out by a grassroots team. It clearly had a very high center of gravity. Clearly, the vehicle wasn’t “afraid,” at a general level, of pitching over. A simple bump-plate-level circuit should cut in anytime a flipover seems likely and inhibit the whole system – stop it or slow it down.

DARPA – Vehicle 4 – Team CIMAR – At mile 0.45, vehicle ran into some wire and got totally wrapped up in it. Vehicle was command-disabled.

My comments – This is an example of sensors not being good enough. Wire is hard to see with human eyes, more so with the limited “2.5D” perception of robo-cars. Based on this, and on the performance of the robo-cars on the shorter obstacle course, it is clear that there weren’t any basic visual reflexes operating. Animals have a center in the brain, prior to the cerebrum, that scans the visual field for anything moving rapidly on a “collision course.” Animals don’t stop to analyze what is coming at them; they just turn so they don’t hit it. It isn’t a matter of encoding every kind of obstacle into a robot so it can recognize wire – it is a matter of making it “afraid” of any obstacle, whether or not it “understands” what the obstacle is. The trade-off: the machine will be too cautious, like Terramax was (see below). In that case, an animal that is cautious and repeatedly thwarted in its goal will over time become bold again.
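
A toy sketch of such a collision-course reflex. A real system would use optical-flow divergence; this crude proxy, under my own assumptions, just asks whether the “foreground” in the central field of view is growing quickly between two grayscale camera frames:

```python
import numpy as np

def looming(prev_frame, frame, growth_threshold=0.15):
    """Return True if something in the center of view appears to be
    expanding fast enough to be on a collision course -- i.e. swerve."""
    def foreground_fraction(img):
        img = img.astype(float)
        h, w = img.shape
        center = img[h // 4: 3 * h // 4, w // 4: 3 * w // 4]
        # fraction of central pixels that stand out from the background
        return float((np.abs(center - center.mean()) > center.std()).mean())

    return (foreground_fraction(frame)
            - foreground_fraction(prev_frame)) > growth_threshold
```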

DARPA – Vehicle 10 – Palos Verdes High School Road Warriors – Vehicle hit a wall in the start area. Vehicle was removed from the course.

My comments – I suspect this system was simply trying to follow the GPS and had little or no ability to sense its environment. GPS is not an environmental signal – it is an artificial series of symbols that allows navigation across an abstract world. The real world simply happens to be similar to the virtual one. As I mentioned in an earlier post, the kids working on the car probably thought that making a robo-car work would be similar to making a good run in a videogame road race. The reality check will be good for them.

DARPA – Vehicle 17 – SciAutonics I – At mile 0.75, vehicle went off the route. After sensors tried unsuccessfully for 90 minutes to reacquire the route, without any movement, vehicle was command-disabled.

My comments – The “without any movement” is key here. My cat would have first tried to retrace his steps from memory. If this didn’t work, he would start some sort of search pattern. Neither of these would be that hard for a robo-car to implement. The car should store a recent history of its travel in terms of odometer readings, turns, gear shifts, and so on. If it can’t get a signal, it would just try to reverse its course. Failing that, it would begin looking for the course in a simple spiral pattern. This is only a little more sophisticated than a Roomba – in other words, basic reflexes rather than complex computations of road-versus-bushes in a video processor.
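
Both behaviors fit in a few lines. This sketch logs recent motion commands, plays them back in reverse if the route is lost, then falls back to an expanding square spiral; the drive(), turn(), and route_found() hooks are hypothetical motion and navigation primitives:

```python
from collections import deque

history = deque(maxlen=200)  # recent moves: ("drive", meters) or ("turn", degrees)

def record(kind, amount):
    history.append((kind, amount))

def retrace(drive, turn):
    """Run the stored history backwards -- undo the recent path."""
    while history:
        kind, amount = history.pop()
        if kind == "drive":
            drive(-amount)  # back up the same distance
        else:
            turn(-amount)   # reverse the turn

def spiral_search(drive, turn, route_found, legs=12, step_m=5.0):
    """Expanding square spiral until the route signal is reacquired."""
    for leg in range(1, legs + 1):
        drive(step_m * leg)
        turn(90)
        if route_found():
            return True
    return False
```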

DARPA – Vehicle 20 – Team TerraMax – Several times, the vehicle sensed some bushes near the road, backed up and corrected itself. At mile 1.2, it was not able to proceed further. Vehicle was command-disabled.

My comments – This machine was too shy. After a while it ran into what it interpreted as a wall of impassable bushes and stopped. My cat would have reacted the same way – at first. But after a while the desire to achieve his “final goal” would override his caution, and he would try to proceed with relaxed caution. Gingerly, he would have tried to go through the “bushes” and succeeded. The Terramax strategy sounds better than most – it actually appeared to have some way of handling temporary failure – it just didn’t go far enough.

DARPA – Vehicle 15 – Team TerraHawk – Withdrew prior to start.

My comments – None.

DARPA – Vehicle 9 – The Golem Group – At mile 5.2, while going up a steep hill, vehicle stopped on the road, in gear and with engine running, but without enough throttle to climb the hill. After trying for 50 minutes, the vehicle was command-disabled.

My comments – This was an interesting entry – too bad they don’t have a website! According to the Mobile Robotics eyewitness, this robo-car took off straight and didn’t bother following the road. Interesting alternative strategy. It kept going until it ran into something it couldn’t get over because its engine wasn’t powerful enough. My cat would have reacted in a similar way at first. But he would not have tried the same thing endlessly. After a while, he would have stopped. Then he would have done something that resolved the two “pulls” – a pull toward moving rather than being stuck, and a pull toward the final goal. The net result would have been a move sideways along the hill rather than directly over it. In robo-car terms, this means integrating a secondary goal (being able to move at all) with the primary goal (moving in the correct direction). If this failed, a Terramax-style backup would have been in order. Failing that, retracing the course from memory backwards, then moving forward along a deliberately different path, would have given the robo-car a better shot.
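
One way to sketch that resolution of the two “pulls”: the longer the car has been stuck, the more weight shifts from the waypoint bearing to whatever bearing the car can actually move along (e.g. sideways along the hill). The bearing inputs and the ramp time are illustrative assumptions:

```python
def blended_bearing_deg(waypoint_bearing_deg, feasible_bearing_deg,
                        stall_time_s, stall_ramp_s=30.0):
    """Weighted compromise between 'where I want to go' and 'where I can go'."""
    w = min(stall_time_s / stall_ramp_s, 1.0)  # 0 = unstuck, 1 = fully stuck
    # naive linear blend; a real system should blend the angles circularly
    return (1.0 - w) * waypoint_bearing_deg + w * feasible_bearing_deg
```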

DARPA – Vehicle 16 – The Blue Team – Withdrew prior to start.

My comments – It was pretty sad watching this robo-motorcycle go about six inches and fall over on national television – doubly sad because Blue Team had video footage of the motorcycle balancing and even driving in a circle. But that was on grass, and this was hard desert ground. My cat would not have gotten on the motorcycle.

Conclusions: It is clear that future robo-cars need two things they don’t have yet – simple but detailed sensation, and some high-level AI understanding that they are in a race. Of the two, adding the reflexes should be the easier. The teams should research ways to completely stud the cars with sensors linked to simple pain/inhibition reflexes. In addition, the cars need a “memory” of their recent moves so they can retrace their travels. At a high level, the cars need to understand the concept of a race so they can notice if they are doing something really stupid (e.g. driving in the opposite direction from all the other cars). All of this seems possible with a few years of tinkering and integration.