Robot Bodies Needed Before Robot Minds
Robots That Jump Historical – May 5-7, 2003
November 11, 2011
Robots that Jump
Wednesday, May 07, 2003
More on the Behaviors Per Unit (BPU) for Robots
In an earlier blog, I introduced the idea of rating the “power” of consumer robots with a metric comparable to the “clock speed” used for personal computers. Since speed doesn’t tell you much about a robot’s ability (it may just fall down faster), the alternative is to list the number of distinct behaviors the robot can initiate. Some of these behaviors are primitive (e.g. turning a joint) while others are more complex (kicking a soccer ball). Sensory “behaviors” range from low-level (sensing touch across the body) to high-level (sensing airflow past the robot body). The more behaviors a robot has, the more interesting its interaction with the world.
A few more points about behaviors:
- They consist of computation coupled with strong sensory inputs or strong motor outputs. Pure computation without environmental data does not constitute a behavior.
- The substrate of a behavior (e.g. classic programs versus neural nets) is not considered.
- A complex behavior that is the result of a single nonlinear system (a neural net again) may be decomposed into multiple distinct behaviors in the robot’s BPU description.
- Behaviors might be ranked on the basis of whether they access real-world or virtual data. Virtual data behaviors would have lower rank. For example, a behavior that caused orienting to a global positioning (GPS) signal might receive low rank. This is not because the GPS signal is simple or predictable – it is because GPS is a human-made, abstract model of the world that only secondarily connects to reality. In contrast, sun orientation, while also simple and predictable, uses real environmental cues that are far more complex to interpret.
- Repetitive behavior applied to different data is a single behavior. For example, a voice recognition module that recognizes 10,000 words is still one behavior. One that can differentiate comments from commands has multiple behaviors.
Examples of virtual behaviors:
* Accessing email
* Web access
* Movement based on wireless communication with transducers
Examples of real behaviors:
* Speech recognition
* Determining orientation using sight, touch, balance sense
* Reading road signs
* Movement based on visual/auditory/touch obstacle avoidance
Given this split, how would we rank iRobot’s Roomba vacuum cleaner?
Virtual behaviors (vBPUs):
1. Control from remote
2. Not crossing room boundaries set up by wireless transducer “virtual wall unit”
3. Cleaning time clock
Real behaviors (rBPUs):
1. Wheel turns
2. Search pattern after obstacle collision
3. Spiral pattern on start
4. Obstacle avoidance from map built from collisions
5. Stair avoidance
6. Stop on being picked up
7. Try to find wall after spiraling for a while
8. Follow wall after contact
9. Criss-cross room after finding walls
9 rBPUs + 3 vBPUs = 12 BPU robot
Note that the actual process of vacuuming is not a behavior, unless the Roomba is sensitive to taking in particles, the fullness of the vacuum bag, the touch of brushes against objects, etc.
More behaviors can be described or imagined, and the designers/programmers would have the best shot at this. Possibly the best way to determine BPU is for the developer to list all behaviors, with industry reviewers/critics assessing them to make sure they aren’t redundant.
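As a minimal sketch, the BPU rating is just a count of distinct behaviors, split into real and virtual. The lists below contain the Roomba behaviors named above (paraphrased); the function name is my own, for illustration.

```python
# Illustrative BPU tally for the Roomba, using the behaviors listed above.
# The real/virtual split follows the text; function names are hypothetical.

virtual_behaviors = [  # vBPUs: driven by human-made, abstract signals
    "control from remote",
    "virtual-wall boundary detection",
    "cleaning time clock",
]

real_behaviors = [  # rBPUs: driven by real environmental cues
    "wheel turns",
    "search pattern after obstacle collision",
    "spiral pattern on start",
    "obstacle avoidance from collision map",
    "stair avoidance",
    "stop on being picked up",
    "find wall after spiraling",
    "follow wall after contact",
    "criss-cross room after finding walls",
]

def bpu_rating(real, virtual):
    """Simple-minded count of all distinct behaviors."""
    return len(real) + len(virtual)

print(bpu_rating(real_behaviors, virtual_behaviors))  # 12
```

Counting the behaviors actually listed gives the robot's BPU; a richer scheme could weight rBPUs above vBPUs, per the ranking idea above.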
– posted by Pete @ 9:01 PM
It’s hard to convince U.S. audiences that the Asimo is real…
According to an article by Byron Spice in the Pittsburgh Post-Gazette, audiences in the U.S. find it hard to believe that walking robots like the Asimo are real.
“Getting a machine to walk like a human is a technological breakthrough, said Jeffrey Smith, leader of the North American Asimo Project, and the only way to appreciate it is to see it live.
People are so accustomed to computer graphics and special effects that they often question whether Asimo is real when they see it in Honda television commercials. That’s why Honda has taken Asimo on the road.
‘We’re saying no to Leno and Letterman and all that,’ Smith said. ‘We want to make absolutely clear that this is not fake.’
That’s one reason he insists on using a raised stage that the audience can see under, proving nobody is manipulating Asimo from below.”
Amazing thing. US audiences are so strongly attuned to the virtual world of computer graphics, television, web pages, computer games, and the like that they are more skeptical of the real than the fake. One more reason that robotic technology is likely to advance elsewhere while the US wallows in the Matrix.
– posted by Pete @ 7:50 PM
Rating robot power
During the PC era, one of the features used to measure the power and performance of computers was the clock speed of the central processing unit (CPU). Starting with 4 MHz computers in the early 1980s, today we have 3 GHz computers – nearly a thousand-fold increase in clock speed. While professionals in the industry have used other criteria (e.g. MIPS) and caution that clock speed does not necessarily match performance in real-life situations, the obsession with speed has continued.
Using CPU speed as a measure of progress is best reflected in periodic industry announcements that new systems match or exceed Moore’s law. This “law” is actually an empirical observation that the speed of computing has been doubling at a regular rate since the 1960s. Without a theory describing the reasons for this growth, it cannot be a law. But many in the industry treat it as a sort of Ten Commandments for the computer world, and desperately strive to make their products match the holy writ.
Despite problems with this, the rise in computing speed has provided a point of reference for consumers trying to understand the steady growth of computing power. So what reference would apply to robots? We’re not talking about the detailed, complex specs engineers will use to measure performance – instead, we’re considering a general, public measure that can perform a function similar to that of CPU speed.
One very interesting possibility for this is a list of “behaviors.” Ever since classic artificial intelligence had its mini-revolution in the 1980s (courtesy of Rodney Brooks and others), roboticists have moved away from a single, top-down control program for controlling hardware. Instead, numerous small programs encoding simple “behaviors” (e.g., the ability to turn left) are linked in a hierarchy, grid, or other network to produce emergent motor behavior. Sensory systems are linked in a similar way, with elementary “feature detector” programs combining their input into higher-level perception. While these are not motor behaviors in the strict sense, we will call them “sensory behaviors” here.
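A minimal sketch of this behavior-based style, in the spirit of Brooks’ subsumption architecture: small behavior functions each map sensor readings to a proposed action, and a fixed-priority arbiter lets higher-priority behaviors subsume lower ones. The behavior names and sensor format here are invented for illustration, not drawn from any particular robot.

```python
# Sketch of priority-based behavior arbitration (subsumption-style).
# Behavior names and the sensor dictionary format are hypothetical.

def avoid_obstacle(sensors):
    if sensors.get("bump"):
        return "turn left"          # elementary motor behavior
    return None                     # behavior not triggered

def follow_wall(sensors):
    if sensors.get("wall_right"):
        return "forward along wall"
    return None

def wander(sensors):
    return "forward"                # default behavior, always active

# Ordered by priority: earlier behaviors subsume later ones when they fire.
BEHAVIORS = [avoid_obstacle, follow_wall, wander]

def arbitrate(sensors):
    for behavior in BEHAVIORS:
        action = behavior(sensors)
        if action is not None:
            return action

print(arbitrate({"bump": True}))        # turn left
print(arbitrate({"wall_right": True}))  # forward along wall
print(arbitrate({}))                    # forward
```

Each function here would count as one behavior in the BPU sense; the emergent wander/follow/avoid pattern comes from the arbitration, not from any single top-down program.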
Since biological systems clearly do something similar to bottom-up, behavior-based computation (the vertebrate visual system is a good example), this trend is likely to continue in the robot world.
In other words, as robots become more powerful and capable, they will include more elementary behaviors. Brooks’ early robots typically had a few behaviors. In 2003, the Honda Asimo was using upwards of 70 or so elementary, real-time sensory and motor behaviors. To do this, it used 4 PowerPC chips for processing, along with several special-purpose chips (speech recognition for example). In such a parallel computing system, measures of clock speed are irrelevant. Even measuring instructions per second doesn’t really capture any of the complexity of emergent behaviors based on networks of elementary behaviors.
So here I’d like to propose a new metric for robots: Behaviors per Unit (BPU). This is a simple list of all the behaviors – primary sensory/motor, as well as higher-level behaviors that use output from the elementary ones. It is a simple-minded counting of all distinct behaviors. It doesn’t capture the detailed complexity of a given robot, but it does provide a general yardstick of behavior.
Clearly, a robot with 200 behaviors is likely to be more powerful than one with 100 behaviors. In contrast, a robot doubling clock speeds may or may not be any more intelligent – it may simply make the same mistakes faster. BPU is therefore a better way to measure robot performance.
How would BPU be used? In the professional robot world, it may be unnecessary, just as CPU speeds aren’t the main thing there. For consumers, however, it could be a boon – reducing a long list of robot features (each requiring individual explanation) to a simple numerical rating. One can imagine robots of the future listed with their BPU ratings in advertising, just as PC clock speeds are listed today.
The BPU may also allow a robotic version of Moore’s law. Hans Moravec has already noted that increases in computer speed have only recently benefited robots. Despite the rise in clock speeds during the last 20 years, there was never enough computing power for robots to actually work. Now, a commercially available CPU can calculate fast enough to allow voice recognition, obstacle detection, and motion planning in near-real time. It stands to reason that as processing speed rises, there will now be a rise in robot performance.
How will this speed be used? Most likely, a robot with double-speed computers will have double the number of behaviors. This is because current robots still have only a fraction of the number of parallel processes characteristic of animals, and robotic performance can therefore be improved simply by increasing the number of behaviors. A robotic plot of Moore’s law might list the time needed to double the number of behaviors coded by distinct programs/circuits in the system. While this is speculation, it seems probable that somewhere around 100,000 high- and low-level behaviors will be needed to make an advanced robot. An interesting project would be for roboticists to tally behaviors for advanced robots of the last decade, and see whether behavior-doubling is an adequate description of their increased performance.
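A back-of-the-envelope sketch of the behavior-doubling idea: given behavior counts at two dates and assuming exponential growth, the implied doubling time is t·ln 2 / ln(n₂/n₁). The counts in the example are invented for illustration, not measured from any real robot.

```python
import math

def doubling_time(n1, n2, years):
    """Years needed to double the behavior count, assuming exponential
    growth from n1 to n2 behaviors over the given span of years."""
    return years * math.log(2) / math.log(n2 / n1)

# Hypothetical example: a robot line grows from 20 to 70 behaviors
# over 10 years.
print(round(doubling_time(20, 70, 10), 1))  # 5.5
```

Tallying real behavior counts over a decade, as suggested above, would show whether such a doubling period actually holds.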
– posted by Pete @ 8:25 AM
Monday, May 05, 2003
More on the Anti-Matrix
In recent weeks, due to the modest postwar rise in the stock market, various portions of the US tech industry have sounded the note for recovery. By recovery they don’t simply mean that layoffs end and business picks up – they’re waiting for the ‘next big thing’ to drive society. Predictably, this is imagined to be an extension of cyberspace. According to a recent article in the Houston Chronicle entitled Debate Rages on the Future of the Software Industry, many hope that standard software companies will achieve 10% growth during the next several years. Hope is being placed on “web services” – essentially a plan to convert current PC programs to Internet-based systems. For business, hope is placed on software that is not “intelligent” in the classic AI sense, and is often some sort of elaborate statistical trend-spotter (“business intelligence”). Similar hopes are being expressed for wireless, with a recent Wired magazine going so far as to tag wireless as an explosive growth generator comparable to the Internet itself.
However, this “visionary” work is hardly visionary – it is simply an extension of the ongoing trend of putting everything into a worldwide virtual network. Because of this, at best the new services will add some value to something that is rapidly becoming a commodity. It is still that unfriendly virtual world requiring you to learn arbitrary, confusing, and decidedly un-intelligent rulesets to accomplish anything. Once again, it is dedicated to creating a virtual world that people can enter. Today, virtual, self-referential worlds created out of concepts and electrons abound – everything from online games to the elaborate financial derivatives network.
In contrast, the Anti-Matrix is making rapid strides in the real world. Robotic technology is moving to the level where it can perform useful tasks. Currently, robots are climbing a “wall of disbelief” similar to the early stages of an economic bull market. Nobody predicts that robots will be part of the tech industry recovery. Articles on robotics in the tech press ignore, make fun of, or brush off the import of real robots.
So robots are rising, and the virtual world of cyberspace is waiting for its next “close up” while it stumbles near the edge of a fall. What will precipitate a wide-scale rejection of cyberspace?
I believe that the financial derivatives I mentioned earlier might be the catalyst. A derivative is basically a bet that the value of an abstract financial entity (e.g. interest rates on loans) will vary in a particular way. Originally designed as a way to apply insurance to financial speculation, the derivative market has taken on a life of its own. Being built on abstract financial entities, complex and interactive, derivatives are prime candidates for a sudden crash that would seriously damage the world’s economy. If this happens, blame will be placed on the excessively abstract nature of derivative-space. How could electron patterns cause people to lose their paycheck? Distrust of derivatives will grow, and will then spread to all abstract entities, finally embracing software and the Internet itself. Software bugs, viruses, computer scams – currently tolerated – will become fuel for a downward spiral. At the end, cyberspace will be a suspect, discredited notion – useful for some tasks, but not a world-saver. It will be tied to the go-go 1990s, which increasingly will be viewed in a negative light. Technology will have to prove its “reality” in order to be accepted. In such an environment, the robotic model – tying software in a strong loop to the real-world environment – will rise while virtual reality, Matrix-style, will fall.
– posted by Pete @ 8:31 PM