Robot Bodies Needed Before Robot Minds
Robots That Jump Historical – May 18-21, 2003
Wednesday, May 21, 2003
Robotic researcher humility – sign of a turnaround?
In the stock market, bull markets end just when everyone is convinced they will go on forever. On the day in March 2000 when the NASDAQ reached its peak, practically everyone was convinced that stocks were a good deal. In reality, the public’s complete acceptance of the bull market came just as it began to fall. In a similar vein, bear markets in stocks end just when everyone is thoroughly convinced that stocks are a bad investment.

Social and pop culture trends often show similar effects. After rising in popularity in the 1950s and 1960s, classic rock reached its peak in the 1970s. Just as it seemed to be everywhere, new styles including punk, rap/hip-hop and new age emerged. Tattoos and piercings are apparently on the decline among younger teens – just as everyone in the slightly older generations has them.

In short, when everyone is convinced of one thing, we may actually be at the cusp of a major trend reversal. Considering this, we may be on the edge of a major advance in robotics. In contrast to the overblown expectations for artificial intelligence, today’s robot researchers display surprising humility. Instead of predicting superhuman intelligences in a few years, most researchers characterize their robots as insect-level at best. Most predict that it will be many years before robots do anything practical. One is struck by the constant refrain that robots are too primitive to do much yet – while behind the researcher, the robot is doing something never before done by a machine.

A good example is the RoboCup competitions. Progress is expected to be slow, and many people working on soccer robots doubt that the stated 2050 goal of intelligent machines is achievable. But watching the Sony four-legged league at the American RoboCup 2003, one wonders if they’re being too modest.
Possibly, the near-uniform humility of roboticists worldwide (as opposed to literary speculators like Kurzweil) indicates that we’ve “bottomed out” on robotic intelligence. Expectations about the capabilities of robots were raised – unrealistically – in the 1960s (HAL), the early 1980s (early microprocessor-based robots) and again around 1997 (“Rise of the Robots” TV shows). Now we have machines playing a real-world version of soccer (as opposed to a simulation) and a biped robot that can recognize and follow its human owner (ASIMO). We may be ready to climb the “wall of disbelief” as robots improve rapidly while being pooh-poohed by the public, Hollywood, and a computer/tech industry dedicated to Wi-Fi and the X-web. If so, major opportunity may lie just ahead for the advancement of robots that jump!
– posted by Pete @ 9:52 AM
Robotics vs. ‘computer friendly’ environments
The computer/high-tech industry has been full of ideas on how to integrate computing more thoroughly into everyday consumer lives – in the process selling more hardware and software. Adoption of personal computer technology and the Internet has essentially stalled in the US since 2000, according to a recent Pew study. While there are several reasons for this, a primary one is the continued unfriendliness of computers. Despite a 1000-fold increase in speed since the early 1980s, personal computers are no more “intelligent” and remain difficult to operate. Attempts to introduce “smart” technologies like voice recognition have largely failed.

There are two ways to address this problem. One, currently favored by the tech industry, is to make our world more computer-friendly. In this model, people adjust their behavior to a more stereotyped pattern easily recognized by a dumb computer. A recent suggestion by HP researchers on voice recognition is an example. Since computers have trouble understanding our speech, why don’t we change it? The way we speak words would be adjusted so that speech recognition programs have an easier time.
A similar push is found in recently introduced online software that automates resume submissions. Since computers find it difficult to read human-generated resumes, these systems match skills and experience against rigid criteria. Individuals who might have been asked to interview by a human reviewer are now automatically rejected because they don’t match a specific skill programmed into the system. At the same time, resume writers try to overcome this by copy/pasting the exact wording of the job posting into their resumes.
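The rigid screening described above can be sketched in a few lines. This is a hypothetical illustration, not any real vendor’s system; the keywords and resume texts are invented.

```python
# Hypothetical sketch of rigid keyword-based resume screening: an applicant
# is rejected unless every required phrase appears verbatim in the resume.
required_keywords = {"java", "sql", "project management"}

def screen(resume_text: str) -> bool:
    """Accept only if every required keyword appears word-for-word."""
    text = resume_text.lower()
    return all(kw in text for kw in required_keywords)

# A human reviewer might see this person as qualified...
resume_a = "Led software projects in Java; managed Oracle databases."
# ...while this one simply pasted the posting's exact wording.
resume_b = "Skills: Java, SQL, project management."

print(screen(resume_a))  # False: "sql" and "project management" never appear verbatim
print(screen(resume_b))  # True
```

Note how the filter rewards conforming to the machine’s vocabulary rather than actual qualification – exactly the “computer friendly” adjustment of human behavior described here.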
Such moves would allow more computing in everyday life – just so long as we run our lives in ways easily understood by stupid computer programs. In short, they create rigid, stereotyped, “computer friendly” environments by adjusting the behavior of people. The end results, while they may boost PC sales, are not good. Little by little, we dumb down our everyday behavior so it can be understood by a machine. In some cases (e.g. ATMs vs. live tellers) stereotyped interaction has been beneficial, but increasingly, computer-friendly environments are more about sustaining the “tech boom” than improving our own lives.
The alternative is robotics. Robotic systems, in my definition, process massive amounts of environmental information, with less effort devoted to building an internal rule-based cyberspace world. Instead of generating virtual environments, robots try to live in our own world. By forcing the machine to work in our complex, unstructured world, we require that it be smart enough to help us – rather than adjusting our own behavior down to the stupidity level of the computer.
A good example is speech recognition. Current systems use statistical methods to understand spoken words and handwriting. This is because they aren’t really “in the world” – all they are aware of is a stream of voice or handwriting. As such, their attempts to be smart are amazingly stupid – e.g., the irritation of auto-correct functions in word processors.
In contrast, a robotic system would know it was in a car being queried by someone in a business suit. This context would allow it to process speech and/or handwriting from that environment intelligently. It would score higher on speech recognition because it was actually listening to the conversation, noting the time of day, the calendar date, who was holding it, and so on. For example, if the system knew that businesspeople were talking to it, it would tend to process “chack” as “check” rather than “chick.” But if they were poultry producers, it would tend to process “chick.” It would maximize the use of all available environmental information, rather than force the user to adopt a stiff, stereotyped method of speaking. The key is processing the environment, and using the world as “its own best model.”
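The “chack” example above amounts to weighting acoustic evidence by a context prior. Here is a minimal sketch of that idea; the contexts, candidate words, and all probabilities are invented for illustration, not taken from any real recognizer.

```python
# Hypothetical sketch: re-ranking ambiguous speech-recognition candidates
# using environmental context. All numbers are invented for illustration.

# Acoustic scores for the ambiguous utterance "chack": how well each
# candidate word matches the raw audio, independent of context.
acoustic_scores = {"check": 0.48, "chick": 0.52}

# Context priors: how likely each word is, given who is speaking.
context_priors = {
    "business_meeting": {"check": 0.9, "chick": 0.1},
    "poultry_farm":     {"check": 0.2, "chick": 0.8},
}

def best_guess(context: str) -> str:
    """Combine acoustic evidence with the context prior (Bayes-style)."""
    priors = context_priors[context]
    scored = {w: acoustic_scores[w] * priors[w] for w in acoustic_scores}
    return max(scored, key=scored.get)

print(best_guess("business_meeting"))  # -> check
print(best_guess("poultry_farm"))      # -> chick
```

The acoustics alone slightly favor “chick” in both cases; it is the environmental prior that flips the answer – the robot’s knowledge of the world doing the disambiguating.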
Note this is not simply “artificial intelligence.” Much of classic AI was devoted to machines building models of reality rather than interacting with reality. Robotics is about understanding the world, rather than creating abstract models — which in turn are fed back as rules for human behavior!
It may take longer to achieve robotic behavior, but it is a far better alternative than restructuring our society to conform to arbitrary and ultimately stupid rules that allow “dumb” computers to operate more efficiently. Our efficiency, rather than computer efficiency, should be the goal. Making robots move intelligently in natural environments is a good first step.
– posted by Pete @ 9:35 AM
Monday, May 19, 2003
The post-room vacuum robot: Robo garden-tenders
There have been several press releases detailing new robotic cleaners during the last few weeks. Karcher has introduced the RoboCleaner into wider distribution, and Electrolux has released its Trilobite. These $2,000 models join the much cheaper US iRobot Roomba.

The vacuum cleaners are the next step in the commercialization of robotic technology, in each case dealing with an increasingly complex (though still human-created) environment. At the end of the 1990s, robotic pool cleaners appeared. The inside of a swimming pool is a series of static walls – just about the simplest environment imaginable, so it’s no surprise that robots appeared there first. There was also a real advantage beyond the ‘gee whiz’ effect, since the robots could do a better job.
Vacuums work in a more complex environment where the walls are static but movable furniture, people, and animals are present. The current systems adapt by being hyper-sensitive to contact and by map-building within rooms. All have touch sensors, most have ultrasonic sensors, and a few can even detect light. Though it’s a bit surprising, these items may be headed into a real consumer niche. There is also a reason for increasing their intelligence, since performance will improve each time their robot IQ is raised. Things like voice commands, detecting the cleanliness of a floor, and safe encounters with pets are all in store for smarter robot-vacs. It is likely that the robo-vac will in time combine with a home-security system.
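The hyper-sensitivity to contact mentioned above is, at its simplest, a reactive loop: drive straight until the bumper fires, then pick a new heading. A minimal sketch, with no real robot API assumed:

```python
# Minimal sketch of contact-driven "bump and turn" coverage behavior,
# as used by early robot vacuums. Purely illustrative; no real robot API.
import random

def next_heading(heading: int, bumper_pressed: bool) -> int:
    """React to contact by turning to a random new heading (in degrees)."""
    if bumper_pressed:
        return random.randrange(360)  # obstacle hit: try a new direction
    return heading                     # clear path: keep going straight

# Over many bumps, the random turns yield rough coverage of a room
# without any map at all -- the world serves as its own model.
random.seed(0)
path = [next_heading(90, bumped) for bumped in [False, True, False, True]]
print(path)
```

Map-building systems improve on this by remembering where they have been, but the bump-and-turn core shows how little intelligence the first commercial generation actually needs.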
The question is what’s next after this in the household. Some tasks, such as cooking, are too human-centric for robots to do a good job. Others, like turning on lights in the evening, are too simple. I’d like to propose a possible task intermediate between these two: watering gardens. This is a different class from robo-lawnmowers (which haven’t caught on) and requires a greater, but not unreachable, level of artificial intelligence.
Here are some key points motivating robo-watering of plants:
a) Many people keep elaborate gardens filled with a variety of plant species. At any one time, plants are being grown, pruned and transferred.
b) People with higher incomes are more likely to keep gardens. This high-income group could afford an expensive robot.
c) ‘Automated’ sprinklers using simple timers don’t cut it. Many homeowners have had the experience of their sprinklers turning on in a thunderstorm, oblivious to the water falling everywhere. A robotic system could be aware of weather, actively measure soil moistness, and be given watering plans by their owner.
d) Many garden-tenders are elderly, and may have limited mobility. This provides a niche for devices that can move around on lawns, steps, etc.
e) Robotic pattern-recognition is probably good enough to recognize plants showing wilting, as well as signs of disease.
f) Factual information on plant watering needs is available, so a rule-based robot would have access to useful information via the Internet or elsewhere.
g) Garden environments provide intermediate protection between the inside of a home and outside on public streets.
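The weather- and soil-aware watering of point (c) reduces to a simple rule the robot could evaluate before each watering run. A sketch, with invented sensor values and a hypothetical moisture threshold:

```python
# Hypothetical sketch of the watering rule from point (c): water only when
# the soil is dry AND no rain is falling or forecast. The moisture scale
# (0.0 = bone dry, 1.0 = saturated) and threshold are invented.

def should_water(soil_moisture: float, raining: bool, rain_forecast: bool,
                 dry_threshold: float = 0.30) -> bool:
    """Unlike a dumb timer, consult soil and weather before watering."""
    if raining or rain_forecast:
        return False  # never run the sprinklers in a thunderstorm
    return soil_moisture < dry_threshold

print(should_water(soil_moisture=0.15, raining=False, rain_forecast=False))  # True
print(should_water(soil_moisture=0.15, raining=True,  rain_forecast=False))  # False
print(should_water(soil_moisture=0.45, raining=False, rain_forecast=False))  # False
```

Even this trivial rule beats the timer-driven sprinkler that soaks the lawn during a storm; per-plant watering plans from the owner would simply swap in per-plant thresholds.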
Taking this into consideration, one can imagine a future robo garden-tender. The complexity of landscaping would make wheeled vehicles less useful, so reliable legs would be needed. One can imagine a descendant of the Sony SDR robot tromping through a garden, checking each plant on its list. It could access embedded sensors for soil condition and inspect plants visually. While using a hose might be too hard, it is easy to imagine it filling water buckets and bringing water to plants. The same system could turn sprinklers on and off depending on whether watering was really needed. It could beam the results of its surveys to home office computers available to the owner.
A robo garden-tender also has a natural path for upgrades. Double its intelligence, and you double its capacity to do its job. A smarter system could monitor the health of individual plants by inspection. A really elaborate and larger system could use hoses and adjust sprinkler positions. Pruning plants would be the ultimate challenge – recognizing dead stems and regions of a plant that should be cut, using sharp objects, depositing cuttings in the trash, and so on. There is even room for aesthetic appeal – can a robot design pleasing garden arrangements?
All in all, a research project into a robo garden-tender seems worthwhile as the next step after robo vacuums.
– posted by Pete @ 12:23 PM