Robots That Jump

Robot Bodies Needed Before Robot Minds

Monthly Archives: October 2015

Robots that Jump, Instead of Robots that Pretend to Think

A string of articles in the media covers teaching robots to fall safely by knocking them over:

http://www.technologyreview.com/news/542481/an-algorithm-helps-robots-fall-safely/

http://www.cnet.com/news/teaching-a-robot-to-fall-over-without-making-a-fool-of-itself/

The key problem has been engineers misunderstanding the problem of walking. In typical engineering style, the participants in the DARPA Robotics Challenge emphasized keeping their robots upright. They expected that if the right algorithm were created, the robot would be able to stand upright. The problem, for them, is standing upright.

But in nature, this is NOT the problem solved by agile animals. Instead, they are trying not to fall down and damage themselves. The goal sits at a higher level: the animal is actually trying to survive. Survival may require walking or running, but that is secondary.

In particular, animals avoid damage and try to break their fall when the “perfect” walking algorithm fails. Compared to designed robots, animals always have a plan B, C, D…

So robots might even end up walking with better balance than humans someday; that is the engineers’ goal. But that doesn’t solve the problem of an agile robot. The problem is getting from here to there without getting damaged. A person might ultimately be clumsier on their feet than a future robot, but unless the engineers change their goals, the person will do better, since people are also adapted to fall down gracefully, walk funny if asked, even crawl if that works better. These are not “fails” but part of the locomotion process.

Engineers have to get away from thinking that falling down is a failure of their algorithm.
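To make the point concrete, here is a minimal sketch in Python of what “falling as a behavior” might look like in a locomotion supervisor. This is not any team’s actual controller; the state fields, thresholds, and behavior names are all invented for illustration.

    # A sketch of a locomotion supervisor where a controlled fall is just
    # another behavior, not a failure state. All numbers are made up.
    from dataclasses import dataclass

    @dataclass
    class RobotState:
        tilt_deg: float        # torso tilt from vertical, degrees
        tilt_rate: float       # how fast the tilt is growing, deg/s
        terrain_rough: bool    # crude stand-in for perception

    def choose_behavior(state: RobotState) -> str:
        """Pick a locomotion mode; 'tuck_and_fall' is a plan, not an error."""
        if state.tilt_deg > 45 or state.tilt_rate > 120:
            # Balance is unrecoverable: protect the hardware instead of
            # fighting physics -- tuck limbs, lower the center of mass.
            return "tuck_and_fall"
        if state.tilt_deg > 20:
            # Plan B: a recovery step to catch the body.
            return "stumble_recovery_step"
        if state.terrain_rough:
            # Plan C: crawling is slower but far harder to knock over.
            return "crawl"
        return "walk"

    if __name__ == "__main__":
        # A shove mid-walk: the supervisor switches plans instead of "failing".
        for s in (RobotState(5, 2, False),
                  RobotState(25, 40, False),
                  RobotState(60, 200, False)):
            print(s.tilt_deg, "->", choose_behavior(s))

The details would differ wildly on real hardware, but the structure is the point: the fall branch is designed in from the start, not bolted on after the balance algorithm gives up.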

As might be expected, the falling work was done at Georgia Tech, known for its ongoing interest in Robots That Jump versus Robots That Think.

This work is coupled with a story about Honda. The company wants to make a version of its Asimo robot that is agile (instead of a big electric puppet) and can climb stairs and crawl. Honda has had a pretty interesting show-robot for about 15 years now. However, I wonder whether they see the problem correctly and will treat “falling down” as an active engineering goal, rather than something their algorithm is supposed to avoid.

https://www.extremetech.com/extreme/216188-honda-designing-new-asimo-style-robot-for-disaster-response

This is good, since to date the Asimo has been a marketing tool rather than a truly agile robot.  Despite its friendly shows, it was useless when disaster struck a few years back, and a robot able to walk, fall down, and get up again in the tangled wreckage of a nuclear plant would have been great.

Finally, as a bit of a counter-trend, a report that people want “flawed” robots.

http://www.techtimes.com/articles/95172/20151015/people-prefer-robots-that-are-flawed-just-as-they-are-study.htm

This is dumb. It is sad to see journalists regurgitate this stuff without applying the critical thinking they supposedly learned in school!

The idea here – quite wrong – is that robots will act perfectly, and have to be “broken” somewhat to be acceptable to humans. This is nonsense. The problem is the pointy-headed, beard-scratching engineers who defined what “perfect” behavior was in the first place.

Who put them in charge? The following quote from the Tech Times article above is telling…

At the first part of the interaction, the robots showed off their flawless capabilities. Afterwards, Erwin committed errors in remembering facts, while Keepon exhibited extreme sadness, happiness and other emotions through sounds and physical movements.

The result was that the respondents preferred it when the robots seemed to possess human-like characteristics such as making mistakes and showing emotions. Researchers concluded that a companion robot should be friendly and empathic in a way that they recognize users’ needs and emotions, and then act accordingly.

In fact, the problem is not that the robot was too perfect. Instead, the robot had no understanding of its social context. It acted incorrectly for the situation. Leave it to an engineer to see autistic-level social ignorance as perfection! The researchers then faked things by breaking their robot. It made people sympathetic, but it didn’t actually improve the conversation.

It is more like trying to get people to accept phony robot minds by creating pity for them.

What is interesting is that this IS a change from the other “big idea” roboticists have had: if robots can’t carry on human interactions, make people more machinelike so robots can understand them. If a robot can’t recognize people, have them wear a tag with a code it can read. If it can’t talk, have people say only certain things. If it can’t walk (typical of many robots), have people carry it through areas that require walking.

All over the world, you can see the trend caused by the failure of Artificial Intelligence – make people act more like machines so machines can understand them. If we broaden the definition of robot to ‘bot, as is often done, we can see this in action.

For example, web pages have to have lots of extra code added (called Search Engine Optimization, or SEO) for search-engine spiders to read. This is because no machine can actually read a web page intelligently, much less figure out whether two people in a picture are wearing regular clothes or are actors wearing costumes in a play. So, if we are “writing for the web,” we must actually write for people, then turn around and write again for machines. Otherwise we get a lousy score in Google. Now, if we would only communicate in a way that machines could understand…that would be “perfect.”
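As a concrete illustration of writing everything twice, here is a small Python sketch that emits the same article once as HTML for people and once as schema.org JSON-LD structured data for crawlers. The page fields are invented; the point is the duplicated, machine-shaped copy of facts the prose already states.

    # Sketch of the "write it twice" problem: one copy for people,
    # one machine-readable copy for search-engine spiders.
    import json

    article = {
        "headline": "Robots that Jump, Instead of Robots that Pretend to Think",
        "datePublished": "2015-10-15",   # invented example value
        "author": "Robots That Jump",
    }

    # What a human reads:
    html_body = f"<h1>{article['headline']}</h1>\n<p>...the actual prose...</p>"

    # What the spider reads -- a second, machine-shaped copy of the same facts:
    json_ld = {
        "@context": "https://schema.org",
        "@type": "BlogPosting",
        **article,
    }
    seo_block = (
        '<script type="application/ld+json">\n'
        + json.dumps(json_ld, indent=2)
        + "\n</script>"
    )

    if __name__ == "__main__":
        print(html_body)
        print(seo_block)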

So, the article about making robots flawed is actually about finding the right kind of “trick” to make people accept already-flawed robots. Clearly, by this standard, a robot that acts like the one in the Tech Times article is NOT behaving perfectly. It doesn’t understand the environment it is in, since the social environment that humans create with their brains is too complex for it. It talks in a stupid way, which engineers have redefined as the Platonic, “perfect” way to relate. It does not have any sort of ideal or perfect intelligence.

This goes to the deepest problem with many robot designers. They think they can create a being superior to humans, which in turn is defined as a collection of “ideal” behaviors they’ve cobbled from half-remembered liberal studies courses. All the science fiction they read imagined robots as superintelligent, superstrong godlike beings. Therefore, they try to make robots that walk “perfectly.” If that fails, they re-declare human behavior as “flawed” and “break” their robot so people feel enough pity for them to interact.

Here’s a great example. An engineer would see a flaw in the image below. A graphic designer would see a clever idea:

[image: “ordianry”]

But here is a real mistake, one that probably cost someone their job.

[image: “get_buy_one”]

Is there any “intelligent” robot assistant that could tell the difference? Nope. Its apparent math-style “perfection” is actually a deep, deep flaw. Trying to “trick” people into accepting such an inferior intelligence by having it “fumble” to create human pity is immoral.

For the guys trying to create the perfect robots of imagination, people are the problem, and the “beautiful mind” robot has to be dumbed down to relate to mere mortals. This attitude has to change, or Robots will Never Jump.

I’ll take a bunch of dumb-bunny robots falling over for real – not “faking” it – any day. It is an honest flaw, rather than a sneaky one. That is REAL imperfection, not the phony kind. At least our laughter at the comic robots at DARPA is genuine.

https://www.youtube.com/watch?v=g0TaYhjpOfo
