Robots That Jump

Robot Bodies Needed Before Robot Minds

This Robot is NOT “Giving a TED Talk!”

What a bunch of dumbasses.

Robot “Gives a TED Talk”



This robot is not giving a speech. A human is giving a speech, and operating an electrical puppet in order to present themselves as something other than human.

While the tech itself might have some uses (e.g., deep-sea tele-operation), we already have LOTS of tele-operated devices.

And, we have had puppets speak for us for a LOOONGGGG TIME…



Now, Punch is more interesting to watch than the droning robot above, and he conveys nuances of emotion lacking in the robot’s floppy arms. However, Punch and talking robots are both injecting nasty messages. In the case of Punch, it is domestic violence and (as we see) disrespect for authority. In the case of the robot, it is the illusion that a robot exists that could give a TED talk.

No such robot exists. Even Watson, IBM’s system that comes closest to being an actual artificial intelligence, could not create, much less present, a “talk.”

So, what do we have? In Punch, the creators encourage us to laugh at somebody gettin’ a beatin’. In the case of Robo-puppet, we have someone encouraging us to believe the lie of intelligent robots “ready to take over.” Sheesh.


Robots that Jump, Instead of Robots that Pretend to Think

A string of articles in the media about teaching robots by knocking them over:

The key problem has been engineers misunderstanding the problem of walking. In typical engineering style, the participants in the DARPA Humanoid Challenge emphasized keeping their robots upright. They expected that if the right algorithm were created, the robot would be able to stand upright. The problem, for them, is standing upright.

But in nature, this is NOT the problem solved by agile animals. Instead, they are trying not to fall down and damage themselves. The goal is at a higher level: the animal is actually trying to survive. Survival may require walking or running, but that is secondary.

In particular, animals avoid damage and try to break their fall when the “perfect” gait fails. Compared to designed robots, animals always have a plan B, C, D…

So, it might even be that robots end up walking with better balance than humans someday. That’s the engineers’ goal. But it doesn’t solve the problem of an agile robot. The problem is getting from here to there without getting damaged. A person ultimately might be clumsier on their feet than a future robot, but unless the engineers change their goals, the person will do better, since people are also adapted to fall down gracefully, walk funny if asked, even crawl if that works better. These are not “fails” but part of the locomotion process.

Engineers have to get away from thinking that falling down is a failure of their algorithm.
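To make “plan B, C, D” concrete, here is a minimal sketch (in Python, with made-up names and thresholds – not Georgia Tech’s or anyone else’s actual controller) of a control loop in which falling and getting back up are ordinary behaviors rather than error states:

```python
from dataclasses import dataclass

# Purely illustrative -- hypothetical names, not any real robot's controller.
# The point: falling and getting up are ordinary behaviors, not error states.

@dataclass
class BodyState:
    tilt_deg: float        # lean away from vertical, in degrees
    tilt_rate: float       # how fast the lean is growing (deg/s)
    torso_on_ground: bool  # did we already end up on the floor?

def choose_behavior(s: BodyState) -> str:
    """Pick a behavior for this control cycle, in priority order (plans A..D)."""
    if s.torso_on_ground:                    # plan D: we fell -- get up, don't abort
        return "get_up"
    if s.tilt_deg > 30 and s.tilt_rate > 0:  # plan C: the fall is unavoidable -- break it
        return "tuck_and_roll"
    if s.tilt_deg > 10:                      # plan B: balance disturbed -- recovery step
        return "recovery_step"
    return "walk"                            # plan A: nominal gait

if __name__ == "__main__":
    for state in (BodyState(2, 0, False), BodyState(15, 5, False),
                  BodyState(40, 20, False), BodyState(70, 0, True)):
        print(state, "->", choose_behavior(state))
```

The design choice is the point: “tuck_and_roll” and “get_up” are normal outputs of the controller, not exceptions thrown when the walking algorithm “fails.”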

As might be expected, the falling-down work was done at Georgia Tech, known for its ongoing interest in Robots That Jump rather than robots that supposedly “think.”

This work is coupled with a story about Honda. The company wants to make a version of its Asimo robot that is agile (instead of a big electric puppet) and can climb stairs and crawl. Honda now has a pretty interesting show-robot with about 15 years of experience behind it. However, I wonder if they are seeing the problem correctly, and whether they will include “falling down” as an active engineering goal, rather than something to be avoided by their algorithm.

This is good, since to date the Asimo has been a marketing tool rather than a truly agile robot. Despite its friendly shows, it was useless when disaster struck at Fukushima a few years back, and a robot able to walk, fall down, and get up again in the tangled wreckage of a nuclear plant would have been great.

Finally, as a bit of a counter-trend, a report that people want “flawed” robots.

This is dumb. It is sad to see journalists regurgitate this stuff without applying the critical thinking they supposedly learned in school!

The idea here – quite wrong – is that robots will act perfectly, and have to be “broken” somewhat to be acceptable to humans. This is nonsense. The problem is the pointy-headed, beard-scratching engineers who defined what “perfect” behavior was in the first place.

Who put them in charge? The following quote from the Tech Times article above is telling…

At the first part of the interaction, the robots showed off their flawless capabilities. Afterwards, Erwin committed errors in remembering facts, while Keepon exhibited extreme sadness, happiness and other emotions through sounds and physical movements.

The result was that the respondents preferred it when the robots seemed to possess human-like characteristics such as making mistakes and showing emotions. Researchers concluded that a companion robot should be friendly and empathic in a way that they recognize users’ needs and emotions, and then act accordingly.

In fact, the problem is not that the robot was too perfect. Instead, the robot had no understanding of its social context; it acted incorrectly for the situation. Leave it to an engineer to see autistic-level social ignorance as perfection! The researchers then faked things by “breaking” their robot. That made people sympathetic, but it didn’t actually improve the conversation.

It is more like trying to get people to accept phony robot minds by creating pity for them.

What is interesting is that this IS a change from the other “big idea” that roboticists have had – if robots can’t carry on human interactions, make people become more machinelike so robots can understand them. If a robot can’t recognize people, have them wear a tag with a code it can access. If it can’t talk, have people say only certain things. If it can’t walk (typical of many robots), have people carry it through areas that require walking.

All over the world, you can see the trend caused by the failure of Artificial Intelligence – make people act more like machines so machines can understand them. If we broaden the definition of robot to ‘bot, as is often done, we can see this in action.

For example, web pages have to have lots of extra code added (called Search Engine Optimization, or SEO) for the search engine spiders to read. This is because no machine can actually read a web page intelligently, much less figure out whether two people in a picture are wearing regular clothes or are actors in costume for a play. So, when we are “writing for the web” we must actually write for people, then turn around and write for machines. Otherwise we get a lousy score in Google. Now, if we would only communicate in a way that machines could understand…that would be “perfect.”
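To see what “writing for machines” looks like in practice, here is a hedged illustration: the snippet below builds the kind of machine-readable duplicate (schema.org JSON-LD markup) that gets bolted onto a page so a crawler can parse what a human reader grasps at a glance. The specific fields and values are invented for the example, not an SEO recipe:

```python
import json

# Hypothetical example of "writing for machines": the human-readable article
# is one thing; the machine-readable duplicate bolted on for crawlers is another.
article = {
    "headline": "Robots That Jump",
    "author": "A Human",
    "datePublished": "2015-11-13",
}

# schema.org JSON-LD restates the same facts in a vocabulary a crawler can
# parse, because it cannot actually "read" the article the way a person does.
json_ld = {
    "@context": "https://schema.org",
    "@type": "BlogPosting",
    "headline": article["headline"],
    "author": {"@type": "Person", "name": article["author"]},
    "datePublished": article["datePublished"],
}

print('<script type="application/ld+json">')
print(json.dumps(json_ld, indent=2))
print("</script>")
```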

So, the article about making robots flawed is actually about creating the right kind of “trick” to make people accept already flawed robots. Clearly, according to this, a robot that acts like the one in the Tech Times article is NOT behaving perfectly. It doesn’t understand the environment it is in, since the social environment that humans create with their brains is too complex for it. It talks in a stupid way, which engineers have redefined as the Platonic, “perfect” way to relate. It does not have any sort of ideal or perfect intelligence.

This goes to the deepest problem with many robot designers. They think they can create a being superior to humans, which in turn is defined as a collection of “ideal” behaviors they’ve cobbled together from half-remembered liberal studies courses. All the science fiction they read imagined robots as superintelligent, superstrong, godlike beings. Therefore, they try to make robots that walk “perfectly.” If that fails, they re-declare human behavior as “flawed” and “break” their robot so people feel enough pity to interact with it.

Here’s a great example. An engineer would see a flaw in the image below. A graphic designer would see a clever idea:


But here is a real mistake that probably cost someone their job.


Is there any “intelligent” robot assistant that could tell the difference? Nope. Its apparent math-style “perfection” is actually a deep, deep flaw. Trying to “trick” people into accepting such an inferior intelligence by having it “fumble” to create human pity is immoral.

For thez guyz trying to create the perfect robots of imagination, people are the problem, and the “beautiful mind” robot has to be dumbed down to relate to mere mortals. This attitude has to change, or Robots will Never Jump.

I’ll take a bunch of dumb-bunny robots falling over for real – not “faking” it – any day. It is an honest flaw rather than a sneaky one. That is REAL imperfection, not the phony kind. At least our laughter at the comic robots at DARPA is genuine.



Dumb Robots try to (Really) Walk

The recent “robots falling down” videos from the DARPA Humanoid Challenge are fun to watch, especially since I fell myself last week and managed to sprain a wrist. Compared to the old “Asimo climbs stairs” videos, there has been some progress (though, alas, not as much as most people think) in making functional humanoids. Unlike the stage shows for the Japanese robots, these machines are trying to function in a more natural environment.

Oopsie for the Hubo!

Ouch! Fortunately, these robots don’t have a lot of sensors, so taking a fall isn’t so bad for them:

Strangely, these machines seem most real when they are “unconscious” and being carried away on a stretcher – for once they don’t seem “mechanical.”

As usual, the results (which are interesting, but very far from useful) are hyped relentlessly:

Human-Level Intelligence? I’d hate to meet this moron.

Considering the sidebar of “related videos” on YouTube, the public’s imagination has already jumped ahead to robot sex slaves. But…if they are soooo clumsy at walking, what will they be like in bed?  Ouch, get that metal limb out of my face!

Wired says not to laugh at these robots. Well, well, I am laughing!

Any robot that was truly intelligent would laugh as well.



Google’s Robodog – False Fear

Google has been showing off a lighter version of BigDog, the robotic beast created by Boston Dynamics (before it was purchased by Google). A great video shows the new dog (electric) alongside the old one (a revving gasoline engine, and quite loud).

The lame part is the wrapper pages from various news organizations:

“Robo-Dog” brings us one step closer to the end of humanity (Time)

Robo-Dog too real for comfort (Mashable)

Haunts our dreams (Engadget)

Geez, even the UK tabloids do a better job of reporting this. The Daily Mail provides a much better discussion, including an analysis of the military uses (which is what is driving the project, after all).

These examples clearly show the difference between “old media” professional reporting (even in scandal sheets) and the amateurish new-media blogosphere. The old-school reporters give us information with analysis, while the new era can only pretend to be “afraid.” Unfortunately, Time magazine online has joined the new era in its silly statements.

I say “pretend” because all these tech blogs are actually huge fans of robots. “We should be afraid” is actually code for “dig in, it’s awesome.” If there were actually a robot takeover on the horizon, these blogs would welcome it while pretending to be fearful.

This is the exact attitude we saw in the original Jurassic Park movie. After a stern lecture on the dangers of technology, the rest of the movie glorifies the result – DINOSAURS. Same thing here. What these blogs are doing is religion. They say a false prayer to “thou shalt not tamper with Nature” before digging into the gluttonous feast of robot fun.

It’s interesting to imagine this as an actual religion, complete with rituals. Worship of robots involves a short kneel-down to become humble and “fearful before the Lord,” in this case the machine-gods that the coming Last Days (the Singularity) are supposed to unleash. It is touching to see people clutching their smartphones and tablets like rosary beads, watching the spectacle at the altar (the screen) and raising their voices in fearful prayer to a robot dog.

Future generations will marvel at this stupidity.


Oculus Concepts – 1939 and 2014

Here’s a current image of the virtual reality system by Oculus:




Here’s a concept model for a similar system from 1939.



Probably there were no cathode ray tubes small enough in 1939 to create a mini 3D TV system…but one could dream of someday. Tech changes, but ideas (like personal “full vision” media experiences) remain more constant.


Atlas Shows Greater Agility

The move to more agile robots continues, and, more importantly, so does the belief that a useful and/or “intelligent” robot must be agile.

Somewhat crazed reporting by the normally sober Daily Mail (UK)

The numbers begin to look right for Atlas (DARPA/Boston Dynamics) – 28 joints, and hydraulics instead of electric motors (which are hopeless for natural joint movement). The robot needs a tether (it can’t run on internal power), but its software (from the Florida Institute for Human and Machine Cognition) is geared to agile motion rather than engineer-style motions.

The Daily Mail article even makes the agility connection, which is unusual. Most media coverage of robots assumes superhuman strength and motion, and focuses on the (apparently) evil minds of our “future overlords,” trying to imagine the dark thoughts in their (nonexistent) brains. Instead, the article links to a karate contest, which, like a plyo jump, is something no robot to date could perform. Atlas, in contrast, is touted as being able to get into a car and drive it.

Despite this inchwise progress, we are still mostly stuck in robot fantasy rather than reality. The robot is called a “he,” despite its lack of genitals and inability to reproduce. And (sigh) we are supposed to be scared of this “terrifying” robot, which looks like a bunch of picnic baskets welded together. I doubt that Atlas could win a karate match, or triumph over a six-year-old with determined fists. The tech-religion aspect of robotics continues, despite moves to the contrary…


Robots that Shuffle

Progress on the development of agile robots: the Boston Dynamics/DARPA humanoid robot Atlas drags a heavy object. In particular, the heavy truss being dragged is “unmodeled” – in other words, the experimenters didn’t fix the results by pre-coding the dimensions and weight of the object into Atlas. Dynamic adjustment = one small step for Robots That Jump!
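To show what “unmodeled” means in practice, here is a minimal, purely hypothetical sketch – not Boston Dynamics’ software. Instead of reading the truss’s mass from a hard-coded model, the controller backs the load out of the forces it actually measures and scales its effort accordingly:

```python
# Hypothetical sketch of "unmodeled" load handling -- not Atlas's actual software.
# A pre-modeled controller would hard-code the truss's mass; here the load is
# estimated from measured resistance, and the commanded effort adapts to it.

FRICTION = 0.6   # assumed friction coefficient between truss and floor (made up)
GRAVITY = 9.81   # m/s^2

def estimate_load_mass(measured_drag_force: float) -> float:
    """Back the payload mass out of the force the arm actually feels (N -> kg)."""
    return measured_drag_force / (FRICTION * GRAVITY)

def commanded_pull(measured_drag_force: float, margin: float = 1.2) -> float:
    """Pull a bit harder than the measured resistance, whatever the object is."""
    return margin * measured_drag_force

for drag in (50.0, 180.0, 400.0):   # three different unknown objects, in newtons
    print(f"drag {drag:5.1f} N -> est. mass {estimate_load_mass(drag):5.1f} kg, "
          f"commanded pull {commanded_pull(drag):5.1f} N")
```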

Compare to related videos by the Atlas group:

While this demonstration is impressive (given the incredibly lame performance of humanoids over the last 20 years), it is still far from a “robot that jumps.” In particular, the robot had internal sensors but little external sensing. The slow, old-man shuffle the robot displays is what keeps the truss from banging into its legs – even so, at the speed the robot is moving, you can see the truss tapping the lower leg on a regular basis.

While animals may be clumsy, they are not clumsy in this way. The reason is that they are “sensor-first” and “sensor-dense.” Classic robotics emphasized exhaustive computation from a few sensors, rather than the shallow processing from a very large number of sensors found in living things. The result is robots that can do old-man tasks, but have trouble displaying the dexterity so often touted for our so-called replacements.
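Here is a rough sketch of that contrast, again hypothetical rather than any shipping controller: a “sensor-dense” design wires many cheap sensors to shallow, local reflexes, instead of feeding a handful of sensors into one heavy model:

```python
# Hypothetical contrast, not any real robot's code: many cheap sensors, each
# wired to a shallow local reflex, instead of a few sensors feeding one deep model.

# "Sensor-dense" style: a flat table of simple, local reactions.
REFLEXES = {
    "left_shin_contact":  "shorten_left_stride",   # the truss tapping the leg
    "right_shin_contact": "shorten_right_stride",
    "left_foot_slip":     "lower_center_of_mass",
    "grip_slipping":      "tighten_grip",
}

def react(active_sensors):
    """Shallow processing: each firing sensor maps straight to a small correction."""
    return [REFLEXES[s] for s in active_sensors if s in REFLEXES]

# The classic alternative pushes a few streams (an IMU, joint encoders) through
# an expensive whole-body optimization every cycle: exhaustive computation from
# sparse sensing, rather than cheap reactions from dense sensing.

print(react({"left_shin_contact", "grip_slipping"}))
```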

Where will this work go? Given the interesting work Boston Dynamics did on multi-legged robots, one has some hope for Atlas. But it runs the risk of being another grad student demo, despite the fact that the methods of coding Atlas responses seem closer to biology than is typical in these projects.

But, as usual, this is lost on the tech industry, whose “reporters” still pad their stories by implying the robot is much more than it is…

It’s a good thing robotics engineers haven’t figured out how to make humanoids disgruntled yet.

Gizmodo (of course)

Moving smoothly in the world is an arthropod-level function. There are about 50 more levels necessary in any robot (even if Atlas moves faster than a super-old man someday) before we get to “disgruntled.” And it is NOT scary to see a machine do this – instead, it is a bit pathetic. Our machine overlords turn out to be shuffly old men.

Why has implying that robots can do more than they actually can turned into a cute way to end a tech story? This is not the same as saying robots may be dexterous someday. And why do we need to be “scared”?

Frankly, I’m more frightened of super-sized cellphones.


Bionic trunks and robotic food

An example of an unconventional robot motion system from a very interesting company, Festo.

This mechanical version of an elephant’s trunk (or octopus tentacle) could change how robots are used.

This same company is experimenting with a robotic kangaroo, whose leg mechanism can recover energy during hops like a real kangaroo.

Finally, their eSpheres project tries to implement a cloud of gnats, which circle in a defined space of air without colliding:

These are very interesting projects, though they also lead to creepy future visions of robot gnat drone swarms annoying us, possibly even including man-machine music from our fave robot band, Kraftwerk.

Also, the drone swarms might not just circle, but fly like birds…like those seen in the COLLMOT Robotic Research Project:
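For readers wondering what keeps a swarm circling “without colliding,” the classic recipe is a few local rules per agent – separation, alignment, cohesion. The sketch below is a bare-bones version of that idea with invented parameters, not the eSpheres or COLLMOT control code:

```python
import random

# Bare-bones boids-style flocking sketch in 2-D. Illustrative only -- these are
# not the Festo eSpheres or COLLMOT controllers, and every constant is made up.

N, NEIGHBOR_R, MIN_R, DT, MAX_SPEED = 20, 3.0, 1.0, 0.1, 2.0
pos = [[random.uniform(0, 10), random.uniform(0, 10)] for _ in range(N)]
vel = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(N)]

def dist2(a, b):
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

def step():
    for i in range(N):
        sep, ali, coh, n = [0.0, 0.0], [0.0, 0.0], [0.0, 0.0], 0
        for j in range(N):
            if i == j:
                continue
            d2 = dist2(pos[i], pos[j])
            if d2 < MIN_R ** 2:                      # separation: don't collide
                sep[0] += pos[i][0] - pos[j][0]
                sep[1] += pos[i][1] - pos[j][1]
            if d2 < NEIGHBOR_R ** 2:                 # neighbors for alignment/cohesion
                ali[0] += vel[j][0]; ali[1] += vel[j][1]
                coh[0] += pos[j][0]; coh[1] += pos[j][1]
                n += 1
        if n:
            ali = [a / n - v for a, v in zip(ali, vel[i])]   # match neighbors' heading
            coh = [c / n - p for c, p in zip(coh, pos[i])]   # drift toward their center
        for k in range(2):
            vel[i][k] += 1.5 * sep[k] + 0.05 * ali[k] + 0.01 * coh[k]
        speed = (vel[i][0] ** 2 + vel[i][1] ** 2) ** 0.5
        if speed > MAX_SPEED:                        # crude speed cap
            vel[i][0] *= MAX_SPEED / speed
            vel[i][1] *= MAX_SPEED / speed
        pos[i][0] += vel[i][0] * DT
        pos[i][1] += vel[i][1] * DT

for _ in range(100):
    step()
print([(round(x, 1), round(y, 1)) for x, y in pos])
```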

Meanwhile, the non-jump robots advance in triumph everywhere, in people’s minds if not completely in reality. Here’s an update of the “Robot Nation” argument about the end of fast-food jobs, posted by Marshall Brain a decade ago.

Obviously, the CNN journalists (sic) didn’t know about and/or mention this earlier work.

A companion article on Brain’s site discusses robots in 2015. Since it was written a decade ago, it will soon be possible to test prediction against reality. Brain suggested that it might be hard to create true Robots That Jump, and that an intermediate stage would be an undereducated dumbass with a headset strapped on, taking orders to do the “agile” things in the fast-food kitchen and bathroom. This would allow automation of the business model (e.g., a central computer treating all the fast-food locations as one gigantic machine) while cheaply automating the parts that are hard for robots to do.

As usual, things are being oversold. After all, food automats have been around for a long while, but we don’t automatically glom onto their machine-like method of food delivery. Yet we assume that people will want the high-tech version of fast food because, well, it’s “progress.”





Some Resources

It’s always worth looking through aggregators for stories on robotics. Unlike the one-off pieces in the general tech press, they are often worth reading:

Robotics Business Review

Robotics Trends

Meanwhile, the desire for (and to be) a lame sex doll continues unabated:

(the picture of the humanoids is priceless!)


A Bit of Reality on The Verge

It’s wonderful to see a tech blog question its own quasi-religious assumptions – that humans have already created (it’s a conspiracy) a race of humanoid robots that will (1) revolt against us, (2) have sex with us, or (3) bring us a beer. The actual state of humanoid robots is pretty lame compared to the movies, but you wouldn’t know it from CES last week. Here is The Verge pointing out that “booth bots,” like the similarly hyped “booth babes,” are part of the larger flapdoodle that characterizes our murky take on the future of robotics.

In the article by Russell Brandom, the notion that we are about to have humanoid robots in our society is skewered for what it is – tech bible (the Revelation part).

CES’s robotics booths have a surprising number of anthropomorphic bots, and most of them seem indifferent to the latest displays and processors. They’re working another angle, something much closer to kitsch. Human robots are fascinating, but their fascinating quality doesn’t have much to do with the technology at work behind the scenes. It’s aesthetics, not technology.


Robothespian, whose antics are described in the article, is doing EXACTLY what Elektro was doing at the 1939 World’s Fair.

Another great quote from the article:

In the modest goal of tempting people into your booth, these robots are doing better than a lot of the more impressive tech on the floor. As it turns out, that game is more about the human reaction than anything that happens on a circuit board.

In other words, our desire to believe that Robots That Jump actually exist is fueled by our perception that “technology is getting so much faster every day,” and that therefore there must be real robots running around. But this desire was the same in 1939 as it is in 2014. And since nobody can really show what people want to see, there is a quick retreat into computer-generated animation fantasy.

Computer-generated fantasy robot punches kid

Now, if you really have to make a big metal puppet, how about going beyond this idea? You can find exactly the same scenes in the (surprise) 1939 movie “The Phantom Creeps”:

Bela Lugosi in The Phantom Creeps

(go about 3 minutes in to see the robot, who is concerned about household neatness)

The conclusion here is that our desire to have robots is consistent, but our ability to make them, even today, yields only big metal puppets. There has been some progress in 75 years – witness the December 2013 DARPA Robotics Challenge – but it is nowhere near our fiction.

And what we want the robots to do (crack jokes, fight, play guessing games) is something a Neanderthal 40,000 years ago would have had no problem understanding.