Robots That Jump

Robot Bodies Needed Before Robot Minds

Driverless cars don’t jump, but drones might…

2013 was the year that robotics began to heat up (yet again), but not robots that jump. There was the usual show of humanoid puppets with promises of future beer delivery…but progress toward functional bipeds remained stuck in first gear. The antics of Asimo and related humanoids look much as they did in 2004. Media hype remains alive, as in this example from the New York Times. Apparently, even if the robots aren’t intelligent, they will be made more faux-human to make them more acceptable in non-industrial environments. The article points out that someone kissed a robot, but kids (and more than a few adults) kiss their stuffed toys all the time. People kiss their TV screens. Progress.

The most obvious aspect of this is the humanoid-ization of industrial robots, e.g. Rethink Robotics’ Baxter. From the videos, Baxter seems like a typical industrial robot with a moderate level of sensory-driven actions. But the form factor has been changed, so the robot seems less of a threat on the shop floor. The design is pretty good, avoiding the “uncanny valley” and making a friendly-looking machine (in a body-builder sort of way).

Another interesting area centered on the DARPA Robotics Challenge. The media made a big show of the humanoid Atlas body, but the big deal was having people compete to make a virtual robot work in a simulated environment, and then try to transfer it to a real-world robot. The results are summarized in a video (with a very strange choice of soundtrack) on YouTube.

While it seems unlikely that these systems are truly agile, at least DARPA is trying to compare the virtual to the real. The problem (as always) is limited sensory input. It is worth stressing that living organisms always have very elaborate senses, even if they have hardly any brain. But humanoid robots continue to be brain-heavy and sense-deficient. The humanoids here still seem very tentative due to their lack of sensation. No wonder Atlas can’t jump!

Hype is endless. Motley Fool treats Atlas, built by Boston Dynamics (now acquired by Google), as a disaster-bot. One look at this robot shows that it would be part of the disaster in a real-world environment. Hopefully, this won’t lead to another Bitcoin-style bubble in robotics.

In contrast, “driverless cars” became big news (yet again), after a long winter following the end of the DARPA Grand Challenge in 2004-2005. The model for the driverless future is clearly a “robot-ized” environment instead of an intelligent agent. While the robot cars demonstrated some sensing of their environment, the thing that will let them actually work is (1) GPS providing precise locations, (2) map files linked to the coordinates, and (3) a network between the cars adjusting their behavior relative to each other. The network aspect is often underappreciated, but necessary. If all cars “talk” to each other, they can negotiate how to drive. It is much harder to mix driverless cars with those driven by humans for this exact reason. A car that actually processes its environment (meaning it would make decisions more like a person) remains off the radar.
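The negotiation idea above can be sketched in a few lines of toy code. Everything here is my own illustration (the function, the tie-breaking rule, the numbers), not any real vehicle-to-vehicle protocol: the point is only that networked cars can compute the same crossing order from broadcast data alone, without ever “perceiving” the scene.

```python
# Toy sketch of car-to-car negotiation: each car broadcasts its
# estimated time of arrival at a shared point, and every car derives
# the same crossing order from the same data -- no scene perception.
from dataclasses import dataclass

@dataclass
class Car:
    car_id: int
    eta_to_intersection: float  # seconds until reaching the shared point

def negotiate_crossing(cars):
    """Return car IDs in crossing order.

    Ties are broken by car_id, so the result is deterministic:
    all networked cars agree without any central referee.
    """
    ranked = sorted(cars, key=lambda c: (c.eta_to_intersection, c.car_id))
    return [c.car_id for c in ranked]

cars = [Car(3, 4.0), Car(1, 2.5), Car(2, 2.5)]
print(negotiate_crossing(cars))  # -> [1, 2, 3]
```

A human driver mixed into this pool breaks the scheme precisely because they broadcast nothing, which is the point made above.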

However, this is not a problem for techies – rather than making robots that can drive in natural environments, they call attention to the value of making the roads more machinelike. Instead of jerky starts and stops with an emotional human driver, we can convert driving to an operation with precise, predetermined steps. If roads and street signs are too complex for robots, the solution is now to make the roads and street signs robot-friendly. This will “make driving safer,” according to techies.

Sure, this would work. However, what we end up with is no longer driving – it is more like an old “slot car” system where the environment prevents the unintelligent vehicle from deviating from its path. It is very much like the “world of the future” I read about in grade school in my 1960s “Weekly Reader.” In that world (1979) the roads had wires that sent messages to the cars so they operated without crashing. A central controlling computer kept traffic flowing without jams. Despite the Cylon-type expectations of robots, our coming driverless age will be more like a railroad or streetcar system. If you add in the concept of “shared ownership” (an oxymoron if there ever was one) the transformation is complete. Non-owned cars, by definition, are less subject to user control. It makes more sense to make them part of a dance-like system run by standard computer software. But we won’t be any closer to the Asimov-ian car-bots of his story, Sally. Asimov’s cars are Robots that Jump: they don’t need the GPS and network connections of the modern “vision.” Instead, they seem more like critters, voiceless but communicating with honks and slamming doors.

Driverless cars as envisioned today are definitely NOT Robots that Jump. The models currently out have very limited sensors and rely on artificial signals (e.g. GPS) along with precompiled map data. Detection is limited to collision avoidance. They might look like a good place to start, but it won’t happen from there.

One last point to consider is how Google has its tentacles in both driverless cars and humanoid robots. As mentioned earlier, Google grabbed Boston Dynamics, creator of Atlas and various four-legged robots like the famous Big Dog and Cheetah, along with several other companies including Schaft. Schaft has been working on stronger, capacitor-based robotic actuators, which can provide the torque needed to keep robots upright. Current servos just aren’t strong enough to respond to the forces generated when a robot slips on a banana peel.

It was refreshing to see The Guardian Online look at Google robotics not as the rise of the machines, but as the rise of a corporation comparable to the big trusts of the early 20th century. For the near term, we won’t have Robots that Jump. We don’t have to fear robots. Instead, we have to worry about companies making the world safe for their as yet clumsy, dumb robots. The worst case would be a world that is robot-friendly, but highly structured and controlled in human terms. The problem is that the world would be less interesting when made more accessible to dumb robots.

On the other hand, flight drones might be closer to true robots. Unlike driverless cars, their senses more or less match the available environment information when flying. Next post.


4 responses to “Driverless cars don’t jump, but drones might…”

  1. John Nagle January 2, 2014 at 6:53 am

    Schaft is on the right track with power storage. Both Big Dog and Atlas have powerful but inefficient hydraulic systems. Neither has any significant energy recovery. One might expect hydraulic accumulators, but no; the hydraulics are traditional, although there’s a nice two-speed arrangement in the actuators to allow either fast big motions or slow powerful motions. Schaft is overpowering electric motors and liquid cooling them to get high power. Tesla does that for their vehicle motors. That gives them more options when they need a temporary power boost.

    The DRC teams have very slow, semi-teleoperated control at this point, but they only had three months with the real robots, and the Gazebo physics simulator wasn’t very good. Both of those problems will be fixed by next year. Assuming, of course, that Google doesn’t dump the DoD business to focus on something with more volume potential.

    • pindiespace January 2, 2014 at 7:35 pm

      This is right on track. If you’ve read my blog, you know I think the “problem” of robotics has been less about AI, and more about lousy robot bodies.

      Thus, huge amounts of work on wheeled robots (because they’re easy to build) so that researchers can focus on building a brain. The connection to Tesla is awesome – a vehicle optimized along one line of performance (monster engine torque), making the rest of the car impractical except for extremely short trips. I knew Big Dog had hydraulics, but the lack of efficiency is a real problem. I’ve never seen a movie robot with a roaring engine – the assumption is that there is some magic “power cell” that we can put into small bodies. The article was particularly good at pointing out that the way animals recover from slips, mis-steps, etc. requires a lot of fast power. Current robots don’t do this, and despite work with air and water pressure, we aren’t there. In fact, the description of the electric muscles in the Martian fighting machines by H.G. Wells still sounds hi-tech (an elastic sheath with micro-electromagnets).

      What Google does will be interesting. They are of course a business, but, retaining founder management, they might make decisions that don’t work in a strictly business sense. I suspect they’ll work with robots for quite a while, for PR value if nothing else. Look what little drones did to make Amazon the talk of the tech world. Also, they have true “believers” in the quasi-religious sense who feel it is their duty to usher in the robot age. Volume be damned if we can have our own Area 51/Skunk Works on a barge in SF Bay.

      • roboticist January 13, 2014 at 6:57 pm

        Hydraulics is exactly the wrong way to go: it loses a lot of control bandwidth and efficiency. Impulse power for fast reaction is required, yes, but it’s entirely achievable with electromechanical systems.
        Traditional motors, shape-memory actuators, etc. are all up to the task; they just have not been utilized very efficiently.
        Take a look at projects like the DLR third-generation dexterous hand – it’s amazing how robust they have managed to build this thing, all thanks to high control bandwidth and optimized actuator usage. They have a side project where they use elastic tendons to improve even further.

  2. roboticist January 13, 2014 at 6:51 pm

    Actually, wrt sensory inputs.

    I’m not 100% sure about SCHAFT and its HRP project heritage (it’s running a derivative of the OpenHRP stack), but everyone building a well-articulated robot these days has made big leaps in factoring in a very significant sensory input – what the joints themselves are telling you.

    Smart servos like the Dynamixel or its competitors basically have one fast Cortex-M MCU controlling every joint, and they have very high-fidelity feedback on torque (current), velocity, and load resistance thanks to integrated current and temperature sensors and super-high-resolution miniature MEMS absolute position sensors. Note that the low or “reflex” level of control is basically moved close to the joint, and essentially there is a hierarchical computing architecture where the central brain sends high-level commands and receives high-level feedback but does not concern itself with hundred-kilohertz servo current loops.
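    (Blog author’s aside: the hierarchy described here can be sketched in toy code. The gains, dynamics, and function names below are illustrative assumptions of mine, not Dynamixel firmware – the point is just that the “brain” sends one coarse setpoint while a fast local loop, standing in for the joint’s MCU, does all the fine control.)

```python
# Toy hierarchical control: the central "brain" issues a single
# high-level setpoint; a fast per-joint PD loop (standing in for the
# Cortex-M MCU's local current/velocity loop) tracks it on its own.
def joint_servo(setpoint, position=0.0, velocity=0.0,
                kp=100.0, kd=20.0, dt=0.001, steps=2000):
    """Run a fast local PD loop toward setpoint; return final position.

    Unit-inertia toy dynamics, integrated with explicit Euler steps.
    The brain never sees these iterations -- only the summary result.
    """
    for _ in range(steps):
        torque = kp * (setpoint - position) - kd * velocity  # local feedback
        velocity += torque * dt
        position += velocity * dt
    return position

# The brain's entire involvement: one command, one summarized reply.
final = joint_servo(setpoint=1.0)
print(round(final, 3))  # -> 1.0 (the joint settled on its target)
```

    With kp=100 and kd=20 the toy joint is critically damped, so it settles cleanly – the brain can treat “go to 1.0” as a solved sub-problem, which is exactly the division of labor described above.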

    This makes it possible to build and update a very high-quality full kinematic model of the robot in the “brain” part, enabling much better active control of the robot. Both heavy robots like the DRC ones and the smaller RoboCup classes are moving rapidly along on that front.

    The fundamental advantage that SCHAFT put into their robot was using ultracaps as an impulse power source and adding liquid cooling to the joints, to pack much more peak specific power into the same weight and space. That’s exactly what is needed, as average joint loads are far, far below what is needed for impulse power for moves like jumps, quick balancing, etc. The downside is that if the joint has to deliver high power for longer than designed, the capacitors will be empty and heat will hit the limit – i.e. the joint will get tired just as humans do…
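    (Blog author’s aside: the “joint gets tired” trade-off is just capacitor arithmetic. The figures below are made-up illustrations, not SCHAFT specifications – they show how usable energy in a cap bank bounds the duration of a peak-power burst.)

```python
# Back-of-the-envelope ultracap burst budget.  Usable energy between
# a full voltage and a minimum working voltage, ignoring losses:
#   E = 1/2 * C * (V_full^2 - V_min^2)
def burst_duration(capacitance_f, v_full, v_min, peak_power_w):
    """Seconds of constant peak power a cap bank can supply above v_min."""
    usable_joules = 0.5 * capacitance_f * (v_full**2 - v_min**2)
    return usable_joules / peak_power_w

# e.g. a hypothetical 10 F bank drained from 48 V down to 24 V,
# feeding a 1 kW jump burst:
print(round(burst_duration(10.0, 48.0, 24.0, 1000.0), 2))  # -> 8.64 seconds
```

    Once those joules are gone the joint is back to its (much lower) continuous rating until the caps recharge – the electrical version of getting tired.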

    Note how cheap MEMS sensors, cheap distributed computing on 32-bit MCUs, and ultracapacitors basically enable a fundamentally new capability here.
