The never-ending drumbeat for the rise of the robots imagines them super-strong, super-fast, and able to recover from massive injury and keep on going – sort of like mechanical zombies. In reality, robots today can’t jump because they are so fragile. That explains the puzzlement the techno-faithful have over why Japan, with its huge robotics effort, couldn’t field any of these “ready to take over” models in the Fukushima nuclear disaster. In fact, the best it could do was tethered, tele-operated machines.
“According to experts, the biggest reason Japanese robots such as Honda’s Asimo were not used early on was their vulnerability to high radiation levels, which could easily damage their integrated circuits.
The domestic robot industry, in fact, had stopped working on ways to shield robots from extreme radiation around 10 years before the Fukushima crisis, and manufacturers and institutes were caught completely off guard, experts said.”
This story shows how little robotics has actually advanced in recent decades, in terms of real-world useful devices. The robots that actually went on site were PackBots from the US company iRobot.
These robots are not much different (though cheaper, and sporting better sensors) than the bomb-squad robots produced 20 or more years ago. In fact, I remember robotic, tele-operated arms being used in nuclear reactors in the late 1950s. They, and the PackBot, were built from a “body” perspective – a Robot That Jumps – though they were, and are, dumb as a stump.
The advanced Japanese “domestics” could not enter the reactor area for the same reason that humans couldn’t – radiation. As microprocessors have become smaller and more complex, they have become ever more sensitive to radiation. An Asimo entering the reactor area would have instantly fried its circuits. Combine this with robots’ sensitivity to electromagnetic pulses (EMPs), and you have a picture of a fragile lab experiment.
So, it’s not a surprise that “radiation hardened” robots like the PackBot are really tele-operated, with minimal autonomy. It is just too hard to make a complex brain in silicon that can also ride on a mobile robot body. The way we’re making more complex brains makes them more and more delicate.
NASA, which operates robots in environments with high levels of radiation, handles the problem simply – use old microprocessors. The Mars MER rovers Spirit and Opportunity used scaled-up versions of CPUs used in Macintosh computers from the early 1990s. The clock speed is 33 MHz, about 100 times slower than a fast desktop today. My understanding is that the chip is radiation-hardened by making it much larger (with thicker wires) than a standard chip (there’s a full discussion of radiation hardening at http://en.wikipedia.org/wiki/Radiation_hardening). The take-home is that you can’t just put a shield around a modern chip – you have to redesign it, and step back into the past.
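The speed gap is easy to put a number on. A back-of-envelope sketch in Python – the 3.3 GHz desktop figure is my assumption for a fast machine of that era:

```python
# Rough clock-speed comparison: rover-class CPU vs. a desktop CPU.
rad_hard_hz = 33e6   # ~33 MHz, the rover CPU clock cited above
desktop_hz = 3.3e9   # ~3.3 GHz, a fast desktop (my assumed figure)

slowdown = desktop_hz / rad_hard_hz
print(f"The rover CPU is about {slowdown:.0f}x slower")  # → about 100x
```

Clock speed isn’t the whole story (architecture and memory matter too), but it gives a feel for how far back in time radiation hardening forces you to go.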
A modern 2012 Macintosh, or a comparable Windows system, would fry in the Martian radiation environment – and it’s not really that bad compared to Earth. As it is, the MER rovers have had several instances of memory corruption despite radiation-hardened memory; one errant cosmic ray forced a scary reboot of the system.
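What a cosmic ray actually does to memory is flip a bit – a “single-event upset.” A toy sketch (plain Python, nothing rover-specific) of how a simple parity check detects one; real ECC memory goes further and can actually correct single-bit flips:

```python
def parity(word: int) -> int:
    """Even-parity bit for a word: 0 if the count of 1-bits is even."""
    return bin(word).count("1") % 2

# A 32-bit word as stored, with its parity bit computed at write time.
stored = 0b1011_0010_1110_0001_0000_1111_0101_1001
stored_parity = parity(stored)

# A cosmic ray flips bit 17 – a single-event upset.
corrupted = stored ^ (1 << 17)

# On read, the parity no longer matches: corruption detected
# (simple parity can't tell WHICH bit flipped, so it detects but
# can't repair – hence the reboot).
assert parity(corrupted) != stored_parity
```

Parity is the simplest possible check; a Hamming-style ECC adds enough redundant bits to locate and fix a single flipped bit, which is why hardened memory can often survive a hit without rebooting.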
We could see a similar idea at work in the Space Shuttle, which had the luxury of flying below Earth’s radiation belts, relatively protected. For many years, the Shuttles continued to use 386 processors – pre-Windows 95 hardware. The latest and greatest – the computers you would get at Best Buy, whose microprocessors are the same ones used in most robotics efforts – were just too fragile. While radiation was less of an issue there, fragility probably was. It would be very interesting to find out whether the laptops the astronauts bring up are subject to problems, even in low Earth orbit.
Why are we at this stage? Part of the problem is the very technology enabling advanced robotics – silicon-based processors. Make the wires small, and they become vulnerable. To reach the complexity required for a robotic brain, the chips must become fragile. As we iterate through the final shrinking of this technology, down to wires a few atoms wide, we will end up with exquisitely sensitive chips. They will be powerful, but will require massive shielding to work in any real-world environment.
So, despite what the dorks at Slashdot think, the robot race isn’t about to take over yet – forget the shotgun through the head like a zombie; just get a radiation source and – zap! It’s fascinating that Asimov made his positronic robots vulnerable in the same way – in more than one story, a gamma-ray blast destroys a robot’s brain. Strange that we’ve forgotten this (or don’t want to remember) and keep making movies with robots that can’t be destroyed.
I suspect that robots still won’t be ready for the next natural disaster – at present, the human body, or simple tele-operated devices, are the way to go. However, it is interesting that the Japanese are going to try automated farm machinery in the area damaged by the tidal wave.
The best irony – instead of showing us what an automated farm machine might look like, we get a picture of Asimo with the morning $5 double extra frapmeister chinuscope coffee, as if a robot that can’t even go outside is going to farm. That’s the silly gap between robot fantasy and reality.