Robot Bodies Needed Before Robot Minds
Monthly Archives: December 2011
December 26, 2011
Robot scientists have long known about “the uncanny valley” – the dip in comfort that lies between a cartoonish robot face and a fully human one.
Cartoony robots that display basic human features, but keep them basic (or look like a funny animal), are accepted. But as a face gets too close to human, liking turns to disgust. We’ve had some recent examples of this effect – Final Fantasy: The Spirits Within springs to mind, as do later efforts like Beowulf and The Polar Express. Yet everyone likes talking trains, or walking french-fry packs. The Uncanny Valley explains why current CGI characters are almost entirely humanoid “aliens” or “creatures” instead of virtual people.
A robot, even a primitive one like Elektro, built for the 1939 World’s Fair, is not really scary – it’s as cool as any robot today:
Here’s a YouTube video:
The popularity of zombies and vampires comes from the opposite direction – they embrace the Uncanny Valley. They are “almost human”, but that little difference makes them monsters.
Roboticists in Japan keep trying to build humanoid robots that cross the “uncanny valley”. Unfortunately, every effort I’ve seen lands squarely in the middle of horror.
Even more awful – a YouTube video of a robot zombie:
My suggestion: Anyone trying to make a zombie movie should rent this robot. Instead of rice falling heedlessly from this metal puppet’s mouth, it could be brains.
I suppose robot fanatics, uber-futurists, and transhumanists try to tell themselves these things are cool. They’re not, and their uncritical, reality-denying boosterism of such electric puppets undermines their broader belief system. These things are horrible. They hardly inspire the rest of us to embrace a future full of them.
A recent PNAS article (cited at this link) demonstrated that the Uncanny Valley exists for non-human primates. It is also something we acquire as we grow up and become social – very young infants don’t show the disgust seen in older children. So this is not just our “prejudice”, and it won’t go away as the younger generation grows up with robots. It would take genetic engineering to eliminate the Uncanny Valley in humans.
But one of the most fascinating aspects of the Uncanny Valley is that artists can make images of robots, even strongly humanoid ones, that don’t disgust. Rather than put one here, I suggest doing a Google search on “robot girl” or “robot boy” to see what I mean. Most images are erotic, but the fact that they are erotic means they overcome the Uncanny Valley. Apparently, artists can create images that, unlike photographic ones, don’t have that “dead” quality. Great photographers can do the same.
What’s interesting is that the Uncanny Valley doesn’t seem to apply to the autistic. Observers of Second Life, the reference “virtual world”, have noted that many of its most passionate members have Asperger’s syndrome, a mild form of autism. Here’s a slide presentation on the topic, noting the potential for doing research on autism in virtual worlds:
Here’s an older article on the topic:
This article makes an interesting point about the difference between virtual worlds and standard games. In short, the open-ended, non-repetitive interaction of a virtual world is better for autistics than the repetition of a game. Some might argue that this is true for everyone(!):
Now, research is being done with deliberate “uncanny valley” puppets and autistic kids.
In this concept, a robot firmly in the uncanny valley is given to autistic kids as a playmate. The kids are initially nasty with the robots – doing things like trying to gouge their eyes out. At first the robot gives no negative feedback (!) for this behavior, but once it demonstrates that the aggression “hurt” it, the children soon change their behavior in a more positive direction.
But here’s my own take on the “uncanny valley”, tying it into Robots That Jump. Near-human robots look like dead people that have been animated – in fact, they are zombies. In the image above, I immediately detected that the machine doesn’t care how its hair looks – it is being styled by an outsider. The machine doesn’t know its hair is wrong, because it has no sensors for its own appearance and can’t react to it.
The problem is that most robot designers and CGI artists focus on making their creature look as real as possible. But the last stretch of the “uncanny valley” is behavior and movement – even the slight motions we see when a real person holds still for a portrait. Susan Sontag’s book On Photography mentions this effect. Buried deep in the political drivel (obviously my opinion) she described how photographs create “dead people”: the highly realistic images aren’t aware of themselves, and can’t react to their surroundings. That timelessness, which also holds for “models” in CGI and for designed robots, is the root of the uncanny valley. It doesn’t bother very young children, who have not yet developed a model of others being aware of themselves. It doesn’t bother autistics as much, since they experience the subtle behavior indicating self-awareness as complex data that they have to laboriously process.
The lesson for robot designers is that behavior should come before appearance. Making a nearly-humanlike puppet with herky-jerky motions is the source of the uncanny valley; awareness of physical space is a must. Traditional puppets (rather than the metal monster puppets created by robot designers) don’t have this problem, since their operators infuse them with lifelike behavior – they feel as if they were alive, aware of themselves, and reacting to their surroundings.
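To make “behavior before appearance” concrete, here’s a toy Python sketch (my own invented construction, not any robot’s actual control code) of the kind of idle micro-motion that makes a puppet read as alive: a few slow sine waves with random phases, summed into a tiny joint-angle offset, so the figure is never perfectly still even when “at rest”.

```python
import math
import random

def make_idle_sway(seed=42, n=3):
    """Build an idle-sway function: a sum of n slow sine waves.

    Frequencies, amplitudes, and phases are drawn once, so the motion
    is smooth and repeatable but never settles into stillness.
    """
    rng = random.Random(seed)
    comps = [(rng.uniform(0.1, 0.5),          # frequency in Hz (slow drift)
              rng.uniform(0.005, 0.02),       # amplitude in radians (tiny)
              rng.uniform(0.0, 2 * math.pi))  # phase offset
             for _ in range(n)]

    def sway(t):
        """Joint-angle offset (radians) at time t seconds."""
        return sum(a * math.sin(2 * math.pi * f * t + p) for f, a, p in comps)

    return sway
```

Sampled at, say, 50 Hz, the offsets stay within a few hundredths of a radian – far too small to notice consciously, but enough to break the “dead portrait” stillness.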
Designers creating humanoid robots have put the cart before the horse – making them look good, with nothing inside. That is the exact description of an “undead” creature. Little wonder we hate them, and wonder at the Singularitarians who dream of becoming one of them, downloaded into immortal (clumsy) robot bodies.
I’d much prefer a cartoony Robot that Jumps to an anatomically-correct, lurching zombie sex-slave.
December 20, 2011
Like the rest of Asia, China has been working for many years on the development of humanoid robots. Their models seem similar to earlier Japanese designs, with the shuffling “bent knee” walk that Robots that Can’t Jump typically display.
Case in point:
There is more detail at http://www.plasticpals.com/?p=28937
The robots are functional, and are being supplied for publicity and promotional purposes, similar to the PR provided by Japanese robots. As for practical use…well, another decade, no Robots That Jump. I’m beginning to feel we are in the robot version of Zoolander.
Here’s a link to the robot in a video:
It’s also ironic how the robots – this one, and all the other humanoids since the 1990s – are previewed with dance music. It would be more interesting if the robots were getting something from the music. People use music to sync their movements in social interaction, and “feel the beat” as a guiding force. No other animal, except possibly parrots, “feels a beat”. A robot that wanted to dance when it heard a heavy beat in the background would be very interesting. Instead, we play the music so people feel the beat, and possibly forgive the robot’s clumsy locomotion as dance.
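As a sketch of what “getting something from the music” might minimally mean, here’s a toy beat detector in Python (the function, thresholds, and signal are all my own invented example, not a real audio library): it flags moments when short-term energy jumps above the recent average – the sort of raw signal a robot could sync its movements to, instead of having the beat played *at* it.

```python
import math

def detect_beats(samples, rate=8000, win=256, threshold=1.4):
    """Flag windows whose energy jumps above the recent average energy.

    A crude onset detector: a 'beat' is a short-term energy spike
    relative to the running average of the last few windows.
    """
    beats, history = [], []
    for i in range(0, len(samples) - win, win):
        e = sum(s * s for s in samples[i:i + win]) / win  # window energy
        if history and e > threshold * (sum(history) / len(history)):
            beats.append(i / rate)        # beat time in seconds
        history.append(e)
        if len(history) > 8:              # keep roughly the last 1/4 second
            history.pop(0)
    return beats

# Synthetic test signal: a quiet 220 Hz tone with loud bursts every 0.5 s
rate = 8000
samples = []
for n in range(rate * 2):
    t = n / rate
    amp = 1.0 if (t % 0.5) < 0.05 else 0.05
    samples.append(amp * math.sin(2 * math.pi * 220 * t))

beats = detect_beats(samples, rate)
```

On this synthetic signal the detector picks up the bursts at roughly 0.5, 1.0, and 1.5 seconds; real music would of course be far messier.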
Here’s another video link showing (slow) dance music with clumsy robots:
It would be much cooler to see someone get the sense-motor feedback loop working, even if the robot fell down constantly, compared to these wind-up staged events.
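For the record, a sense-motor feedback loop can be tiny. Here’s a hypothetical Python sketch of the idea (an inverted-pendulum toy with a PD controller; the constants are invented for illustration, not taken from any real robot): each cycle it senses the tilt, applies a corrective torque, and lets the physics respond.

```python
import math

def balance(theta0=0.2, kp=40.0, kd=8.0, g=9.81, l=1.0, dt=0.01, steps=500):
    """Simulate an inverted pendulum stabilized by a PD feedback loop.

    Each step: SENSE the tilt angle and angular velocity, then ACT by
    applying a torque proportional to both. Gravity tips the pole over;
    the feedback rights it.
    """
    theta, omega = theta0, 0.0                 # tilt (rad), angular velocity
    for _ in range(steps):
        torque = -kp * theta - kd * omega      # motor command from sensed state
        alpha = (g / l) * math.sin(theta) + torque  # net angular acceleration
        omega += alpha * dt                    # Euler integration
        theta += omega * dt
    return theta
```

With the feedback on, the simulated pole settles back to upright within a few seconds; zero the gains and it simply falls over – which is the whole point.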
In their current form, humanoid robots serve the same function (and not all that differently) that mechanical temple gods performed in the Roman Empire. The public has its religion (Terminator), and then sees some toys in a temple that mimic or imply the presence of the real (Terminator) god. The fact that most posts on the Chinese robot reference Skynet, an imaginary movie thing, proves my point. This is not the positive role that similar “hopeful” devices have played in stimulating research – it is simple affirmation of religious faith in our goddess Techna.
Let’s make one listen, jump, and fall down.
December 2, 2011
This one is just too cool for words:
With a diagram showing some components of the Analytical Engine design
December 2, 2011
There are two definitions of robots. One is the movie definition: humanoid or animal-like creatures displaying intelligence and intent. The second is a machine operating in the real world (as opposed to standard computer virtual worlds) which reacts to sensor input; intelligence and intent are lacking. By this second standard, we have growth in the robotics industry:
The article discusses the growth of the service-robot industry. Frankly, that isn’t news. The growth in robotics, aside from research, has been in service for many years. These kinds of robots still follow the ‘industrial’ model – meaning they are not autonomous or intelligent. The factors allowing their expansion have been (1) making the environments they work in more “machine-like”, and (2) a desire by businesses to ‘shed’ employees during the last few years.
In other words, there is growth, but it is mostly hard times – companies experimenting with alternatives to human workers to save costs. Earlier, this trend may have contributed to making work environments more ‘machine-like’ – you can lower the skill, and the paycheck, of the workers you hire if you make the work more “mechanical”. This point was discussed at length by Marshall Brain in his “Robotic Nation” articles from the early 2000s.
The other factor causing growth (or at least reported growth) in the next few years is the conflation of mobile devices into the “robot” category. A software agent like Siri – which takes your spoken commands and “does stuff” for you on your iPad or Android phone – is often seen as robotic. In fact, these devices are robotic only in the way the promotional videos showing them in operation make them look like machine intelligences. It’s helped by the huge number of new mobile users mistaking a puppet show for real artificial intelligence. The ability of software agents to use online resources is helped by (surprise!) making the web more “semantic” – meaning we formalize the structure of the chaotic web into formats that machines lacking intelligence can understand. So, having Siri look up the best airline tickets requires common, formal standards for describing airline tickets. Compare that to the clumsier, but effective, way that people search for these things online.
The fact that the “semantic web” hasn’t been implemented is a testament to how useful natural intelligence, coupled with a messy, chaotic web architecture and interface, has been.
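To show why formal structure substitutes for intelligence, here’s a hypothetical Python sketch (the schema, field names, and carriers are all invented for illustration, not any real standard): once flight offers share a formal schema, a program with zero language understanding can answer “cheapest SFO-to-JFK” by pure field matching – exactly what the chaotic, human-readable web does not allow.

```python
# Hypothetical, simplified "semantic" flight records: a shared formal schema
# lets a dumb program compare offers with no language understanding at all.
offers = [
    {"carrier": "AirA", "from": "SFO", "to": "JFK", "price_usd": 420},
    {"carrier": "AirB", "from": "SFO", "to": "JFK", "price_usd": 385},
    {"carrier": "AirC", "from": "SFO", "to": "BOS", "price_usd": 310},
]

def cheapest(offers, origin, dest):
    """Pick the lowest-priced offer for a route by pure field matching."""
    matches = [o for o in offers if o["from"] == origin and o["to"] == dest]
    return min(matches, key=lambda o: o["price_usd"]) if matches else None

best = cheapest(offers, "SFO", "JFK")
```

The “intelligence” here lives entirely in the agreed-upon schema; the code is trivial. That agreement is the hard, still-unfinished part of the semantic web.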
So, I interpret the current growth of the industry as following the pattern from the late 1990s onward – pretty much the same kinds of machines are slowly expanding their use. It is evolutionary rather than revolutionary. Revolution requires a shift from “artificial intelligence” and pick-and-place machines to robots that can function effectively in “natural” environments without a lot of central planning.
It’s true that the robot walkers are getting better – check out this biped from Boston Dynamics, creator of “Big Dog”
…but notice that the guy doesn’t push the biped very hard, compared to the kicks received by “big dog”
…and a ‘weaponized’ big dog can fight back, though it’s even dumber than a bull…
To date, NONE of the “growth” of the robot industry, except in research, has come from these kinds of robots. When it does, we will have our “next big thing”.