Robot Skin and Computational Overload

There’s a long history of announcements from the robotics community claiming that “robot skin” has been created. Mostly, these have been unserious, since the huge computational load of managing skin sensation is not part of the story. A few historical examples:

From 2019, this robot skin has “millions of sensors”. Great, but what processes the data from those millions of sensors, each sensitive to touch and temperature? You’d need millions of computers to handle the sensor data and integrate it with, say, a deep learning algorithm.
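To see why the processing is the hard part, here’s a back-of-envelope sketch of the raw data rate from a million-sensor skin. Every number below is an assumption chosen for illustration (sensor count, sample rate, sample size), not a figure from any real design:

```python
# Back-of-envelope: raw data rate from a large tactile sensor array.
# All numbers are illustrative assumptions, not from any actual robot skin.

SENSOR_COUNT = 1_000_000   # "millions of sensors" claimed for the skin
SAMPLE_RATE_HZ = 1_000     # touch events happen on roughly millisecond timescales
BYTES_PER_SAMPLE = 2       # assume a 16-bit pressure/temperature reading

bytes_per_second = SENSOR_COUNT * SAMPLE_RATE_HZ * BYTES_PER_SAMPLE
print(f"Raw stream: {bytes_per_second / 1e9:.1f} GB/s")  # 2.0 GB/s
```

Two gigabytes per second – and that’s just moving the raw numbers, before any wiring, feature extraction, or deep-learning inference touches them.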

A close-up showing why this “skin” is so sensitive, and why the “computation density” it demands would overwhelm even a giant network of thousands of computers.

Here’s an earlier one, from 2010:

Here, printed circuit boards are used to sense touch on the robot hand

Cool, but no ability to process – in other words, processing even the limited number of sensors (dozens instead of millions) is not part of the design.

A “robot skin” image from 2006

And even earlier. Sensors that would work, but extracting meaningful information from touch was – and is – beyond robots.

It’s possible to go back further (skin has been a hot robot topic for decades), but the result is basically the same: there has been a series of “robot skin” announcements in the tech media, typically putting together a pile of sensors in some plastic matrix. While the sensors are real, the wiring up of those sensors is not addressed, and more importantly, the ability to process data from them is not considered – since no computer at present can do the processing. Actual robots out there work with a very small number of sensors to make decisions.

A great example: the Boeing 737 Max. Its software relies on a SINGLE “angle of attack” sensor to determine whether the plane is entering a stall. Even with just one sensor, the software designers couldn’t handle the “edge” cases, which likely contributed to two crashes that killed hundreds of people.

737 Max, where only one AOA (Angle of Attack) sensor is driving the robot “autopilot”. Even military planes only have 4 or so.
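The single-sensor failure mode above has a textbook mitigation: vote across redundant sensors so one bad reading gets outvoted. This is a generic sketch of median voting, not Boeing’s actual logic, and the readings are invented for illustration:

```python
# Sketch of sensor redundancy via median voting. With three sensors,
# one stuck or failed reading is simply outvoted; with one sensor
# (the 737 Max situation), a bad reading IS the truth as far as the
# software knows. Readings below are invented for illustration.

def median_of_three(a: float, b: float, c: float) -> float:
    """Return the middle of three readings, discarding one outlier."""
    return sorted([a, b, c])[1]

# Two healthy sensors read about 5 degrees; one has failed high.
readings = (5.1, 74.5, 4.9)  # degrees angle-of-attack
print(median_of_three(*readings))  # 5.1 -- the faulty 74.5 is ignored
```

Even this trivial scheme needs three physical sensors and agreement logic – which is why “just add more sensors” always drags processing along with it.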

So, our current “robots” use few sensors beyond vision and sound. However, good tactile sensation is exactly what Robots that Jump need to interact with the environment robustly.

This mirrors the typical “process control” engineering solution: a single sensor, or a very small group of sensors, reports data. For simple things this is fine – if the water boils, it’s time to turn off the tea kettle. But for robotic interaction with a real-world environment, it isn’t enough. Time and again, robots built with inadequate sensors have failed to navigate their environment when small changes are made.
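The tea-kettle version of process control fits in a few lines – a single reading compared against a threshold. A minimal sketch, assuming a sea-level boiling point:

```python
# Minimal single-sensor "process control": one reading, one threshold.
# Fine for a kettle; far too little information for a robot that has
# to cope with a changing real-world environment.

BOIL_POINT_C = 100.0  # assumes boiling at sea level

def kettle_should_shut_off(temp_c: float) -> bool:
    """Shut off once the single temperature sensor reaches boiling."""
    return temp_c >= BOIL_POINT_C

print(kettle_should_shut_off(99.5))   # False -- keep heating
print(kettle_should_shut_off(100.2))  # True  -- shut off
```

The whole control problem collapses to one number and one comparison – exactly the simplicity that breaks down the moment the environment stops cooperating.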

Contrast this with a simple creature like a flatworm. Its body is far less complex than ours, but it is saturated with sensory neurons…

This image shows that the entire body is full of nerve cells, many of which are sensory.

The sensor complexity of this simple creature easily exceeds that of even the most advanced “robot skin”. Furthermore, complex nerve nets appeared in the simplest of animals.

Compared to living things, robots show a huge undersupply of sensation. Many in the field have rightly tried to design “skin” – but the overall robot falls into the trap of needing incredibly elaborate processing – something that simple animals don’t have or need to have. Clearly, something’s amiss.

The most recent descriptions of touch-feelie robots point to “greater sensory density than human skin”. On its own, that’s not meaningful – just having more sensors doesn’t help. You have to respond intelligently to the sensation that density enables. Nerve tissue is expensive to maintain, so animals don’t have high density because it’s cool – they have it because it’s needed. That in turn implies that the high sensory density of animal skin has meaning.

The most recent entry into “sensitive skin” takes a step backwards, and imagines a few hundred sensors (compared to the millions in some robot skin designs).

A robot with flexible “skin”; the sensors are quite large, but the count is closer to manageable. People have thousands of touch sensors per square inch!

The sensory equipment of this “advanced” robot is large. Its sensor density is probably below that of the flatworm above – more like that of a tiny cheese mite:

This incredibly tiny creature has sensor numbers approaching our big, “intelligent” robot. The brain that processes sensation so the mite can move and respond in the world is literally microscopic.

Still, this new flat, hex-y sensor is a bit better. As the researchers say, it might prevent a robot from crushing you during a so-called “hug”.

Finally, it is still better than Google’s own “sensation” of tactile robots. When you run a Google search, the “sensitive skin” robots are lost between (1) Sex dolls, and (2) The “Sophia” electric puppet. Ironically, the sexbots are designed to feel creepy-rubbery to their equally rubbery owners. And, Sophia doesn’t sense anything on its own gynoid rubber, despite the thing apparently giving talks about “gender” in some countries. Here, we see Sophia’s single-sensor design in context:

Not one bit of touch on this thing, and “sensation” is some smartphone tech. Awesome!

I vote for the cheese mite. Sophia looks very 737 Max.

Published by pindiespace

