There’s a long history of announcements from the robotics community claiming that “robot skin” has been created. Mostly, these have been unserious, since the huge computational load of managing skin sensation is not part of the story. A few historical examples:
Here’s an earlier one, from 2010:
Cool, but no ability to process the data – in other words, processing for even the limited number of sensors (dozens instead of millions) is not part of the design.
And even earlier. Sensors that would work, but extracting meaningful information from touch was – and is – beyond robots.
It’s possible to go back further (skin has been a hot robot topic for decades), but the result is basically the same: there has been a series of “robot skin” announcements in the tech media, typically amounting to a pile of sensors embedded in some plastic matrix. While the sensors are real, the wiring of those sensors is not addressed, and more importantly, the ability to process their data is not considered – since no computer at present can do the processing. Actual robots in the field make decisions with a very small number of sensors.
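To see why the processing gets ignored, a back-of-envelope sketch helps. All the numbers below are illustrative assumptions (sensor count, sample rate, sample size), not figures from any particular robot-skin design:

```python
# Back-of-envelope: raw data rate of a hypothetical human-scale
# artificial skin. Every figure here is an illustrative assumption.

def skin_data_rate(sensor_count, sample_hz, bytes_per_sample):
    """Raw bytes per second streaming off the skin."""
    return sensor_count * sample_hz * bytes_per_sample

# Assume ~1 million tactile sensors (roughly the order of magnitude of
# human skin mechanoreceptors), sampled at 100 Hz, 2 bytes per reading.
rate = skin_data_rate(1_000_000, 100, 2)
print(f"{rate / 1e6:.0f} MB/s of raw touch data")  # 200 MB/s
```

Moving 200 MB/s is routine; continuously *interpreting* it – turning that torrent into “I am gripping too hard” – is the part no announced design addresses.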
A great example: the Boeing 737 MAX. Its flight-control software relied on a SINGLE “angle of attack” sensor to decide whether the plane was heading into a stall. Even with just one sensor, the software designers couldn’t handle the edge cases, contributing to two crashes that killed 346 people.
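The failure mode is easy to sketch. This is a hypothetical toy, not Boeing’s actual code, and the threshold is an illustrative assumption – it just contrasts trusting one sensor with taking a median vote over redundant ones:

```python
# Hypothetical sketch: single-sensor stall check vs. a median vote
# over three redundant sensors. Threshold is an illustrative number.
from statistics import median

STALL_AOA_DEG = 15.0  # assumed threshold, not a real aircraft limit

def stall_single(aoa_deg):
    # One faulty reading is taken at face value.
    return aoa_deg > STALL_AOA_DEG

def stall_voted(aoa_readings):
    # The median of redundant sensors tolerates one wildly wrong reading.
    return median(aoa_readings) > STALL_AOA_DEG

# A stuck sensor reports 74.5 degrees while the plane flies normally:
print(stall_single(74.5))             # True  -> spurious stall response
print(stall_voted([74.5, 4.8, 5.1]))  # False -> the faulty sensor is outvoted
```

Redundancy plus voting is the standard avionics fix; the point here is that even at a sensor count of one, software struggled – scaling that software to millions of tactile sensors is a different universe.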
So, our current “robots” use few sensors beyond vision and sound. Yet good tactile sensation is exactly what Robots that Jump need to interact robustly with the environment.
Contrast this with the typical “process control” engineering solution: a single sensor, or a very small group of sensors, reports data. For simple things, this is fine – if the water boils, it’s time to turn off the tea kettle. For robotic interaction with a real-world environment, it isn’t enough. Time and time again, robots have been built with sensor suites too sparse to keep navigating their environment once small changes are made to it.
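The tea-kettle case can be written in a few lines, which is exactly why the pattern is so seductive. A minimal sketch, with an assumed boiling threshold and made-up readings:

```python
# Minimal sketch of the classic single-sensor "process control" pattern:
# one temperature reading, one threshold, one action.

BOIL_TEMP_C = 100.0  # assumed threshold for this toy example

def kettle_step(temp_c, heater_on):
    """One control tick: switch the heater off once the water boils."""
    if heater_on and temp_c >= BOIL_TEMP_C:
        return False  # water boiled: heater off
    return heater_on

print(kettle_step(98.7, True))   # True  -> keep heating
print(kettle_step(100.2, True))  # False -> done
```

One scalar, one threshold, one binary output. Nothing about this pattern generalizes to a hand deciding how hard to grip a wet glass.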
Contrast this with a simple creature like a flatworm. Its body is far less complex than ours, but it is saturated with sensory neurons…
The sensor complexity of this simple creature easily exceeds that of the most advanced “robot skin”. Furthermore, complex nerve nets appeared in the very simplest of animals.
Compared to living things, robots show a huge undersupply of sensation. Many in the field have rightly tried to design “skin” – but the overall robot falls into the trap of needing incredibly elaborate processing – something that simple animals don’t have or need to have. Clearly, something’s amiss.
The most recent description of touchy-feely robots points to “greater sensory density than human skin”. That’s not meaningful on its own – just having more sensors doesn’t help. You have to respond intelligently to the sensation that density enables. Nerve tissue is metabolically expensive to maintain, so animals don’t carry high sensor density because it’s cool – they carry it because it’s needed. That in turn implies that the high sensory density of animal skin has meaning.
The most recent entry into “sensitive skin” takes a step backwards, and imagines a few hundred sensors (compared to the millions in some robot skin designs).
The sensory equipment of this “advanced” robot is physically bulky. Its sensor density is probably below that of the flatworm above – closer to that of a tiny cheese mite:
Still, this new flat, hex-y sensor is a bit better. As the researchers say, it might prevent a robot from actually crushing you during a so-called “hug”.
Finally, it is still better than Google’s own “sensation” of tactile robots. Run a Google search, and the “sensitive skin” robots are lost between (1) sex dolls and (2) the “Sophia” electric puppet. Ironically, the sexbots are designed to feel creepy-rubbery to their equally rubbery owners. And Sophia doesn’t sense anything on its own gynoid rubber, despite the thing apparently giving talks about “gender” in some countries. Here, we see Sophia’s single-sensor design in context:
I vote for the cheese mite. Sophia looks very 737 Max.