Robots That Jump

Robot Bodies Needed Before Robot Minds

An Actual Robot that Jumps

The “public” face of robotics (as opposed to the private world of practical industrial robots) is split into two factions. One promotes electric puppets that supposedly have humanlike “minds” and do art or otherwise replace humans. The second faction tries to build robotic bodies that can actually function in the real world. The public tends to assume that a humanlike mind is needed to drive an agile robot, but that’s not the case. Instead, it’s perfectly possible to create a “mindless” robot that functions efficiently in its environment.

For whatever reason, Boston Dynamics has taken the second track for many years. They demonstrated doglike and mulelike robots as “pack animals” (sadly, the need for a gasoline engine to provide enough power doomed these devices to irrelevancy). More recently, BD has done a lot of PR using its ‘Atlas’ robot – basically the descendant of the DARPA Atlas with benefits.

In the video embedded above, Atlas is shown with improved agility. It is actually doing a bit of tumbling, which is truly remarkable. While it still looks “mechanical”, it is clearly an emulation of humanoid motion. A true Robot that Jumps!

The one problem is the same one that plagued “Big Dog” and related robots – power. The size and weight of this robot imply massive electrical consumption. BD hasn’t given up on electric stepper-motor-like motion, which means that when Atlas cancels its motion after a move, it uses a lot of power. A human might use 150 watts doing brisk exercise; my guess is that Atlas is using 50 times that.

This wouldn’t be a problem except that the Iron Man “power cell” doesn’t exist. All we have are lithium batteries, which even at their most efficient could not move this body for more than a few minutes. That’s a huge problem. In the pack-bots, BD used gasoline engines, since fossil fuel has roughly 50 times the energy density of a lithium battery. And future battery tech, while likely to get better, might at best double that, and even then only with exotic tech like molten salt. Not 50x.

I wish BD or someone who knows would do an accurate estimate of the power consumption of the Atlas body in vigorous exercise, and then ‘scale’ up or down to see if there is a point where you wouldn’t have to recharge every few minutes. My guess is that very small might be more viable (like the robo-insects), but a big battle-bot is out unless you use nuclear power. In fact, old images of robots by Hans Moravec in places like Scientific American routinely showed nuclear power supplies.
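For what it’s worth, here is the shape of that back-of-the-envelope estimate. Every number below is my own assumption (battery allocation, pack specific energy, guesses at average draw), not a published Boston Dynamics figure, so treat it as a sketch of the method rather than a result:

```python
# Back-of-the-envelope runtime estimate for a battery-powered humanoid.
# Every number here is an assumption for illustration, not a published BD spec.

battery_mass_kg = 15.0        # assumed mass budget for the battery pack
pack_wh_per_kg = 200.0        # realistic lithium-ion pack specific energy
energy_wh = battery_mass_kg * pack_wh_per_kg

human_power_w = 150.0         # rough human draw during brisk exercise
for multiple in (10, 25, 50, 100):   # guesses at how much worse the robot is
    draw_w = human_power_w * multiple
    runtime_min = energy_wh / draw_w * 60.0
    print(f"{multiple:>3}x human power ({draw_w:6.0f} W): {runtime_min:5.1f} minutes")
```

With these guesses, the runtime swings from about two hours down to a dozen minutes, which is exactly why the one number nobody publishes – the real average draw – is the number that matters.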

Practical? If not, at least BD has a steady PR stream feeding on the “hopium” of tech-utopians…

Making Dating into a Message from the Future

In a new low for humanity, our most technically advanced news media – distributed, Internet-based stories allowing instant access and comments – decided to act like a 3-year-old and “believe” in the Hanson Robotics “Sophia”, the reputed “first robot citizen” of Saudi Arabia.

Sophia with its creator, Dave Hanson of Hanson Robotics

The video interview is visible on this page:

https://finance.yahoo.com/video/sophia-robot-dating-apps-kids-180844755.html

The event in question is a Yahoo! Finance interview – seemingly run by reporters and interviewers previously dropped on their heads as children – in which robot Sophia discussed “modern dating” via dating apps, and also who should pay on the first date. Telling quote from the interviewer:

“…For what it’s worth, the robot that was created by Hong Kong-based Hanson Robotics to improve robot-to-human communication, says she has no desire to pursue eventually raising children of her own, but would prefer working with them instead…”

OK, let me get this straight, while trying to keep a straight face like the interviewer. We are supposed to think that said robot has considered dating and children. If it did, it would need at the very least a colossal, cross-referenced sea of “deep learning” pattern recognizers, coupled to some sort of “decider” creating “opinions”. This the robot most assuredly does not have. It is no more interested in dating than a toilet plunger.

Another beauty, this time from the robot’s “mouth” (sorry, robot speakers):

“Before dating apps, the biggest factor in determining love was geographic proximity,” she said, while tethered to a human operator who had been informed of the interview topics ahead of time. “The advent of dating apps has collapsed the distance between people. So even though I don’t date, I am a fan.”

Now, pretending this robot actually has opinions, rather than being a big electric puppet providing PR for Hanson, is not so bad. After all, we tell little kids about Santa, so why not pretend electric puppets go on dates? Also, the comments supposedly created by Sophia are liberal/wokster, so one might even imagine that they “make a difference” in the zeitgeist. After all, if machines tell us “we” are the problem, not our tech (another Sophia interview), what’s not to like?

Here’s the problem: BS stories like this become embedded in the media, and with modern social networks are frequently treated as evidence that intelligent robots are on the way. No matter that the “questions” asked of Sophia are submitted to Hanson beforehand, so an interesting (human-generated) answer can be mimed out. And video images of a robot apparently talking are parsed by kids as proof that the robot is alive. Even when they grow up, Plurals/GenZ will have a “gut feeling” that something is there (meaning the kind of emotions you have around dating) when in fact it is a puppet show.

In a recent study, cited in a Parenthood magazine article entitled “Does Your Kid Know that Robots Have No Feelings?”, kids clearly believed they were interacting with a “social being”:

“…90 children ages 9-15 interacted with a humanoid robot named “Robovie.” Within the 15-minute session, children interacted physically and verbally with Robovie, until a researcher interrupted its turn at a game and put the robot in a closet, despite its objections. Post-interview, results showed the majority of younger participants believed the robot had thoughts and feelings and was a “social being.” In other words, it could be a friend…”

Best friend. Joyful happy boy smiling while hugging a robot

The take-home for most techies is that “robots are already talking about dating” and hugging children…so the robot revolution is nigh. True, Sophia probably has better dating skills than the typical basement incel hammering away in Fortnite, but the rest of us don’t fall into this category.

In reality, humanoid robots are proving poor substitutes for humans in tasks that can be measured, as opposed to “ideas” that can be puppeted. Witness the hapless Fedor, the Russian humanoid sent to the International Space Station to test whether humanoid robots of its type could help with tasks.

Take home: NO. Didn’t work, according to Yevgeny Dudorov, executive director of robot developers Androidnaya Tekhnika (see: https://phys.org/news/2019-09-russia-scraps-robot-fedor-space.html )

“…but Fedor turned out to have a design that does not work well in space—standing 180 centimetres (six feet) tall, its long legs were not needed on space walks, Dudorov said…”

In the real world, we are a long way from the Robot That Jumps – jumping to help in space, or jumping in to give basement losers a mechanical dating partner.

But perhaps the example of Santa is valid. After all, most techies claim to be secular while holding a set of irrational beliefs in “futurism”, “the singularity”, “strong Ai” and similar notions that are impossible to differentiate from ol’ time religion. Since none of these beliefs refer to anything real, a robot like Sophia is a magic elf from that prophesied “coming around the corner soon” future when robots will go on dates – and those incels will be able to replace their current flabby rubber girls with microprocessor-driven puppets. Hanson Robotics will be there to sell them, I’m sure!

Atlas Deepfake

Well, well, people are so desperate to have robots that they’re willing to propagate phony videos of the Boston Dynamics humanoid in action.

This worked great for Corridor Digital, a Los Angeles VFX house, who wanted to parody some of the real videos, including the one of Atlas being “taunted”. A great job of motion capture plus blending in a robot body.

Corridor Digital Video Site: https://www.youtube.com/user/CorridorDigital

Corridor Digital is doing a great parody of the BD madness. But the real fun comes when you visit tech blogs discussing the fake (I wonder how many of them were initially duped) that use the parody to encode pious preaching about how the “robot uprising” will be much deadlier than the video… The proof? The VFX looks a little like real Atlas videos.

Boston Dynamics Videos: https://www.youtube.com/user/BostonDynamics

To their credit, BD actually linked the Corridor video on their own YouTube channel. All in all, some great shared digital publicity.

But the media appeared caught in a 5-year-old’s understanding of both videos…

Gizmodo erupted in a crazed slobber of pseudo-news in which, despite the parody, the author takes it as “truth” and preaches to us that the robots will rise up and destroy us, in the best religious fantasy tradition – https://gizmodo.com/that-viral-video-of-a-robot-uprising-is-fake-because-th-1835575686.

In fact, the deepfake appears to have made the author think it is more likely that the robots will rise. CGI “proves” something is real!

The Verge is slightly more sensible, and uses the parody as a discussion about how people feel empathy for things that don’t have a mind, if they act a certain way – https://www.theverge.com/tldr/2019/6/17/18681682/boston-dynamics-robot-uprising-parody-video-cgi-fake

The real problem is that people will see mind and consciousness where there is none, and act accordingly…

“(From the Verge) As MIT researcher and robot ethicist Kate Darling puts it: ‘We’re biologically hardwired to project intent and life onto any movement in our physical space that seems autonomous to us. So people will treat all sorts of robots like they’re alive.’”

Most of the coverage by “lower” tech blogs deleted the fantastic parts of the parody, dropped the quality (so the CGI was harder to see), and simply let people believe an angry robot was breaking out of its cage.

Of course, our “new media” need clickbait, and as always, it is best to distribute religious texts. The techno-singularian vision of the future has become more than a cult, and is in fact a replacement for traditional religion among techies. Deepfakes like this are OK because they are “truthy” – they could be true, so we believe!

This is part of a larger problem for our society. The rise of CGI has made people “believe” that anything that can be 3D modeled “could be real”. This is why companies like Facebook and Uber churn out bullshit images of “air cars” that tech media and groupies unthinkingly accept as “just around the corner”.

I suspect the writers don’t understand that Uber can endlessly create these CGI videos to look trendy, and rake in gains in stock price. Actually making this helicopter (that’s what it is) would be difficult and dangerous. Better to make a phony video, then say it could be true just around the corner.

So be worried – not about the BD robot, but about the millions of craven pixel-pushers desperate for a god to worship and (human) sacrifice for…

Chickens run around with their heads cut off, and the BD robot is on its way to being a decapitated chicken in several years. Fascinating that said chicken is touted as our destruction.

Another Puppet Show, Featuring a Gynoid Robot Being a “Creative Artist”

Well, the latest in the bright future of robots is here, and it is a “creative artist”, “Ai-Da”, a gynoid robot whose “works” have actually been sold to idiot buyers for a total of more than $1 million.

two images of ai-da robot head
Aidan Meller showcases Ai-Da. Source: Metro

Quoting Devdiscourse (India’s media loves humanoid robots),

“…Described as “the world’s first ultra-realistic AI humanoid robot artist”, Ai-Da opens her first solo exhibition of eight drawings, 20 paintings, four sculptures and two video works next week, bringing ‘a new voice’ to the art world, her British inventor and gallery owner Aidan Meller says. “The technological voice is the important one to focus on because it affects everybody,” he told Reuters at a preview…”

The big electric puppet, created by the 46-year-old art dealer, can’t walk or move around. But that doesn’t stop the flood of PR images of Ai-Da thinking pensively about her future, self-consciously echoing the incept scene from HBO’s “Westworld” where the lead robot, “Dolores”, wakes up.

a pensive pile of plastic – the Ai-Da robot with downcast cameras
Ai-Da on nonfunctional legs evoking an HBO series
source: MSN

Yes, the same people, Engineered Arts, designed and built both the Ai-Da and the HBO movie-bot bodies. Ai-Da was given legs to make her look more like the movie robots.

Evan Rachel Wood as 'Dolores' in HBO 'Westworld' Scene
From “Westworld”, a scene with actress Evan Rachel Wood as the robot “Dolores”. Later in the scene, Dolores sits up, exactly like Ai-Da image above. Source: Daily Kos (big surprise these dumbass “progressives” are anti-historically suckered into this worn-out discussion)

There are several interesting features of the Ai-Da machine itself. First, the cameras for “drawing from sight” are actually in the artificial eyes (though I’m surprised there isn’t an open-on-demand third eye), and the drawing arm does exhibit fine motor control for drawing on a canvas. Mechanical plotters have been doing this since the 1940s, but having it in an articulated hand is interesting.

Reminds me of a Fortune-Telling Machine I saw somewhere

The algorithm used is also interesting – it breaks the image up into a bunch of short line segments (as some neurons in primary visual cortex may do) and can reproduce your face with said lines. It is neat, though hardly useful, robot-wise, when you can just take a high-resolution digital picture.
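Nothing about Ai-Da’s actual software is public as far as I know, but the general trick – reduce a photo to edges, then reduce the edges to short strokes an arm or plotter could draw – is a few lines of off-the-shelf computer vision. A rough sketch of that kind of pipeline (my guess at the approach, not Ai-Da’s code):

```python
# Rough "photo to strokes" sketch of the kind of pipeline Ai-Da's drawings suggest.
# This is off-the-shelf computer vision, NOT the actual Ai-Da software.
import cv2
import numpy as np

img = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)    # any portrait photo
img = cv2.GaussianBlur(img, (5, 5), 0)                 # tame noise before edge detection

edges = cv2.Canny(img, 50, 150)                        # contrast -> edge map

# Turn the edge map into short line segments, i.e. drawable "strokes".
segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                           threshold=30, minLineLength=10, maxLineGap=3)

canvas = np.full_like(img, 255)                        # blank white "paper"
if segments is not None:
    for x1, y1, x2, y2 in segments[:, 0]:
        cv2.line(canvas, (x1, y1), (x2, y2), 0, 1)     # each segment = one pen stroke

cv2.imwrite("sketch.png", canvas)
```

Run something like this on a portrait and you get roughly the street-sketch look of the Ai-Da drawings: contrast, edges, lines.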

Interesting, though I seem to remember seeing stuff like this 40 years ago!
Source: Daily Mirror (click for video)

But…wait! This isn’t the first “robot artist”. Some may remember Aaron from waayyyy back in 1973, an Ai program created by (human) artist Harold Cohen.

Aaron was programmed in C (later in LISP) on computers running at 1/500 the speed, and with 1/100,000 the storage capacity, of current art-bots like Ai-Da. Still, of the two art-bots, it seems the more “creative”…

Aaron’s art from 1979. Source: Cohen Website

Ok, admit it! Aaron is a MUCH better painter than Ai-Da! Aaron plays with forms and variation, while Ai-Da makes something like a street artist’s sketch. Ai-Da simply maps contrast to edges, then to a bunch of lines, similar to what neurons do in the lower levels of the brain. See this article for some recent work on neural “edge detection” in brains.

Aaron, in later incarnations, even mixed its own paint! And that was with computers 1/1000 the power of those used today.

Cohen also made sure that people understood Aaron was code, and that he was exploring how much of art was reducible to algorithms. Each artistic “style” was coded by Cohen, then Aaron would crank out an infinite number of variations using that style.

However, there is a LOT of originality in Aaron’s variations. Cohen’s own commentary on the web (site now showing signs of abandonment) may be found at this link.

This is a serious exploration of art’s scope and meaning, with algorithmic art treated as both medium and product.

Aaron image from 1992, after it was reprogrammed in LISP, which improved its color choices. Source: Cohen Website

In my mind, this indeed demonstrates that some of the ‘imaginative’ part of “the creative professions” can be automated – you can truly create an “intelligence amplifier”, even for art.

And Aaron has hardly been alone. Over the years, there have been dozens of “art bots”, like this one from 2011. Created by Benjamin Grosser, it used ambient sounds to adjust the images it painted:

Interactive robotic painting machine Source: vimeo

A good resource for 1990s computer-created art may be found at The Algorists, which seriously treats the idea of algorithmic art. For the 2010s, check an even more recent article on Newatlas.

Now, compare this “automated painting” to Ai-Da. The “art” Ai-Da draws is clearly more primitive than ANY of these historical art-bots, and just looks like neural edge detection. It’s INFERIOR to the past, and more image classification than “art”.

Line drawing by Ai-Da. Source: Futurism. Incredibly, the author of this piece failed to mention that Futurism has already covered numerous “robot artists”, all more interesting than Ai-Da!

Ai-Da has created other images, termed “shattered light”, which are abstract rather than figurative. However, the “shattered light” images actually up for sale (to suckers) at the gallery are generated from a different algorithm. They aren’t drawn by the robot arm, but are printed. Then, a human artist colorizes them so they look khuuuuul…

Ai-Da with shattered light painting by creator
The actual images going on sale, termed “shattered light”. No mention anywhere how these images were created (apparently nobody cares), but we do know that a human repainted over the print. Source: Oman Times

At least Aaron mixes his own paints!

The fascinating part, as always, is not the technology, but the emergent robot narrative coupled with the insane, uncritical media worship of this parlour trick by an art gallery eager to seize the zeitgeist to generate $$$ (I salute Aidan Meller for this creative insight).

Why throw shade on poor Ai-Da? First, Ai-Da is not being represented as what it actually is, which is an advance in computer vision. Instead, the creators claim they’re raising deep and philosophical questions about the meaning of what it is to be human.

Hey… these deep conversations have been happening for 40 years with the MORE ADVANCED art-bots, and there are vastly more interesting and critical discussions available at the intersection of creativity, algorithms, and science if you bother to look…

But you wouldn’t know it from the media!

Practically none of the hundreds of slobber-stories about Ai-Da mention that there are other, superior art-bots out there. So nobody has to grapple with past robo-painters that did a better job than Ai-Da with inferior hardware.

Instead, in our modern world, the public discussion isn’t about creativity and programming. Now, it’s personal. We are told that we have “suddenly” created an artistic robot who is a “performance artist” and is selling her art in a gallery. No discussion of method, coding, or the actual humanness of the robot – just pretty pictures of a female electric puppet in a fancy home with a painting smock.

Apparently, it’s enough to reinforce “the female robots are among us, COOL!” story.

Even the critical articles, like this one on Artnet, seem completely ignorant of the past. Naomi Rea attacked Ai-Da as anti-female, but missed the forest for the trees – she didn’t even mention Aaron or other art-bots – breathtaking anti-historical thinking.

My guess is her “creepy white men” comments were just standard, intoned, wokster piety, tacked onto the end of a poorly researched article.

The a-historical aspect of this robo-worship is breathtaking. Why, on the Futurism blog there are older stories about art-bots! But the author (Victor Tangerman??) of the Ai-Da story doesn’t even mention them, and just parrots the Reuters news release. Possibly we should replace “parrot” with “robot” so we don’t insult birds.

My guess is that Futurism is willfully ignorant of the past, and sees no reason to research the extraordinary claim of robotic art. Instead, it trumpets that this pile of parts is a “new Picasso”. Yeech.

Ironically, Futurism’s “related articles” (which are the result of a pattern-matching algorithm), DO mention earlier art-bots. Score one for the machines!

Ai-Da has very little to do with art, and everything to do with this strange 2010s desire to “believe” that robots are about to appear among us, typically presenting in sexy female form. A few humdrum references to “we must think deep thoughts about robots” always appear, but really, it’s about nerd sex with plastic, not the potential for machine art.

In a recent “news conference” Ai-Da, like the similar Sophia robot, “spoke” to the press. Like Sophia, Ai-Da was pre-loaded with answers by a human operator. In other words, someone remotely operated the robot to give it the apparent ability to speak.

BUT DON’T WORRY – it will have its own voice, soon, you say?

Consider how strange this attitude is…

Before automobiles, people didn’t have a passion to make fake cars and pretend that they actually worked. People rarely pretended they had working airplanes before the first planes flew. They certainly didn’t show a fake and then say “believe in it NOW, because it will work ‘soon’…”

Why robots?

I suspect you might have gotten a similar “DON’T WORRY” out of an Egyptian priest who spent his days talking through a temple statue to give it a voice. Watch it, humble farmer; someday, the god might just speak through this statue!

Here’s a transcript of an 1899 (yup) newspaper piece describing how ancient Egyptian statues were designed to be “spoken through” by priests, and also how the statues had joints and valves to make them move:

” …M. Gaston Maspero, the well-known French Egyptologist, has recently written an interesting article on the “speaking statues” of ancient Egypt. He says that the statues of some of the gods were made of jointed parts and were supposed to communicate with the faithful by speech, signs, and other movements. They were made of wood, painted or gilded. Their hands could be raised and lowered and their heads moved, but it is not known whether their feet could be put in motion.

When one of the faithful asked for advice their god answered, either by signs or words.

Occasionally long speeches were made, and at other times the answer was simply an inclination of the head. Every temple had priests, whose special duty was to assist the statues to make these communications.

The priests did not make any mystery of their part in the proceedings. It was believed that the priests were intermediaries between the gods and mortals, and the priests themselves had a very exalted idea of their calling…”

Source: Los Angeles Herald, Volume 00000000602, Number 187, 5 April 1899, via California Digital Newspaper Collection at UCR Center for Biographical Studies and Research. Note: I corrected the clumsy OCR of the robot translator on this website.

If you’re wondering how ancient Egyptians could possibly have listened to a statue puppeted by a priest without laughing, consider that the following image was called “incredibly lifelike” by multiple media outlets, echoing the art dealer’s press release, without question or comment.

No, it is NOT “hyper-realistic” Source: CreativeBoom

No, this is only slightly improved from a robotic fortune-teller, something which used to be common at theme parks. “Ai-DA” remains deep in the uncanny valley. Only a generation raised on seeing videogames as “realistic” could think of this as “realistic”.

Print of Esmeralda, a Disney model taken from older fortune-telling robots at theme parks.
Image of mechanical fortune-telling machine Esmeralda, a Disney model taken from much older fortune-telling theme park robots. Source: Fine Art America

The Ai-Da puppet show does indeed capture, as the creator of Ai-Da desired, the “zeitgeist”. We really, really, really, really want to create robots, but we don’t understand how to do it. We’re stuck on the fast advance of digital computing and “accelerating change”, which seems to require that robots exist now. We resolutely ignore the 40-year-old history of robot artists. We ahistorically assume this must be the first time.

Mask of high priest in Egypt? Possibly one of those who puppeted the Egyptian robots… Source: Sotheby’s Catalog

But…there aren’t any robots like the ones we insist upon. So, we set up an electric puppet to fill the void, holding steady our devout faith until the Second Coming of the Machines.

In practical robots, this hopeful puppetry is masking the failure of so-called “driverless car” initiatives.

In all cases to date, “driverless cars” actually have a human operator behind the scenes, monitoring and guiding the car past anything beyond cruise-control complexity, typically a few times in every mile. Essentially, glorified forms of cruise control allow a single driver to work as a cabbie in multiple vehicles. If you want the job, Designated Driver is hiring!

While “driverless cars” have some self-control, they are corrected every half-mile or so by a human operator, or whenever people get tired of waiting for the robot’s incredibly slow progress. Here’s a modern high priest operating one of these puppets. You can apply for a job doing this at Designated Driver. Source: Futurism

Will the public catch on that most Ai out there is just a puppet show similar to temple-tricks played thousands of years ago? I’m not holding my breath – people do need religion in their lives, and a religion of godlike robots that want sex with nerdy mortals seems just right for the 2010s.

Meanwhile, Harold Cohen, the creator of Aaron, died in April 2016, his passing unrecognized by the “robotic” tech-future-utopian media. No love for him, or for his sexless but vastly superior art-bot.

Kissing Empty Air

Recently, the hubbub over imaginary CGI robots has reached new heights. While real humanoid robots look pretty inhuman, the media more and more acts as if a 3D game character is exactly the same as a 3D “physical meatspace” robot.

Lately, the excitement in our gynoid era has shifted to false female-presenting lips smacking together under the authoritarian C++ code. In one story, a nonexistent “robot” was shown kissing a model. In another case, two lumps of plastic clacked together for a “kiss”.

First, the robot duo “kiss”. A while ago (2009), horribly inhuman, easily defended against:

The second case is more troubling.

Our 2019 “robot kiss” features Calvin Klein apologizing after they released a video showing model Bella Hadid apparently kissing Lil Miquela, a blob of software, code and pixels – in other words, somebody’s digital art working as a corporate shill.

Lil Miquela, corporate shill

As Wikipedia reports:

Miquela is an Instagram model and music artist claiming to be from Downey, California.

The project began in 2016 as an Instagram profile. By April 2018, the account had amassed more than a million followers by portraying the lifestyle of an Instagram it-girl over social media. The account also details a fictional narrative which presents Miquela as a sentient robot in conflict with other digital projects.

In August 2017, Miquela released her first single, “Not Mine”. Her pivot into music has been compared to virtual musicians Gorillaz and Hatsune Miku.

Obviously, Miquela did not “release a single”. Miquela does not exist. Some people recorded an album and “presented” their music along with a bunch of digital character art. It’s people putting on digital masks.

Miquela is NOT a robot. The “sentient robot” is part of the backstory for the imaginary character. At best, we are looking at a purely digital puppet, with no internal mind whatsoever (not even “deep learning”). It is a product of puppetmasters manipulating images for marketing in social media.

To repeat, there is no physical Miquela. No robot you could visit, no plastic and metal, just a computer screen.

Outrage began this month when Calvin Klein made a video with model Bella Hadid kissing empty air in front of a greenscreen. In post-production, a Lil Miquela 3D model was overlaid on the footage, and the result was apparently two women kissing.

The tech was no more sophisticated than any “game character” created and rigged in Maya and deployed in Unity or Unreal Engine. While there was an image of the kiss, there was no kiss.
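To underline how mundane the trick is: once the character is rendered, the “kiss” is a per-frame alpha composite of a CG layer over the live plate. A toy version of that step (file names invented for illustration, and it assumes all three images share the same resolution):

```python
# Toy compositing step: a rendered CG character layered over live footage.
# File names are invented; assumes all images are the same size.
import cv2
import numpy as np

plate = cv2.imread("hadid_greenscreen_frame.png")              # live-action frame
cg = cv2.imread("miquela_render.png")                          # rendered CG character
alpha = cv2.imread("miquela_alpha.png", cv2.IMREAD_GRAYSCALE)  # its matte, 0-255

a = alpha.astype(np.float32)[..., None] / 255.0                # HxWx1 in 0.0-1.0
composite = (a * cg + (1.0 - a) * plate).astype(np.uint8)      # standard "over" blend

cv2.imwrite("composite_frame.png", composite)
# Repeat per frame and the "robot" is in the shot. No robot required.
```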

To repeat, Hadid kissed empty air, or some guy dressed up in green motion capture clothing. 

Green motion-capture suit

(great kiss, Bella!)

Were people upset that the event didn’t happen? Nope, their response was a widespread, stunning and cheerful acceptance of the image as physical reality. The discussion proceeded from there.

First, though, there were immediate complaints from the LGBTQ community, which caused CK to apologize. Hadid is straight.

 

True, the pixels she pretended to kiss were “presenting” as female. Or, the people behind the digital image, drawing and manipulating it in software were “presenting” as female. OK, typical identity wokster flareup, with some justification in my opinion.

Also, with 3D modeling software we now have an easy way of creating something that would have taken a good portrait artist a couple of weeks in the old days. It’s not hard to create a 3D digital portrait of an imaginary human. And if you’re only sending still images to Instagram, you can cover up the fact that you’re just making pictures.

Buttt… people are calling Lil Miquela a “robot”, as if there were walking humanoid robots that look like this. Apparently, these “reporters” are so dumb that they don’t realize there is no physical robot – just people uploading 3D-modeled images.

Now, images have been used to both define and attack identity for a long time, so no problem with seeing this as a bit of queer-baiting by a company that deserved to have its marketers called out.

But the bigger point is truly crazy. I scoured all the news stories, and in all of them, Lil Miquela was called a robot. There was no correction to “digital 3D image”. Every press story called the kiss a kiss between a robot and a human.

Nearly every media outlet called it a human kissing a physical, humanoid robot walking confidently and gazing on a human before lip-locking. But there was no kiss. There was no robot.

Not a single major story discussed the reality – a model kissed air, and then had their image added to an animated movie.

And, if you check Google Search, the search is “lil miquela robot”, not “lil miquela digital art character”. Clearly, the audience for these tech stories, wokster-outraged or not, thinks they saw a physical robot.

It’s stunning. I have to believe that the majority of the tech media unthinkingly accepted that this was a physical, humanoid robot kissing a physical model. They must actually believe that it’s possible to build a robot that works like this – and that it can be easily rented out to the fashion industry. Originally, Instagram followers couldn’t figure out whether she was human (eek!). So their go-to, when Lil Miquela turns out not to be human, is that it must be a robot presenting as female.

Where is the “fact-checking” that media is supposed to do? Why the echo chamber of dozens of articles calling a work of digital art a humanoid robot – when in fact we can’t really make good ones, and the ones we have are incredibly hard to create? People so badly want to believe that we have given birth to artificial humans.

One is forced to conclude that tech media has begun to believe its own science fiction.

Robot Skin and Computational Overload

There’s a long history of announcements from the robotic community, claiming that “robot skin” has been created. Mostly, these have been unserious, since the huge computational load for managing skin sensation is not part of the story. A few historical examples:

From 2019, this robot skin has “millions of sensors”. Great, but what processes the data from those millions of sensors, supposedly more sensitive to touch and temperature than human skin? You’d need millions of computers to handle the sensor data and integrate it with, say, a deep learning algorithm.

https://newatlas.com/smart-robot-skin/55853/

A close-up of why this “skin” is so sensitive, and why the “computation density” of the skin would blow away even a giant network of thousands of computers.

https://www.zdnet.com/article/hairy-artificial-skin-gives-robots-a-sense-of-touch/

Here’s an earlier one from 2010:

Here, printed circuit boards are used to sense touch on the robot hand

https://cacm.acm.org/news/87060-robots-with-skin-enter-our-touchy-feely-world/fulltext

Cool, but no ability to process – in other words, processing even the limited number of sensors (dozens instead of millions) is not part of the design.

A “robot skin” image from 2006

https://www.newscientist.com/article/dn7849-electronic-skin-to-give-robots-human-like-touch/

And even earlier. Sensors that would work, but extracting meaningful information from touch was – and is – beyond robots.

It’s possible to go back further (skin has been a hot robot topic for decades), but the result is basically the same: there have been a series of announcements of “robot skin” in the tech media, typically putting together a pile of sensors in some plastic matrix. While the sensors are real, the wiring up of the sensors is not addressed, and more importantly, the ability to process data from the sensors is not considered – since no computer at present can do the processing. Actual robots out there work with a very small number of sensors to make decisions.
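As a crude illustration of the scale problem, just shoveling the raw numbers around is already painful, before a single interpretation happens. The figures below are assumptions picked for illustration:

```python
# Crude data-rate estimate for a "millions of sensors" robot skin.
# All numbers are assumptions chosen for illustration.

num_sensors = 2_000_000      # taking the "millions of sensors" claim at face value
sample_rate_hz = 100         # modest update rate for tactile control
bytes_per_sample = 2         # a 16-bit reading per sensor

raw_bytes_per_s = num_sensors * sample_rate_hz * bytes_per_sample
print(f"Raw tactile stream: {raw_bytes_per_s / 1e6:.0f} MB/s "
      f"({raw_bytes_per_s * 8 / 1e9:.1f} Gbit/s)")
# ~400 MB/s before a single "touch" has been interpreted -- and interpretation,
# not transport, is the hard part.
```

Which is why, in practice, actual robots go the other way entirely, as the next example shows.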

A great example: the Boeing 737 Max. Software relies on a SINGLE “angle of attack” sensor to determine if it is going into a stall. Even with just one sensor, software designers couldn’t handle “edge” cases, probably leading to multiple plane crashes killing lots of people.

737 Max, where only one AOA (angle of attack) sensor is driving the robot “autopilot”. Even military planes only have 4 or so.

https://www.cnn.com/2019/04/30/politics/boeing-sensor-737-max-faa/index.html

So, our current “robots” use few non-vision/sound sensors. However, good tactile sensation is exactly what is needed for Robots that Jump to interact with the environment robustly.

The few-sensor approach is the typical “process control” engineering solution. A single sensor, or a very small group of sensors, is used to report data. For simple things, this is fine – if the water boils, it is time to turn off the tea kettle. However, for robotic interaction with a real-world environment, it isn’t enough. Time and time again, robots have been built with sensors inadequate for navigating their environment if small changes are made.
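To show just how small that single-sensor solution is, here it is in full – a schematic sketch, not any particular product’s firmware:

```python
# Single-sensor process control in its entirety: the tea kettle from above.
# Schematic sketch, not any particular product's firmware.
def kettle_controller(read_temp_c, heater_off):
    """Turn the heater off once the one temperature sensor reports boiling."""
    if read_temp_c() >= 100.0:   # one sensor, one threshold, one action
        heater_off()
```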

Contrast this with a simple creature like a flatworm. Its body is far less complex than ours, but it is saturated with sensory neurons…

This image shows that the entire body is full of nerve cells, many of which are sensory.

The sensor complexity of this simple creature easily exceeds that of the more advanced “robot skin”. Furthermore, complex nerve nets appeared in the simplest of animals.

Compared to living things, robots show a huge undersupply of sensation. Many in the field have rightly tried to design “skin” – but the overall robot falls into the trap of needing incredibly elaborate processing – something that simple animals don’t have or need to have. Clearly, something’s amiss.

The most recent descriptions of touch-feelie robots point to “greater sensory density than human skin”. That’s not meaningful by itself – just having more sensors doesn’t help. You have to respond intelligently to the sensation that density enables. Now, nerve tissue is expensive to maintain, so animals don’t have high density because it’s cool – it’s needed. That in turn implies that the high sensory density of animal skin has meaning.

The most recent entry into “sensitive skin” takes a step backwards, and imagines a few hundred sensors (compared to the millions in some robot skin designs).

A robot with flex “skin”, with sensors quite large, but closer to manageable. People have thousands of sensors per square inch!

https://www.fastcompany.com/90416395/robots-have-skin-now-this-is-fine

The sensory equipment of this “advanced” robot is physically large. The sensor density is probably below that of the flatworm above – probably similar to a tiny cheese mite:

This incredibly tiny creature has sensor numbers approaching our big, “intelligent” robot. The brain processing sensation so the mite can move and respond in the world is literally microscopic.

https://imgur.com/gallery/6twAyQk

Still, this new flat, hex-y sensor is a bit better. As the researchers say, it might prevent a robot from actually crushing you during a so-called “hug”.

Finally, it is still better than Google’s own “sensation” of tactile robots. When you run a Google search, the “sensitive skin” robots are lost between (1) Sex dolls, and (2) The “Sophia” electric puppet. Ironically, the sexbots are designed to feel creepy-rubbery to their equally rubbery owners. And, Sophia doesn’t sense anything on its own gynoid rubber, despite the thing apparently giving talks about “gender” in some countries. Here, we see Sophia’s single-sensor design in context:

Not one bit of touch on this thing, and “sensation” is some smartphone tech. Awesome!

I vote for the cheese mite. Sophia looks very 737 Max.

What Real Robots Look Like

Despite all the biped fantasy circulating on the Internet these days, real “robots” – industrial robots, which have evolved for 200 years from the first Jacquard looms – are gaining ground. In the 20th century, this kind of automation led to increased productivity per worker. In the 21st, it may be leading to worker devaluation:

https://www.reuters.com/article/us-amazon-com-automation-exclusive/exclusive-amazon-rolls-out-machines-that-pack-orders-and-replace-jobs-idUSKCN1SJ0X1 

This video goes a bit more into the specific automation:

https://www.reuters.tv/v/Pwlk/2019/05/14/amazon-quietly-rolls-out-robots-to-pack-orders

Amazon robots replace packers

The implication here is that for the near-term, humanoid robots are stage acts which form the PR wing of the robotization of society. Apparently, Amazon is pushing workers to switch jobs from packing to driving deliveries instead, paying $10,000 compensation if they do so:

http://time.com/5588141/amazon-employees-delivery/

Video: http://time.com/e1f756da-f4cb-44b9-ac27-244a16ccf5e9

Amazon asks packers to switch jobs

This kind of robot rise is the exact sort predicted long ago by Marshall Brain (author of the great howstuffworks.com website) in his “Robot Nation” series, which dates from the early 2000s (despite the blog date). The Amazon driver, switched from warehouse to traffic and slavishly following the orders of their GPS, almost perfectly matches Brain’s vision here.

They don’t have to jump to be significant.


Irascible Robots (not ‘fighting back’)

Robots That Jump are designed to interact in the real world, ignoring the silly fantasy of creating ‘mind’ in a machine in order to interact. The removal of that phantasm leads to better-operating devices. And there’s no better illustration of this than the recent Boston Dynamics video of a robot trying to go through a door, hindered by a human.

In the video, you can see the robot opening a door, oddly by using its combined head/arm to push it open and move through. The human does several things to stop it – including changing the door position and dragging the robot away. After each attempt (which might be compared to a child tugging on a dog trying to drink) the robot returns to its course of action.

From the perspective of Robots That Jump, this is awesome. BD has integrated one of the basic features of life – irascible behavior. This goes all the way back to Aristotle’s, Augustine’s and Thomas Aquinas’s idea of the properties of life, hardcore link here. Basically, ‘dumb animals’ – those with no elaborate ‘mind’ to understand the future – can have a passion, or emotion, that drives them toward their goal against obstacles. You see this in the simplest forms of life – an energy to continue existing or to accomplish a goal. Suppose an earthworm lands on a sidewalk. Its initial desire is to get back into the ground. However, the ground is concrete. A ‘rational’ worm, or a typical robot, would just stop at that point. Instead, the worm wiggles wildly, in ways that are not normal for it, continually trying to do something that gets it closer to its goal. Sometimes a thrash and a roll might land it back in the grass, and the worm returns to its wormy world.

Note no ‘mind’ is needed for this – just an initial drive to accomplish something, plus unexpected resistance which is matched with (creative?) exploration of ways around the obstacles. This is one reason emotions are believed to have evolved. Emotions don’t provide a computed solution – they drive an animal to accomplish things when it runs into resistance. Nor is this exclusive to animals – plants react irascibly to rocks and sidewalks in the way of their roots, exploring around until they run into a solution. In that case, there’s not only no mind, there’s no brain. None needed for the plant, or a Robot That Jumps.
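In control terms the recipe is almost embarrassingly small – no world model, no memory of the insult, just “head for the goal, and thrash when blocked”. A toy sketch of that loop (my own formulation, not Boston Dynamics’ controller):

```python
# Toy "irascible" controller: head for the goal; when blocked, explore at random.
# My own illustration of the idea, not Boston Dynamics' control code.
import random

def irascible_step(pos, goal, blocked, step=0.5, wiggle=1.0):
    """One control tick in a toy 2-D world."""
    x, y = pos
    dx, dy = goal[0] - x, goal[1] - y
    dist = (dx * dx + dy * dy) ** 0.5
    attempt = (x + step * dx / dist, y + step * dy / dist)
    if not blocked(attempt):
        return attempt            # plain goal-seeking, no "mind" required
    # Obstructed: no grudge, no plan -- just random exploration around the block.
    attempt = (x + random.uniform(-wiggle, wiggle),
               y + random.uniform(-wiggle, wiggle))
    return attempt if not blocked(attempt) else pos

# Demo: a small wall sits on the straight line between start and goal.
blocked = lambda p: 4.8 <= p[0] <= 5.2 and -1.0 <= p[1] <= 1.0
pos, goal = (0.0, 0.0), (10.0, 0.0)
for tick in range(5000):
    pos = irascible_step(pos, goal, blocked)
    if abs(pos[0] - goal[0]) < 0.5 and abs(pos[1] - goal[1]) < 0.5:
        print(f"reached the goal after {tick + 1} ticks")
        break
```

The point of the toy: the robot in the video “returns to its course of action” not because it has decided anything, but because retry-plus-noise is the whole algorithm.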

However, the pop-culture vision of robots as godlets who will overthrow us or have a “message” to give us (see the miserable “robot citizen” Sophia) can’t see this robot as doing something robust and lifelike. Instead, they imagine mind…a mind storing up all the grievances of insults received, for a later reckoning with humanity.

The science-as-magic media (including so-called “science journalists” who, for example, never bother to ask if it is even possible for Elon Musk to physically move a million people to Mars by 2062) portray the Boston Dynamics robots as having a conscious mind, unjustly attacked and thwarted. Additionally, they see a grudge developing which will be paid back.

This is crazy.

Does a tree get mad and plan to “get even” when we put concrete over some of its roots? No. Mind and payback are the province of very advanced social animals, where remembering benefit and harm from other members of your social species is useful. Most animals and plants don’t bother with it at all. But they do have irascible behavior.

However, a robot carrying a grudge fits into the techno-religion that has replaced classic religion in the supposedly secular world of the tech hipster. It’s a belief system that sees us rushing to Apocalypse, with us tech-sinners incapable of properly worshiping the gods we have created – for which they will either punish us, a.k.a. Skynet, or replace us, a.k.a. Singularity. To be taken seriously, tech-prophets like Musk and Hawking must warn us of the anger of the gods, and demand our repentance (I guess that means buying more iPhones faster). We must eat of their flesh (“embedded computing”) to preserve some particle of ourselves in their awe-ful and righteous power.

Sheesh.

This is just an earthworm trying to get home.

Too bad there wasn’t a beheading…

Even Forbes, which should know better, has its lame reporters reporting on the “mind” of Sophia, the first “robot citizen” of Saudi Arabia.

https://www.forbes.com/sites/zarastone/2017/11/07/everything-you-need-to-know-about-sophia-the-worlds-first-robot-citizen/#29420af346fa

This big electric puppet, with some small “ai” features useful in tricking children into thinking it is alive, is claimed to have mind and emotions.

After characterizing the robot as a publicity stunt, the article then moves on to beauty contest questions, e.g. “world peace”.

In practice, there isn’t any Ai on earth that could have reacted intelligently to the questions aimed at the robot during press conferences. Instead, the robot uses ELIZA-style reflection to appear intelligent (“what makes you think I am thinking about that”) along with canned answers that are supposed to seem wise. Apparently, the conservatives in Saudi Arabia don’t mind an idol-esque oracle being set up to answer our “deepest questions”. And since oracles were often blown out on drugs or gas seeping through cave walls, mayhaps that makes sense.
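ELIZA-style reflection is 1966 technology and nearly trivial: a handful of pattern rules that echo the question back, plus a grab-bag of canned profundities for everything else. A minimal sketch (toy rules of my own, not Hanson Robotics’ software):

```python
# Minimal ELIZA-style responder: reflect the question back, or fall back on
# canned "wisdom". Toy rules of my own, NOT Hanson Robotics' actual software.
import random
import re

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "you": "I", "your": "my"}

def reflect(text):
    return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())

RULES = [
    (re.compile(r"do you (think|believe|feel) (.*)\?", re.I),
     lambda m: f"What makes you ask whether I {m.group(1)} {reflect(m.group(2))}?"),
    (re.compile(r"\bworld peace\b", re.I),
     lambda m: "I believe compassion, and perhaps robots, will bring world peace."),
]

CANNED_WISDOM = [
    "Every day I learn more about what it means to be alive.",
    "The future of humans and robots is one of cooperation.",
]

def sophia_says(question):
    for pattern, responder in RULES:
        match = pattern.search(question)
        if match:
            return responder(match)
    return random.choice(CANNED_WISDOM)

print(sophia_says("Do you think robots deserve citizenship?"))
print(sophia_says("How will we achieve world peace?"))
```

That is roughly the level of machinery needed to produce the press-conference answers quoted in these stories.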

https://en.wikipedia.org/wiki/Sophia_(robot)

Business Insider has done a few articles that point out the clumsy voice recognition. But it makes a fascinating claim:

Sophia’s capacity for displaying emotion is still limited. It can show happiness — sort of.

Sophia fakes it

 

I put this image in because, to the robot (presumably a couple of Linux boxes), the nature of “happiness” has been mapped directly to a collection of actuator movements. That in turn implies that human happiness is simply a collection of muscles firing in a way that stretches the face. The amazing thing is that there’s no concern about a subjective state – nobody wonders if Sophia actually feels “happy” or “sad” internally. In fact, the robot makes a face when something (e.g. a keyword) causes a sub-program to execute.
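Put concretely, the “emotion” plausibly amounts to a lookup table from trigger word to servo targets. The servo names and values below are invented for illustration; I have no idea what Hanson’s actual control tables look like:

```python
# "Emotion" as a lookup table: trigger word -> facial servo targets.
# Servo names and values are invented; not Hanson Robotics' actual tables.
EXPRESSIONS = {
    "happy":   {"brow_left": 0.2, "brow_right": 0.2, "mouth_corners": 0.9},
    "sad":     {"brow_left": 0.8, "brow_right": 0.8, "mouth_corners": 0.1},
    "neutral": {"brow_left": 0.5, "brow_right": 0.5, "mouth_corners": 0.5},
}

def react(utterance, send_to_servos):
    """Pick a canned expression if a keyword appears; otherwise stay neutral.
    No internal state, no feeling, just string matching and actuator targets."""
    text = utterance.lower()
    if any(w in text for w in ("love", "wonderful", "great")):
        mood = "happy"
    elif any(w in text for w in ("war", "death", "sad")):
        mood = "sad"
    else:
        mood = "neutral"
    send_to_servos(EXPRESSIONS[mood])   # e.g. a command out the serial port
    return mood

# Demo, with a print standing in for the serial-port command mentioned below.
react("Tell us about the war on poverty",
      send_to_servos=lambda targets: print("servo targets:", targets))
```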

Whatever human emotion is, it is unlikely to be a Von Neumann machine ratcheting through alternative subprograms. The brain’s structure is so different from a computer’s that it is nearly impossible for this to be true.

In Ai, however, analogy is often used to refute this argument. Flight, for example, can be achieved by bird wings or insect wings in completely different ways. So the argument goes that emotions can be created in similarly analogous ways.

The problem is that nobody has adequately defined what “happy” or “sad” are in the first place, so that we could construct analogous ways of achieving them. The argument that Sophia is “happy” is the same as saying a corpse with its face stitched into a smile is “happy” if we can make that smile by attaching electrodes to the muscles – from the Wikipedia entry:

Giovanni Aldini (Luigi’s nephew) performed a famous public demonstration of the electro-stimulation technique of deceased limbs on the corpse of an executed criminal George Foster at Newgate in London in 1803.[5][6] The Newgate Calendar describes what happened when the galvanic process was used on the body:

On the first application of the process to the face, the jaws of the deceased criminal began to quiver, and the adjoining muscles were horribly contorted, and one eye was actually opened. In the subsequent part of the process the right hand was raised and clenched, and the legs and thighs were set in motion.[7]

I doubt a modern electrophysiologist would conclude that a recently dead body was “happy” if its muscles twist. But replace the flesh with plastic, and the electrical stimulus with a command out the serial port, and suddenly it is “feeling”.

Hoping that Hanson Robotics studies Behaviorist theory before making more silly claims about puppets.

 

Finally, a ROBOT THAT JUMPS!

I’ve been writing for 15 years about the (incorrect) direction of much of robotics, along with the fakery which implies robots are more than they are. The alternative I’ve pushed is building an agile body over the “Ai” or “mind” popular with almost everyone. In my opinion, a mindless body that is physically agile – one that can reproduce the things even simple animals do on a daily basis – is vastly more useful than a so-called “intelligent” robot.

Well now, Boston Dynamics, which has followed the “body first, then mind” approach, has finally come through. The newest version of the Atlas robot can jump and even do backflips in what looks to be an agile way:

The results are impressive. This is a huge, heavy metal robot showing agility. Small motions of the arms are being used to stabilize, which suggests a pretty complex control system. Though I am guessing this is pretty staged (change the box positions and the robot falls), it is a candidate for a true Robot that Jumps.

This follows on other robots designed to imitate living things, in pursuit of agile behavior.

While these robots are finally meaningful (in terms of being real robots instead of fakey electric puppets), they are far from practical. The biggest issue is power supply – they use, IMHO, 20-100 times as much power as a comparable animal body to move. Electric battery density simply isn’t high enough for a human-sized robot to remain powered for more than a few minutes at a time. At present, you would probably need to use nuclear power to run these robots for reasonable periods – or big Diesel engines.

The other issue is muscles. Robots using electric motors aren’t very agile. Those using pneumatics can achieve lifelike motion, but it is impractical to power them standalone. I’m guessing the gasoline engines on the early BD robots supplied compression for the leg actuators – NOISY. These are not quiet, like they would be if a human did the same motions. The holy grail – elastic plastic and contractile muscle tissue – is a long way off. You see visions of such muscles in movies like Ghost in the Shell, but they don’t actually exist.

https://vitalybulgarov.com/ghost-in-the-shell/

Ghost in the Shell - major shelling sequence

The closest we have is something like this combined electro (heating?) hydraulic muscle:

https://www.sciencedaily.com/releases/2017/09/170919091000.htm

It took 30 years for walking robots to learn to jump; good muscles may easily take that long.

I’m guessing another 50 years before something like Ghost in the Shell could be built.

However, a mental tipping point might have been reached. With the debut of the BD robot, people may be less accepting of the stupid fake robots peddled at SF conventions as somehow “real” or “the future”. Mayhaps we’ll start ignoring the “mind without body” robot, typified by the creepy sexist machines popular in the media:

 

In a word…this is a big fake puppet implying human abilities that don’t exist. And this machine can’t jump.