Covid-Forced Dreams of Robo-Tech

2020 is the year of the Covid-19 pandemic, caused by a coronavirus. First off, I’d like to point out the single best writer on the topic, Erin Bromage, if you actually want to understand what the virus is and how it spreads:

Wellllll….robots don’t get covid (though someone is probably readying a training dummy with symptoms) so how does the pandemic affect Robots That Jump?

The answer is simple: tech-utopians decry our non-acceptance of robots everywhere, and hope maybe the pandemic will force us to see the future.

All over the world, those who boost humanoid robotics as “the future” have responded to the pandemic with the same gnarly hope: that people will finally start using humanoid robots because of it.

For those who can’t take the time, a summary:

  • You get the virus mostly from people’s breath, talking, coughing, sneezing. Touching packages in the store is much less dangerous than talking to the cashier.
  • Your chance of getting infected is much worse in an enclosed space (like a church choir or office). The risk in an open park or beach is very low, in a subway or bus, very high.
  • Healthcare professionals, even young ones, are at vastly greater risk because during an 8-hour shift, they are exposed to billions of times the virus you might get from a casual contact while shopping. There is evidence that infections are more severe if you get lots of the virus at once, which again means we should prioritize healthcare professionals and first responders.
  • You wear a mask mostly to protect others from you, since many infected people show no symptoms.

IMPORTANT! I am not “anti-lockdown” or a shill for the tinfoil-hat crowd who pretend there is no pandemic. I did RNA virus research myself in the 1980s. I posted articles at the very top of this post so you can see, based on science, the ways you can protect yourself.

So, the following critique of “robots in Covid” is NOT an attack on the general response to the pandemic. It is about how those with robot-religion, desperate to have robots everywhere, are trying to hitch their machines to Covid, hoping the pandemic forces us to accept their “inevitable” future.

This “hope” has been the response of two industries: Robotics and Virtual Reality. Both are “high tech” big ideas that have been pushed for roughly a century as “the future”, yet have never achieved widespread adoption the way cars, radios, or smartphones did. Interestingly, VR headsets aren’t selling, despite everyone being stuck at home. Why not? They’re supposed to be the future.

Puzzled, robo and VR evangelists see Covid-19 as a way to force people to accept that humanoid robots are “almost here”, and will form a big part of their future.

And, at first glance, it makes sense – a robot can’t get infected, right? But basically this is PR, if not religious preaching.

Case in point: India, where the healthcare system is not overloaded, despite its weak infrastructure relative to the USA. Chalk that up to the young age of the population, and the low incidence of “rich country” co-morbidities like obesity, diabetes, and high blood pressure. Still, there are lots of sick people in hospitals. Can robots help?

The headline from tech magazine Protocol, “Rise of the Robots: COVID-19 is Causing a Hesitant India to Welcome Automation” is typical of the robo-mumbo-jumbo announcing there is finally a compelling reason to put humanoid robots everywhere.

The article covers some predictable, but useful things – e.g. not having a human present during an interview might reduce infection, robots can clean floors, etc.

However, the real agenda is clearly that (1) humanoid robots must be the future, and (2) the pandemic is finally making people accept that future.

A great quote:

“…Arun Sundararajan, an NYU Stern School of Business professor researching how digital technologies transform society, told Protocol that he believes a new tech paradigm will emerge after the pandemic recedes.

‘Crisis can be sort of a catalyst or can speed up changes that are on the way — it almost can serve as an accelerant,’ he said.

I don’t think the professor actually meant “almost” – it is “on the way!”.

In other words, the pandemic is just making something that must happen anyway arrive faster – a specific kind of technology (humanoid robots) as inevitable as death and taxes, with Covid as a sign that we must accept it. The hidden emotion among techies: we probably should be glad that the pandemic is making us wake up and see our predestined future.

In practice, we don’t have useful robots to do the main things we need in a pandemic – observant care of sick patients, screening symptoms with advanced AI instead of in-person questioning, cleaning complex medical devices in hospitals, doing contact tracing in the field to snuff out outbreaks (as was done effectively in Vietnam).

All of these would be a welcome use of robots – but there are no “agile” or “smart” robots around that can actually do any of this!

Robots can’t clean bedpans. Robots can’t sterilize equipment except by spraying the whole device – they aren’t dexterous enough to use cloth and cleaner on odd-shaped parts. Robots can’t drive cars (wait, can’t we get them a self-driving car? No.) Robots can’t ask people questions any better than an automated phone support system.

Consider that even getting out of a car is too much for the vaunted Boston Dynamics humanoid, despite its acrobatic skills:

Entertaining, but no ability to clean bedpans!

What are we left with, when robots can’t do anything we need for this pandemic in the hospital? Greeters and Lecturers.

  • Greeters replace a microphone + phone scripts with a big electric puppet barking friendly messages.
  • Lecturers run around warning us of danger.

Is this really a good use of resources? Well, if you’re in the industry, it is soooo important that people understand humanoid robots are the future, no matter what you do… Therefore, we will loan out our electric puppets to remind you they are the future. Never mind that these “robots” are just glorified microphones or megaphones for medical personnel and crowd control.

Another wonderful quote from the same Protocol article:

“…UK-based data analytics firm, GlobalData, has said that a shortage of personal protective equipment will drive adoption of robots to treat COVID-19 patients in India...”

This is INSANE. Wouldn’t the cost of a humanoid robot be better spent on masks and other Personal Protective Equipment (PPE)? How many masks and gloves could I buy for the cost of renting a greeting robot?

The goal is not to improve efficiency. The goal is to awe the incoming, sickly natives with future-tech. But these greeter-bots aren’t doing anything much different than the radio-controlled, tele-operated “robots” of shopping malls 50 years ago. Mostly, people are just creeped out.

Think about it: Do you want a big electric puppet as your final companion during illness? Apparently some people thought Pepper (the robot you’re not supposed to have sex with) is the perfect end-of-life friend.


Lamentable quotes from Pepper:

“…Please, wear a mask inside,” it said in a perky voice. “I hope you recover as quickly as possible…”

“…I pray the spread of the disease is contained as soon as possible…”

“…Let’s join our hearts and get through this together…”

If, instead, you put up a sign with these messages, humans would assume they came from humans. But have a robot say them, and the popular perception is that the robot has ‘taken over’ from the humans. So reassuring!

This is a huge User Experience (UX) design failure.

To be fair, the hospital has non-humanoid robots of the iRobot type which help with floor cleaning – but this is hardly news, since hospitals have been using these primitive kinds of robots since the late 1970s. Here’s a robot in an Indian hospital actually doing something…

Milagrow Floor Robot iMap9.0


Unfortunately, the same company has an insipid humanoid robot greeter for the sick:

Can monitor patients remotely – just like a microphone!

A movie of Pepper barking orders at humans in Germany:

Here’s the point: you could have put up a cardboard cutout of the robot with an attached speaker in the mall, and had virtually the same effect! Better, actually, since people would understand the cardboard cutout as something created by humans, not a future overlord telling us what to do.

By tying a commanding robot into the pandemic, you’ve actually created and enabled the narrative that various nutbars out there want you to believe.

Protesters carrying rifles near the Michigan State Capitol in Lansing, April 15, 2020, opposing Gov. Gretchen Whitmer’s stay-at-home orders during the Covid-19 outbreak. (AP Photo/Paul Sancya, File)

You know what these guys would say about Pepper, don’t you?

The American version of the “big electric puppet barking orders” completely freaked people out in NYC’s Bryant Park, and was removed in an hour because “it didn’t have a permit”:

https://nypost.com/2020/02/06/city-boots-creepy-coronavirus-detecting-robot-from-bryant-park/ (video in article, after ad)

Electric puppet tries to question people about their covid symptoms.

Boston Dynamics probably knew that there would be a negative response, so it deployed its dog-bot to bark orders in Singapore, where an initial apparent success in containing the virus had failed and cases then spiked (very common in pandemics), and where the authoritarian government thought its best bet was to play right into the conspiracy nutbars who think the pandemic was faked to bring in the reptilians, aliens, Illuminati, whatever.

How to enable the fringe-y right? Give them a robot overlord in the park! Note: this “robot”, like others, is actually tele-operated by a remote human, similar to flying a drone.



Be safe!

Frankly, we have a LOT of unemployed – why not send a person out in a car with a megaphone, instead of an expensive robot that removes the human job? As the posts at the top of the article show, the outdoor risk is incredibly low, as long as you don’t pack people into parades.

Also note the cognitive dissonance of the dumbass who posted this.

“…Dont worry about bumping into Spot as it is fitted with safety sensors to detect objects and people within 1m to avoid collision…Robots can observe safe distancing too!”

Safe distance is 6 feet (about 2 meters) or greater. A robot that lets people come within 1 meter is modeling unsafe behavior.

Back to our India article. A priceless movie of a stiff electric puppet barking orders – apparently this is what robots will “do” in the future-that-must-happen:

The Covid-19 pandemic is a great example of a purely human problem – robots don’t get the disease. Service robots (like a floor-cleaner) have some real value, but humanoid greeters and order-barkers do not. Using these robots is an expensive waste of money that could be spent on protective equipment.

And the belief among tech-utopians that robots must be the future doesn’t justify diverting money from human health to push their techie religion.


An Actual Robot that Jumps

Public-facing robotics (as opposed to the private world of practical industrial robots) is split into two factions. One promotes electric puppets that supposedly have humanlike “minds” and make art or otherwise replace humans. The second faction tries to build robotic bodies that can actually function in the real world. The public tends to assume that a humanlike mind is needed to drive an agile robot, but that’s not the case. It’s perfectly possible to create a “mindless” robot that functions efficiently in its environment.

For whatever reason, Boston Dynamics has taken the second track for many years. They demonstrated doglike and mulelike robots as “pack animals” (sadly, the need for a gasoline engine to provide enough power doomed these devices to irrelevancy). More recently, BD has done a lot of PR using its ‘Atlas’ robot – basically the descendant of the DARPA Atlas with benefits.

In the video embedded above, Atlas is shown with improved agility. It is actually doing a bit of tumbling, which is truly remarkable. While it still looks “mechanical”, it is clearly an emulation of humanoid motion. A true Robot that Jumps!

The one problem, just as with “BigDog” and related robots: power. The size and weight of this robot imply massive electrical consumption. BD hasn’t given up on electric, stepper-motor-like motion, which means that when Atlas cancels its motion after a move, it uses a lot of power. A human might use 150 watts doing brisk exercise; my guess is that Atlas uses 50 times more power.

This wouldn’t be a problem except that the Iron Man “power cell” doesn’t exist. All we have are lithium batteries, which even at their most efficient could not move this body for more than a few minutes. That’s a huge problem. In the pack-bots, BD used gasoline engines, since fossil fuel has roughly 50 times the energy density of a lithium battery. And future battery tech, while likely to improve, might at best double that density – and even that would require exotic tech like molten salt. Not 50x.

I wish BD or someone who knows would do an accurate estimate of the power consumption of the Atlas body in vigorous exercise, and then “scale” it up or down to see if there is a size where you wouldn’t have to recharge every few minutes. My guess is that very small might be more viable (like the robo-insects), but the big battle-bot is out, unless you use nuclear power. In fact, old images of robots by Hans Moravec in places like Scientific American routinely showed nuclear power supplies.
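
Lacking official numbers, here is my own back-of-envelope sketch in Python. To be clear, every input below is my assumption, not a Boston Dynamics spec – the point is only that plausible numbers land at “minutes, not hours” of runtime.

```python
# Back-of-envelope Atlas runtime estimate.
# Every number here is my own assumption, not a Boston Dynamics spec.
human_watts = 150                   # brisk human exercise, from above
atlas_watts = 50 * human_watts      # my "50x" guess: 7,500 W under load

pack_kg = 3.0                       # assume a ~3 kg lithium pack
wh_per_kg = 250                     # near the best lithium cells today
battery_wh = pack_kg * wh_per_kg    # 750 Wh on board

runtime_min = battery_wh / atlas_watts * 60
print(f"Estimated runtime: {runtime_min:.0f} minutes")    # ~6 minutes

# Gasoline stores roughly 12,000 Wh/kg, which is where the ~50x
# energy-density gap over lithium batteries comes from:
print(f"Energy density gap: {12_000 / wh_per_kg:.0f}x")   # ~48x
```

Even doubling the battery only doubles those minutes – which is why the scaling question above matters.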

Practical? If not, at least BD has a steady PR stream feeding on the “hopium” of tech-utopians…


Making Dating into a Message from the Future

In a new low for humanity, our most technically advanced news media – distributed, Internet-based stories allowing instant access and comments – decided to act like a 3-year-old and “believe” in the Hanson Robotics “Sophia”, the reputed “first robot citizen” of Saudi Arabia.

Sophia with its creator, David Hanson of Hanson Robotics

The video interview is visible on this page:


The event in question is a Yahoo! Finance interview – apparently run by reporters and interviewers who were dropped on their heads as children – in which robot Sophia discussed “modern dating” via dating apps, and who should pay on the first date. A telling quote from the piece:

“…For what it’s worth, the robot that was created by Hong Kong-based Hanson Robotics to improve robot-to-human communication, says she has no desire to pursue eventually raising children of her own, but would prefer working with them instead…”

OK, let me get this straight (while trying to keep a straight face). We are supposed to think that said robot has considered dating and children. If it had, it would need at the very least a colossal, cross-referenced sea of “deep learning” pattern recognizers, coupled to some sort of “decider” creating “opinions”. The robot most assuredly has nothing of the kind. It is no more interested in dating than a toilet plunger.

Another beauty, this time from the robot’s “mouth” (sorry, robot speakers):

“Before dating apps, the biggest factor in determining love was geographic proximity,” she said, while tethered to a human operator who had been informed of the interview topics ahead of time. “The advent of dating apps has collapsed the distance between people. So even though I don’t date, I am a fan.”

Now, pretending this robot has opinions – rather than being a big electric puppet providing PR for Hanson – is not so bad by itself. After all, we tell little kids about Santa, so why not pretend electric puppets go on dates? Also, the comments supposedly created by Sophia are liberal/wokster, so one might even imagine that they “make a difference” in the zeitgeist. After all, if machines tell us “we” are the problem, not our tech (another Sophia interview), what’s not to like?

Here’s the problem: BS stories like this become embedded in the media, and with modern social networks, frequently get treated as evidence that intelligent robots are on the way. No matter that the “questions” asked of Sophia were submitted to Hanson beforehand, so an interesting (human-generated) answer could be mimed out. And video images of a robot apparently talking are parsed by kids as proof the robot is alive. Even when they grow up, Plurals/GenZ will have a “gut feeling” that something is there (meaning the kind of emotions you have around dating) when in fact it is a puppet show.

In a recent study cited in a Parenthood magazine article entitled “Does Your Kid Know that Robots have no Feelings?“, kids clearly believed they were interacting with a “social being”:

“…90 children ages 9-15 interacted with a humanoid robot named “Robovie.” Within the 15-minute session, children interacted physically and verbally with Robovie, until a researcher interrupted its turn at a game and put the robot in a closet, despite its objections. Post-interview, results showed the majority of younger participants believed the robot had thoughts and feelings and was a “social being.” In other words, it could be a friend…”

Stock photo: a joyful boy hugging his robot “best friend”.

The take-home for most techies is that “robots are already talking about dating” and hugging children…so the robot revolution is nigh. True, Sophia probably has better dating skills than the typical basement incel hammering away in Fortnite, but the rest of us don’t fall into this category.

In reality, humanoid robots are proving poor substitutes for humans in tasks that can be measured, as opposed to “ideas” that can be puppeted. Witness the hapless Fedor, the Russian humanoid sent to the International Space Station to analyze whether humanoid robots of its type could help with tasks.

Take home: NO. Didn’t work, according to Yevgeny Dudorov, executive director of robot developers Androidnaya Tekhnika (see: https://phys.org/news/2019-09-russia-scraps-robot-fedor-space.html )

“…but Fedor turned out to have a design that does not work well in space—standing 180 centimetres (six feet) tall, its long legs were not needed on space walks, Dudorov said…”

In the real world, we are a long way from the Robot That Jumps – jumping to help in space, or jumping in to give basement losers a mechanical dating partner.

But perhaps the example of Santa is valid. After all, most techies claim to be secular, while holding a set of irrational beliefs in “futurism”, “the singularity”, “strong AI” and the like that are impossible to differentiate from ol’ time religion. Since none of these beliefs refer to anything real, a robot like Sophia is a magic elf from the prophesied, coming-around-the-corner-soon future when robots will go on dates – and those incels will be able to replace their current flabby rubber girls with microprocessor-driven puppets. Hanson Robotics will be there to sell them, I’m sure!


Atlas Deepfake

Well, well, people are so desperate to have robots that they’re willing to propagate phony videos of the Boston Dynamics humanoid in action.

This worked great for Corridor Digital, a Los Angeles VFX house, who wanted to parody some of the real videos, including the one of Atlas being “taunted”. A great job of motion capture plus blending in a robot body.

Corridor Digital Video Site: https://www.youtube.com/user/CorridorDigital

Corridor Digital is doing a great parody of the BD madness. But the real fun comes when you visit tech blogs discussing the fake (I wonder how many of them were initially duped) that use the parody to encode pious preaching about how the “robot uprising” will be much deadlier than the video suggests… The proof? The VFX looks a little like real Atlas videos.

Boston Dynamic Videos: https://www.youtube.com/user/BostonDynamics

To their credit, BD actually linked the Corridor video on their own YouTube channel. All in all, some great shared digital publicity.

But the media appeared caught in a 5-year-old’s understanding of both videos…

Gizmodo erupted in a crazed slobber of pseudo-news where, despite the parody, the author takes it as “truth” and preaches to us that the robots will rise up and destroy us, in the best religious fantasy tradition – https://gizmodo.com/that-viral-video-of-a-robot-uprising-is-fake-because-th-1835575686.

In fact, the deepfake appears to have made the author think it is more likely that the robots will rise. CGI “proves” something is real!

The Verge is slightly more sensible, and uses the parody to discuss how people feel empathy for things that have no mind, if they act a certain way – https://www.theverge.com/tldr/2019/6/17/18681682/boston-dynamics-robot-uprising-parody-video-cgi-fake

The real problem is that people will see mind and consciousness where there is none, and act accordingly…

“(From the Verge) As MIT researcher and robot ethicist Kate Darling puts it: ‘We’re biologically hardwired to project intent and life onto any movement in our physical space that seems autonomous to us. So people will treat all sorts of robots like they’re alive.’”

Most of the coverage by “lower” tech blogs deleted the fantastic parts of the parody, dropped the quality (so the CGI was harder to see), and simply let people believe an angry robot was breaking out of its cage.

Of course, our “new media” needs clickbait, and as always, it is best to distribute religious texts. The techno-singularian vision of the future has become more than a cult; it is in fact a replacement for traditional religion among techies. Deepfakes like this are OK because they are “truthy” – they could be true, so we believe!

This is part of a larger problem for our society. The rise of CGI has made people “believe” that anything that can be 3D modeled “could be real”. This is why companies like Facebook and Uber churn out bullshit images of “air cars” that tech media and groupies unthinkingly accept as “just around the corner”.

I suspect the writers don’t understand that Uber can endlessly create these CGI videos to look trendy and rake in stock-price gains. Actually making this helicopter (that’s what it is) would be difficult and dangerous. Better to make a phony video, then say it could be true, just around the corner.

So be worried – not about the BD robot, but about the millions of craven pixel-pushers desperate for a god to worship and (human) sacrifice for…

A chicken can run around with its head cut off, and in a few years the BD robot will amount to a decapitated chicken. Fascinating that said chicken is touted as our destruction.


Another Puppet Show, Featuring a Gynoid Robot Being a “Creative Artist”

Well, the latest in the bright future of robots is here, and it is a “creative artist”, Ai-Da, a gynoid robot whose “works” have actually been sold to idiot buyers for a total of more than $1 million.

Aidan Meller showcases Ai-Da (two views of the robot’s head). Source: Metro

Quoting Devdiscourse (India’s media loves humanoid robots),

“…Described as “the world’s first ultra-realistic AI humanoid robot artist”, Ai-Da opens her first solo exhibition of eight drawings, 20 paintings, four sculptures and two video works next week, bringing ‘a new voice’ to the art world, her British inventor and gallery owner Aidan Meller says. “The technological voice is the important one to focus on because it affects everybody,” he told Reuters at a preview…”

The big electric puppet, created by the 46-year-old art dealer, can’t walk or move around. But that doesn’t stop the flood of PR images of Ai-Da gazing pensively at her future, self-consciously echoing the scene from HBO’s “Westworld” where lead robot “Dolores” wakes up.

A pensive pile of plastic: Ai-Da with downcast cameras, on nonfunctional legs, evoking an HBO series. Source: MSN

Yes, the same people, Engineered Arts, designed and built both the Ai-Da and the HBO movie-bot bodies. Ai-Da was given legs to make her look more like the movie robots.

From “Westworld”: actress Evan Rachel Wood as the robot “Dolores”. Later in the scene, Dolores sits up, exactly like the Ai-Da image above. Source: Daily Kos (big surprise that these dumbass “progressives” are anti-historically suckered into this worn-out discussion)

There are several interesting features of the Ai-Da machine itself. First, the cameras for “drawing from sight” are actually in the artificial eyes (though I’m surprised there isn’t an open-on-demand third eye), and the drawing arm does exhibit fine motor control for drawing on a canvas. Mechanical plotters have been doing this for decades, but having it in an articulated hand is interesting.

Reminds me of a Fortune-Telling Machine I saw somewhere

The algorithm used is also interesting – it breaks up the image into a bunch of short line segments (like some brain neurons in primary visual cortex may do) and can reproduce your face with said lines. It is neat, though hardly useful, robot-wise, when you can just take a high-resolution digital picture.

Interesting, though I seem to remember seeing stuff like this 40 years ago!
Source: Daily Mirror (click for video)
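
For the curious, the generic “contrast to edges to short line segments” pipeline is easy to sketch. Below is a minimal Python example using OpenCV; Ai-Da’s actual code is not public, and the filenames are hypothetical, so treat this only as an illustration of the general technique.

```python
# Minimal sketch: image -> edges -> short line segments -> "drawing".
# Ai-Da's real pipeline is not public; this is only the generic idea.
import cv2
import numpy as np

img = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)     # hypothetical input
edges = cv2.Canny(img, threshold1=50, threshold2=150)  # contrast -> edges

# Probabilistic Hough transform: edge pixels -> short line segments.
segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                           threshold=30, minLineLength=10, maxLineGap=3)

canvas = np.full_like(img, 255)                        # blank white "paper"
if segments is not None:
    for x1, y1, x2, y2 in segments[:, 0]:
        cv2.line(canvas, (x1, y1), (x2, y2), color=0, thickness=1)
cv2.imwrite("sketch.png", canvas)
```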

But…wait! This isn’t the first “robot artist”. Some may remember Aaron from waayyyy back in 1973, an AI program created by (human) artist Harold Cohen.

Aaron was programmed in C (later in LISP) on computers running at 1/500 the speed, and with 1/100,000 the storage capacity, of current art-bots like Ai-Da. Still, of the two art-bots, it seems the more “creative”…

Aaron’s art from 1979. Source: Cohen Website

Ok, admit it! Aaron is a MUCH better painter than Ai-Da! Aaron plays with forms and variation, while Ai-Da makes something like a street-artist sketch. Ai-Da simply maps contrast to edges, then edges to a bunch of lines, similar to what neurons do in the lower levels of the brain. See this article for some recent work on neural “edge detection” in brains.

Aaron, in later incarnations, even mixed its own paint! And that was with computers 1/1000 the power of those used today.

Cohen also made sure that people understood Aaron was code, and that he was exploring how much of art was reducible to algorithms. Each artistic “style” was coded by Cohen; then Aaron would crank out an infinite number of variations within that style.
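
As a toy illustration of that division of labor (my own sketch, with no relation to Cohen’s actual code): the human hand-codes the “style” as rules and a palette, and the program supplies endless randomized variation within it.

```python
# Toy version of the Aaron idea: the human codes the "style" (rules and
# palette); the program cranks out endless variations within that style.
import random

def drawing(palette, n_forms=5, seed=None):
    """Generate one 'work': a list of forms drawn from the coded style."""
    rng = random.Random(seed)
    return [
        {
            "shape": rng.choice(["closed-curve", "arc", "zigzag"]),
            "color": rng.choice(palette),
            "scale": round(rng.uniform(0.2, 1.0), 2),
        }
        for _ in range(n_forms)
    ]

# Same style, three different "works":
for i in range(3):
    print(drawing(["ochre", "teal", "rose"], seed=i))
```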

However, there is a LOT of originality in Aaron’s variations. Cohen’s own commentary on the web (site now showing signs of abandonment) may be found at this link.

This is a serious exploration of art’s scope and meaning, with algorithmic art treated as both medium and product.

Aaron image from 1992, after it was reprogrammed in LISP, which improved its color choices. Source: Cohen Website

In my mind, this indeed demonstrates that some of the ‘imaginative’ part of “the creative professions” can be automated – you can truly create an “intelligence amplifier”, even for art.

And Aaron has hardly been alone. Over the years, there have been dozens of “art bots”, like this one from 2011. Created by Benjamin Grosser, it used ambient sounds to adjust the images it painted:

Interactive robotic painting machine. Source: Vimeo

A good resource for 1990s computer-created art may be found at The Algorists, which treats the idea of algorithmic art seriously. For the 2010s, see a more recent article on New Atlas.

Now, compare this “automated painting” to Ai-Da. The “art” Ai-Da draws is clearly more primitive than ANY of these historical art-bots, and just looks like neural edge detection. It’s INFERIOR to the past, and more image classification than “art”.

Line drawing by Ai-Da. Source: Futurism. Incredibly, the author of this piece failed to mention that Futurism has already covered numerous “robot artists”, all more interesting than Ai-Da!

Ai-Da has created other images, termed “shattered light”, which are abstract rather than figurative. However, the “shattered light” images actually up for sale (to suckers) at the gallery are generated from a different algorithm. They aren’t drawn by the robot arm, but are printed. Then, a human artist colorizes them so they look khuuuuul…

Ai-Da with a “shattered light” painting. These are the actual images going on sale; no mention anywhere of how they were created (apparently nobody cares), but we do know a human repainted over the print. Source: Oman Times

At least Aaron mixes his own paints!

The fascinating part, as always, is not the technology, but the emergent robot narrative coupled with the insane, uncritical media worship of this parlour trick by an art gallery, eager to seize the zeitgeist to generate $$$ (I salute Aidan Meller for this creative insight).

Why throw shade on poor Ai-Da? First, Ai-Da is not being represented as what it actually is: an advance in computer vision. Instead, the creators claim they’re raising deep philosophical questions about what it means to be human.

Hey… these deep conversations have been happening for 40 years with the MORE ADVANCED art-bots, and there are vastly more interesting and critical discussions available at the intersection of creativity, algorithms, and science if you bother to look…

But you wouldn’t know it from the media!

Practically none of the hundreds of slobber-stories about Ai-Da mention that there are other, superior art-bots out there. So nobody has to grapple with past robo-painters doing a better job than Ai-Da on inferior hardware.

Instead, in our modern world, the public discussion isn’t about creativity and programming. Now, it’s personal. We are told that we have “suddenly” created an artistic robot who is a “performance artist” selling her art in a gallery. No discussion of method, coding, or the actual humanness of the robot – just pretty pictures of a female electric puppet in a fancy home with a painting smock.

Apparently, it’s enough to reinforce “the female robots are among us, COOL!” story.

Even the critical articles, like this one on Artnet, seem completely ignorant of the past. Naomi Rea attacked Ai-Da as anti-female, but missed the forest for the trees – she didn’t even mention Aaron or other art-bots. Breathtaking anti-historical thinking.

My guess is her “creepy white men” comments were just standard, intoned, wokster piety, tacked onto the end of a poorly researched article.

The a-historical aspect of this robo-worship is stunning. Why, on the Futurism blog itself there are older stories about art-bots! But the author (Victor Tangermann) of the Ai-Da story doesn’t even mention them, and just parrots the Reuters news release. Possibly we should replace “parrot” with “robot” so we don’t insult birds.

My guess is that Futurism is willfully ignorant of the past, and sees no reason to research the extraordinary claim of robotic art. Instead, it trumpets that this pile of parts is a “new Picasso”. Yeech.

Ironically, Futurism’s “related articles” (which are the result of a pattern-matching algorithm) DO mention earlier art-bots. Score one for the machines!

Ai-Da has very little to do with art, and everything to do with this strange 2010s desire to “believe” that robots are about to appear among us, typically presenting in sexy female form. A few humdrum references to “we must think deep thoughts about robots” always appear, but really, it’s about nerd sex with plastic, not the potential for machine art.

In a recent “news conference” Ai-Da, like the similar Sophia robot, “spoke” to the press. Like Sophia, Ai-Da was pre-loaded with answers by a human operator. In other words, someone remotely operated the robot to give it the apparent ability to speak.

BUT DON’T WORRY – it will have its own voice, soon, you say?

Consider how strange this attitude is…

Before automobiles, people didn’t have a passion for making fake cars and pretending they actually worked. People rarely pretended they had working airplanes before the first planes flew. They certainly didn’t show off a fake and then say “believe in it NOW, because it will work ‘soon’…”

Why robots?

I suspect you might have gotten a similar “DON’T WORRY” out of an Egyptian priest who spent his days talking through a temple statue to give it a voice. Watch it, humble farmer – someday, the god might just speak through this statue!

Here’s an excerpt from an 1899 (yup) newspaper piece describing how ancient Egyptian statues were designed to be “spoken through” by priests, and how the statues had joints and valves to make them move:

” …M. Gaston Maspero, the well-known French Egyptologist, has recently written an interesting article on the “speaking statues” of ancient Egypt. He says that the statues of some of the gods were made of jointed parts and were supposed to communicate with the faithful by speech, signs, and other movements. They were made of wood, painted or gilded. Their hands could be raised and lowered and their heads moved, but it is not known whether their feet could be put in motion.

When one of the faithful asked for advice their god answered, either by signs or words.

Occasionally long speeches were made, and at other times the answer was simply an inclination of the head. Every temple had priests, whose special duty was to assist the statues to make these communications.

The priests did not make any mystery of their part in the proceedings. It was believed that the priests were intermediaries between the gods and mortals, and the priests themselves had a very exalted idea of their calling…”

Source: Los Angeles Herald, Number 187, 5 April 1899, via the California Digital Newspaper Collection at the UC Riverside Center for Bibliographical Studies and Research. Note: I corrected the clumsy OCR of the site’s robot transcriber.

If you’re wondering how ancient Egyptians could possibly have listened to a statue puppeted by a priest without laughing, consider that the following image was called “incredibly lifelike” by multiple media outlets, echoing the art dealer’s press release, without question or comment.

No, it is NOT “hyper-realistic”. Source: CreativeBoom

No, this is only slightly improved from a robotic fortune-teller, something that used to be common at theme parks. Ai-Da remains deep in the uncanny valley. Only a generation raised on seeing videogames as “realistic” could think of this as “realistic”.

Print of Esmeralda, a mechanical fortune-telling machine – a Disney model taken from much older fortune-telling theme park robots. Source: Fine Art America

The Ai-Da puppet show does indeed capture, as its creator desired, the “zeitgeist”. We really, really, really, really want to create robots, but we don’t understand how to do it. We’re stuck on the fast advance of digital computing and “accelerating change”, which seems to require that robots exist now. We resolutely ignore the 40-year-old history of robot artists. We ahistorically assume this must be the first time.

Mask of high priest in Egypt? Possibly one of those who puppeted the Egyptian robots… Source: Sotheby’s Catalog

But…there aren’t any robots like the ones we insist upon. So, we set up an electric puppet to fill the void, holding steady our devout faith until the Second Coming of the Machines.

In practical robots, this hopeful puppetry is masking the failure of so-called “driverless car” initiatives.

In all cases to date, “driverless cars” actually have a human operator behind the scenes, monitoring and guiding the car past anything beyond cruise-control complexity, typically a few times in every mile. Essentially, glorified cruise control allows a single driver to work as a cabbie in multiple vehicles. If you want the job, Designated Driver is hiring!

While “driverless cars” have some self-control, they are corrected every half-mile or so by a human operator, or whenever people get tired of waiting for the robot’s incredibly slow progress. Here’s a modern high priest operating one of these puppets. You can apply for a job doing this at Designated Driver. Source: Futurism

Will the public catch on that most Ai out there is just a puppet show similar to temple-tricks played thousands of years ago? I’m not holding my breath – people do need religion in their lives, and a religion of godlike robots that want sex with nerdy mortals seems just right for the 2010s.

Meanwhile, Harold Cohen, the creator of Aaron, died in April 2016, his passing unrecognized by the “robotic” tech-future-utopian media. No love for him, or for his sexless but vastly superior art-bot.


Kissing Empty Air

Recently, the hubbub over imaginary CGI robots has reached new heights. While real humanoid robots look pretty inhuman, the media increasingly acts as if a 3D game character were exactly the same as a physical, meatspace robot.

Lately, the excitement in our gynoid era has shifted to fake female-presenting lips smacking together under the control of authoritarian C++ code. In one story, a nonexistent “robot” was shown kissing a model. In another case, two lumps of plastic clacked together for a “kiss”.

First, the robot duo “kiss”. From a while ago (2009), horribly inhuman, easily dismissed:

The second case is more troubling.

Our 2019 “robot kiss” features Calvin Klein apologizing after it released a video showing model Bella Hadid apparently kissing Lil Miquela, a blob of software, code, and pixels – in other words, somebody’s digital art working as a corporate shill.


As Wikipedia reports:

Miquela is an Instagram model and music artist claiming to be from Downey, California.

The project began in 2016 as an Instagram profile. By April 2018, the account had amassed more than a million followers by portraying the lifestyle of an Instagram it-girl over social media. The account also details a fictional narrative which presents Miquela as a sentient robot in conflict with other digital projects.

In August 2017, Miquela released her first single, “Not Mine”. Her pivot into music has been compared to virtual musicians Gorillaz and Hatsune Miku.

Obviously, Miquela did not “release a single”. Miquela does not exist. Some people recorded a song and “presented” their music along with a bunch of digital character art. It’s people putting on digital masks.

Miquela is NOT a robot. The “sentient robot” is part of the story for the imaginary character. At best, we are looking at a purely digital puppet, with no internal mind whatsoever (not even “deep learning”). It is a product of puppetmasters manipulating images for marketing on social media.

To repeat, there is no physical Miquela. No robot you could visit, no plastic and metal, just a computer screen.

Outrage began this month when Calvin Klein made a video with model Bella Hadid kissing empty air in front of a greenscreen. In post-production, digital 3D modeling overlaid a Lil Miquela model, and the result was apparently two women kissing.

The tech was no more sophisticated than any “game character” created and rigged in Maya and deployed in Unity or Unreal Engine. While there was an image of the kiss, there was no kiss.

To repeat, Hadid kissed empty air, or some guy dressed up in green motion capture clothing. 


(great kiss, Bella!)

Were people upset that the event didn’t happen? Nope. Their response was a widespread, stunning, and cheerful acceptance of the image as physical reality. The discussion proceeded from there.

First, though, there were immediate complaints from the LGBTQ community that caused CK to apologize, since Hadid is straight.


True, the pixels she pretended to kiss were “presenting” as female. Or rather, the people behind the digital image, drawing and manipulating it in software, were “presenting” it as female. OK – a typical identity-wokster flareup, with some justification in my opinion.

Also, with 3D modeling software we now have an easy way of creating something that in the old days would have taken a good portrait artist a couple of weeks. It’s not hard to create a 3D digital portrait of an imaginary human. And if you’re posting still images to Instagram, you can hide the fact that you’re just making pictures.

Buttt……….People are calling Lil Miquela a “robot”, as if there were walking humanoid robots that look like this. Apparently, these “reporters” are so dumb that they don’t realize there is no physical robot – just people uploading 3D-modeled images.

Now, images have been used to both define and attack identity for a long time, so there’s no problem with seeing this as a bit of queer-baiting by a company that deserved to have its marketers called out.

But the bigger point is truly crazy. I scoured all the news stories, and in all of them, Lil Miquela was called a robot. There was no correction to “digital 3D image”. Every press story called the kiss a kiss between a robot and a human.

Nearly every media outlet described a human kissing a physical, humanoid robot that walked confidently and gazed at a human before lip-locking. But there was no kiss. There was no robot.

Not a single major story discussed the reality – a model kissed air, and then had her image added to an animated movie.

And if you check Google Search, the common query is “lil miquela robot”, not “lil miquela digital art character”. Clearly, the audience for these tech stories, wokster-outraged or not, thinks they saw a physical robot.

It’s stunning. I have to believe that the majority of the tech media unthinkingly accepted that this was a physical, humanoid robot kissing a physical model. They must actually believe that it’s possible to build a robot that works like this – and that it can be easily rented out to the fashion industry. Originally, Instagram followers couldn’t figure out if Miquela was human (eek!). So their go-to, when Lil Miquela turns out not to be human, is that it must be a robot presenting as female.

Where is the “fact-checking” that media is supposed to do? Why the echo chamber of dozens of articles calling a work of digital art a humanoid robot – when in fact we can’t really make good ones, and the ones we have are incredibly hard to create? People so badly want to believe that we have given birth to artificial humans.

One is forced to conclude that tech media has begun to believe its own science fiction.

Signing a Virtual Artist

I must say – this one surprised even me:

What does it mean to “sign” a band that doesn’t exist? Virtual bands have been put together ever since the Monkees in the Beatles-dominated 1960s, right up to Gorillaz today. To be fair, Peter Tork could actually play music. And “real” bands have teams that create their “public image,” changing their appearance, speaking for them, tweaking their songs.

But in each case of these “old school” fakes, there were actual people creating music. If you admired the music or lyrics, the source ultimately was a human being. The band was a fantasy costume UI, built over the toiling studio musicians, writers, and marketers.

What happens when software is added? The first of these imaginary bands to be “signed” is “Skullz.” Below, a simulated “mugshot” of these rebellious – nothings.

Skullz “Orkid” image. (sigh) A rebel, probably “protesting” something or other…



To “go to a concert” by Skullz, you buy “passes” – in other words, you get a digital token (NFT?). The music is supposed to be “emo pop,” and the band images show a “rebel” holding a police card (presumably they were “arrested”). I couldn’t bring myself to actually listen.

It’s a bit like a cover band – anonymous musicians in a Beatles cover band channeling what the original Beatles would have done. The innovation: just create the virtual band, instead of covering a real one lost to history.

The video (linked below) claims Skullz had a blowout debut. Presumably, that means that in the future you will be able to go into an online game and “meet” Skullz, and hear them “perform” – or rather, hear studio musicians play while the Skullz UI pretends they don’t exist.

But tech-utopians hope that…maybe…someday…the music will be written by the software! Skullz will complete its transformation to “real” AI!

The idea that someone thinks this is going to work shows that people’s capacity for believing in “the invisible world” is unabated in our supposedly rational, secular society. On the first level, fake bands betray our desire to be fooled in an entertaining way. There are lots of cover bands. Discussing the members of an imaginary band is not so different from talking about other creatures like Obi-Wan or vampires as if they existed. “What would Jar Jar Binks do about racism?” kinds of questions…

But there is a difference here – power. Folk songs are the product of a community of musicians. Cover bands cover stuff that musicians made. Virtual musicians are products of a company, which combines the manufacture of the music and the musician in software and hides the “soft” human parts out of sight. There is centralized control and ownership of everything, compared to the distributed ownership of a folk-song community – a flat, mesh network.

Cover bands license music, implying there was a “real” band in the past. The licensing distributes power from the original band in a tree-style network of power.

With a game-character band, all power resides in one group. The game creators grab IP from artists and musicians. The old licensing model isn’t there, despite the crazy “sign a virtual artist” language. The image below, from an interview with the creators, shows it all – guyz creating some artificial women (not really that different from a “living doll”) and manipulating them.

Watch the swarm of guys play with these female images….

…there are no girls here, not in the creation, or in any of the people who “call in” to the presentation in the YouTube video above.

Virtual bands seem to be an expression of the rising “hive mind” mentality, discussed to great effect in You Are Not a Gadget. The people creating the virtual band aren’t individually important, any more than you worry about individual liver cells in your body. Instead, they live through the emergent identity created by this digital media. If we can’t tell the behavior of the virtual band from a real band, the Turing Test requires that we think of the band as “real” – we have created a “hive mind” AI with an independent, emergent being.

Really, those “women” were the product of this guy’s efforts.

Pete Kirtley

There are no “girls” behind the scenes.

I’m not saying we’re fooled. Nobody is “duped” by virtual musicians – meaning that, when questioned, they think this band exists. But they suspend disbelief because they “want them to be real…”

Consider that many religions don’t have an old man in the sky, but they do feature an unseen world that requires belief, even when “the trick” is in plain sight. Statues in Egypt had a priest inside talking for the god, but nobody lost their belief in the gods, even if they could see said priest squatting behind the eye-slits.

If you challenged the faith of the people, they would have been very angry – it is not “just a joke.”

How much worse does it become when the unseen world is algorithmic?

Creating art and imaginary characters is one thing – a universal practice, possibly diagnostic of humanity. But past efforts were very manual – the equivalent of costuming real musicians in fancy digital outfits. What happens when the virtual musicians, and their music, are created by a “neural net”?

In my opinion, you don’t get an AI musician. Instead, making imaginary characters that deliberately try to fool you via a Turing test is a kind of religion.

CES Robotic Pandemic Glorification

Just a short note on CES 2021 robotics. As you might expect, people are writing post-mortems for a year with low innovation, but lots of adoption of virtual services. During 2020, many tech-pundits felt that the pandemic would finally bring humanoid robots into widespread use. Instead, we got Zoom.

However, there is a ready-made framework for stories about robots helping during the pandemic, and reporters have duly been plugging dubious tech into the expected narrative. Check out “robots can ease our pandemic woes.”


The article features more humanoid shells for tele-operation. The author makes the unlikely stretch-case that people will use robots to move around in infected (read: public) areas. There is a report of a carpet-cleaning robot roving with strong UV lights to disinfect spaces – first reported in the spring of 2020 – which is something that might actually be useful. Finally, “smart” N95 masks from gaming-hardware company Razer, which seem to be more about looking like a Batman villain.

You need batteries for air to be forced into the mask (geez), and for UV sterilization (more practical). The big goal with this concept, however, seems to be making masking cool. It fits the gamer aesthetic of looking part-robot, more like the 3D-generated images in a game. One wears masks for medical reasons; the excitement here seems to be more about strapping machines onto one’s body than protection against Covid.

Bane would approve:

Source: Fandom (image linked)

(Interesting how much the Bane “mask” looks like the fleshy Predator face!)

Custom NECA Elder face
Source: Deviant Art (image linked)

Fun and games with a pandemic, but this is more serious – some work on long-anticipated “micro-robots” that can crawl around your body and do stuff.

Source: Cornell University

Laser jolts microscopic electronic robots into motion | Cornell Chronicle

Near-term, not so bad. There may be some use cases for micro (as opposed to nano) robots. Long-term, a problem.

Imagine someone full of these remote-controlled robots doing ‘maintenance’ on their various body parts. In a certain sense, their body is now partly controlled and directed by a third party, who also owns the intellectual property for the robots, and probably the physical robots as well. The person has a ‘service.’ The intimate contact with their body implies that, to some extent, their body has itself been converted into a ‘service they consume’ rather than a physical blob of tissue they have rights over. Chew on that one…

Robots Encourage Human Risk-Taking

There’s been a lot of excitement about robots and artificial intelligence in recent years. One strand is pretty irrational – humanoid robots becoming the equal of humans. The other, more rational strand involves various service robots, as well as “artificial intelligences” taking over tasks like driving cars and airplanes.

The 1950s and 1960s spawned two views of future computing – computers as “minds”, and computers as human “intelligence amplifiers”.

Now that we are actually getting some functional AI systems, we’re also beginning to learn about consequences that have nothing to do with their capabilities, but rather with human perception of their capabilities. As I’ve discussed many times here, people tend to see any mimicry of human behavior by a machine as ‘proof’ that there is a fully human, conscious, choosing entity behind the scenes. They then extrapolate and begin treating the machine as if it had qualities it doesn’t have (like a mind).

Case in point: people interacting with robots immediately apply a “human mind in silicon” model to the robot’s actions. Making the robot “friendly” or “humanoid” just encourages this. And it leads to people assuming the robot is taking charge – so they take risks. In other words, their own work deteriorates as they imagine the robot is picking up the slack.

‘The robot made me do it’: Robots encourage risk-taking behaviour in people (spacedaily.com)

The research, published in the journal Cyberpsychology, Behavior, and Social Networking, involved 180 undergraduate students taking the Balloon Analogue Risk Task (BART), a computer assessment that asks participants to press the spacebar on a keyboard to inflate a balloon displayed on the screen. Each pump adds to a provisional payout, but if the balloon pops, that money is lost. For some sessions, a robot was present, providing encouraging statements to keep pumping.
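
For readers who haven’t seen BART, the task logic is simple enough to sketch in a few lines of Python. This is my own toy version, not the study’s software, and the pop probability and payout values are invented:

```python
# Toy BART trial: each pump adds to a provisional payout but risks a pop.
# My own illustration, not the study's code; the numbers are invented.
import random

def bart_trial(pop_probability=0.05, payout_per_pump=0.05):
    """One balloon: pump until the subject banks or the balloon pops."""
    earned = 0.0
    while input("Pump? [y/n] ").strip().lower() == "y":
        if random.random() < pop_probability:
            print("POP! Provisional payout lost.")
            return 0.0
        earned += payout_per_pump
        print(f"Balloon holds. Provisional payout: ${earned:.2f}")
    return earned  # the subject banked what they earned

if __name__ == "__main__":
    total = sum(bart_trial() for _ in range(3))
    print(f"Total banked: ${total:.2f}")
```

Risk-taking is simply how many pumps a subject averages before banking; the robot’s chatter pushed that number up.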

The results showed that the group encouraged by the robot took more risks, blowing up their balloons significantly more frequently than those in the other groups did. “Popping” a virtual balloon caused the controls – including those with a silent robot present – to scale back their pumping. But in the presence of the talking robot, test subjects continued to pump, even when the balloons routinely popped.

This is a great example of how people map humanlike qualities onto objects, provided the objects give cues that they are human. The students mapped a “mind” onto the code creating the robot’s speech, and then took that speech as evidence they should continue pumping.

Dr Yaniv Hanoch, Associate Professor in Risk Management at the University of Southampton, who led the study, noted that the robots apparently exerted peer pressure on the students, similar to that of an actual human egging them on. However, he also saw a silver lining.

“On the one hand, our results might raise alarms about the prospect of robots causing harm by increasing risky behavior. On the other hand, our data points to the possibility of using robots and AI in preventive programs, such as anti-smoking campaigns in schools, and with hard to reach populations, such as addicts.”

There’s a clear moral hazard here. My guess is that a sign with a picture of a person telling you what to do creates peer pressure – think of Uncle Sam:

However, with this image – as with statues and other obviously inanimate human representations – the viewer almost certainly weighs their response against the fact that this is clearly something created by humans, not a human. In the case of the robot, this is less certain. There’s a widespread belief, encouraged by science fiction and so-called ‘science’ writers, that Artificial Intelligence is close to creating human minds, or even superhuman minds. If a person interacts with a robot and maps their response to ‘human’ or ‘superhuman’, they may be more likely to follow along.

This in turn means that the behind-the-scenes actors can use robot puppets to push their goals in a way superior to old media. The robot is more than an abstract ‘brand representative’ – it is seen as a person.

Near-future society will have a more powerful method of steering its citizens: robot spokesholes substituting for graphic design. But, of course, society may not have the best interests of its citizens in mind.

This can’t be good…

Robot Skin and Computational Overload

There’s a long history of announcements from the robotic community, claiming that “robot skin” has been created. Mostly, these have been unserious, since the huge computational load for managing skin sensation is not part of the story. A few historical examples:

From 2019, this robot skin has “millions of sensors”. Great, but what processes the data from those millions of touch and temperature sensors? You’d need a massive computing cluster just to handle the raw sensor data, let alone integrate it with, say, a deep learning algorithm.


A close-up of why this “skin” is so sensitive, and why the “computation density” of the skin would blow away even a giant network of thousands of computers.

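Some back-of-envelope arithmetic shows the scale of the problem. All the numbers below are my assumptions, not figures from the researchers:

```python
# Rough data-rate arithmetic for a "millions of sensors" robot skin.
# All numbers are my own assumptions, not the researchers' figures.
num_sensors = 1_000_000      # "millions of sensors" (take one million)
sample_rate_hz = 1_000       # human tactile afferents reach ~1 kHz
bytes_per_sample = 2         # assume 16-bit readings

raw_rate = num_sensors * sample_rate_hz * bytes_per_sample
print(f"Raw sensor stream: {raw_rate / 1e9:.1f} GB/s")   # 2.0 GB/s

# And that is just moving the data - feature extraction, integration
# with vision, or any deep-learning model all come on top, and must
# run on an on-board, battery-powered computer.
```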

Here’s an earlier one, from 2010:

Here, printed circuit boards are used to sense touch on the robot hand.


Cool, but no ability to process – in other words, processing even the limited number of sensors (dozens instead of millions) is not part of the design.

A “robot skin” image from 2006


And even earlier. Sensors that would work, but extracting meaningful information from touch was – and is – beyond robots.

It’s possible to go back further (skin has been a hot robot topic for decades), but the result is basically the same: there have been a series of announcements of “robot skin” in the tech media, typically putting together a pile of sensors in some plastic matrix. While the sensors are real, the wiring up of the sensors is not addressed, and more importantly, the ability to process data from the sensors is not considered – since no computer at present can do the processing. Actual robots out there work with a very small number of sensors to make decisions.

A great example: the Boeing 737 Max. Its software relies on a SINGLE “angle of attack” sensor to determine if the plane is going into a stall. Even with just one sensor, the software designers couldn’t handle “edge” cases, probably contributing to two crashes that killed hundreds of people.

The 737 Max, where only one AoA (Angle of Attack) sensor drives the robot “autopilot”. Even military planes only have 4 or so.

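The standard aerospace fix is redundancy plus voting, which takes only a few lines. The sketch below is illustrative only – median voting is a generic pattern, not Boeing’s code:

```python
# Median voting across redundant sensors: one bad angle-of-attack (AoA)
# vane cannot single-handedly command the system. Illustrative sketch,
# not Boeing's actual flight software.
def fused_aoa(readings):
    """Return the median of redundant AoA readings; robust to one outlier."""
    ordered = sorted(readings)
    return ordered[len(ordered) // 2]

# A vane stuck at a bogus 45 degrees is simply outvoted:
print(fused_aoa([4.8, 5.1, 45.0]))  # -> 5.1
```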

So, our current “robots” use few non-vision/sound sensors. However, good tactile sensation is exactly what is needed for Robots that Jump to interact with the environment robustly.

Contrast this with the typical “process control” engineering solution, where a single sensor, or a very small group of sensors, is used to report data. For simple things, this is fine – if the water boils, it is time to turn off the tea kettle. However, for robotic interaction with a real-world environment, it isn’t enough. Time and time again, robots have been built with sensors inadequate for navigating their environment once small changes are made.

Now contrast that with a simple creature like a flatworm. Its body is far less complex than ours, but it is saturated with sensory neurons…

This image shows that the entire body is full of nerve cells, many of which are sensory.

The sensory complexity of this simple creature easily exceeds that of the most advanced “robot skin”. And complex nerve nets appeared in even the simplest of animals.

Compared to living things, robots show a huge undersupply of sensation. Many in the field have rightly tried to design “skin” – but the overall robot falls into the trap of needing incredibly elaborate processing – something that simple animals don’t have or need to have. Clearly, something’s amiss.

The most recent descriptions of touchy-feelie robots point to “greater sensory density than human skin”. That alone isn’t meaningful – just having more sensors doesn’t help; you have to respond intelligently to the sensation that density enables. Nerve tissue is expensive to maintain, so animals don’t have high density because it’s cool – it’s needed. That in turn implies that the high sensory density of animal skin has meaning.

The most recent entry into “sensitive skin” takes a step backwards, and imagines a few hundred sensors (compared to the millions in some robot skin designs).

A robot with flex “skin”, with sensors quite large, but closer to manageable. People have thousands of sensors per square inch!


The sensory units of this “advanced” robot are large. The sensor density is probably below the flatworm’s above – probably similar to a tiny cheese mite’s:

This incredibly tiny creature has sensor numbers approaching our big, “intelligent” robot. The brain processing sensation so the mite can move and respond to the world is literally microscopic.


Still, this new flat, hex-y sensor is a bit better. As the researchers say, it might prevent a robot from actually crushing you during a so-called “hug”.

Finally, it is still better than Google’s own “sense” of tactile robots. Run a Google search, and the “sensitive skin” robots are lost between (1) sex dolls, and (2) the “Sophia” electric puppet. Ironically, the sexbots are designed to feel creepy-rubbery to their equally rubbery owners. And Sophia doesn’t sense anything on its own gynoid rubber, despite the thing apparently giving talks about “gender” in some countries. Here, we see Sophia’s single-sensor design in context:

Not one bit of touch on this thing, and “sensation” is some smartphone tech. Awesome!

I vote for the cheese mite. Sophia looks very 737 Max.