Featured

Signing a Virtual Artist

I must say – this one surprised even me:

What does it mean to “sign” a band that doesn’t exist? Virtual bands have been put together ever since the Monkees in the Beatles-era 1960s, right up to Gorillaz today. To be fair, Peter Tork could actually play music. And “real” bands have teams which create their “public image,” changing their appearance, speaking for them, tweaking their songs.

But in each case of these “old school” fakes, there were actual people creating music. If you admired the music or lyrics, the source ultimately was a human being. The band was a fantasy costume UI, built over the toiling studio musicians, writers, and marketers.

What happens when software is added? The first of these imaginary bands to be “signed” is “Skullz.” Below, a simulated “mugshot” of these rebellious – nothings.

Skullz Orkid image
(sigh) a rebel, probably for “protesting” something or other…

https://skullzofficial.com/#/home

https://discord.gg/mW2654Nw3Z

To “go to a concert” by Skullz you buy “passes” – in other words, you get a digital token (an NFT?). The music is supposed to be “emo pop,” and the band images show a “rebel” holding a police card (presumably they were “arrested”). I couldn’t bring myself to actually listen.

It’s a bit like a cover band – anonymous musicians in a Beatles cover band channeling what the original Beatles would have done. The innovation: just create the virtual band, instead of covering a real one lost to history.

The video (linked below) claims Skullz had a blowout debut. Presumably, that means that in the future you will be able to go into an online game and “meet” Skullz, and hear them “perform” – or rather, hear studio musicians play and use the Skullz UI to pretend they don’t exist.

But tech-utopians hope that…maybe…someday…the music will be written by the software! Skullz will complete its transformation to “real” AI!

The idea that someone thinks this is going to work – shows that people’s capacity for believing in “the invisible world” is unabated in our supposedly rational, secular society. On the first level, fake bands betray our desire to be fooled in an entertaining way. There are lots of cover bands. Discussing the members of an imaginary band is not so different from talking about other creatures like Obi-Wan or vampires as if they existed. “What would Jar Jar Binks do about racism?” kind of questions…

But there is a difference here – power. Folk songs are the product of the community of musicians. Cover bands cover stuff that musicians made. Virtual musicians are products of a company, combining the manufacture of the music and musician in software, and hiding the “soft” human, musician parts out of sight. There is centralized control and ownership of everything, compared to distributed ownership in a folk song community – a flat, mesh-network.

Cover bands license music, implying there was a “real” band in the past. The licensing distributes power from the original band in a tree-style network of power.

With a game character band, all power resides in one group. The game creators grab IP from artists and musicians. The old licensing model isn’t there, despite the crazy “sign a virtual artist” language. The image below, from an interview with the creators, shows it all – guys creating some artificial women (not really that different from a “living doll”) and manipulating them.

Watch the swarm of guys play with these female images…

…there are no girls here, not in the creation, or in any of the people who “call in” to the presentation in the YouTube video above.

Virtual bands seem to be an expression of the rising “hive mind” mentality, discussed to great effect in You are Not a Gadget. The people creating the virtual band aren’t individually important, any more than you worry about individual liver cells in your body. Instead, they live through the emergent identity created by this digital media. If we can’t tell the behavior of the virtual band from a real band, the Turing Test says we should treat the band as “real” – we have created a “hive mind” AI with an independent, emergent being.

Really, those “women” were the product of this guy’s efforts.

Pete Kirtley

There are no “girls” behind the scenes.

I’m not saying we’re fooled. Nobody is “duped” by virtual musicians – if questioned, nobody would say this band really exists. But they suspend disbelief because they “want them to be real…”

Consider that many religions don’t have an old man in the sky; but they do feature an unseen world that requires belief, even when “the trick” is in plain sight. Consider that statues in Egypt had a priest inside talking for the god, but nobody lost their belief in the gods, even if they could see said priest squatting behind the eye-slits.

If you challenged the faith of the people, they would have been very angry – it is not “just a joke.”

How much worse does it become when the unseen world is algorithmic?

Creating art and imaginary characters is one thing – a universal practice, possibly diagnostic of humanity. But past efforts were very manual – the equivalent of costuming real musicians in fancy digital outfits. What happens when the virtual musicians and their music are created by a “neural net”?

In my opinion, you don’t get an AI musician. Instead, making imaginary characters that deliberately try to fool you via a Turing test is a kind of religion.

Featured

CES Robotic Pandemic Glorification

Just a short note on CES 2021 robotics. As you might expect, people are writing post-mortems for a year with low innovation, but lots of adoption of virtual services. During 2020, many tech-pundits felt that the pandemic would finally bring humanoid robots into widespread use. Instead, we got Zoom.

However, there is a framework for stories about robots helping during the pandemic, and reporters have duly been plugging dubious tech into the expected narrative. Check out “robots can ease our pandemic woes.”

https://www.cnet.com/news/ces-2021-showed-us-how-robots-can-ease-our-pandemic-woes

The article features more humanoid shells for tele-operation. The author makes the unlikely stretch-case that people will use robots to move around in infected (read: public) areas. There is a report of a carpet-cleaning robot that moves around with strong UV lights to disinfect spaces – first reported in the spring of 2020. This is something that might be useful. Finally, “smart” N95 masks from gaming hardware company Razer, which seem to be more about being a Batman villain.

You need batteries for air to be forced into the mask (geez), and for UV sterilization (more practical). However, the big goal with this concept seems to be making masking cool. In fact, it fits the gamer aesthetic of looking part-robot, more like the 3D-generated images in a game. One wears masks for medical reasons; the excitement here, though, seems to be more about strapping machines onto one’s body than protection against covid.

Bane would approve:

Source: Fandom (image linked)

(Interesting how much the Bane ‘mask’ looks like the fleshy Predator face!)

Custom NECA Elder face
Source: Deviant Art (image linked)

Fun and games with a pandemic, but this is more serious – some work on long-anticipated “micro-robots” that can crawl around your body and do stuff.

Microscopic robots
Source: Cornell University

Laser jolts microscopic electronic robots into motion | Cornell Chronicle

Near-term, not so bad. There may be some use cases for micro (as opposed to nano) robots. Long-term, a problem.

Imagine someone full of these remote-controlled robots doing ‘maintenance’ on their various body parts. In a certain sense, their body is now partly controlled and directed by a third party, who also owns the intellectual property for the robots, and probably the physical robots as well. The person has a ‘service.’ The intimate contact with their body implies that, to some extent, their body has itself been converted to a ‘service that they consume’ rather than a physical blob of tissue that they have rights over. Chew on that one…

Featured

Covid-Forced Dreams of Robo-Tech

2020 is the year of the Covid-19 pandemic, caused by a coronavirus. First off, I’d like to point out the single best writer on the topic, Erin Bromage, if you actually want to understand what the virus is and how it spreads:

Wellllll….robots don’t get covid (though someone is probably readying a training dummy with symptoms) so how does the pandemic affect Robots That Jump?

The answer is simple: tech-utopians decry our non-acceptance of robots everywhere, and hope that maybe this will force us to see the future.

All over the world, the response of those who boost humanoid robotics as “the future” to the pandemic has been the gnarly hope…a hope that people will start using humanoid robots due to the pandemic, like we’re supposed to.

This hope is evident in the response of two industries: robotics and virtual reality. Both are “high tech” big ideas that have been pushed for over 100 years as “the future,” yet haven’t gotten the widespread adoption of cars, radios, or smartphones. Interestingly, VR headsets aren’t selling in 2020, despite the fact that everyone is at home. Why not? They’re supposed to be the future. Why, asks Silicon Valley, aren’t people accepting the future?

Puzzled, robo and VR evangelists see Covid-19 as a way to force people to accept that humanoid robots are “almost here”, and will form a big part of their future. People’s limited mobility, and the fact that a robot can’t get infected, provides a test case for robots in society.

And, at first glance, it makes sense – a robot can’t get infected, right? But basically this is PR, if not religious preaching.

Case in point: India, where there are lots of sick people in hospitals. Can robots help?

The headline from tech magazine Protocol, “Rise of the Robots: COVID-19 is Causing a Hesitant India to Welcome Automation” is typical of the robo-mumbo-jumbo announcing there is finally a compelling reason to put humanoid robots everywhere.

The article covers some predictable, but useful things – e.g. not having a human present during an interview might reduce infection, robots can clean floors that might be contaminated, and so on.

However, the real agenda is clearly that (1) humanoid robots must be the future, and (2) the pandemic is making people realize that this must be their future.

A great quote:

“…Arun Sundararajan, an NYU Stern School of Business professor researching how digital technologies transform society told Protocol that he believes a new tech paradigm will emerge after the pandemic recedes.

‘Crisis can be sort of a catalyst or can speed up changes that are on the way — it almost can serve as an accelerant,’ he said.

I don’t think the professor actually meant “almost” – it is “on the way,” and you better use the pandemic to start practicing for your robot future.

In other words, the pandemic is just forcing something that must happen in the future faster – a specific kind of technology (humanoid robots) that is inevitable as death and taxes. The hidden emotion among techies: we probably should be glad that the pandemic is making us wake up and see our predestined future.

In practice, we don’t have useful robots to do many of the things we need in a pandemic – observant care of sick patients, using advanced AI to screen symptoms, cleaning complex medical devices in hospitals, doing contact tracing in the field to snuff out outbreaks (as was done effectively in Vietnam, limiting their 2020 covid surge).

All of these would be a welcome use of robots – but there are no “agile” or “smart” robots around that can actually do any of this!

Robots can’t clean bedpans. Robots can’t sterilize equipment except by spraying the whole device – they aren’t dexterous enough to use cloth and cleaner on odd-shaped parts. Robots can’t drive cars (wait, can’t we get them a self-driving car? No.) Robots can’t ask people questions any better than an automated phone support system. Robots aren’t needed to monitor patients’ vital signs.

So, the main purpose of humanoid robots seems to be to “reassure” people, or bark orders from a big metal and plastic puppet. The type of dexterity that would make a humanoid robot useful (e.g. helping a patient walk down the hall) is not available. Consider that even getting out of a car is too much for the vaunted Boston Dynamics robot, despite its acrobatic skills:

Entertaining, but no ability to clean bedpans!

These jumps are entertaining, but the machine lacks the ability to “map” these motions to other activities. A human that is able to do backflips would be able to easily guide a patient down the hall, even if they had never done so. In contrast, the robot, like all current AI, is domain-specific. Its ability to do backflips is confined to backflips. It can’t apply this dexterity to help patients walk. You would have to start over and do a long and energy-intensive “deep learning” training set specifically to walk patients around.

What are we left with, when robots can’t do anything requiring the dexterity of a nurse? Greeters, Lecturers, and Cops.

  • Greeters replace a microphone + phone scripts with a big electric puppet barking friendly messages.
  • Lecturers read a script, informing us of danger.
  • Cops order us to do stuff.

Is this really a good use of resources? Any of the above could be accomplished with a recorded message, graphic wall poster, or video.

But if you’re trying to make robots real, you map your clanking mannequins to these tasks. The reason is that you feel it is very, very important that people understand humanoid robots are the future… Therefore, you loan out your laboratory electric puppets to hospitals to remind everyone that they are the future. Never mind that these “robots” are just glorified microphones or megaphones.

Another wonderful quote from the same Protocol article:

“…UK-based data analytics firm, GlobalData, has said that a shortage of personal protective equipment will drive adoption of robots to treat COVID-19 patients in India...”

This is INSANE. Wouldn’t the cost of a humanoid robot be better spent on masks and other Personal Protection Equipment (PPE)? How many masks, gloves, PPE could I buy for the cost of renting just one greeting robot?

The goal of covid-robots is not to improve efficiency. The goal is to awe the incoming, sickly humans with the future. But these greeter-bots aren’t doing anything much different than radio-controlled, tele-operated “robots” did 50 years ago in shopping malls. Mostly, people are just creeped out by this stuff.

Think about it: Do you want a big electric puppet as your final companion during a serious illness? Apparently some people thought Pepper (the robot you’re not supposed to have sex with) was the perfect end-of-life friend.

https://www.usnews.com/news/world/articles/2020-05-01/robots-on-hand-to-greet-japanese-coronavirus-patients-in-hotels

Lamentable quotes from Pepper:

“…Please, wear a mask inside,” it said in a perky voice. “I hope you recover as quickly as possible…”

“…I pray the spread of the disease is contained as soon as possible…”

“…Let’s join our hearts and get through this together…”

If you put up a sign with these messages, humans would assume they come from humans. If you have a robot say these messages, the popular perception is that the robot has ‘taken over’ from the humans and is running things now. So reassuring!

This is a huge fail of User Experience Design (UX).

To be fair, the hospital in question has non-humanoid robots of the iRobot type which help with floor cleaning – but this is hardly news, since hospitals have been using these primitive kinds of robots since the late 1970s. Here’s a robot in an Indian hospital actually doing something…

Milagrow Floor Robot iMap9.0

Read more at:

Unfortunately, the same company has an insipid humanoid robot greeter for the sick:

Can monitor patients remotely – just like a microphone!

“Hello, I hope you don’t die, but if you do, I hope is it a pleasant experience. We are here to serve.”

A movie of Pepper barking orders at humans in Germany:

Here’s the point: you could have put up a cardboard cutout of the robot with an attached speaker, and conveyed the same information. Better, since people would understand the cardboard cutout was created by humans, not a future overlord beginning to order us around.

By tying a cop-robot into the pandemic, you’ve actually created and enabled the narrative that the various nutbars out there want you to believe.

FILE – In this April 15, 2020, file photo protesters carry rifles near the steps of the Michigan State Capitol building in Lansing, Mich. Protesters drove past the Michigan Capitol to show their displeasure with Gov. Gretchen Whitmer’s orders to keep people at home and businesses locked during the new coronavirus COVID-19 outbreak. (AP Photo/Paul Sancya, File)

You know what these guys would say about Pepper, don’t you?

The American version of the “big electric puppet barking orders” completely freaked people out in NYC’s Bryant Park, and was removed within hours because “it didn’t have a permit”:

https://nypost.com/2020/02/06/city-boots-creepy-coronavirus-detecting-robot-from-bryant-park/ (video in article, after ad)

Big electric puppet demands to know whether people have coronavirus symptoms, and makes snap judgements.

Boston Dynamics probably knew that there would be a negative response, so it deployed its dog-bot in Singapore, far away from the West. Really, there’s a bit of colonial thinking here. After the initial apparent success in containing the virus faltered, the authoritarian government thought their best bet was to play right into the conspiracy nutbars who think the pandemic was faked to bring in robots, reptilians, aliens, whatever.

How to enable the fringe-y right? Give them a robot overlord in the park! Note: this “robot”, like others, is actually tele-operated by a remote human. Someone is basically flying a drone.

Boston Dynamics has found a new job for its dog-like robot: Social distancing patrol

Video on Instagram at:

Be safe!

Frankly, we have a LOT of unemployed – why not send a person out, in a car with a megaphone and a face mask, instead of an expensive robot that removes the human job? As the posts at the top of the article show, the risk of infection outdoors is low, as long as you don’t pack people into parades.

Also note the cognitive-dissonance dumbassery of the person who wrote this article.

“…Dont worry about bumping into Spot as it is fitted with safety sensors to detect objects and people within 1m to avoid collision…Robots can observe safe distancing too!”

Safe distance is 6 feet or greater. One meter is only about three feet. The robot is modeling unsafe behavior.

Back to our India article. A priceless movie of a stiff electric puppet barking orders, apparently what robots will “do” in the future that must happen:

The Covid-19 pandemic is a great example of a human problem. Service robots (like a floor-cleaner) have some real value, but humanoid greeters and order-givers do not. Using these robots is an expensive waste of money that could be used for protective equipment.

And, the belief among tech-utopians that robots must be the future doesn’t justify diverting money from pandemic relief to push their techie religion.

Featured

An Actual Robot that Jumps

The “public” robotics world (as opposed to the private world of practical industrial robots) is split into two factions. One promotes electric puppets who supposedly have humanlike “minds” and do art or otherwise replace humans. The second faction tries to build robotic bodies that can actually function in the real world. The public tends to assume that a humanlike mind is needed to drive an agile robot, but that’s not the case. Instead, it’s perfectly possible to create a “mindless” robot that functions efficiently in its environment.

For whatever reason, Boston Dynamics has taken the second track for many years. They demonstrated doglike and mulelike robots as “pack animals” (sadly, the need for a gasoline engine to provide enough power doomed these devices to irrelevancy). More recently, BD has done a lot of PR using its ‘Atlas’ robot – basically the descendant of the DARPA Atlas with benefits.

In the video embedded above, Atlas is shown with improved agility. It is actually doing a bit of tumbling, which is truly remarkable. While it still looks “mechanical,” it is clearly an emulation of humanoid motion. A true Robot that Jumps!

The one problem, just like the “Big Dog” and related robots – power. The size and weight of this robot imply massive electrical consumption. BD hasn’t given up on electric stepper-motor-like motion, which means that when Atlas cancels its motion after a move, it uses a lot of power. A human might use 150 watts doing brisk exercise; my guess is that Atlas is using something like 50 times that much power.

This wouldn’t be a problem except that the Iron Man “power cell” doesn’t exist. All we have are lithium batteries, which, even at their most efficient, could not move this body for more than a few minutes. That’s a huge problem. In the pack-bots, BD used gasoline engines, since fossil fuel has 50 times the energy density of a lithium battery. And future battery tech, while likely to get better, might at best double in energy density, and even that with exotic tech like molten salt. Not 50x.

I wish BD or someone who knows would do an accurate estimate of the power consumption of the Atlas body in vigorous exercise, and then ‘scale’ up or down to see if there is a point where you wouldn’t have to recharge every few minutes. My guess is that very small might be more viable (like the robo-insects) but big battle-bot is out, unless you use nuclear power. In fact, old images of robots by Hans Moravec in places like Scientific American routinely showed nuclear power supplies.
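
Lacking real numbers, here is the shape of that back-of-envelope estimate as a minimal Python sketch. Every figure in it (pack mass, average power draw) is a placeholder assumption, not a measured Atlas spec; only the ~250 Wh/kg value for good lithium cells is roughly in the right ballpark.

```python
# Back-of-envelope runtime estimate. All inputs are placeholder guesses,
# not published specs -- swap in measured numbers to make this meaningful.
LITHIUM_WH_PER_KG = 250          # roughly the best of today's lithium cells

def runtime_minutes(pack_kg: float, avg_power_w: float) -> float:
    """Minutes of operation for a given battery pack mass and average draw."""
    return (pack_kg * LITHIUM_WH_PER_KG) / avg_power_w * 60

# 'Scale up or down' as suggested above: insect-scale, human-scale, battle-bot.
for pack_kg, power_w in [(1, 150), (15, 5_000), (100, 40_000)]:
    print(f"{pack_kg:>4} kg pack @ {power_w:>6} W average draw: "
          f"{runtime_minutes(pack_kg, power_w):6.1f} minutes")
```

The point isn’t the specific outputs – they are only as good as the guesses – but that the estimate is trivial to run once someone publishes real power-draw figures.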

Practical? If not, at least BD has a steady PR stream feeding on the “hopium” of tech-utopians…

Featured

Making Dating into a Message from the Future

In a new low for humanity, our most technically advanced news media – distributed, Internet-based stories allowing instant access and comments – decided to act like a 3-year old and “believe” in the Hanson Robotics “Sophia”, the reputed “first robot citizen” of Saudi Arabia.

Sophia with its creator, Dave Hanson of Hanson Robotics

The video interview is visible on this page:

https://finance.yahoo.com/video/sophia-robot-dating-apps-kids-180844755.html

The event in question is a Yahoo! Finance interview – apparently run by reporters and interviewers previously dropped on their heads as children – in which robot Sophia discussed “modern dating” via dating apps, and also who should pay on the first date. Telling quote from the interviewer:

“…For what it’s worth, the robot that was created by Hong Kong-based Hanson Robotics to improve robot-to-human communication, says she has no desire to pursue eventually raising children of her own, but would prefer working with them instead…”

OK, let me get this straight, while trying to keep a face as straight as the interviewer’s. We are supposed to think that said robot has considered dating and children. If it did, it would need at the very least a colossal, cross-referenced sea of “deep learning” pattern recognizers, coupled to some sort of “decider” creating “opinions”. This the robot most assuredly does not have. It is no more interested in dating than a toilet plunger.

Another beauty, this time from the robot’s “mouth” (sorry, robot speakers):

“Before dating apps, the biggest factor in determining love was geographic proximity,” she said, while tethered to a human operator who had been informed of the interview topics ahead of time. “The advent of dating apps has collapsed the distance between people. So even though I don’t date, I am a fan.”

Now, pretending this robot actually has opinions, rather than being a big electric puppet providing PR for Hanson, is not so bad. After all, we tell little kids about Santa, so why not pretend electric puppets go on dates? Also, the comments supposedly created by Sophia are liberal/wokster, so one might even imagine that they “make a difference” in the zeitgeist. After all, if machines tell us “we” are the problem, not our tech (another Sophia interview), what’s not to like?

Here’s the problem: BS stories like this become embedded in the media, and with modern social networks, frequently are treated as evidence that intelligent robots are on the way. No matter that the “questions” asked of Sophia are submitted to Hanson beforehand, so an interesting (human-generated) answer can be mimed out. And video images of a robot apparently talking are parsed by kids as proof that the robot is alive. Even when they grow up, Plurals/GenZ will have a “gut feeling” that something is there (meaning the kind of emotions you have around dating) when in fact it is a puppet show.

In a recent study cited in Parenthood magazine entitled “Does Your Kid Know that Robots have no Feelings?“, kids clearly believe they are interacting with a “social being”:

“…90 children ages 9-15 interacted with a humanoid robot named “Robovie.” Within the 15-minute session, children interacted physically and verbally with Robovie, until a researcher interrupted its turn at a game and put the robot in a closet, despite its objections. Post-interview, results showed the majority of younger participants believed the robot had thoughts and feelings and was a “social being.” In other words, it could be a friend…”

Best friend. Joyful happy boy smiling while hugging a robot

The take-home for most techies is that “robots are already talking about dating” and hugging children…so the robot revolution is nigh. True, Sophia probably has better dating skills than the typical basement incel hammering away in Fortnite, but the rest of us don’t fall in this category.

In reality, humanoid robots are proving poor substitutes for humans in tasks that can be measured, instead of “ideas” that can be puppeted. Witness the hapless Fedor, the Russian humanoid, sent to the International Space Station to analyze whether humanoid robots of its type could help with tasks.

Take home: NO. Didn’t work, according to Yevgeny Dudorov, executive director of robot developers Androidnaya Tekhnika (see: https://phys.org/news/2019-09-russia-scraps-robot-fedor-space.html )

“…but Fedor turned out to have a design that does not work well in space—standing 180 centimetres (six feet) tall, its long legs were not needed on space walks, Dudorov said…”

In the real world, we are a long way from the Robot That Jumps – jumping to help in space, or jumping in to give basement losers a mechanical dating partner.

But perhaps the example of Santa is valid. After all, most techies claim to be secular, while holding a set of irrational beliefs in “futurism”, “the singularity”, “strong AI” and similar beliefs that are impossible to differentiate from ol’ time religion. Since none of these beliefs refer to anything real, a robot like Sophia is a magic elf from the “coming around the corner soon” prophecy – the spirit of a future time when robots will go on dates – and those incels will be able to replace their current flabby rubber girls with microprocessor-driven puppets. Hanson Robotics will be there to sell them, I’m sure!

Featured

Atlas Deepfake

Well, well, people are so desperate to have robots that they’re willing to propagate phony videos of the Boston Dynamics humanoid in action.

This worked great for Corridor Digital, a Los Angeles VFX house, who wanted to parody some of the real videos, including the one of Atlas where it is being “taunted”. A great job of motion capture plus blending in a robot body.

Corridor Digital Video Site: https://www.youtube.com/user/CorridorDigital

Corridor Digital is doing a great parody of the BD madness. But the real fun comes when you visit tech blogs discussing the fake (I wonder how many of them were initially duped) that use the parody to encode pious preaching about how the “robot uprising” will be much deadlier than the video… The proof? The VFX looks a little like real Atlas videos.

Boston Dynamics Videos: https://www.youtube.com/user/BostonDynamics

To their credit, BD actually linked the Corridor video on their own youtube site. All in all, some great shared digital publicity.

But the media appeared caught in a 5-year old’s understanding of both videos…

Gizmodo erupted in a crazed slobber of pseudo-news, where, despite the parody, the author takes it as “truth” and preaches to us that the robots will rise up and destroy us, in the best religious fantasy tradition – https://gizmodo.com/that-viral-video-of-a-robot-uprising-is-fake-because-th-1835575686.

In fact, the deepfake appears to have made the author think it is more likely that the robots will rise. CGI “proves” something is real!

The Verge is slightly more sensible, and uses the parody as a discussion about how people feel empathy for things that don’t have a mind, if they act a certain way – https://www.theverge.com/tldr/2019/6/17/18681682/boston-dynamics-robot-uprising-parody-video-cgi-fake

The real problem is that people will see mind and consciousness where there is none, and act accordingly…

“(From the Verge) As MIT researcher and robot ethicist Kate Darling puts it: ‘We’re biologically hardwired to project intent and life onto any movement in our physical space that seems autonomous to us. So people will treat all sorts of robots like they’re alive.’”

Most of the coverage by “lower” tech blogs deleted the fantastic parts of the parody, dropped the quality (so the CGI was harder to see), and simply let people believe an angry robot was breaking out of its cage.

Of course, our “new media” need clickbait, and as always, it is best to distribute religious texts. The techno-singularian vision of the future has become more than a cult, and is in fact a replacement for traditional religion for techies. Deepfakes like this are OK because they are “truthy” – they could be true, so we believe!

This is part of a larger problem for our society. The rise of CGI has made people “believe” that anything that can be 3D modeled “could be real”. This is why companies like Facebook and Uber churn out bullshit images of “air cars” that tech media and groupies unthinkingly accept as “just around the corner”.

I suspect the writers don’t understand that Uber can endlessly create these CGI videos to look trendy, and rake in gains in stock price. Actually making this helicopter (that’s what it is) would be difficult and dangerous. Better to make a phony video, then say it could be true just around the corner.

So be worried – not about the BD robot, but about the millions of craven pixel-pushers desperate for a god to worship and (human) sacrifice for…

Chickens run around with their heads cut off, and the BD robot is on its way to being a decapitated chicken in several years. Fascinating that said chicken is touted as our destruction.

Featured

Another Puppet Show, Featuring a Gynoid Robot Being a “Creative Artist”

Well, the latest in the bright future of robots is here, and it is a “creative artist,” “Ai-Da,” a gynoid robot whose “works” have actually been sold to idiot buyers for a total of more than $1 million.

two images of ai-da robot head
Aidan Meller showcases Ai-Da. Source: Metro

Quoting Devdiscourse (India’s media loves humanoid robots),

“…Described as “the world’s first ultra-realistic AI humanoid robot artist”, Ai-Da opens her first solo exhibition of eight drawings, 20 paintings, four sculptures and two video works next week, bringing ‘a new voice’ to the art world, her British inventor and gallery owner Aidan Meller says. “The technological voice is the important one to focus on because it affects everybody,” he told Reuters at a preview…”

The big electric puppet, created by the 46-year-old art dealer, can’t walk or move around. But that doesn’t stop the flood of PR images of Ai-Da thinking pensively about her future, self-consciously echoing the scene from HBO’s “Westworld” where the lead robot, “Dolores,” wakes up.

a pensive pile of plastic – Ai-Da robot with downcast cameras
Ai-Da on nonfunctional legs evoking an HBO series
source: MSN

Yes, the same people, Engineered Arts, designed and built both the Ai-Da and the HBO show-bot bodies. Ai-Da was given legs to make her look more like the show’s robots.

Evan Rachel Wood as 'Dolores' in HBO 'Westworld' Scene
From “Westworld”, a scene with actress Evan Rachel Wood as the robot “Dolores”. Later in the scene, Dolores sits up, exactly like the Ai-Da image above. Source: Daily Kos (big surprise these dumbass “progressives” are anti-historically suckered into this worn-out discussion)

There are several interesting features of the Ai-Da machine itself. First, the cameras for “drawing from sight” are actually in the artificial eyes (though I’m surprised there isn’t an open-on-demand third eye), and the drawing arm does exhibit fine motor control for drawing on a canvas. Mechanical plotters have been doing this since the 1940s, but having it in an articulated hand is interesting.

Reminds me of a Fortune-Telling Machine I saw somewhere

The algorithm used is also interesting – it breaks up the image into a bunch of short line segments (like some brain neurons in primary visual cortex may do) and can reproduce your face with said lines. It is neat, though hardly useful, robot-wise, when you can just take a high-resolution digital picture.
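
Ai-Da’s actual code isn’t published here, but the general idea – turn a photo into a pile of short line strokes – can be sketched with off-the-shelf tools. Below is a minimal Python/OpenCV sketch, assuming a hypothetical input file face.jpg; it is generic edge detection plus line fitting, not Ai-Da’s pipeline.

```python
# Not Ai-Da's code -- a generic "photo to short line segments" sketch using
# standard OpenCV edge detection (Canny) and line fitting (Hough transform).
import cv2
import numpy as np

img = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical input photo
edges = cv2.Canny(img, 50, 150)                      # find high-contrast edges

# Fit many short line segments to the edge pixels.
segments = cv2.HoughLinesP(edges, 1, np.pi / 180, 40,
                           minLineLength=8, maxLineGap=3)

canvas = np.full_like(img, 255)                      # blank white "canvas"
if segments is not None:
    for x1, y1, x2, y2 in segments[:, 0]:
        # A plotter or robot arm would physically trace each of these strokes.
        cv2.line(canvas, (int(x1), int(y1)), (int(x2), int(y2)), 0, 1)

cv2.imwrite("sketch.png", canvas)
```

The robot-arm part is then essentially a plotter tracing those segments, which is why the 1940s comparison fits.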

Interesting, though I seem to remember seeing stuff like this 40 years ago!
Source: Daily Mirror (click for video)

But…wait! This isn’t the first “robot artist”. Some may remember Aaron from waayyyy back in 1973, an AI program created by (human) artist Harold Cohen.

Aaron was programmed in C (later in LISP) on computers running at 1/500 of the speed, and with 1/100,000 of the storage capacity, of current art-bots like Ai-Da. Still, of the two art-bots, it seems the more “creative”…

Aaron’s art from 1979. Source: Cohen Website

Ok, admit it! Aaron is a MUCH better painter than Ai-Da! Aaron plays with forms and variation, while Ai-Da makes something like a street artist’s sketch. Ai-Da simply maps contrast to edges, then turns the edges into a bunch of lines, similar to what neurons do in the lower levels of the visual brain. See this article for some recent work on neural “edge detection” in brains.

Aaron, in later incarnations, even mixed its own paint! And that was with computers 1/1000 the power of those used today.

Cohen also made sure that people understood Aaron was code, and that he was exploring how much of art was reducible to algorithms. Each artistic “style” was coded by Cohen, then Aaron would crank out an infinite number of variations using that style.

However, there is a LOT of originality in Aaron’s variations. Cohen’s own commentary on the web (site now showing signs of abandonment) may be found at this link.

This is a serious exploration of art’s scope and meaning, with algorithmic art treated as both medium and product.

Aaron image from 1992, after it was reprogrammed in LISP, which improved its color choices. Source: Cohen Website

In my mind, this indeed demonstrates that some of the ‘imaginative’ part of “the creative professions” can be automated – you can truly create an “intelligence amplifier”, even for art.

And Aaron has hardly been alone. Over the years, there have been dozens of “art bots”, like this one from 2011. Created by Benjamin Grosser, it used ambient sounds to adjust the images it painted:

Interactive robotic painting machine Source: vimeo

A good resource for 1990s computer-created art may be found at The Algorists, which seriously treats the idea of algorithmic art. For the 2010s, check an even more recent article on Newatlas.

Now, compare this “automated painting” to Ai-Da. The “art” Ai-Da draws is clearly more primitive than ANY of these historical art-bots, and just looks like neural edge detection. It’s INFERIOR to the past, and more image classification than “art”.

Line drawing by Ai-Da. Source: Futurism. Incredibly, the author of this piece failed to mention that Futurism has already covered numerous “robot artists”, all more interesting than Ai-Da!

Ai-Da has created other images, termed “shattered light”, which are abstract rather than figurative. However, the “shattered light” images actually up for sale (to suckers) at the gallery are generated from a different algorithm. They aren’t drawn by the robot arm, but are printed. Then, a human artist colorizes them so they look khuuuuul…

Ai-Da with shattered light painting by creator
The actual images going on sale, termed “shattered light”. No mention anywhere how these images were created (apparently nobody cares), but we do know that a human repainted over the print. Source: Oman Times

At least Aaron mixes his own paints!

The fascinating part, as always, is not the technology, but the emergent robot narrative coupled with the insane, uncritical media worship of this parlour trick by an art gallery, eager to seize the zeitgeist to generate $$$ (I salute Aidan Meller for this creative insight).

Why throw shade on poor Ai-Da? First, Ai-Da is not being represented as what it actually is, which is a modest advance in computer vision. Instead, the creators claim they’re raising deep and philosophical questions about the meaning of what it is to be human.

Hey… these deep conversations have been happening for 40 years with the MORE ADVANCED art-bots, and there are vastly more interesting and critical discussions available at the intersection of creativity, algorithms, and science if you bother to look…

But you wouldn’t know it from the media!

Practically none of the hundreds of slobber-stories about Ai-Da mention that there are other, superior art-bots out there. And nobody therefore has to grapple with past robo-painters doing a better job than Ai-Da with inferior hardware.

Instead, in our modern world, the public discussion isn’t about creativity and programming. Now it’s personal. We are told that we have “suddenly” created an artistic robot who is a “performance artist” and is selling her art in a gallery. No discussion of method, coding, or the actual humanness of the robot – just pretty pictures of a female electric puppet in a fancy home with a painting smock.

Apparently, it’s enough to reinforce “the female robots are among us, COOL!” story.

Even the critical articles, like this one on Artnet, seem completely ignorant of the past. Naomi Rea attacked Ai-Da as anti-female, but missed the forest for the trees – she didn’t even mention Aaron or other artbots – breathtaking anti-historical thinking.

My guess is her “creepy white men” comments were just standard, intoned, wokster piety, tacked onto the end of a poorly researched article.

The a-historical aspect of this robo-worship is breathtaking. Why, on the Futurism blog there are older stories about art-bots! But the author (Victor Tangerman??) of the Ai-Da story doesn’t even mention them, and just parrots the Reuters news release. Possibly we should replace “parrot” with “robot” so we don’t insult birds.

My guess is that Futurism is willfully ignorant of the past, and sees no reason to research the extraordinary claim of robotic art. Instead, it trumpets that this pile of parts is a “new Picasso”. Yeech.

Ironically, Futurism’s “related articles” (which are the result of a pattern-matching algorithm), DO mention earlier art-bots. Score one for the machines!

Ai-Da has very little to do with art, and everything to do with this strange 2010s desire to “believe” that robots are about to appear among us, typically presenting in sexy female form. A few humdrum references to “we must think deep thoughts about robots” always appear, but really, it’s about nerd sex with plastic, not the potential for machine art.

In a recent “news conference” Ai-Da, like the similar Sophia robot, “spoke” to the press. Like Sophia, Ai-Da was pre-loaded with answers by a human operator. In other words, someone remotely operated the robot to give it the apparent ability to speak.

BUT DON’T WORRY – it will have its own voice, soon, you say?

Consider how strange this attitude is…

Before automobiles, people didn’t have a passion to make fake cars and pretend that they actually worked. People rarely pretended they had working airplanes before the first planes flew. They certainly didn’t show a fake then say “believe in it NOW, because it will work ‘soon’…”

Why robots?

I suspect you might have gotten a similar “DON’T WORRY” out of an Egyptian priest who spent his days talking through a temple statue to give it a voice. Watch it, humble farmer – someday, the god might just speak through this statue!

Here’s an excerpt from an 1899 (yup) newspaper article describing how ancient Egyptian statues were designed to be “spoken through” by priests, and also how the statues had joints and valves to make them move:

” …M. Gaston Maspero, the well-known French Egyptologist, has recently written an interesting article on the “speaking statues” of ancient Egypt. He says that the statues of some of the gods were made of jointed parts and were supposed to communicate with the faithful by speech, signs, and other movements. They were made of wood, painted or gilded. Their hands could be raised and lowered and their heads moved, but it is not known whether their feet could be put in motion.

When one of the faithful asked for advice their god answered, either by signs or words.

Occasionally long speeches were made, and at other times the answer was simply an inclination of the head. Every temple had priests, whose special duty was to assist the statues to make these communications.

The priests did not make any mystery of their part in the proceedings. It was believed that the priests were intermediary between the gods and mortals, and the priests themselves had a very exalted idea of their calling…”

Source: Los Angeles Herald, Number 187, 5 April 1899, via the California Digital Newspaper Collection at the UCR Center for Bibliographical Studies and Research. Note: I corrected the clumsy OCR of the robot translator on this website.

If you’re wondering how ancient Egyptians could have possibly listened to a statue puppeted by a priest without laughing, consider that the following image was called “incredibly lifelike” by multiple media outlets, echoing the art dealer’s press release, without question or comment.

No, it is NOT “hyper-realistic” Source: CreativeBoom

No, this is only slightly improved from a robotic fortune-teller, something which used to be common at theme parks. “Ai-Da” remains deep in the uncanny valley. Only a generation raised on seeing videogames as “realistic” could think of this as “realistic”.

Image of mechanical fortune-telling machine Esmeralda, a Disney model taken from much older fortune-telling theme park robots. Source: Fine Art America

The Ai-Da puppet show does indeed capture, as the creator of Ai-Da desired, the “zeitgeist”. We really, really, really, really want to create robots, but we don’t understand how to do it. We’re stuck on the fast advance of digital computing and “accelerating change”, which seems to require that robots exist now. We resolutely ignore the 40-year history of robot artists. We ahistorically assume this must be the first time.

Mask of high priest in Egypt? Possibly one of those who puppeted the Egyptian robots… Source: Sotheby’s Catalog

But…there aren’t any robots like the ones we insist upon. So, we set up an electric puppet to fill the void, holding steady our devout faith until the Second Coming of the Machines.

In practical robots, this hopeful puppetry is masking the failure of so-called “driverless car” initiatives.

In all cases to date, “driverless cars” actually have a human operator behind the scenes, monitoring and guiding the car past anything beyond cruise-control complexity, typically a few times in every mile. Essentially, glorified forms of cruise control allow a single driver to work as a cabbie in multiple vehicles. If you want the job, Designated Driver is hiring!

While “driverless cars” have some self-control, they are corrected every half-mile or so by a human operator, or whenever people get tired of waiting for the robot’s incredibly slow progress. Here’s a modern high priest operating one of these puppets. You can apply for a job doing this at Designated Driver. Source: Futurism

Will the public catch on that most AI out there is just a puppet show similar to temple-tricks played thousands of years ago? I’m not holding my breath – people do need religion in their lives, and a religion of godlike robots that want sex with nerdy mortals seems just right for the 2010s.

Meanwhile, Harold Cohen, the creator of Aaron, died in April 2016. His passing went unrecognized by the “robotic” tech-future-utopian media. No love for him, or for his sexless but vastly superior art-bot.

Featured

Kissing Empty Air

Recently, the hubbub over imaginary CGI robots has reached new heights. While real humanoid robots look pretty inhuman, the media more and more acts like a 3D game character is exactly the same as a 3D “physical meatspace” robot.

Lately, the excitement in our gynoid era has shifted to fake female-presenting lips smacking together under the direction of C++ code. In one story, a nonexistent “robot” was shown kissing a model. In another case, two lumps of plastic clacked together for a “kiss”.

First, the robot duo “kiss”. A while ago (2009), horribly inhuman, easily defended against:

The second case is more troubling.

Our 2019 “robot kiss” features Calvin Klein apologizing after they released a video showing model Bella Hadid apparently kissing Lil Miquela, a blob of software and code and pixels – in other words, somebody’s digital art working as a corporate shill.

Lil Miquela, corporate shill (image)

As Wikipedia reports:

Miquela is an Instagram model and music artist claiming to be from Downey, California.

The project began in 2016 as an Instagram profile. By April 2018, the account had amassed more than a million followers by portraying the lifestyle of an Instagram it-girl over social media. The account also details a fictional narrative which presents Miquela as a sentient robot in conflict with other digital projects.

In August 2017, Miquela released her first single, “Not Mine”. Her pivot into music has been compared to virtual musicians Gorillaz and Hatsune Miku.

Obviously, Miquela did not “release a single”. Miquela does not exist. Some people recorded an album and “presented” their music along with a bunch of digital character art. It’s people putting on digital masks.

Miquela is NOT a robot. The “sentient robot” is part of the story for the imaginary character. At best, we are looking at a purely digital puppet, with no internal mind whatsoever (not even “deep learning”). It is a product of puppetmasters manipulating images for marketing in social media.

To repeat, there is no physical Miquela. No robot you could visit, no plastic and metal, just a computer screen.

Outrage began this month when Calvin Klein made a video with model Bella Hadid kissing empty air in front of a greenscreen. In post-production, digital 3D modeling overlaid a Lil Miquela 3D model, and the result was apparently two women kissing.

The tech was no more sophisticated than any “game character” created and rigged in Maya and deployed in Unity or Unreal Engine. While there was an image of the kiss, there was no kiss.

To repeat, Hadid kissed empty air, or some guy dressed up in green motion capture clothing. 

Green-screen motion capture (image)

(great kiss, Bella!)

Were people upset that the event didn’t happen? Nope, their response was a widespread, stunning and cheerful acceptance of the image as a physical reality. The discussion proceeded from there.

First, though, there were immediate complaints from the LGBTQ community, which caused CK to apologize. Hadid is straight.

 

True, the pixels she pretended to kiss were “presenting” as female. Or, the people behind the digital image, drawing and manipulating it in software were “presenting” as female. OK, typical identity wokster flareup, with some justification in my opinion.

Also, with 3D modeling software we now have an easy way of creating something that normally would have taken a good portrait artist a couple of weeks in the old days. It’s not hard to create a 3D digital portrait of an imaginary human. And, if you’re sending still images to Instagram, you can cover up the fact that you’re – just making pictures.

Buttt… people are calling Lil Miquela a “robot”, as if there were walking humanoid robots that look like this. Apparently, these “reporters” are so dumb that they don’t realize that there is no physical robot – just people uploading 3D modeled images.

Now, images have been used to both define and attack identity for a long time, so no problem with seeing this as a bit of queer-baiting by a company that deserved to have its marketers called out.

But the bigger point is truly crazy. I scoured all the news stories, and in all of them, Lil Miquela was called a robot. There was no correction to “digital 3D image”. Every press story called the kiss a kiss between a robot and a human.

Nearly every media outlet called it a human kissing a physical, humanoid robot walking confidently and gazing on a human before lip-locking. But there was no kiss. There was no robot.

Not a single major story discussed the reality – a model kissed air, and then had their image added to an animated movie.

And, if you check Google Search, the search is “lil miquela robot”, not “lil miquela digital art character”. Clearly, the audience for these tech stories, wokster-outraged or not, thinks they saw a physical robot.

It’s stunning. I have to believe that the majority of the tech media unthinkingly accepted that this was a physical, humanoid robot kissing a physical model. They must actually believe that it’s possible to build a robot that works like this – and that it can be easily rented out to the fashion industry. Originally, Instagram followers couldn’t figure out if she was human (eek!). So, their go-to when Lil Miquela turns out not to be human is that it must be a robot presenting as female.

Where is the “fact-checking” that media is supposed to do? Why the echo chamber in dozens of articles calling a work of digital art a humanoid robot – when in fact, we can’t really make good ones, and the ones we have are incredibly hard to create? People so badly want to believe that we have given birth to artificial humans.

One is forced to conclude that tech media has begun to believe its own science fiction.

Power for Robot Brains

Robots use a lot of power compared to a human. At first thought, this might seem obvious – after all, can’t they fly and crash through walls? But the real story is that humanoid robots are delicate – more like a 95-year-old stepping very carefully so they don’t fall and break their hip. Their problems are sensory, cognitive, and motor. Compared to a human, they break very easily.

This isn’t really surprising. Most robots are “toy” demonstrations for a grad student thesis, or a way to wow investors to pour in money. As demos, they only have to work for a short amount of time. However, creating Robots That Jump means making resilient robot bodies that don’t need constant maintenance. Their “minds” also have to be resilient – able to adapt to new and novel stimuli. But time, space, and money have mostly prevented humanoid robots from becoming robust enough to use in the real world.

To solve these problems, engineers apply “technology.” But the “technology” is not as new or constantly changing as some think. Many of the hardware and software solutions to robotics are decades old. The engineers aren’t developing novel systems; instead, they make systems that use more power.

As the push to create the future of robots continues, brute power is substituting for an understanding of how to make agile, robust robot bodies. Rising power use in robots runs straight into issues of sustainability and climate change. Like the so-called “metaverse,” these are energy-hungry solutions to problems that have mostly been solved, but which activate a business model.

I’ll start with robot brains, since this is easier to discuss. Frankly, I don’t understand the problems with building a robust robot body that doesn’t break a hip when falling. This really requires a discussion by a mechanical engineer. I’ll just note that the human body uses a few hundred watts of power, tops. Humanoid robots that walk use thousands to tens of thousands of watts just to walk, and need heavy but apparently fragile bodies mounting those big flammable lithium batteries. More powerful devices (e.g. the failed DARPA robot “Mule”) used gasoline engines to get the necessary, sustained power – while the human body is silent in motion. So there’s a problem, but I won’t go further in this post.

On the other hand, I do know something about coding. I was team leader for one of the DARPA 2005 driverless car entries, and tried to program said vehicle. We didn’t get that far, but I am more of a “Subject Matter Expert” here. So, I’ll discuss robot brains and power, specifically referencing two articles about “neural-net” based AI.

To begin with, a general-purpose robot that could do and learn a variety of tasks would need the type of adaptable brain that an animal has. And it is true that AIs using newer “deep learning” algorithms and multiple layers in their neural nets have gotten better at pattern recognition – lots better, in fact. Face recognition, so long as you’re not dark-skinned, is pretty good. The technology is pretty well standardized.

Deep neural network and training set – https://www.researchgate.net/figure/Layers-and-their-abstraction-in-deep-learning-Image-recognition-as-measured-by-ImageNet_fig17_32653165

So, if the robot with an embedded or networked AI can recognize faces, is this a stepping-stone to general recognition of the environment?

Nope.

The problem with “neural net” systems that are trained by “deep learning” is that they are very specific. You can use a neural net plus deep learning to train a system to differentiate, say, shoes from similarly-shaped objects. You’ll get somewhat useful results – typically 80% with a moderate training set. With larger training sets, you might get to 90%. All is good, right?

The problem is that, in order to push up the accuracy, or make fine discriminations (e.g. women’s versus men’s shoes), you’ll need a bigger network and training set. And the training will consume a lot more power. As you push for 100% accuracy of recognition, the cost goes up in a nonlinear fashion. This makes very high accuracy (e.g. 99%) almost unobtainable, even for large outfits like Google. The problems were detailed in the following article:

Deep Learning’s Computational Cost

https://spectrum.ieee.org/deep-learning-computational-cost
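
To picture the shape of that cost curve, here is a toy Python illustration. The power-law exponent is invented purely to show the nonlinearity; it is not taken from the article or from any measured system.

```python
# Toy diminishing-returns curve: assume error shrinks as a weak power of
# compute (the exponent is made up for illustration), then see how much
# relative compute each extra bit of accuracy demands.
def compute_needed(target_accuracy: float, p: float = 0.2) -> float:
    """Hypothetical model: error ~ compute**(-p), solved for compute."""
    error = 1.0 - target_accuracy
    return error ** (-1.0 / p)

for acc in (0.80, 0.90, 0.95, 0.99):
    print(f"{acc:.0%} accuracy -> relative compute ~ {compute_needed(acc):,.0f}")
```

In this made-up model, going from 80% to 99% accuracy multiplies the compute bill by millions; the exact numbers are fiction, but the shape is the article’s point.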

A basic feature of modern neural networks is that they need to be “trained.” Most non-experts imagine the “AI” as a rules-based system. But the programmers create a randomly initialized neural net, then apply a “learning set” to set the weights in the network. The training cycles take a huge amount of power, and more and more power is needed to reduce error. Training the network to, say, 95% accuracy takes the power of a large city for a month (!)
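
For readers who haven’t seen one, here is a minimal PyTorch sketch of what that training process looks like – random starting weights, repeated passes over a labeled “learning set,” a small weight adjustment per batch. The data below is fake stand-in noise; a real system grinds through vastly more of it, which is exactly where the power goes.

```python
# Minimal sketch of "training": the network starts with random weights, and
# repeated passes over a labeled set nudge the weights until predictions
# improve. Every pass costs compute, and therefore electricity.
import torch
import torch.nn as nn

model = nn.Sequential(                 # weights are random at creation
    nn.Linear(784, 256), nn.ReLU(),
    nn.Linear(256, 10),
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Fake stand-in data; a real training set would be enormously larger.
images = torch.randn(1024, 784)
labels = torch.randint(0, 10, (1024,))

for epoch in range(10):                # more epochs: more accuracy, more power
    for start in range(0, len(images), 64):
        x, y = images[start:start + 64], labels[start:start + 64]
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)    # how wrong the current weights are
        loss.backward()                # compute gradients
        optimizer.step()               # adjust the weights slightly
```

The energy bill scales with how many of those inner-loop steps you run and how big the network is, which is why pushing accuracy higher gets so expensive.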

This article discusses actual energy use by real deep learning systems, rather than the extrapolation above: It takes a lot of energy for machines to learn – here’s why AI is so power-hungry (theconversation.com)

Some energy measurement tools for the operation of the AI (not the training stage, which is where the energy listed above is used): AI industry, obsessed with speed, is loathe to consider the energy cost in latest MLPerf benchmark | ZDNet

Think of it. To train an AI system to recognize objects at 95% accuracy, you need the power of the city of New York for one month. If you went to 99%, you might need the power the entire United States consumes in one month.

This problem is well-known in the actual AI industry (as opposed to the marketers and shills looking for investment) as “Red AI.” A recent discussion of the problem can be found in the paper below:

Green AI (2019), Roy Schwartz, Jesse Dodge, Noah A. Smith, Oren Etzioni

1907.10597.pdf (arxiv.org)

A great quote from the abstract summarizes the problem:

“…The computations required for deep learning research have been doubling every few months, resulting in an estimated 300,000x increase from 2012 to 2018. These computations have a surprisingly large carbon footprint. Ironically, deep learning was inspired by the human brain, which is remarkably energy efficient. Moreover, the financial cost of the computations can make it difficult for academics, students, and researchers, in particular those from emerging economies, to engage in deep learning research…”

Consider: our advances in “deep learning” have required a 300,000x rise in computation since 2012 – with a correspondingly enormous rise in energy consumption. I guess we’re gonna need more power!

The authors go on to propose a “Green AI” that treats sustainability as a goal in itself, not just raw accuracy and speed. They want grad students to be able to work on AI without having to spend tens of thousands of dollars on – electrical power.

In certain environments this is acceptable. Some pattern-recognizer AIs work well. Face recognition is a good example, since human faces are all pretty similar, and we don’t have to train networks to recognize a variety of non-human faces. True, the networks often only recognize white faces reliably, but mayhaps we can burn a few thousand tons of coal for each, and be done with it (until fashions and makeup change).

So, you might be able to justify burning all that coal to get a good facial recognition system for, say, passenger identification in an airport.

But in natural, uncontrolled environments (like those encountered by a driverless car), you need thousands of trained recognizers, all firing at once, and novel patterns not in the training set produce unpredictable results. Driving gives a good example. Normally, traffic signs are standardized, and it isn’t too hard to recognize a stop sign. But what if the sign is broken, and a hand-drawn temporary sign is present? The system will almost certainly fail.
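
Here is a toy illustration of that failure mode (plain numpy, nothing like a real perception stack): the recognizer has no concept of “I have never seen this,” so a hand-drawn sign simply gets whatever known label is closest.

```python
import numpy as np

# Toy "classifier": each known sign class is a prototype feature vector.
# A real system is a deep net, but the failure mode has the same shape.
prototypes = {
    "stop_sign":   np.array([0.9, 0.1, 0.8]),
    "yield_sign":  np.array([0.2, 0.9, 0.7]),
    "speed_limit": np.array([0.1, 0.2, 0.9]),
}

def classify(features: np.ndarray) -> str:
    # Always returns *some* known label -- there is no "unknown" option.
    return min(prototypes, key=lambda name: np.linalg.norm(features - prototypes[name]))

# A hand-drawn temporary sign produces features unlike anything in training...
hand_drawn_sign = np.array([0.5, 0.5, 0.1])
# ...but the classifier still hands back its nearest guess as if nothing were wrong.
print(classify(hand_drawn_sign))
```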

Basically, this is an unsolvable problem with our current technology. In theory, the robotics industry could grab half the world’s power for 5 years and get better recognition. But it won’t happen. It’s going to be a huge challenge to shift from fossil fuels to renewables in the next few decades, and nobody is going to support burning a massive amount of coal to improve robot vision.

Even if we did this, we would need huge, expensive 5G networks. You can’t run the full neural net onboard – too big. Instead, the robo-car needs to be connected to “the cloud” at very low latency and high bandwidth. So, burn megatons of coal to build out 5G.

So, we can’t build good driverless cars with current “deep learning” neural nets. We’d burn through power that should be diverted to, say, creating renewable energy. The power requirements of robots are that big. We can’t supply the power of multiple cities just to make AI-based shopping better!

Instead, robot-pushers push two ideas. The first is infrastructure. If we can’t build pattern-recognizer AI that works in the real world – well, change the real world! Make streets and crosswalks perfectly regular. Paint every crosswalk and lane divider in high contrast. Put sensors or IoT devices into the road. Make every building more square. Basically make the world look like the 3D game simulations often proposed for training driverless cars:

From: “Learning from Demonstrations Using Signal Temporal Logic,” presented at the Conference on Robot Learning (CoRL), Nov. 18.

This gets ridiculous. Before long, pedestrians who don’t want to be run over will be required to wear “robot friendly” clothing – loud colors, clear delineation of arms and legs, ultrasound reflectors. We will all need a Minecraft costume so the robots don’t get confused. Don’t worry; it’s FUN.

This is the way the world SHOULD look! – from https://www.aitrends.com/ai-insider/autonomous-cars-and-minecraft-have-this-in-common/

Basically, this so-called “solution” requires that people and the environment become more “machine-like” – so we can have “the future” of the promised robot world. But the real world is quite messy:


But the cost, and the “embedded energy,” needed to change the physical world would be huge beyond huge – the current USA infrastructure bill would barely start the process, even though it is in the trillions of dollars.

In short, we would have to expend massive amounts of energy just to make the world safe for robots. We won’t do that – we’ll use the energy to build windmills and nuke plants instead.

Remember…the whole idea of “technology” is that it is supposed to be faster, cheaper, better than human work – and more efficient. This is exactly the opposite.

The second idea for fixing the “robot brain power problem” is even more insidious. Put on a headset, and pretend your company has created a “self-driving” car. In reality, just fire some people and make a smaller number of people monitor the robots, fixing their mistakes, like this poor slob…

Wow, only have to pay 1 driver, instead of 9!

Currently, most self-driving car experiments have an extremely bored, presumably minimum-wage human sitting in the driver’s seat, supposedly ready to react when the robot fucks up. Now, this won’t sell – you won’t convince people the car is “self-driving” if a bored human comes along with your car purchase. These people were supposed to disappear as the robots got better – but it has been close to a decade, and they are still there. Clearly, robot learning has run into limits, energy and otherwise.

So, driverless cars are currently a fail. The promises of 2015 didn’t pan out. What to do? What to tell those investors and tech blogs?

The new proposed solution is to roll out a “5G” network that allows fast, real-time video transmission from the robo-car to a building somewhere with low wages. In that building, a group of incredibly bored, minimum-wage humans watch 9–12 computer screens, each showing a different robo-car. These bored people have to “adjust” the self-driving car’s behavior every few thousand feet or so. They are remote, so people don’t have that annoying human in their “self-driving” car. Yay!

Now, consider this from a system perspective. You don’t have to wear a tinfoil hat to see that low-latency, real-time 5G networks everywhere will require an enormous use of energy and resources. 5G runs at higher frequencies that don’t travel as far – so you need lots of 5G transmitters, several on a single city block. 5G needs bigger and faster towers, backhaul, and central processing servers. Lots more power!
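
A rough back-of-envelope sketch makes the point. None of these figures come from a real deployment; they are my own assumed (but not crazy) numbers, just to show how fast remote video monitoring adds up.

```python
# Back-of-envelope: streaming every "self-driving" car to a remote operator.
# ALL figures below are assumptions for illustration, not measurements.

cars = 1_000_000                 # a modest national robo-taxi fleet
streams_per_car = 4              # multiple camera feeds per vehicle
mbps_per_stream = 5              # low-latency HD video, per stream
joules_per_gigabyte = 0.3e6      # assumed end-to-end network energy (~0.08 kWh/GB)

gbps_total = cars * streams_per_car * mbps_per_stream / 1000
gb_per_second = gbps_total / 8
watts = gb_per_second * joules_per_gigabyte

print(f"Aggregate video: {gbps_total:,.0f} Gbit/s")
print(f"Continuous network power: ~{watts / 1e6:,.0f} MW just to move the video")
```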

In other words, by removing the people, you have swapped in lots of power consumption.

And what for? Basically, you’ve admitted that you can’t remove humans. So you minimize the number of humans – who are basically an affront to your beautiful machines – and give them the nastiest, grungiest jobs possible. The real goal is to eliminate some human drivers (so you don’t have to pay them), and pile the remaining driving risk, responsibility, and attention onto a smaller number of remote tele-operators. You accomplish this by burning a bunch of extra energy.

As a bonus, you convert a skilled driver job into a lowly, minimum-wage, security-guard-type gig. Your benefit is that your company drops the healthcare plans of those “redundant” humans, and you get your holiday bonus. Finally, since the operators are remote, you can pretend that you have actually achieved your goal of human-style artificial intelligence, and can post all those photoshopped images of blissful people doing nothing in their cars – except watching ads, presumably…

The energy equation is a little better for long-haul trucks, but you still need a fast, real-time connection between the robo-trucks and the human lead driver, plus the local onboard super-cruise-control AI.

But…there’s already a system that allows a bunch of cargo containers to “follow the leader” perfectly with just one person in charge.

It’s called a train.

Consider how dumb robo-trucks are from a “green” perspective. US freight rail moves a ton of cargo roughly 470 miles on a gallon of fuel; a truck manages something closer to 130 ton-miles per gallon. So you burn several times as much fuel to move the same load by truck.
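
Using those commonly cited ballpark figures (treat them as approximations, not precise data), the comparison is easy to run:

```python
# Ballpark fuel comparison for moving freight. Figures are commonly cited
# approximations (rail ~470 ton-miles/gallon, truck ~130), not precise data.
RAIL_TON_MILES_PER_GAL = 470
TRUCK_TON_MILES_PER_GAL = 130

tons, miles = 25, 1000           # roughly one truckload, on a long leg

rail_gallons = tons * miles / RAIL_TON_MILES_PER_GAL
truck_gallons = tons * miles / TRUCK_TON_MILES_PER_GAL

print(f"Rail:  {rail_gallons:.0f} gallons")
print(f"Truck: {truck_gallons:.0f} gallons ({truck_gallons / rail_gallons:.1f}x more fuel)")
```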

But having robo-trucks go in a “convoy” behind a human driver just makes a train-like system with none of the advantages of trains. They’re following the lead driver using a big pile of computers and networks; a train just uses a passive coupler that doesn’t need power. And the supposed flexibility is lost. If the system were truly flexible, the robo-trucks would be able to drive themselves to their final destinations after exiting the convoy. But that kind of driving is exactly what robot cars can’t do. So, you link in a bored remote human operator – again.

Now, the original selling point of trucks is “flexibility” – individual cargos can be routed in real time to stores, something trains can’t match. Sure, this makes some sense, especially back in the day when oil sold at an inflation-adjusted price of $4/gallon and fuel wasn’t a big cost. But in a world trying to reduce fossil fuel consumption, it doesn’t wash – it makes sense to pick the less energy-intensive system.

But what about “electrifying” the trucks? Well, batteries have low energy density. If you give the trucks the range of current diesel trucks, the battery eats a huge chunk of your payload. So, your robo-convoy will either have to stop frequently to recharge, or carry much less stuff. You could just electrify the trains!
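
A rough energy-density sketch shows why. All the figures below are approximate, round numbers I’ve assumed for illustration (diesel around 12 kWh/kg burned at maybe 40% efficiency, lithium-ion packs around 0.25 kWh/kg), not specs from any actual truck.

```python
# Rough sketch: battery mass needed to replace a long-haul diesel fuel load.
# All figures are approximate, round numbers for illustration only.
diesel_kg = 500                    # a big long-haul fuel load
diesel_kwh_per_kg = 12.0           # energy content of diesel
engine_efficiency = 0.40           # diesel engine, optimistic
battery_kwh_per_kg = 0.25          # lithium-ion pack, optimistic
motor_efficiency = 0.90            # electric drivetrain

useful_kwh = diesel_kg * diesel_kwh_per_kg * engine_efficiency
battery_kg = useful_kwh / (battery_kwh_per_kg * motor_efficiency)

print(f"Useful energy on board: {useful_kwh:,.0f} kWh")
print(f"Battery mass for the same range: ~{battery_kg / 1000:.1f} tonnes")
```

Call it ten-plus tonnes of battery, against a legal gross vehicle weight in the rough range of 36–40 tonnes. That is a huge bite out of the cargo.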

Let’s sum up how robots and power interact in our supposed “future”:

  1. Use the energy needed to power large cities for months to train neural nets
  2. Fire some people doing a boring, but not incredibly boring, job
  3. Train a minimum-wage, out-of-sight “gig worker” to do the incredibly boring job of fixing the robots’ mistakes.
  4. Raise risk, reduce quality of service
  5. Profit!
  6. Executive bonus

To summarize, the push for humanoid robots and “self-driving” vehicles will require gigantic amounts of energy, either for monster neural-net training runs or for reworking infrastructure into Minecraft. That huge amount of energy would be taken away from making our system more sustainable. The increased energy use of a robot world will make getting to sustainability harder. And the result of partial robot-ization – the only kind we can do today – will be job destruction, employees pushed into a smaller number of crappier jobs, and more fragile service. Think of a phone tree versus getting a human support person. Think of a human walking down the street, versus a robot trying not to fall, plus its human “assistor” constantly adjusting it so it doesn’t fall.

This is the reality of the robot future currently being sold.

Robots Encourage Human Risk-Taking

There’s been a lot of excitement about robots and artificial intelligence in recent years. Some of it is pretty irrational – humanoid robots becoming the equal of humans. Some, more rational, concerns various service robots, as well as “artificial intelligences” taking over tasks like driving cars and airplanes.

The 1950s and 1960s spawned two views of future computing – computers as “mind” and computers as human “intelligence amplifiers.”

Now that we are actually getting some functional AI systems, we’re also beginning to learn about the consequences which have nothing to do with their capabilities, but with human perception of their capabilities. As I’ve discussed many times before here, people tend to see any mimicry of human behavior by a machine as ‘proof’ there is a fully human, conscious, choosing entity behind the scenes. They then extrapolate this and begin treating the machine as if it had qualities it doesn’t have (like a mind).

Case in point: people interacting with robots immediately apply a “human mind in silicon” model to the robot’s actions. Making the robot “friendly” or “humanoid” just encourages this. And it leads to people assuming the robot is taking charge – so they take risks. In other words, their own work deteriorates as they imagine the robot is picking up the slack.

‘The robot made me do it’: Robots encourage risk-taking behaviour in people (spacedaily.com)

The research, published in the journal Cyberpsychology, Behavior, and Social Networking, involved 180 undergraduate students taking the Balloon Analogue Risk Task (BART), a computer assessment that asks participants to press the spacebar on a keyboard to inflate a balloon displayed on the screen. For some sessions, a robot was present, providing encouraging statements to keep pumping.

The results showed that the group encouraged by the robot took more risks, blowing up their balloons significantly more than the other groups did. “Popping” a virtual balloon caused the control groups (no robot, or a silent robot) to scale back their pumping – but in the presence of the encouraging robot, test subjects continued to pump, even when the balloons routinely popped.
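
The BART setup is easy to caricature in code. The pop probability below is made up (it is not the experiment’s actual parameter); the toy model just shows why “keep pumping” is a real risk decision: average winnings rise for a while, then collapse as the accumulated pop risk takes over.

```python
import random

# Toy Balloon Analogue Risk Task: NOT the real experimental parameters.
# Each pump adds 1 point, but also a chance the balloon pops and you get 0.
POP_PROB_PER_PUMP = 0.05

def play(pumps: int) -> int:
    for _ in range(pumps):
        if random.random() < POP_PROB_PER_PUMP:
            return 0            # balloon popped, the round is worth nothing
    return pumps                # banked one point per pump

def average_payoff(pumps: int, trials: int = 20_000) -> float:
    return sum(play(pumps) for _ in range(trials)) / trials

for pumps in (5, 10, 20, 40, 80):
    print(f"{pumps:>2} pumps -> average payoff {average_payoff(pumps):.1f}")
```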

This is a great example of how people map humanlike qualities to objects, provided the objects provide cues that they are human. The students mapped a “mind” onto the code creating the robot’s speech, and further took that speech as evidence they should continue pumping.

Dr. Yaniv Hanoch, Associate Professor in Risk Management at the University of Southampton, who led the study, noted that the robot apparently exerted peer pressure on the students, similar to that of an actual human egging them on. However, he also saw a silver lining.

“On the one hand, our results might raise alarms about the prospect of robots causing harm by increasing risky behavior. On the other hand, our data points to the possibility of using robots and AI in preventive programs, such as anti-smoking campaigns in schools, and with hard to reach populations, such as addicts.”

There’s a clear moral hazard here. My guess is that a sign with a picture of a person telling you what to do creates peer pressure – think of Uncle Sam:

However, with this image, along with statues and other obviously inanimate human representations, the viewer almost certainly weighs their response based on the fact that this is clearly something created by humans, not a human. In the case of the robot, this is less certain. There’s a widespread belief, encouraged by science fiction and so-called ‘science’ writers, that Artificial Intelligence is close to creating human minds, or even superhuman minds. If a person interacts with a robot and maps their response to ‘human’ or ‘superhuman,’ they may be more likely to follow along.

This in turn means that the behind-the-scenes actors can use robot puppets to push their goals in a way superior to old media. The robot is more than an abstract ‘brand representative’ – it is seen as a person.

Near-future society will have a more powerful method of nudging its citizens: robot spokesholes substituting for graphic design. But, of course, society may not have the best interests of its citizens in mind.

This can’t be good…

Robot Skin and Computational Overload

There’s a long history of announcements from the robotics community claiming that “robot skin” has been created. Mostly, these have been unserious, since the huge computational load of managing skin sensation is not part of the story. A few historical examples:

From 2019, this robot skin has “millions of sensors.” Great, but what processes the data from those millions of touch and temperature sensors? You’d need millions of computers to handle the sensor data and integrate it with, say, a deep learning algorithm.

https://newatlas.com/smart-robot-skin/55853/
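
To see why “millions of sensors” is not a free lunch, here is a back-of-envelope data-rate calculation. The sampling rate and sample size are my assumptions, not figures from the article.

```python
# Back-of-envelope: raw data rate from a "millions of sensors" robot skin.
# Sensor count, sampling rate, and sample size are assumptions, not specs.
sensors = 2_000_000          # "millions" of touch/temperature sensors
samples_per_second = 100     # modest tactile update rate
bytes_per_sample = 2         # 16-bit reading

bytes_per_sec = sensors * samples_per_second * bytes_per_sample
print(f"Raw stream: {bytes_per_sec / 1e6:.0f} MB/s, before any interpretation")
# Moving the numbers is the easy part -- actually *interpreting* touch
# (texture, slip, pressure maps) is where the real compute cost lives.
```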

A close-up of why this “skin” is so sensitive, and why the “computation density” of the skin would overwhelm even a giant network of thousands of computers.

https://www.zdnet.com/article/hairy-artificial-skin-gives-robots-a-sense-of-touch/

Here’s an earlier one from 2010:

Here, printed circuit boards are used to sense touch on the robot hand

https://cacm.acm.org/news/87060-robots-with-skin-enter-our-touchy-feely-world/fulltext

Cool, but no ability to process – in other words, handling even the limited number of sensors (dozens instead of millions) is not part of the design.

A “robot skin” image from 2006

https://www.newscientist.com/article/dn7849-electronic-skin-to-give-robots-human-like-touch/

And even earlier. Sensors that would work, but extracting meaningful information from touch was – and is – beyond robots.

It’s possible to go back further (skin has been a hot robot topic for decades), but the result is basically the same: there have been a series of announcements of “robot skin” in the tech media, typically putting together a pile of sensors in some plastic matrix. While the sensors are real, the wiring up of the sensors is not addressed, and more importantly, the ability to process data from the sensors is not considered – since no computer at present can do the processing. Actual robots out there work with a very small number of sensors to make decisions.

A great example: the Boeing 737 Max. Its software relied on a SINGLE “angle of attack” sensor to decide whether the plane was heading into a stall. Even with just one sensor, the software designers couldn’t handle the “edge” cases, leading to two crashes that killed 346 people.

737 Max, where only one AoA (Angle of Attack) sensor is driving the robot “autopilot.” Even military planes only have 4 or so.

https://www.cnn.com/2019/04/30/politics/boeing-sensor-737-max-faa/index.html
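
This is obviously not Boeing’s flight-control code; it is just a minimal sketch of the standard aerospace fix, redundant sensors with voting, which a single-AoA design gives up.

```python
from statistics import median

def fused_angle_of_attack(readings: list[float]) -> float:
    """Median-vote across redundant sensors: one failed or stuck sensor
    cannot drag the fused value off on its own."""
    if len(readings) < 3:
        # With one or two sensors you cannot tell which one is lying.
        raise ValueError("need at least 3 sensors to out-vote a bad one")
    return median(readings)

# One sensor stuck at a nonsense value is simply out-voted:
print(fused_angle_of_attack([4.8, 5.1, 74.5]))   # -> 5.1
```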

So, our current “robots” use few non-vision/sound sensors. However, good tactile sensation is exactly what Robots that Jump need to interact with the environment robustly.

This is the typical “process control” engineering solution: a single sensor, or a very small group of sensors, reports data. For simple things, this is fine – if the water boils, it is time to turn off the tea kettle. However, for robotic interaction with a real-world environment, it isn’t enough. Time and time again, robots have been built with sensors inadequate to navigate their environment if small changes are made.

Contrast this with a simple creature like a flatworm. Its body is far less complex than ours, but it is saturated with sensory neurons…

This image shows that the entire body is full of nerve cells, many of which are sensory.

The sensor complexity of this simple creature easily exceeds that of the more advanced “robot skin”. Furthermore, complex nerve nets appeared in the simplest of animals.

Compared to living things, robots show a huge undersupply of sensation. Many in the field have rightly tried to design “skin” – but the overall robot falls into the trap of needing incredibly elaborate processing – something that simple animals don’t have or need to have. Clearly, something’s amiss.

The most recent descriptions of touchy-feely robots point to “greater sensory density than human skin.” That by itself isn’t meaningful – just having more sensors doesn’t help. You have to intelligently respond to the sensation that density enables. Now, nerve tissue is expensive to maintain, so animals don’t have high density because it’s cool – it’s needed. That in turn implies that the high sensory density of animal skin has meaning.

The most recent entry into “sensitive skin” takes a step backwards, and imagines a few hundred sensors (compared to the millions in some robot skin designs).

A robot with flexible “skin,” with sensors that are quite large, but closer to manageable. People have thousands of sensors per square inch!

https://www.fastcompany.com/90416395/robots-have-skin-now-this-is-fine

The sensory equipment of this “advanced” robot is large. Its sensor density is probably below that of the flatworm above – probably closer to a tiny cheese mite:

This incredibly tiny creature has a sensor count approaching that of our big, “intelligent” robot. The brain processing sensation so the mite can move and respond to the world is literally microscopic.

https://imgur.com/gallery/6twAyQk

Still, this new flat, hex-y sensor is a bit better. As the researchers say, it might prevent a robot from actually crushing you during a so-called “hug”.

Finally, it is still better than Google’s own “sense” of tactile robots. When you run a Google search, the “sensitive skin” robots are lost between (1) sex dolls, and (2) the “Sophia” electric puppet. Ironically, the sexbots are designed to feel creepy-rubbery to their equally rubbery owners. And Sophia doesn’t sense anything on its own gynoid rubber, despite the thing apparently giving talks about “gender” in some countries. Here, we see Sophia’s single-sensor design in context:

Not one bit of touch on this thing, and “sensation” is some smartphone tech. Awesome!

I vote for the cheese mite. Sophia looks very 737 Max.