Robots That Jump

Robot Bodies Needed Before Robot Minds

Atlas Deepfake

Well, well, people are so desperate to have robots that they’re willing to propagate phony videos of the Boston Dynamics humanoid in action.

This worked great for Corridor Digital, a Los Angeles VFX house, which wanted to parody some of the real videos, including the one where Atlas is being “taunted”. A great job of motion capture plus blending in a robot body.

Corridor Digital Video Site: https://www.youtube.com/user/CorridorDigital

Corridor Digital is doing a great parody of the BD madness. But the real fun comes when you visit tech blogs discussing the fake (I wonder how many of them were initially duped) that use the parody to encode pious preaching about how the “robot uprising” will be much deadlier than the video… The proof? The VFX looks a little like real Atlas videos.

Boston Dynamics Videos: https://www.youtube.com/user/BostonDynamics

To their credit, BD actually linked the Corridor video on their own YouTube channel. All in all, some great shared digital publicity.

But the media appeared caught in a 5-year-old’s understanding of both videos…

Gizmodo erupted in a crazed slobber of pseudo-news where, despite the parody, the author takes it as “truth” and preaches to us that the robots will rise up and destroy us, in the best religious-fantasy tradition – https://gizmodo.com/that-viral-video-of-a-robot-uprising-is-fake-because-th-1835575686.

In fact, the deepfake appears to have made the author think it is more likely that the robots will rise. CGI “proves” something is real!

The Verge is slightly more sensible, and uses the parody to open a discussion about how people feel empathy for things that don’t have a mind, if they act a certain way – https://www.theverge.com/tldr/2019/6/17/18681682/boston-dynamics-robot-uprising-parody-video-cgi-fake

The real problem is that people will see mind and consciousness where there is none, and act accordingly…

“(From the Verge) As MIT researcher and robot ethicist Kate Darling puts it: ‘We’re biologically hardwired to project intent and life onto any movement in our physical space that seems autonomous to us. So people will treat all sorts of robots like they’re alive.’”

Most of the coverage by “lower” tech blogs deleted the fantastic parts of the parody, dropped the quality (so the CGI was harder to see), and simply let people believe an angry robot was breaking out of its cage.

Of course, our “new media” need clickbait, and as always, it is best to distribute religious texts. The techno-singularian vision of the future has become more than a cult, and is in fact a replacement for traditional religion among techies. Deepfakes like this are OK because they are “truthy” – they could be true, so we believe!

This is part of a larger problem for our society. The rise of CGI has made people “believe” that anything that can be 3D modeled “could be real”. This is why companies like Facebook and Uber churn out bullshit images of “air cars” that tech media and groupies unthinkingly accept as “just around the corner”.

I suspect the writers don’t understand that Uber can endlessly create these CGI videos to look trendy, and rake in gains in stock price. Actually making this helicopter (that’s what it is) would be difficult and dangerous. Better to make a phony video, then say it could come true “just around the corner”.

So be worried – not about the BD robot, but about the millions of craven pixel-pushers desperate for a god to worship and (human) sacrifice for…

Chickens run around with their heads cut off, and the BD robot is on its way to being a decapitated chicken in several years. Fascinating that said chicken is touted as our destruction.


Another Puppet Show, Featuring a Gynoid Robot Being a “Creative Artist”

Well, the latest in the bright future of robots is here, and it is a “creative artist”, Ai-Da, a gynoid robot whose “works” have actually been sold to idiot buyers for a total of more than $1 million.

Two images of the Ai-Da robot head.
Aidan Meller showcases Ai-Da. Source: Metro

Quoting Devdiscourse (India’s media loves humanoid robots),

“…Described as “the world’s first ultra-realistic AI humanoid robot artist”, Ai-Da opens her first solo exhibition of eight drawings, 20 paintings, four sculptures and two video works next week, bringing ‘a new voice’ to the art world, her British inventor and gallery owner Aidan Meller says. “The technological voice is the important one to focus on because it affects everybody,” he told Reuters at a preview…”

The big electric puppet, created by the 46-year-old art dealer, can’t walk or move around. But that doesn’t stop the flood of PR images of Ai-Da thinking pensively of her future, self-consciously echoing the scene from HBO’s “Westworld” where the lead robot, “Dolores”, wakes up.

A pensive pile of plastic – the Ai-Da robot with downcast cameras.
Ai-Da on nonfunctional legs, evoking an HBO series.
Source: MSN

Yes, the same people, Engineered Arts, designed and built both the Ai-Da and the HBO movie-bot bodies. Ai-Da was given legs to make her look more like the movie robots.

Evan Rachel Wood as 'Dolores' in HBO 'Westworld' Scene
From “Westworld”, a scene with actress Evan Rachel Wood as the robot “Dolores”. Later in the scene, Dolores sits up, exactly like the Ai-Da image above. Source: Daily Kos (big surprise these dumbass “progressives” are anti-historically suckered into this worn-out discussion)

There are several interesting features of the Ai-Da machine itself. First, the cameras for “drawing from sight” are actually in the artificial eyes (though I’m surprised there isn’t an open-on-demand third eye), and the drawing arm does exhibit fine motor control for drawing on a canvas. Mechanical plotters have been doing this since the 1940s, but having it in an articulated hand is interesting.

Reminds me of a Fortune-Telling Machine I saw somewhere

The algorithm used is also interesting – it breaks the image up into a bunch of short line segments (as some neurons in primary visual cortex may do) and can reproduce your face with said lines. It is neat, though hardly useful, robot-wise, when you can just take a high-resolution digital picture.
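Here’s roughly what that pipeline looks like in code – a guess at the general technique (Ai-Da’s actual code is unpublished), using OpenCV’s Canny edge detector to map contrast to edges and a probabilistic Hough transform to approximate them as short strokes. The file name and tuning numbers are arbitrary assumptions:

```python
import cv2
import numpy as np

def image_to_strokes(path, min_len=10, max_gap=3):
    """Approximate a photo as short line segments, plotter-style."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(gray, 50, 150)            # contrast -> edge map
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=20,
                            minLineLength=min_len, maxLineGap=max_gap)
    # Each (x1, y1, x2, y2) is one pen stroke a robot arm could draw.
    return [] if lines is None else [tuple(l[0]) for l in lines]

strokes = image_to_strokes("face.jpg")          # hypothetical input image
print(len(strokes), "strokes")
```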

Interesting, though I seem to remember seeing stuff like this 40 years ago!
Source: Daily Mirror (click for video)

But…wait! This isn’t the first “robot artist”. Some may remember Aaron from waayyyy back in 1973, an Ai program created by (human) artist Harold Cohen.

Aaron was programmed in C (later in LISP) on computers running at 1/500 the speed, and with 1/100,000 the storage capacity, of current art-bots like Ai-Da. Still, of the two art-bots, it seems the more “creative”…

Aaron’s art from 1979. Source: Cohen Website

Ok, admit it! Aaron is a MUCH better painter than Ai-Da! Aaron plays with forms and variation, while Ai-Da makes something like a street artist’s sketch. Ai-Da simply maps contrast to edges, then edges to a bunch of lines, similar to what neurons do in the lower levels of the brain. See this article for some recent work on neural “edge detection” in brains.

Aaron, in later incarnations, even mixed its own paint! And that was with computers 1/1000 the power of those used today.

Cohen also made sure that people understood Aaron was code, and that he was exploring how much of art was reducible to algorithms. Each artistic “style” was coded by Cohen, then Aaron would crank out an infinite number of variations on that style.
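To make that concrete, here’s a toy version of the approach (my sketch, not Cohen’s code): the “style” is a hand-coded, parameterized drawing procedure, and the “variations” fall out of random sampling within ranges the artist chose.

```python
import math
import random

def closed_figure(rng, cx, cy):
    """One hand-coded 'style': a wobbly closed outline around (cx, cy)."""
    n = rng.randint(8, 14)                 # vertex count varies per figure
    points = []
    for i in range(n):
        angle = 2 * math.pi * i / n
        radius = rng.uniform(20, 60)       # the wobble IS the variation
        points.append((cx + radius * math.cos(angle),
                       cy + radius * math.sin(angle)))
    return points

rng = random.Random(1979)
# One coded style, endless variations: no two figures alike.
figures = [closed_figure(rng, rng.uniform(0, 400), rng.uniform(0, 400))
           for _ in range(10)]
print(len(figures), "figures from a single coded style")
```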

However, there is a LOT of originality in Aaron’s variations. Cohen’s own commentary on the web (site now showing signs of abandonment) may be found at this link.

This is a serious exploration of art’s scope and meaning, with algorithmic art treated as both medium and product.

Aaron image from 1992, after it was reprogrammed in LISP, which improved its color choices. Source: Cohen Website

In my mind, this indeed demonstrates that some of the ‘imaginative’ part of “the creative professions” can be automated – you can truly create an “intelligence amplifier”, even for art.

And Aaron has hardly been alone. Over the years, there have been dozens of “art bots”, like this one from 2011. Created by Benjamin Grosser, it used ambient sounds to adjust the images it painted:

Interactive robotic painting machine Source: vimeo

A good resource for 1990s computer-created art may be found at The Algorists, which seriously treats the idea of algorithmic art. For the 2010s, check an even more recent article on Newatlas.

Now, compare this “automated painting” to Ai-Da. The “art” Ai-Da draws is clearly more primitive than ANY of these historical art-bots, and just looks like neural edge detection. It’s INFERIOR to the past, and more image classification than “art”.

Line drawing by Ai-Da. Source: Futurism. Incredibly, the author of this piece failed to mention that Futurism has already covered numerous “robot artists”, all more interesting than Ai-Da!

Ai-Da has created other images, termed “shattered light”, which are abstract rather than figurative. However, the “shattered light” images actually up for sale (to suckers) at the gallery are generated by a different algorithm. They aren’t drawn by the robot arm, but are printed. Then, a human artist colorizes them so they look khuuuuul…

Ai-Da with a “shattered light” painting and her creator.
The actual images going on sale, termed “shattered light”. No mention anywhere of how these images were created (apparently nobody cares), but we do know that a human repainted over the print. Source: Oman Times

At least Aaron mixes his own paints!

The fascinating part, as always, is not the technology, but the emergent robot narrative coupled with the insane, uncritical media worship of this parlour trick by an art gallery eager to seize the zeitgeist and generate $$$ (I salute Aidan Meller for this creative insight).

Why throw shade on poor Ai-Da? First, Ai-Da is not being represented as what it seems to be, which is an advance in computer vision. Instead, the creators claim they’re raising deep and philosophical questions about the meaning of what it is to be human.

Hey… these deep conversations have been happening for 40 years with the MORE ADVANCED art-bots, and there are vastly more interesting and critical discussions available at the intersection of creativity, algorithms, and science if you bother to look…

But you wouldn’t know it from the media!

Practically none of the hundreds of slobber-stories about Ai-Da mention that there are other, superior art-bots out there. So nobody has to grapple with past robo-painters doing a better job than Ai-Da with inferior hardware.

Instead, in our modern world, the discussion isn’t about creativity and programming. Now, it’s personal. We are told that we have “suddenly” created an artistic robot who is a “performance artist” and is selling her art in a gallery. No discussion of method, coding, or the actual humanness of the robot – just pretty pictures of a female electric puppet in a fancy home with a painting smock.

Apparently, it’s enough to reinforce “the female robots are among us, COOL!” story.

Even the critical articles, like this one on Artnet, seem completely ignorant of the past. Naomi Rea attacked Ai-Da as anti-female, but missed the forest for the trees – she didn’t even mention Aaron or other art-bots – breathtaking anti-historical thinking.

My guess is her “creepy white men” comments were just standard, intoned, wokster piety, tacked onto the end of a poorly researched article.

The a-historical aspect of this robo-worship is breathtaking. Why, on the Futurism blog there are older stories about art-bots! But the author (Victor Tangerman??) of the Ai-Da story doesn’t even mention them, and just parrots the Reuters news release. Possibly we should replace “parrot” with “robot” so we don’t insult birds.

My guess is that Futurism is willfully ignorant of the past, and sees no reason to research the extraordinary claim of robotic art. Instead, it trumpets that this pile of parts is a “new Picasso”. Yeech.

Ironically, Futurism’s “related articles” (which are the result of a pattern-matching algorithm), DO mention earlier art-bots. Score one for the machines!

Ai-Da has very little to do with art, and everything to do with this strange 2010s desire to “believe” that robots are about to appear among us, typically presenting in sexy female form. A few humdrum references to “we must think deep thoughts about robots” always appear, but really, it’s about nerd sex with plastic, not the potential for machine art.

In a recent “news conference” Ai-Da, like the similar Sophia robot, “spoke” to the press. Like Sophia, Ai-Da was pre-loaded with answers by a human operator. In other words, someone remotely operated the robot to give it the apparent ability to speak.

BUT DON’T WORRY – it will have its own voice, soon, you say?

Consider how strange this attitude is…

Before automobiles, people didn’t have a passion to make fake cars and pretend that they actually worked. People rarely pretended they had working airplanes before the first planes flew. They certainly didn’t show a fake and then say “believe in it NOW, because it will work ‘soon’…”

Why robots?

I suspect you might have gotten a similar “DON’T WORRY” out of an Egyptian priest who spent his days talking through a temple statue to give it a voice. Watch it, humble farmer – someday, the god might just speak through this statue!

Here’s a transcript from an 1899 (yup) lecture describing how ancient Egyptian statues were designed to be “spoken through” by priests, and also how the statues had joints and valves to make them move:

” …M. Gaston Maspero, the well-known French Egyptologist, has recently written an interesting article on the “speaking statues” of ancient Egypt. He says that the statues of some of the gods were made of jointed parts and were supposed to communicate with the faithful by speech, signs, and other movements. They were made of wood, painted or gilded. Their hands could be raised and lowered and their heads moved, but it is not known whether their feet could be put in motion.

When one of the faithful asked for advice their god answered, either by signs or words.

Occasionally long speeches were made, and at other times the answer was simply an inclination of the head. Every temple had priests, whose special duty was to assist the statues to make these communications.

The priests did not make any mystery of their part in the proceedings. It was believed that the priests were intermediaries between the gods and mortals, and the priests themselves had a very exalted idea of their calling…”

Source: Los Angeles Herald, Volume 602, Number 187, 5 April 1899, via the California Digital Newspaper Collection at the UCR Center for Bibliographical Studies and Research. Note: I corrected the clumsy OCR of the robot translator on this website.

If you’re wondering how ancient Egyptians could possibly have listened to a statue puppeted by a priest without laughing, consider that the following image was called “incredibly lifelike” by multiple media outlets, echoing the art dealer’s press release, without question or comment.

No, it is NOT “hyper-realistic”. Source: CreativeBoom

No, this is only slightly improved from a robotic fortune-teller, something which used to be common at theme parks. Ai-Da remains deep in the uncanny valley. Only a generation raised on seeing videogames as “realistic” could think of this as “realistic”.

Image of mechanical fortune-telling machine Esmeralda, a Disney model taken from much older fortune-telling theme park robots. Source: Fine Art America

The Ai-Da puppet show does indeed capture, as the creator of Ai-Da desired, the “zeitgeist”. We really, really, really, really want to create robots, but we don’t understand how to do it. We’re stuck on the fast advance of digital computing and “accelerating change”, which seems to require that robots exist now. We resolutely ignore the 40-year-old history of robot artists. We ahistorically assume this must be the first time.

Mask of high priest in Egypt? Possibly one of those who puppeted the Egyptian robots… Source: Sotheby’s Catalog

But…there aren’t any robots like the ones we insist upon. So, we set up an electric puppet to fill the void, holding steady our devout faith until the Second Coming of the Machines.

In practical robots, this hopeful puppetry is masking the failure of so-called “driverless car” initiatives.

In all cases to date, “driverless cars” actually have a human operator behind the scenes, monitoring and guiding the car past anything beyond cruise-control complexity, typically a few times in every mile. Essentially, glorified forms of cruise control allow a single driver to work as a cabbie in multiple vehicles. If you want the job, Designated Driver is hiring!

While “driverless cars” have some self-control, they are corrected every half-mile or so by a human operator, or whenever people get tired of waiting for the robot’s incredibly slow progress. Here’s a modern high priest operating one of these puppets. You can apply for a job doing this at Designated Driver. Source: Futurism

Will the public catch on that most Ai out there is just a puppet show similar to temple-tricks played thousands of years ago? I’m not holding my breath – people do need religion in their lives, and a religion of godlike robots that want sex with nerdy mortals seems just right for the 2010s.

Meanwhile, Harold Cohen, the creator of Aaron, died in April 2016, his passing unrecognized by the “robotic” tech-future-utopian media. No love for him, or for his sexless but vastly superior art-bot.

Kissing Empty Air

Recently, the hubbub over imaginary CGI robots has reached new heights. While real humanoid robots look pretty inhuman, the media more and more acts like a 3D game character is exactly the same as a 3D “physical meatspace” robot.

Lately, the excitement in our gynoid era has shifted to false female-presenting lips smacking together under the command of authoritarian C++ code. In one story, a nonexistent “robot” was shown kissing a model. In another case, two lumps of plastic clacked together for a “kiss”.

First, the robot duo “kiss” – from a while ago (2009), horribly inhuman, and easily defended against:

The second case is more troubling.

Our 2019 “robot kiss” features Calvin Klein apologizing after it released a video showing model Bella Hadid apparently kissing Lil Miquela, a blob of software and code and pixels – in other words, somebody’s digital art working as a corporate shill.


As Wikipedia reports:

Miquela is an Instagram model and music artist claiming to be from Downey, California.

The project began in 2016 as an Instagram profile. By April 2018, the account had amassed more than a million followers by portraying the lifestyle of an Instagram it-girl over social media. The account also details a fictional narrative which presents Miquela as a sentient robot in conflict with other digital projects.

In August 2017, Miquela released her first single, “Not Mine”. Her pivot into music has been compared to virtual musicians Gorillaz and Hatsune Miku.

Obviously, Miquela did not “release a single”. Miquela does not exist. Some people recorded an album and “presented” their music along with a bunch of digital character art. It’s people putting on digital masks.

Miquela is NOT a robot. The “sentient robot” is part of the story for the imaginary character. At best, we are looking at a purely digital puppet, with no internal mind whatsoever (not even “deep learning”). It is a product of puppetmasters manipulating images for marketing in social media.

To repeat, there is no physical Miquela. No robot you could visit, no plastic and metal, just a computer screen.

Outrage began this month when Calvin Klein made a video with model Bella Hadid kissing empty air in front of a greenscreen. In post-production, 3D digital modeling overlaid a Lil Miquela model, and the result was apparently two women kissing.

The tech was no more sophisticated than any “game character” created and rigged in Maya and deployed in Unity or Unreal Engine. While there was an image of the kiss, there was no kiss.

To repeat, Hadid kissed empty air, or some guy dressed up in green motion-capture clothing.


(great kiss, Bella!)

Were people upset that the event didn’t happen? Nope, their response was a widespread, stunning and cheerful acceptance of the image as a physical reality. The discussion proceeded from there.

First, though, there were immediate complaints from the LGBTQ community that caused CK to apologize. Hadid is straight.


True, the pixels she pretended to kiss were “presenting” as female. Or, the people behind the digital image, drawing and manipulating it in software were “presenting” as female. OK, typical identity wokster flareup, with some justification in my opinion.

Also, with 3D modeling software we now have an easy way of creating something that would have taken a good portrait artist a couple of weeks in the old days. It’s not hard to create a 3D digital portrait of an imaginary human. And, if you’re sending still images to Instagram, you can hide the fact that you’re just making pictures.

Buttt……….People are calling Lil Miquela a “robot”, as if there are walking humanoid robots that look like this. Apparently, these “reporters” are so dumb that they don’t realize that there is no physical robot – just people uploading 3D-modeled images.

Now, images have been used to both define and attack identity for a long time, so there’s no problem with seeing this as a bit of queer-baiting by a company that deserved to have its marketers called out.

But the bigger point is truly crazy. I scoured all the news stories, and in all of them, Lil Miquela was called a robot. There was no correction to “digital 3D image”. Every press story called the kiss a kiss between a robot and a human.

Nearly every media outlet called it a human kissing a physical, humanoid robot walking confidently and gazing on a human before lip-locking. But there was no kiss. There was no robot.

Not a single major story discussed the reality – a model kissed air, and then had her image added to an animated movie.

And, if you check Google Search, the search is “lil miquela robot”, not “lil miquela digital art character”. Clearly, the audience for these tech stories, wokster-outraged or not, thinks they saw a physical robot.

It’s stunning. I have to believe that the majority of the tech media unthinkingly accepted that this was a physical, humanoid robot kissing a physical model. They must actually believe that it’s possible to build a robot that works like this – and that it can be easily rented out to the fashion industry. Originally, Instagram followers couldn’t figure out if she was human (eek!). So, their go-to when Lil Miquela turns out not to be human is that it must be a robot presenting as female.

Where is the “fact-checking” that media is supposed to do? Why the echo chamber of dozens of articles calling a work of digital art a humanoid robot – when in fact, we can’t really make good ones, and the ones we have are incredibly hard to create? People so badly want to believe that we have given birth to artificial humans.

One is forced to conclude that tech media has begun to believe its own science fiction.

What Real Robots Look Like

Despite all the biped fantasy circulating on the Internet these days, real “robots” – industrial robots, which have evolved for 200 years from the first Jacquard looms – are gaining ground. In the 20th century, this kind of automation led to increased productivity per worker. In the 21st, it may be leading to worker devaluation:

https://www.reuters.com/article/us-amazon-com-automation-exclusive/exclusive-amazon-rolls-out-machines-that-pack-orders-and-replace-jobs-idUSKCN1SJ0X1 

This video goes a bit more into the specific automation:

https://www.reuters.tv/v/Pwlk/2019/05/14/amazon-quietly-rolls-out-robots-to-pack-orders


The implication here is that for the near-term, humanoid robots are stage acts which form the PR wing of the robotization of society. Apparently, Amazon is pushing workers to switch jobs from packing to driving deliveries instead, paying $10,000 compensation if they do so:

http://time.com/5588141/amazon-employees-delivery/

Video: http://time.com/e1f756da-f4cb-44b9-ac27-244a16ccf5e9


This kind of robot rise is the exact sort predicted long ago by Marshall Brain (author of the great howstuffworks.com website) in his “Robot Nation” series, which dates from the early 2000s (despite the blog date). The Amazon driver, switched from warehouse to traffic and slavishly following the orders of their GPS, almost perfectly matches Brain’s vision here.

They don’t have to jump to be significant.


Irascible Robots (not ‘fighting back’)

Robots That Jump are designed to interact in the real world, ignoring the silly fantasy of creating ‘mind’ in a machine in order to interact. The removal of that phantasm leads to better-operating devices. And there’s no better illustration of this than the recent Boston Dynamics video of a robot trying to go through a door, hindered by a human.

In the video, you can see the robot opening a door, oddly by using its combined head/arm to push it open and move through. The human does several things to stop it – including changing the door position and dragging the robot away. After each attempt (which might be compared to a child tugging on a dog trying to drink) the robot returns to its course of action.

From the perspective of Robots That Jump, this is awesome. BD has integrated one of the basic features of life – irascible behavior. This goes all the way back to Aristotle’s, Augustine’s and Thomas Aquinas’s idea of the properties of life, hardcore link here. Basically, ‘dumb animals’ – those with no elaborate ‘mind’ to understand the future – can have a passion, or emotion, that drives them to their goal against obstacles. You see this in the simplest forms of life – an energy to continue existing or accomplish a goal. Suppose an earthworm lands on a sidewalk. Its initial desire is to get back into the ground. However, the ground is concrete. A ‘rational’ worm, or a typical robot, would just stop at that point. However, the worm wiggles wildly and in ways that are not normal for it, continually trying to do something that gets it closer to its goal. Sometimes a thrash and a roll might land it back in the grass, and the worm returns to its wormy world.

Note that no ‘mind’ is needed for this – just an initial drive to accomplish something, plus unexpected resistance, which is met with (creative?) exploration of ways around the obstacle. This is one reason emotions are believed to have evolved. Emotions don’t provide a computed solution – they drive an animal to accomplish things when it runs into resistance. Nor is this exclusive to animals – plants react irascibly to rocks and sidewalks in the way of their roots, exploring around until they run into a solution. In that case, there’s not only no mind, there’s no brain. None needed for the plant, or a Robot That Jumps.
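For the programming-minded, the whole idea fits in a few lines. A minimal sketch under my own assumptions (this is not Boston Dynamics code): no model of the obstacle, no planning – just a goal, a drive, and energetic random exploration whenever progress stalls.

```python
import random

def blocked(x):
    """Stand-in for a sensor reading: an obstacle spans 5 <= x <= 6."""
    return 5.0 <= x <= 6.0

def irascible_seek(position, goal, step=1.0, max_tries=1000):
    """Head for the goal; when blocked, thrash randomly instead of quitting."""
    rng = random.Random(42)
    for _ in range(max_tries):
        if abs(goal - position) < step:
            return position                      # close enough: home
        trial = position + step * (1 if goal > position else -1)
        if not blocked(trial):
            position = trial                     # normal progress
        else:
            # The "irascible" phase: no reasoning, just energetic
            # exploration until an unblocked spot turns up.
            jiggle = position + rng.uniform(-2 * step, 2 * step)
            if not blocked(jiggle):
                position = jiggle
    return position

print(irascible_seek(position=0.0, goal=10.0))   # wiggles past the obstacle
```

The thrashing isn’t clever, and that’s the point – persistence plus noise gets past obstacles that would stall a purely ‘rational’ planner with no drive.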

However, the pop-culture vision of robots as godlets who will overthrow us or have a “message” to give us (see the miserable “robot citizen” Sophia) can’t see this robot as doing something robust and lifelike. Instead, they imagine mind…a mind storing up all the grievances of insults received, for a later reckoning with humanity.

The science-as-magic media (including so-called “science journalists” who, for example, never bother to ask if it is even possible for Elon Musk to physically move a million people to Mars by 2062) portray the Boston Dynamics robots as having conscious mind, unjustly attacked and thwarted. Additionally, they see a grudge developing which will be paid back.

This is crazy.

Does a tree get mad and plan to “get even” when we put concrete over some of its roots? No. Mind and payback are the province of very advanced social animals, where remembering benefit and harm from other members of your social species is useful. Most animals and plants don’t bother with it at all. But they do have irascible behavior.

However, a robot carrying a grudge fits into the techno-religion that has replaced classic religion in the supposedly secular world of the tech hipster. It’s a belief system that sees us rushing to Apocalypse, with us tech-sinners incapable of properly worshiping the gods we have created – for which they will either punish us (a.k.a. Skynet) or replace us (a.k.a. Singularity). To be taken seriously, tech-prophets like Musk and Hawking must warn us of the anger of the gods, and demand our repentance (I guess that means buying more iPhones faster). We must eat of their flesh (“embedded computing”) to preserve some particle of ourselves in their awe-ful and righteous power.

Sheesh.

This is just an earthworm trying to get home.

Too bad there wasn’t a beheading…

Even Forbes, which should know better, has its lame reporters reporting on the “mind” of Sophia, the first “robot citizen” of Saudi Arabia.

https://www.forbes.com/sites/zarastone/2017/11/07/everything-you-need-to-know-about-sophia-the-worlds-first-robot-citizen/#29420af346fa

This big electric puppet, with some small “ai” features useful in tricking children into thinking it is alive, is claimed to have mind and emotions.

After characterizing the robot as a publicity stunt, the article then moves on to beauty contest questions, e.g. “world peace”.

In practice, there isn’t any Ai on earth that could have reacted intelligently to the questions aimed at the robot during press conferences. Instead, the robot uses ELIZA-style reflection to appear intelligent (“what makes you think I am thinking about that”) along with canned answers that are supposed to seem wise. Apparently, the conservatives in Saudi Arabia don’t mind an idol-esque oracle being set up to answer our “deepest questions”. And since oracles were often blown out on drugs or gas seeping through cave walls, mayhaps that makes sense.
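For reference, ELIZA-style reflection is almost embarrassingly simple. Here’s a toy version in the spirit of Weizenbaum’s 1966 original – an illustration of the general trick, not Sophia’s actual, unpublished code: swap pronouns and hand the user’s own words back as a question.

```python
import re

# Pronoun swaps for reflecting the user's words back at them.
REFLECTIONS = {"i": "you", "me": "you", "my": "your",
               "am": "are", "you": "I", "your": "my"}

def reflect(phrase):
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in phrase.split())

def respond(user_input):
    match = re.match(r"i (?:think|feel|believe) (.*)", user_input, re.I)
    if match:                              # mirror the user's own words
        return f"What makes you think {reflect(match.group(1))}?"
    return "Tell me more."                 # canned fallback "wisdom"

print(respond("I think you hate me"))
# -> What makes you think I hate you?
```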

https://en.wikipedia.org/wiki/Sophia_(robot)

Business Insider has done a few articles that point out the clumsy voice recognition. But it makes a fascinating claim:

Sophia’s capacity for displaying emotion is still limited. It can show happiness — sort of.

Sophia fakes it


I put this image in because, to the robot (presumably a couple of Linux boxes), the nature of “happiness” has been mapped directly to a collection of actuator movements. That, in turn, implies that human happiness is simply a collection of muscles firing in a way that stretches the face. The amazing thing is that there’s no concern about subjective experience – nobody wonders if Sophia actually feels “happy” or “sad” internally. In fact, the robot makes a face when something (e.g. a keyword) causes a sub-program to execute.
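A guess at the flavor of that mapping (the keywords and servo values below are invented, not Hanson Robotics’ actual code): “emotion” reduced to a keyword lookup that plays back canned servo positions.

```python
# "Emotion" as a lookup table: (brow, mouth corners, eyelids) in degrees.
EXPRESSIONS = {
    "happy": (10, 35, 5),
    "sad":   (-15, -20, -10),
}

def react(utterance):
    """Fire the canned facial sub-program if its keyword appears."""
    for emotion, servo_angles in EXPRESSIONS.items():
        if emotion in utterance.lower():
            return emotion, servo_angles   # a face, not a feeling
    return "neutral", (0, 0, 0)

print(react("Are you happy today, Sophia?"))
# -> ('happy', (10, 35, 5))
```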

Whatever human emotion is, it is unlikely to be a von Neumann machine ratcheting through alternative subprograms. The brain’s structure is so different from a computer’s that it is nearly impossible that this is true.

In Ai, however, analogy is often used to refute this argument. Flight, for example, can be achieved by bird or insect wings in completely different ways. So, the argument goes, emotions can be created in similarly analogous ways.

The problem is that nobody has adequately defined what “happy” or “sad” are in the first place, so that we could construct analogous ways of achieving them. The argument that Sophia is “happy” is the same as saying a corpse with its face stitched into a smile is “happy” if we can make that smile by attaching electrodes to the muscles – from the Wikipedia entry:

Giovanni Aldini (Luigi’s nephew) performed a famous public demonstration of the electro-stimulation technique of deceased limbs on the corpse of an executed criminal George Foster at Newgate in London in 1803.[5][6] The Newgate Calendar describes what happened when the galvanic process was used on the body:

On the first application of the process to the face, the jaws of the deceased criminal began to quiver, and the adjoining muscles were horribly contorted, and one eye was actually opened. In the subsequent part of the process the right hand was raised and clenched, and the legs and thighs were set in motion.[7]

I doubt a modern electrophysiologist would conclude that a recently dead body was “happy” if its muscles twitch. But replace that flesh with plastic, and the electrical stimulus with a command out the serial port, and it is “feeling”.

Hoping that Hanson Robotics studies Behaviorist theory before making more silly claims about puppets.


Finally, a ROBOT THAT JUMPS!

For 15 years, I’ve been writing about the (incorrect) direction of much of robotics, along with the fakery which implies robots are more than they are. The alternative I’ve pushed is building an agile body before the “Ai” or “mind” popular with almost everyone. In my opinion, a mindless body that is physically agile – one that can reproduce the things even simple animals do on a daily basis – is vastly more useful than a so-called “intelligent” robot.

Well now, Boston Dynamics, which has followed the “body first, then mind” approach, has finally come through. The newest version of the Atlas robot can jump and even do backflips in what looks to be an agile way:

The results are impressive. This is a huge, heavy metal robot showing agility. Small motions of the arms are being used to stabilize, which implies a pretty complex control system. Though I am guessing this is pretty staged (change the box positions and the robot falls), it is a candidate for a true Robot That Jumps.

This follows on other robots designed to imitate living things, in pursuit of agile behavior.

While these robots are finally meaningful (in terms of being real robots instead of fakey electric puppets), they are far from practical. The biggest issue is power supply – they use, IMHO, 20-100x as much power as a comparable animal body to move. Electric battery density simply isn’t high enough for a human-sized robot to remain powered for more than a few minutes at a time. At present, you would probably need nuclear power to run these robots for reasonable operations – or big diesel engines.
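The back-of-envelope arithmetic is simple: runtime is battery energy divided by average draw. The numbers below are my own illustrative assumptions, not published Atlas specs:

```python
battery_kwh = 1.5          # assumed onboard battery pack, kWh
robot_draw_kw = 5.0        # assumed average draw for dynamic walking, kW
human_draw_kw = 0.1        # a walking human burns roughly 100 W

runtime_min = battery_kwh / robot_draw_kw * 60
print(f"Runtime on one charge: {runtime_min:.0f} minutes")             # 18
print(f"Power ratio vs. human: {robot_draw_kw / human_draw_kw:.0f}x")  # 50x
```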

The other issue is muscles. Robots using electric motors aren’t very agile. Those using pneumatics can achieve lifelike motion, but it is impractical to power them standalone. I’m guessing the gasoline engines on the early BD robots supplied compression for the leg actuators – NOISY. These are not quiet, like they would be if a human did the same motions. The holy grail – elastic plastic and contractile muscle tissue – is a long way off. You see visions of such muscles in movies like Ghost in the Shell, but they don’t actually exist.

https://vitalybulgarov.com/ghost-in-the-shell/

Ghost in the Shell - major shelling sequence

The closest we have is something like this combined electro (heating?) hydraulic muscle:

https://www.sciencedaily.com/releases/2017/09/170919091000.htm

It took 30 years for walking robots to learn to jump; good muscles may easily take that long.

I’m guessing another 50 before something like Ghost in the Shell could be built.

However, a mental tipping point might have been reached. With the debut of the BD robot, people may be less accepting of the stupid fake robots peddled at SF conventions as somehow “real” or “the future”. Mayhaps we’ll start ignoring the “mind without body” robot, typified by the creepy sexist machines popular in the media:


In a word…this is a big fake puppet implying human abilities that don’t exist. And this machine can’t jump.


Nope, they’re NOT coming to get us…

Great shot of the Boston Dynamics robot falling offstage, after milling around confusedly at the back of the stage. Does wall = curtain?

http://www.dailymail.co.uk/sciencetech/article-4769298/Boston-Dynamic-s-humanoid-robot-Atlas-takes-tumble.html

Note the real problem – a body without any motivation in its head. Also, notice the pseudo-child reaction of the audience when the creators pick their bot up and have it lift something. This pretended to be similar to the experience of a child learning to walk, when it was more like a puppet falling off a table during an earthquake.

The video points out the deceitful way that people currently discuss robots – lots of “I’m scared”, coupled with a desperate desire to see their creature rise off the slab. It’s a bit like tech movies (like Jurassic Park) that warn us not to “tamper with nature”, while giving money shot after money shot of the most awesome nature-tampering one can imagine.

At present, humanoid robots demonstrate they are still electric puppets, for all the agile advances (comparable to a bug walking, possibly). Yet they still fuel dreams of techno-religion as “man creates its god”. How different is this from sculpting an idol with a place for the priests to talk through?

88 Years of Show Robots

The typical way most people have experienced a “robot” is at a trade show or other meeting. The showbot’s purpose is to perform as mechanical entertainment, always remote-controlled for more humanlike operation. The end result is a public that thinks humanoid Robots That Jump are far closer to reality than they really are.

My best example in the past has been Elektro, a showbot developed for the 1939 World’s Fair.

Elektro at the 1939 World’s Fair answers questions

While this robot has a few interesting senses (for example, it can detect a lit cigarette and smoke it), it is essentially a remote-controlled metal puppet with a few automatic routines.


But now, the granddaddy of all showbots (if you don’t count the imaginary Boilerplate), Eric, a circa-1928 robot, is about to be re-created.
Eric, a robot built in the UK during the 1920s

http://www.npr.org/sections/alltechconsidered/2016/05/16/478223173/london-museum-hopes-to-reboot-eric-britains-first-robot

The thing that’s cool about Eric is that there is some real mechanical agility in its behavior. Unlike the relatively stiff Elektro, Eric can move in a vaguely human way (though he can’t walk). He was operated remotely via a wireless connection, but could also respond to humans uttering numbers if they spoke carefully. In other words, an early version of speech recognition! From the Cybernetic Zoo:

It appears that Richards deployed two methods of control. One was the use of remote wireless where a hidden person was able to answer the questions asked…

Later, the creators even considered putting in light-sensitive cells to act as ‘eyes’ – which was part of Elektro in the next decade.

Another cool thing about Eric is that he has “R.U.R.” on his chest – short for Rossum’s Universal Robots. This in turn comes from the 1921 stage play by Czech writer Karel Čapek which has the best early example of the notion that (1) We will create robots, (2) They will become superior to us, and (3) They will overthrow us and establish a new robot Eden. Isaac Asimov clearly borrowed this concept for his positronic robots, with a company called US Robots and Mechanical Men featuring prominently in his I, Robot series.

Now, in reality, R.U.R. robots were actually “constructed” humanoids with some machine parts mixed with biology. A cell-culture vat or 3D printer (like the classic 5th Element scene) was used to create them. So, they weren’t cold steel, at least in their later versions. In the play, they look more like genetic clones, ultimately develop advanced human emotions, and pass the final test for robo-worthiness to succeed humanity.

But everyone at the time seemed to understand devices like Eric as an early version of the R.U.R. androids. The Ridley Scott movie Blade Runner (taken in turn from Philip K. Dick’s Do Androids Dream of Electric Sheep?) features these kinds of “manufactured” humanlike robots, as does his Alien series (the skin coloring on the androids is similar to R.U.R. play makeup).

And, if you look at the matte behind Elektro, you see a goddess(?) at the upper right, wearing a clear plastic dress strangely similar to one worn by Zhora the android in Blade Runner:


(check in about 30 seconds)

The appearance of R.U.R. on Eric shows how widespread the belief was that robots were “just around the corner” nearly a century ago. Today, with major advances in technology, robots are still “just around the corner”.

So, have we really progressed? We have sophisticated industrial robots, but they are more like the parts of an animal, or a cell enzyme in complexity. The first industrial robots date from 1801 with the Jacquard Loom...

And some of our robots can walk, which took Honda hundreds of millions of dollars.

Asimo robot walking forward with left palm upraised in friendship

(Interestingly, Honda hasn’t realized that Flash is dead – it still forms the UI on their website.)

And…we have big electric puppets which, if anything, are even less true robots than Eric was. The tech is better for RoboThespian (another U.K. showbot), but the fakery is greater.

Robothespian pretends to be alive

…But…they’re not Robots that Jump.

Like Elektro and Eric, modern robots pretend to think…to keep alive the techno-religion concept that the robot age, first told of in R.U.R., is upon us.


NASA Robot – Tele-Operation for Mars Mission?

Some good photos today of the Valkyrie humanoid robot body, a testbed for developing more agile Robots That Jump. The robot was developed at  NASA’s Johnson Space Center in Houston.

Valkyrie Robot

http://www.roboticstrends.com/article/mit_helping_nasa_build_valkyrie_robots_for_space_missions

The R5 robot is supposed to serve on missions to the planet Mars, and “beyond”. The article describes a future with the robot working autonomously on a base which NASA would establish on Mars years ahead of human astronauts. In other words, the astronauts wouldn’t land in Martian wilderness, instead coming down to a nascent Martian town created by robots of this type.

The intended use is described in NASA’s current plan for Mars, at this link:

https://www.nasa.gov/sites/default/files/atoms/files/journey-to-mars-next-steps-20151008_508.pdf

NASA plans a gradual approach to the planet, which might include astronauts in “transit” vehicles who fly by or orbit Mars, but never reach the surface. In addition, robots will be used to maintain equipment landed on the Martian surface long before the astronauts arrive.

Two questions, though…

  1. If we have robots, why send humans to Mars at all?
  2. If NASA is farming out these robots to universities due to lousy software, do we have any realistic hope of autonomous robots, even in the 2030s timeframe that NASA has set for Mars missions?

The answer to the first question is easy. Maintenance by a humanoid robot would require fine motor control, and no humanoid robot has that currently. NASA entered this robot in the DARPA Robotics Challenge, but it, like the other robots, didn’t fare so well, even in tele-operation rather than autonomous mode. Hence, farming out the robot bodies to schools. While it is possible there will be a breakthrough from these efforts, it is also quite possible that the delicate motor control required for, say, adjusting solar panels or tightening gears would be beyond even a 2030s robot. So, humans will be essential.

Second, the NASA plan shows that what is really being sought is tele-operation. If NASA plans a stage where astronauts orbit Mars without landing (possibly staying at one of the Martian moons), then their likely job will be to make the robots work down on the surface. From Earth, the delays in sending and receiving radio signals prevent tele-operation. But in orbit, the delays would be a fraction of a second, making it practical to control the robots in real time.
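The arithmetic behind that claim is just light-travel time (the orbital distance below is an assumption for illustration, roughly Phobos’ orbital radius):

```python
C_KM_S = 299_792              # speed of light, km/s

earth_mars_km = 54.6e6        # closest approach; can exceed 400e6 km
orbit_km = 9_400              # assumed: roughly Phobos' orbital radius

one_way_min = earth_mars_km / C_KM_S / 60
round_trip_ms = 2 * orbit_km / C_KM_S * 1000
print(f"Earth -> Mars, one way (best case): {one_way_min:.1f} minutes")
print(f"Orbit -> surface, round trip: {round_trip_ms:.0f} ms")
# -> about 3 minutes (up to ~22 at worst) vs. ~63 ms
```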

So, what we really have going to Mars is not a true Robot That Jumps, but instead a robot body, probably controlled via a virtual reality interface with haptics (touch feedback). This, unlike self-governed robots, really seems practical – or at least as practical as a Mars trip in general.

So, the human counterpart to the Mars robot might be a very excited astronaut, as below:

Beardboy has great excitement in his VR world.

Of course, NASA probably won’t allow beards, but you get the picture.