There’s been a lot of excitement about robots and artificial intelligence in recent years. Some of it is pretty irrational – the idea of humanoid robots becoming the equal of humans. Some is more rational – service robots of various kinds, as well as “artificial intelligences” taking over tasks like driving cars and flying airplanes.
The 1950s and 1960s spawned two views of future computing – computers as “minds” and computers as human “intelligence amplifiers.”
Now that we are actually getting some functional AI systems, we’re also beginning to learn about consequences that have nothing to do with their capabilities, but with human perception of their capabilities. As I’ve discussed here many times before, people tend to see any mimicry of human behavior by a machine as ‘proof’ that there is a fully human, conscious, choosing entity behind the scenes. They then extrapolate from this and begin treating the machine as if it had qualities it doesn’t have (like a mind).
Case in point. People interacting with robots immediately apply a “human mind in silicon” model to the robot’s actions. Making the robot “friendly” or “humanoid” just encourages this. And it leads to people assuming the robot is taking charge – so they take risks. In other words, their own work deteriorates as they imagine the robot is picking up the slack.
The research, published in the journal Cyberpsychology, Behavior, and Social Networking, involved 180 undergraduate students taking the Balloon Analogue Risk Task (BART), a computer assessment that asks participants to press the spacebar on a keyboard to inflate a balloon displayed on the screen. For some sessions, a robot was present, providing encouraging statements to keep pumping.
The results showed that the group encouraged by the robot took more risks, pumping up their balloons significantly more than participants in the other groups did. “Popping” a virtual balloon caused the control groups – including those with a silent robot present – to scale back their pumping. But in the presence of the encouraging robot, test subjects continued to pump, even when the balloons routinely popped.
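The risk structure of BART is easy to see in a toy simulation. The sketch below is hypothetical – the point values, balloon capacity, and pop distribution are my assumptions, not the study’s actual parameters – but it shows why pumping past a certain point lowers expected earnings, which is what makes the robot’s “keep going” encouragement a genuine risk manipulation:

```python
import random

def bart_trial(pumps_planned, max_pumps=20, reward_per_pump=1, rng=random):
    """One simulated BART balloon: each pump adds reward, but the balloon
    pops at a hidden random threshold, forfeiting that balloon's earnings.
    (Parameters are illustrative, not the published task's values.)"""
    pop_point = rng.randint(1, max_pumps)  # hidden explosion threshold
    for pump in range(1, pumps_planned + 1):
        if pump >= pop_point:
            return 0  # balloon popped: earnings for this balloon are lost
    return pumps_planned * reward_per_pump  # cashed out safely

def average_earnings(pumps_planned, trials=10_000, seed=0):
    """Monte Carlo estimate of expected earnings for a fixed pumping strategy."""
    rng = random.Random(seed)
    return sum(bart_trial(pumps_planned, rng=rng) for _ in range(trials)) / trials
```

Under these assumed parameters, a moderate number of pumps maximizes expected earnings; both very cautious and very aggressive strategies earn less, because aggressive pumpers lose everything when the balloon pops. Participants nudged by the robot to keep pumping are being pushed toward the losing end of that curve.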
This is a great example of how people map humanlike qualities onto objects, provided the objects give cues that they are human. The students mapped a “mind” onto the code generating the robot’s speech, and then took that speech as evidence they should continue pumping.
Dr Yaniv Hanoch, Associate Professor in Risk Management at the University of Southampton, who led the study, noted that the robot apparently exerted peer pressure on the students, similar to that of an actual human egging them on. However, he also saw a silver lining.
“On the one hand, our results might raise alarms about the prospect of robots causing harm by increasing risky behavior. On the other hand, our data points to the possibility of using robots and AI in preventive programs, such as anti-smoking campaigns in schools, and with hard to reach populations, such as addicts.”
There’s a clear moral hazard here. My guess is that even a sign with a picture of a person telling you what to do creates some peer pressure – think of Uncle Sam:
However, with this image – as with statues and other obviously inanimate human representations – the person almost certainly weighs their response based on the fact that this is clearly something created by humans, not a human. In the case of the robot, this is less certain. There’s a widespread belief, encouraged by science fiction and so-called ‘science’ writers, that Artificial Intelligence is close to creating human minds, or even superhuman minds. If a person interacts with a robot and maps their response to ‘human’ or ‘superhuman,’ they may be more likely to follow along.
This in turn means that the behind-the-scenes actors can use robot puppets to push their goals in a way superior to old media. The robot is more than an abstract ‘brand representative’ – it is seen as a person.
Near-future society will have a more powerful method to influence its citizens: robot spokesholes substituting for graphic design. But, of course, society may not have the best interests of its citizens in mind.
This can’t be good…