07-01-2019

Uncanny Valley: Why some people feel so unsettled by robots

A team of researchers has identified mechanisms within the human brain that may explain the “Uncanny Valley,” the phenomenon that causes some people, more than others, to feel unsettled by robots and virtual agents that are too human-like. The discovery is published in the Journal of Neuroscience.

“Resembling the human shape or behavior can be both an advantage and a drawback,” explained Professor Astrid Rosenthal-von der Pütten, Chair for Individual and Technology at RWTH Aachen University. “The likability of an artificial agent increases the more human-like it becomes, but only up to a point: sometimes people seem not to like it when the robot or computer graphic becomes too human-like.”
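
To make that relationship concrete, the short Python sketch below generates a schematic likability curve of the kind the Uncanny Valley describes: likability rises with human-likeness and then dips sharply near the human/non-human boundary. The function, its parameters (boundary, dip_width, dip_depth) and the printed numbers are purely illustrative assumptions, not data or a model from the study.

```python
import numpy as np

def schematic_likability(humanness, boundary=0.85, dip_width=0.06, dip_depth=1.2):
    """Toy Uncanny Valley curve (illustrative assumptions, not study data):
    likability grows with human-likeness, then dips near the boundary."""
    rise = humanness  # graded increase in likability with human-likeness
    dip = dip_depth * np.exp(-((humanness - boundary) ** 2) / (2 * dip_width ** 2))
    return rise - dip  # subtracting the localized dip creates the "valley"

for h in np.linspace(0.0, 1.0, 11):
    print(f"human-likeness {h:.1f} -> schematic likability {schematic_likability(h):+.2f}")
```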

The term “Uncanny Valley” is an English translation of a Japanese expression coined by robotics professor Masahiro Mori in 1970. Although the phenomenon is well known and well documented, the mechanisms behind it remained a mystery until now.

“For a neuroscientist, the Uncanny Valley is an interesting phenomenon,” said Dr. Fabian Grabenhorst, a Sir Henry Dale Fellow and Lecturer in the Department of Physiology, Development and Neuroscience at the University of Cambridge. “It implies a neural mechanism that first judges how close a given sensory input, such as the image of a robot, lies to the boundary of what we perceive as a human or non-human agent. This information would then be used by a separate valuation system to determine the agent’s likability.”

To identify the mechanisms responsible for the Uncanny Valley phenomenon, the team studied brain activity in 21 healthy individuals across two separate tests, both using functional magnetic resonance imaging (fMRI), which measures changes in blood flow within the brain as a proxy for how active different regions are.

During the first test, participants were asked to rate images of humans, artificial humans, android robots, humanoid robots and mechanoid robots in terms of likability and human-likeness. During the second test, participants were asked which of these agents they would trust to purchase a personal gift for them. The research team found that participants generally preferred gifts from humans or from the more human-like artificial agents, but not from those closest to the human/non-human boundary.

By measuring the participants’ brain activity during these two tests, the researchers were able to identify the brain areas underlying the Uncanny Valley phenomenon and link it to circuits that are crucial for processing and evaluating social cues, such as facial expressions. They noted that brain areas near the visual cortex deciphered how human-like the images were, and that their activity could be used to place the artificial agents along a spectrum of “human-likeness.”

The team found that the medial prefrontal cortex, the wall of neural tissue sandwiched between the left and right brain hemispheres, was also important in the Uncanny Valley phenomenon. The medial prefrontal cortex contains a generic valuation system that judges all kinds of stimuli, from reward values to social stimuli such as pleasant touch.

In this study, two separate parts of the medial prefrontal cortex worked together to produce the Uncanny Valley response. The first converted the human-likeness signal into a “human detection” signal that over-emphasized the boundary between human and non-human stimuli, reacting most strongly to human agents compared with artificial ones.

The second part, the ventromedial prefrontal cortex (VMPFC), integrated this signal with a likability evaluation to produce an activity pattern that closely matched the Uncanny Valley response.

“We were surprised to see that the ventromedial prefrontal cortex responded to artificial agents precisely in the manner predicted by the Uncanny Valley hypothesis, with stronger responses to more human-like agents but then showing a dip in activity close to the human/non-human boundary — the characteristic ‘valley,’” Dr. Grabenhorst said.
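
A minimal numerical sketch, under stated assumptions, can show how the two signals described above might combine to produce such a dip. The code below assumes a graded human-likeness signal, a steep sigmoid “human detection” signal centered on a hypothetical human/non-human boundary, and a combined value that subtracts a fraction of the detection signal; the functional forms, weights and boundary value are illustrative choices, not the authors’ fitted model.

```python
import numpy as np

BOUNDARY = 0.8    # hypothetical human/non-human boundary on a 0-1 human-likeness scale
STEEPNESS = 60.0  # how abruptly the toy "human detection" signal switches on

def human_detection(humanness):
    """Sharpened 'human detection' signal: a steep sigmoid that over-emphasizes
    the boundary and responds most strongly to clearly human agents."""
    return 1.0 / (1.0 + np.exp(-STEEPNESS * (humanness - BOUNDARY)))

def combined_value(humanness, w_likeness=1.0, w_detection=0.15):
    """Toy VMPFC-style value signal: a graded human-likeness term minus a
    fraction of the sharp detection signal. With these illustrative weights
    the curve rises, dips just past the boundary, and recovers for clearly
    human agents."""
    return w_likeness * humanness - w_detection * human_detection(humanness)

for h in np.linspace(0.0, 1.0, 21):
    print(f"human-likeness {h:.2f}: detection {human_detection(h):.2f}, value {combined_value(h):.2f}")
```

In this toy version, the dip appears because the detection penalty switches on faster than the graded human-likeness term can compensate; agents judged fully human eventually outweigh it and end up with the highest value.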

Furthermore, the same brain areas signaled these evaluations when participants decided whether or not to accept a gift from a robot. When participants rejected gifts from human-like artificial agents, the amygdala, a region of the brain responsible for emotional responses, became more active.

“We know that valuation signals in these brain regions can be changed through social experience,” Dr. Grabenhorst continued. “So, if you experience that an artificial agent makes the right choices for you — such as choosing the best gift — then your ventromedial prefrontal cortex might respond more favorably to this new social partner.” This could have implications for how artificial intelligence is designed in the future.

“This is the first study to show individual differences in the strength of the Uncanny Valley effect, meaning that some individuals react overly sensitively and others less sensitively to human-like artificial agents,” Professor Rosenthal-von der Pütten said. “This means there is no one robot design that fits — or scares — all users. In my view, smart robot behavior is of great importance, because users will abandon robots that do not prove to be smart and useful.”

By Olivia Harvey, Earth.com Staff Writer

Image Credit: Shutterstock/Phonlamai Photo
