Should robots and autonomous vehicles take the blame for wrongdoing?
After an autonomous car struck and killed a pedestrian in Arizona last year, the woman’s family is suing the city of Tempe for negligence. Meanwhile, cognitive and computer scientists are debating whether self-driving vehicles should be held morally responsible for their actions.
Study co-author Yochanan Bigman is an expert in social and moral psychology at the University of North Carolina at Chapel Hill.
“We’re on the verge of a technological and social revolution in which autonomous machines will replace humans in the workplace, on the roads, and in our homes,” said Bigman. “When these robots inevitably do something to harm humans, how will people react? We need to figure this out now, while regulations and laws are still being formed.”
The researchers believe that the perceived competence of some machines, including the ability to act independently of humans, could make people more likely to hold them morally responsible for their own actions.
Beyond autonomy, a robot’s appearance also influences moral judgment. For example, a human-looking robot is likely to be perceived as having a human mind. Realistic robots are also expected to be aware of their surroundings and are thought to have the freedom to act with intent.
These issues have important implications for the people and companies who develop and operate autonomous machines. The study authors warn that robots could end up serving as scapegoats for those who are actually responsible for programming them.
As autonomous technology continues to advance, the question of whether robots’ rights need legal protection has already been raised. The experts pointed out that the American Society for the Prevention of Cruelty to Robots and a 2017 European Union report have laid the groundwork for extending certain moral standards to machines.
“We suggest that now – while machines and our intuitions about them are still in flux – is the best time to systematically explore questions of robot morality,” wrote the study authors. “By understanding how human minds make sense of morality, and how we perceive the mind of machines, we can help society think more clearly about the impending rise of robots and help roboticists understand how their creations are likely to be received.”
The study is published in the journal Trends in Cognitive Sciences.