Self-driving cars may soon be capable of making ethical decisions
Fully self-driving cars can still seem far-fetched, even with today’s technology. A new study, however, suggests that cars may one day be capable not only of driving themselves but also of making moral and ethical decisions.
Researchers have established that human morality can be modeled by machines. Leon Sütfeld led a study revealing that algorithms can mimic the moral decisions of humans, a process previously thought too complex to be represented algorithmically.
“Human behavior in dilemma situations can be modeled by a rather simple value-of-life-based model that is attributed by the participant to every human, animal, or inanimate object,” explains Sütfeld.
The team used virtual reality to place participants in a variety of predicaments. The tests simulated driving through a neighborhood in foggy conditions, and participants had to make decisions that could mean colliding with animals, inanimate objects, or other people.
The participants’ responses were then analyzed to build statistical models of their instinctive reactions. The drivers’ decisions turned out to be surprisingly easy to predict and model.
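To make the idea concrete, here is a minimal sketch of what a value-of-life decision rule might look like. The specific values and the `choose_lane` function are hypothetical illustrations, not taken from the paper; the study derived such weightings from participants’ actual choices rather than from a fixed table.

```python
# Hypothetical "value-of-life" table: each potential obstacle is
# assigned a single scalar value. These numbers are illustrative
# assumptions, not figures from the study.
VALUE_OF_LIFE = {
    "adult": 1.0,
    "child": 1.2,
    "dog": 0.4,
    "trash_can": 0.05,
}

def choose_lane(left_obstacle: str, right_obstacle: str) -> str:
    """Steer toward whichever lane's obstacle has the lower assigned value."""
    left = VALUE_OF_LIFE[left_obstacle]
    right = VALUE_OF_LIFE[right_obstacle]
    return "left" if left < right else "right"

print(choose_lane("dog", "adult"))       # swerves toward the dog: "left"
print(choose_lane("adult", "trash_can")) # swerves toward the object: "right"
```

The point of the study is that a rule this simple, once its values are fitted to observed human choices, predicts dilemma behavior well.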
Professor Gordon Pipa, a senior author of the study, says the results raise new questions about the future of self-driving vehicles.
Pipa says, “We need to ask whether autonomous systems should adopt moral judgements, if yes, should they imitate moral behavior by imitating human decisions, should they behave along ethical theories and if so, which ones and critically, if things go wrong who or what is at fault?”
The authors of the study stress the urgent need for clear-cut regulations governing autonomous cars before such cars are allowed to make their own decisions.
“Now that we know how to implement human ethical decisions into machines we, as a society, are still left with a double dilemma,” explains Professor Peter König, a senior author of the paper. “Firstly, we have to decide whether moral values should be included in guidelines for machine behavior and secondly, if they are, should machines act just like humans.”
The research was conducted at the Institute of Cognitive Science at the University of Osnabrück and is published in the journal Frontiers in Behavioral Neuroscience.