Alexa, how do we make AI socially responsible?
The rise of artificial intelligence has many wondering just how it will impact our society, industries, and the economy.
The rapid self-learning of artificial intelligence, driven by complex algorithms, could one day help doctors make more accurate diagnoses or reduce the number of road accidents through safer self-driving cars.
However, there are also major concerns that as AI becomes more and more a part of our lives, it could have irreversible consequences.
According to Elon Musk, the inventor, engineer, and billionaire who wants to colonize Mars, AI is humanity’s “biggest existential threat.”
“I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish,” Musk told the Guardian.
Musk isn’t alone in thinking that there need to be serious policies put in place to regulate AI.
A newly published policy report from the University of Manchester, titled “On AI and Robotics: Developing policy for the Fourth Industrial Revolution,” discusses the potential bias and marginalization that AI could bring.
The report was produced by researchers from the Manchester Institute of Innovation Research.
One of the report's main arguments is that investment in AI will mostly be paid for by taxpayers, and as such, a significant effort needs to be made to ensure that the benefits of artificial intelligence are shared equally.
In other words, as artificial intelligence advances, it must do so in a democratic and socially responsible way.
“In these ‘data-driven’ decision-making processes some social groups may be excluded, either because they lack access to devices necessary to participate or because the selected datasets do not consider the needs, preferences, and interests of marginalized and disadvantaged people,” said Barbara Ribeiro, a contributor to the report.
The report also emphasizes that people need to be better educated on the differences between robotic technology and artificial intelligence.
“Although the challenges that companies and policymakers are facing with respect to AI and robotic systems are similar in many ways, these are two entirely separate technologies – something which is often misunderstood, not just by the general public, but policymakers and employers too. This is something that has to be addressed,” said Anna Scaife, Co-Director of the University’s Policy@Manchester team, who published the report.
There are numerous positive advancements that AI and robotics could bring, and experts agree that both technologies will have a serious impact on many different fields in the near future.
The policy report authors hope that this research will convey the wide spectrum of possibilities with AI and robotics, and demonstrate how vital it is for policymakers, regulators, and researchers to ensure that advancements do not come at the expense of any one group of people.