How do we ensure that artificial intelligence is ethically designed?
The beginning phases of artificial intelligence are all around us, from the algorithms that determine which advertisements you see online, to spam filters, to virtual assistants like Alexa and Siri.
Have we reached the era of AI? In its fullest sense, artificial intelligence is a system that learns on its own, improving on its past mistakes.
Although machine learning and AI are often used interchangeably, there are real differences between the two. Most of today's touted AI programs actually rely on machine learning, filtering through readily available data.
Google's DeepMind is probably one of the most genuine examples of artificial intelligence: in 2016, its AlphaGo program defeated world champion Lee Sedol at Go, an extremely complicated board game.
But despite DeepMind’s many accomplishments, the race for AI is ongoing.
Many countries are scrambling to be the first to develop a true artificial intelligence system capable of learning on its own and taking on any task it’s given.
These innovations are just around the corner, and as true AI becomes more of a reality, the question remains: how do we ensure that these machines don't go rogue or take over the world, as so many popular sci-fi films like to imagine?
Regehr and Omohundro will discuss why ethical questions need to be addressed now, rather than later, as AI technology continues to advance.
For example, if a company designs an AI system to win at chess no matter what, could it teach itself how to avoid being switched off? What ethical safeguards should be built into such a system?
“Should these systems be allowed to vote? Should they be full citizens? Should they be viewed as servants? Should they be viewed as slaves? Are they just machines?” asked Steve Omohundro.
Image Credit: Shutterstock/Tatiana Shepeleva