02-15-2022

People trust synthetic faces over real ones

New research shows that people can’t tell the difference between images of real people’s faces and those generated by artificial intelligence (AI) programs. In fact, people are more likely to trust a synthetic, AI-generated face than a real one. The researchers are calling for safeguards against the misuse of “deepfakes.”

Such AI-generated images, audio and video have already been used for propaganda, fraud, “revenge porn” and even comedy.

Dr. Sophie Nightingale from Lancaster University and Professor Hany Farid from the University of California, Berkeley, asked people to try to distinguish between AI-generated faces and images of real people.

In the first experiment, 315 people evaluated a subset of 125 images from a group of 800. The accuracy rate for identifying faces as synthetic or real was 48 percent – close to chance. In a second experiment, 219 different participants were trained to identify synthetic faces, yet the accuracy rate rose to a mere 59 percent.

“Our evaluation of the photorealism of AI-synthesized faces indicates that synthesis engines have passed through the uncanny valley and are capable of creating faces that are indistinguishable – and more trustworthy – than real faces,” said the researchers.

Next, the scientists had participants rate how trustworthy they found the faces, to see whether synthetic and real faces differed. More than 200 participants rated the trustworthiness of 128 facial images on a scale from 1 (very untrustworthy) to 7 (very trustworthy). On average, the synthetic faces were rated as 7.7 percent more trustworthy than the real ones.
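To make the 7.7 percent figure concrete, it can be read as the percent difference between the average trustworthiness ratings of the two groups of faces on that 1-to-7 scale. The short Python sketch below uses made-up ratings purely for illustration; the numbers are not taken from the study.

    # Illustrative only: hypothetical trustworthiness ratings on a 1-7 scale.
    # These numbers are NOT from the study; they simply show how a
    # "7.7 percent more trustworthy" figure can arise as a percent
    # difference between mean ratings.
    real_ratings = [4.1, 4.6, 4.3, 4.8, 4.5]       # hypothetical ratings of real faces
    synthetic_ratings = [4.5, 4.9, 4.7, 5.1, 4.8]  # hypothetical ratings of synthetic faces

    mean_real = sum(real_ratings) / len(real_ratings)
    mean_synthetic = sum(synthetic_ratings) / len(synthetic_ratings)

    percent_more_trustworthy = (mean_synthetic - mean_real) / mean_real * 100
    print(f"Real faces: mean rating {mean_real:.2f}")
    print(f"Synthetic faces: mean rating {mean_synthetic:.2f}")
    print(f"Synthetic faces rated {percent_more_trustworthy:.1f}% more trustworthy")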

“Perhaps most pernicious is the consequence that in a digital world in which any image or video can be faked, the authenticity of any inconvenient or unwelcome recording can be called into question,” said the researchers.

“Safeguards could include, for example, incorporating robust watermarks into the image- and video-synthesis networks that would provide a downstream mechanism for reliable identification.”

“Because it is the democratization of access to this powerful technology that poses the most significant threat, we also encourage reconsideration of the often-laissez-faire approach to the public and unrestricted releasing of code for anyone to incorporate into any application.

“At this pivotal moment, and as other scientific and engineering fields have done, we encourage the graphics and vision community to develop guidelines for the creation and distribution of synthetic-media technologies that incorporate ethical guidelines for researchers, publishers, and media distributors.”

The study is published in the journal Proceedings of the National Academy of Sciences.

By Zach Fitzner, Earth.com Staff Writer
