A recent University of Waterloo study has found that people struggle to differentiate between real and AI-generated images, achieving only 61% accuracy. The research, led by PhD candidate Andreea Pocol, showed that participants were fooled by AI-generated images nearly 40% of the time, raising concerns about the reliability of visual information in today’s digital age.
In the study, 260 participants were shown 20 unlabeled pictures: half depicted real people sourced from Google searches, while the other half were generated by AI programs such as Stable Diffusion and DALL-E. Although the researchers expected an 85% accuracy rate in distinguishing real from AI-generated images, participants classified them correctly only 61% of the time.
Participants focused on details such as fingers, teeth, and eyes as potential indicators of AI-generated content, but their assessments were often incorrect. Pocol noted that the rapid development of AI technology makes it challenging to understand the potential for malicious use of AI-generated images, especially in the realm of disinformation.
AI-generated images pose a significant threat as political and cultural tools, allowing users to create fake images of public figures in compromising situations. Pocol emphasized the need for tools to identify and counter AI-generated content, likening the situation to a new AI arms race.
The study, titled “Seeing Is No Longer Believing: A Survey on the State of Deepfakes, AI-Generated Humans, and Other Nonveridical Media,” was published in Advances in Computer Graphics. The researchers hope that their findings will prompt further research and the development of tools to combat the spread of AI-generated disinformation.