Can you trust your own eyes? These ANU researchers say spotting AI images may be more difficult than ever.
‘Pics or it didn’t happen’ is a glib but common retort to stories shared online that seem unbelievable or exaggerated: a demand for proof. But with the rise of AI image generators, do we have what it takes to tell what is real and what is fake?
Most recently, Meta, the parent company behind Facebook, Instagram, Threads and WhatsApp, placed its AI assistant front and centre for users trying to access search functions within its apps. Prompts asking the assistant to imagine something will generate an AI image. This latest move may bring more AI-generated content into our social media feeds.
Sometimes it can be obvious that an image was produced by AI, such as when Trump supporters (not his campaign team) created AI images meant to target Black voters, with one image showing a Black Trump supporter with three arms. And sometimes the resulting image doesn’t seem to cause any harm, such as the viral pictures of the Pope wearing a puffer jacket instead of his usual robes.
But what should we do in cases where AI’s fingerprints are less obvious than extra limbs? At the extreme end, deepfake technology has been linked to misinformation during elections, false celebrity endorsements for dodgy health products and even the spread of non-consensual fake nudes.
And while online guides to spotting AI-generated images suggest looking out for details like unusual hands, odd patterns and nonsensical text, are these tips enough to educate people?
Playing digital detective
It turns out most of us don’t have what it takes to spot an AI image. Amy Dawel, an Associate Professor at the ANU School of Medicine and Psychology, says that as AI images become increasingly sophisticated, it will become harder and harder to sort real from fake.
“Our research found that people are overconfident in their ability to recognise AI images,” Dawel says.
“We’re still relying on cues that used to be helpful but have now disappeared.
“Most concerningly, our research found that the people who are worst at spotting AI imposter faces are the most confident!”
None of these faces belong to real people. Photo: StyleGAN2 images/Open Science Framework (https://osf.io/ru36d/)
Associate Professor Eryn Newman, also from the ANU School of Medicine and Psychology, researches how our biases influence how we evaluate the truth and credibility of information. She agrees that relying on or hoping to find obvious cues is not going to work in the long run.
“As AI images become more sophisticated, and these imperfections are less frequent, such tactics will need revising,” Newman says.
Ironically, technology companies, including Intel and OpenAI, have started work on AI-powered tools to detect AI images. Images produced with OpenAI tools, like DALL-E, will include metadata indicating their AI origin. Meta, on the other hand, plans to tag AI content with ‘Made with AI’ and ‘Imagined with AI’ labels once detected.
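For the curious, here is a minimal sketch in Python of what a first-pass check for that kind of provenance metadata might look like. It is a rough heuristic under stated assumptions, not OpenAI’s or Meta’s actual method: the byte markers (`c2pa`, and `jumb`, the container box type C2PA credentials are typically stored under) are illustrative, a hit only hints that a provenance manifest may be present, and a miss proves nothing, since metadata is easily stripped when images are screenshotted or re-uploaded.

```python
# Rough, illustrative scan for AI-provenance metadata in an image file.
# NOTE: a heuristic sketch, not a real detector. Proper verification
# needs a dedicated C2PA inspection tool, not a byte scan.
import sys

# Byte markers loosely associated with C2PA "content credentials"
# (C2PA data lives in JUMBF boxes, whose box type is 'jumb').
# These markers are illustrative assumptions, not an official spec check.
PROVENANCE_MARKERS = [b"c2pa", b"jumb"]

def has_provenance_hint(path: str) -> bool:
    """Return True if the raw file bytes contain any known provenance marker."""
    with open(path, "rb") as f:
        data = f.read()
    return any(marker in data for marker in PROVENANCE_MARKERS)

if __name__ == "__main__":
    for path in sys.argv[1:]:
        result = "provenance marker found" if has_provenance_hint(path) else "no marker (inconclusive)"
        print(f"{path}: {result}")
```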
According to Dawel, the proliferation of AI-generated images places more onus on us to check the media we consume, lest we be duped.
“In many cases, humans are not able to spot AI images, and other algorithms aren’t great at it either,” she says. “Because of this, we all need to be digital detectives and look for other ways to verify that the information we engage with online is correct. This may include using reputable websites and confirming people’s online identities with real-life interactions.”
Seeing is believing
When we struggle to spot an AI image, it can influence how we interpret information.
Research has found that images help us to comprehend and visualise written claims they are paired with.
As the saying goes, a picture is worth a thousand words. With AI generators making it easier to create convincing imagery, such pictures can help to make information appear more believable.
“When an image is biased or slanted and represents a conclusion — not otherwise stated in the text — it can lead us to generate that conclusion ourselves, by shifting comprehension. This leads to distortions in belief and memory,” Newman explains.
“Some of our research on ‘truthiness’ also shows that a conceptually related photo that does not provide any evidence of a claim, but simply decorates a claim, can also increase our tendency to believe a claim is true.
“The influence of AI images may be quite insidious in influencing the information environment and how information is consumed and shared.”
AI images don’t have to be convincing to have influence, though. Because the technology lowers the barriers to creating imagery, one possible consequence is that people become sceptical of all imagery, although Newman notes that more empirical work is needed to test this proposition.
“It is possible that an increasing awareness of the presence of AI on platforms could lead to distrust. If that is the case, this could impact people’s general receptivity to messages,” she says.
“This would be concerning if that impacted important communications around public health.”
Research from the Stanford Internet Observatory suggests most AI-generated images currently on social media are designed to attract clicks and shares from users, in an attempt to build audiences and increase trust. As AI images become more ubiquitous, we will have to wait and see whether people become more sceptical.
Until then, we will be playing digital detective and holding a magnifying glass up to what we see on social media.
Top image: Lisa Top/shutterstock.com