People are overconfident about spotting AI faces, study finds

UNSW Sydney
Key Facts:
  • Even people with exceptional face-recognition abilities (super-recognisers) struggle to distinguish AI-generated faces from real ones, performing only marginally better than average people
  • Most people are overconfident in their ability to spot AI-generated faces, relying on outdated visual cues that no longer apply to modern AI face generation
  • Modern AI-generated faces are characterised by being too perfect - displaying unusual symmetry and average proportions rather than obvious flaws
  • Visual judgement alone is no longer reliable for detecting AI faces, which has implications for social media, online dating, recruitment and other areas where profile pictures are used
  • Research suggests there may be super-AI-face-detectors who have natural abilities to spot AI-generated faces, though their detection strategies are not yet understood

Many of us rely on outdated visual cues when trying to distinguish real faces from highly realistic AI-generated ones, with even people who have exceptional face-recognition skills being fooled.

Most people believe they can spot AI-generated faces, but that confidence is out of date, research from UNSW Sydney and the Australian National University (ANU) has demonstrated.

With AI-generated faces now almost impossible to distinguish from real ones, this misplaced confidence could make individuals and organisations more vulnerable to scammers, fraudsters and bad actors, the researchers warn.

“Up until now, people have been confident of their ability to spot a fake face,” says UNSW School of Psychology researcher Dr James Dunn. “But the faces created by the most advanced face-generation systems aren’t so easily detectable anymore.”

In a research paper published in the British Journal of Psychology, researchers from UNSW and the ANU recruited 125 participants – including 36 people with exceptional face-recognition ability, known as super-recognisers, and 89 control participants – to complete an online test in which they were shown a series of faces and asked to judge whether each image was real or AI-generated. Images with obvious visual flaws were screened out beforehand.

“What we saw was that people with average face-recognition ability performed only slightly better than chance,” Dr Dunn says. “And while super-recognisers performed better than other participants, it was only by a slim margin. What was consistent was people’s confidence in their ability to spot an AI-generated face – even when that confidence wasn’t matched by their actual performance.”

The end of artefacts

Much of that confidence comes from cues that used to work. Early AI-generated faces were often given away by obvious visual artefacts – distorted teeth, glasses that merged into faces, ears that didn’t quite attach properly, or strange backgrounds that bled into hair and skin.

But as face-generation systems have improved, those kinds of errors have become far less common. The most realistic outputs no longer show obvious flaws, leaving faces that look convincing at a glance, and far harder to judge using the cues people are familiar with.

“A lot of people think they can still tell the difference because they’ve played with popular AI tools like ChatGPT or DALL·E,” says ANU psychologist Associate Professor Amy Dawel. “But those examples don’t reflect how realistic the most advanced face-generation systems have become, and relying on them can give people a false sense of confidence.”

What interested the researchers was how readily even super-recognisers were fooled. While this group did perform better on average, the advantage was modest, and their accuracy remained far below what they typically achieved when recognising real human faces. There was also substantial overlap between groups, with some non-super-recognisers outperforming super-recognisers – demonstrating this is not simply an experts-versus-everyone-else problem.

Too good to be true

But if AI faces are this convincing, are there any tells we should be looking for?

“Ironically, the most advanced AI faces aren’t given away by what’s wrong with them, but by what’s too right,” A/Prof Dawel says. “Rather than obvious glitches, they tend to be unusually average – highly symmetrical, well-proportioned and statistically typical.”

Qualities such as symmetry and average proportions usually signal attractiveness and familiarity. But in this study, those same qualities became a red flag for artificiality.

“It’s almost as if they’re too good to be true as faces,” says A/Prof Dawel.

What to do about it

Super-recognisers didn’t stand out the way they typically do in tests involving real human faces, showing only a modest advantage. What differentiated them was a greater sensitivity to the qualities identified in the study – faces that are plausible yet unusually average and highly symmetrical. Even so, their limited success suggests spotting AI faces is not a skill that can be easily trained or learned.

The findings also carry practical implications: visual judgement alone is no longer a reliable way to detect AI-generated faces. This matters in contexts ranging from social media and online dating to professional networking and recruitment, where people often assume they can ‘just tell’ when a profile picture looks fake. Misplaced confidence may leave individuals and organisations more vulnerable to scams, fake profiles and fabricated identities.

“There needs to be a healthy level of scepticism,” Dr Dunn says. “For a long time, we’ve been able to look at a photograph and assume we’re seeing a real person. That assumption is now being challenged.”

Rather than teaching people tricks to spot synthetic faces, the broader lesson is about updating assumptions. The visual rules many of us rely on were shaped by earlier, less sophisticated systems.

“As face-generation technology continues to improve, the gap between what looks plausible and what is real may widen – and recognising the limits of our own judgement will become increasingly important,” A/Prof Dawel says.

Looking ahead

Interestingly, Dr Dunn wonders whether the research team has stumbled upon a new kind of face recogniser.

“Our research has revealed that some people are already sleuths at spotting AI-faces, suggesting there may be ‘super-AI-face-detectors’ out there.

“We want to learn more about how these people are able to spot these fake faces, what clues they are using, and see if these strategies can be taught to the rest of us.”

  • Good with faces? Visit the UNSW Face Test page where you can test your face recognition skills and see how well you can spot AI-faces.

Answers to AI-face spotting challenge in attached image: 2, 3, 5, 8, 9, 11 are all AI-generated faces


Contact details:

Lachlan Gilbert

UNSW News & Content
t: +61 2 9065 5241
e: [email protected]
