Can AI Tell When We're Lying? MSU Study Unveils Surprising Results (2025)

Picture this: artificial intelligence that effortlessly uncovers deceit in humans. But is AI really up to the task, and even if it is, can we truly put our faith in it? That's the question at the heart of a new study led by Michigan State University (MSU), which investigates just how effectively AI can spot when someone isn't telling the truth. The short answer is contentious: while AI shows promise in certain detection scenarios, the findings reveal it is far from perfect, raising big questions about relying on machines for something as nuanced as human honesty.

Artificial intelligence has been advancing at a rapid pace, constantly expanding its abilities and applications. This new MSU-led research probes AI's capacity to comprehend human behavior by employing it as a tool for detecting lies. Published in the Journal of Communication, the study was a collaboration between MSU and the University of Oklahoma, spanning 12 distinct experiments with more than 19,000 AI participants. These trials put AI 'personas' – digital identities designed to mimic real people in their responses and decision-making – to the test, evaluating their skill at distinguishing truthful statements from deceptive ones made by human volunteers.

The overarching goal of this research is twofold: First, to assess how AI might assist in real-world deception detection, such as in investigations or interviews. Second, to explore AI's potential for simulating human data in social science studies, while also warning experts about the pitfalls of using advanced language models for lie-spotting tasks. As David Markowitz, an associate professor of communication at MSU's College of Communication Arts and Sciences and the study's lead author, explains, it's about understanding AI's limitations in human-like judgment.

To compare AI's performance against that of humans, the researchers drew on Truth-Default Theory (TDT). This psychological framework posits that people are inherently inclined to be honest most of the time, and that we are naturally biased toward believing others are telling the truth unless proven otherwise. It's like assuming a friend is being sincere in a casual conversation – we don't automatically suspect deceit, because doing so would make social interactions exhausting and damage relationships. Markowitz highlights this human tendency: 'Humans have a natural truth bias – we generally assume others are being honest, regardless of whether they actually are. This tendency is thought to be evolutionarily useful, since constantly doubting everyone would take much effort, make everyday life difficult, and be a strain on relationships.' Think of it as the brain's energy-saving shortcut: trust first, question later, which lets us navigate the world without constant paranoia.

To put AI to the test, the team utilized the Viewpoints AI research platform. They presented AI judges with audiovisual or audio-only clips of human subjects, tasking the AIs with deciding whether each person was lying or telling the truth, and requiring them to explain their reasoning. The experiments varied key factors to see how they influenced AI's accuracy, including the type of media (full video with sound versus just audio), the contextual background (extra details that provide insight into the situation, like why someone might be nervous), the lie-truth base-rates (the overall ratio of honest versus deceptive statements in the dataset), and even the AI's persona (customized profiles that make the AI behave more like a specific type of person, such as a skeptical detective or a trusting friend). This approach allowed the researchers to dissect what affects AI's lie-detection prowess.
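The paper itself doesn't publish code, but the factorial design is easy to picture. Below is a minimal Python sketch of how such a grid of conditions could be enumerated; every name and value in it – the media types, base rates, and persona labels – is illustrative, not drawn from the study's materials.

```python
from dataclasses import dataclass
from itertools import product

# Hypothetical sketch of a factorial design like the one described above.
# All names and values are illustrative, not taken from the paper.

@dataclass
class TrialCondition:
    media: str            # "audiovisual" or "audio_only"
    context: bool         # whether background details accompany the clip
    lie_base_rate: float  # proportion of deceptive clips in the set
    persona: str          # profile assigned to the AI judge

MEDIA = ["audiovisual", "audio_only"]
CONTEXT = [True, False]
BASE_RATES = [0.25, 0.50, 0.75]  # illustrative ratios only
PERSONAS = ["neutral", "skeptical_detective", "trusting_friend"]

conditions = [
    TrialCondition(m, c, r, p)
    for m, c, r, p in product(MEDIA, CONTEXT, BASE_RATES, PERSONAS)
]
print(f"{len(conditions)} unique conditions")  # 2 * 2 * 3 * 3 = 36
```

Crossing the factors this way is what lets researchers attribute a change in accuracy to one variable – say, stripping the video and leaving only audio – while holding the others fixed.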

One key experiment illustrated AI's intriguing quirks: it proved highly accurate at detecting lies (85.8% success) but struggled badly with truths (only 19.5% accuracy). In controlled interrogation-style scenarios, AI's lie-spotting matched human levels, but in more relaxed, everyday contexts – like judging statements about friends – it shifted toward a truth bias, mirroring human tendencies more closely. Overall, the results painted AI as lie-biased, prone to assuming deception, and generally less reliable than human judges.
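Those two per-class figures interact with the lie-truth base rate in a way worth making concrete. The short Python sketch below combines the reported accuracies under a few hypothetical base rates (the splits themselves are illustrative, not from the paper) to show why a lie-biased judge looks worse the more honest its environment is.

```python
# Per-class accuracies reported in the experiment described above.
lie_accuracy = 0.858    # lies correctly flagged as lies
truth_accuracy = 0.195  # truths correctly judged truthful

# Hypothetical base rates; the study's actual splits may differ.
for lie_rate in (0.25, 0.50, 0.75):
    overall = lie_rate * lie_accuracy + (1 - lie_rate) * truth_accuracy
    print(f"lies = {lie_rate:.0%} of clips -> overall accuracy {overall:.1%}")
```

At a 50/50 split the blended accuracy works out to roughly 52.7%, barely better than a coin flip, and when truths dominate – as TDT says they do in everyday life – it falls well below chance.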

'Our main goal was to see what we could learn about AI by including it as a participant in deception detection experiments. In this study, and with the model we used, AI turned out to be sensitive to context – but that didn't make it better at spotting lies,' Markowitz noted. The key takeaway is that AI's performance doesn't match human accuracy, suggesting that human-like qualities may be a crucial boundary condition for deception detection theories. The study emphasizes that while AI might appear objective and free from bias, the field needs substantial advancements before generative AI can be trusted for lie detection.

'It’s easy to see why people might want to use AI to spot lies – it seems like a high-tech, potentially fair, and possibly unbiased solution. But our research shows that we’re not there yet,' Markowitz cautioned. 'Both researchers and professionals need to make major improvements before AI can truly handle deception detection.'

This raises a provocative point: is our eagerness to embrace AI for such sensitive tasks blinding us to its flaws? Some argue that AI could one day surpass humans with the right refinements; others contend it may never capture the nuance that humans bring to lie detection. Should we keep pushing for AI in truth-telling arenas, or is human judgment irreplaceable? Share your take in the comments below!

Journal reference:

Markowitz, D. M., & Levine, T. R. (2025). The (in)efficacy of AI personas in deception detection experiments. Journal of Communication. https://doi.org/10.1093/joc/jqaf034
