Talking Sense: How to spot AI-generated content
The old proverb “seeing is believing” has been put to the test in recent years with the rise of sophisticated artificial intelligence tools that have given just about anyone the power to create fake images.
Meanwhile, ChatGPT has spawned fake news sites, while AI audio tools are being used to create fake podcasts. A telemarketing company is being investigated for illegal voter suppression after it was found to be using AI to generate robocalls that sounded a whole lot like President Joe Biden. The calls suggested New Hampshire voters shouldn’t vote in last month’s primary election.
Internet trolls and web bots have been used in previous elections to help spread disinformation. During the 2016 presidential race, misinformation spread by bots was a powerful way to divide voters, said Munmun De Choudhury, an associate professor of computing at Georgia Institute of Technology.
“But at that point in time, those were much simpler technologies,” she said. “A bot is something that you would program, and you would feed certain kinds of content to it and it would just parrot it back on the Twitter platform, for instance.”
AI tools this election year are far more sophisticated, and De Choudhury expects online disinformation campaigns will be even more potent as a result.
“All of these things can be done in a much better way today,” she said. “You don’t have to recruit people to serve as trolls. You could literally have these AI agents who could be trolls, and they could probably write more creative trolling messages.”
AI-generated content is convincing. But with a little common sense and attention to detail, she said consumers can be savvy enough to spot fake news, pictures and audio.
‘Cat-and-mouse chase’
De Choudhury’s recent research focuses on AI-generated text. She’s found that fake articles written with tools like ChatGPT are incredibly convincing — and often completely untrue.
“For instance, it would write in a way that would cite certain sources of scientific evidence, except that those scientific sources don’t actually exist,” she said. “It would write in a tone that you would feel empowered that, ‘Oh, it’s letting me think critically about this issue.’ But in reality, it would be directing you to think in a certain way, in this case, to perpetuate the misinformation.”
De Choudhury said testing whether text is fake requires common sense and basic news literacy. She recommends:
Do a gut check: If it sounds too good to be true, it probably is.
Check multiple other reputable news sources to see if they’re reporting the same story (a rough way to automate this check is sketched after this list).
Examine sources closely. A quick Google search can help determine whether a source is real and reputable.
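The second of those tips can even be partly automated. Below is a rough sketch, assuming the feedparser library; the feed URLs shown are hypothetical placeholders, so substitute the real RSS addresses of outlets you trust:

```python
# A rough sketch of cross-checking a story against other outlets' RSS feeds.
# Assumes: pip install feedparser. The feed URLs below are hypothetical
# placeholders; substitute the real RSS addresses of outlets you trust.
import feedparser

FEEDS = [
    "https://example-wire-service.com/rss",   # hypothetical URL
    "https://example-public-radio.org/feed",  # hypothetical URL
]

def outlets_covering(keyword: str) -> list[str]:
    """Return headlines from the trusted feeds that mention the keyword."""
    matches = []
    for url in FEEDS:
        for entry in feedparser.parse(url).entries:
            if keyword.lower() in entry.get("title", "").lower():
                matches.append(entry["title"])
    return matches

# If no reputable outlet is carrying the story, treat it with suspicion.
print(outlets_covering("primary election"))
```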
De Choudhury said there are tools online to help detect AI-generated content. But they’re flawed.
“It’s a cat-and-mouse chase,” she said. “The kind of tool that’s being used for detection is itself an AI program, which fundamentally is very similar to the AI that generates the content.”
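For readers curious what’s inside that cat-and-mouse machinery, here is a minimal sketch of one widely used heuristic, perplexity scoring: a language model tends to find machine-generated text more predictable than human writing. It assumes the Hugging Face transformers and torch libraries and uses the small GPT-2 model purely for illustration; commercial detectors layer on more signals, and, as De Choudhury notes, all of them misfire.

```python
# A minimal sketch of the perplexity heuristic many AI-text detectors use.
# Assumes: pip install transformers torch. GPT-2 is used purely for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Average predictability of `text` under the model; lower is more predictable."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        # When labels == input_ids, the model returns the mean cross-entropy loss.
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return torch.exp(loss).item()

# Heuristic only: very low perplexity *may* indicate machine-generated text,
# but plain human writing can score low too -- which is why detectors misfire.
print(perplexity("The quick brown fox jumps over the lazy dog."))
```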
Get out your magnifying glass
Fake pictures can be as difficult to detect as fake text, said Jack Brewster, enterprise editor at NewsGuard, a company that tracks the spread of disinformation and builds tools to counter it.
Part of the challenge in vetting images is the apparent authenticity of the accounts spreading them.
For instance, a NewsGuard report from last fall found that “verified” users on X (formerly known as Twitter) were superspreaders of disinformation surrounding the Israel-Hamas war.
While AI-generated pictures may be very convincing, some aren’t perfect.
“Two prominent tells right now are extra fingers and toes,” Brewster said. “That’s typically a sign that it’s been AI-generated. Another is text or logos that have been messed up.”
Brand logos are worth scrutiny, too. If they appear slightly blurry, distorted or otherwise off, the picture may be fake.
Meanwhile, Google and Meta both have emerging tools to help identify AI-generated content, such as Google’s SynthID watermarking and Meta’s labels for AI-generated images.
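Those watermarking systems require the vendors’ own detectors, but there is one provenance check anyone can script: inspecting an image’s metadata for leftover generator tags. Here is a minimal sketch assuming the Pillow library; it is a weak signal at best, since most platforms strip metadata on upload and a clean result proves nothing.

```python
# A minimal sketch: inspect an image's metadata for generator or provenance clues.
# Assumes: pip install Pillow. Absence of metadata proves nothing -- most
# platforms strip it on upload -- but leftover tags occasionally name the tool.
from PIL import Image
from PIL.ExifTags import TAGS

def metadata_clues(path: str) -> dict:
    img = Image.open(path)
    clues = {}
    # EXIF tags such as Software sometimes name an editing or generating tool.
    for tag_id, value in img.getexif().items():
        clues[TAGS.get(tag_id, tag_id)] = value
    # PNG text chunks (img.info) are where some generators record parameters.
    clues.update({k: v for k, v in img.info.items() if isinstance(v, str)})
    return clues

for key, value in metadata_clues("suspect.png").items():
    print(f"{key}: {value}")
```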
‘Pre-bunk,’ don’t ‘de-bunk’
The proliferation of convincing fake images, text and audio is making it more difficult to spot AI-generated content, said Brewster.
“The sheer fact that these [AI] tools exist now has so muddied the waters that people now don’t know whether to believe real images anymore,” he said. “And maybe the most dangerous part about this is that people now are questioning more readily whether real images are real or generated by AI.”
And once something fake is unleashed onto the internet, it’s really hard to erase it or avoid it.
That’s why Brewster is a fan of ‘pre-bunking.’
“Pre-bunking is the notion that we can empower users online with information about the sources where they’re getting their news,” he said. “They can have their antennas up when they see something being put out from an account that has previously spread misinformation.”
NewsGuard has a free browser extension that ranks websites based on their reliability and accuracy.
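The underlying idea is simple enough to sketch. The ratings table below is hypothetical; NewsGuard’s actual scores are proprietary and delivered through its extension and licensed data, so this is only an illustration of the pre-bunking approach.

```python
# A minimal sketch of the "pre-bunking" idea: rate the source before reading
# the story. The ratings here are hypothetical -- NewsGuard's actual scores
# are proprietary and served through its extension and licensed data.
from urllib.parse import urlparse

SOURCE_RATINGS = {            # illustrative values only
    "example-news.com": 95,   # hypothetical: generally reliable
    "example-rumors.net": 20, # hypothetical: repeatedly spread misinformation
}

def rate_source(url: str) -> str:
    domain = urlparse(url).netloc.lower().removeprefix("www.")
    score = SOURCE_RATINGS.get(domain)
    if score is None:
        return f"{domain}: unknown source -- verify elsewhere before sharing"
    flag = "caution" if score < 60 else "ok"
    return f"{domain}: score {score}/100 ({flag})"

print(rate_source("https://www.example-rumors.net/shocking-story"))
```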
Brewster said online news consumers can be proactive by cross-checking names and information when they see something viral on social media, too. If a person, account or platform has spread disinformation before, he said they likely will again.