WLRN has partnered with PolitiFact to fact-check Florida politicians. The Pulitzer Prize-winning team seeks to present the true facts, unaffected by agenda or biases.

PolitiFact FL: Social media accounts use AI-generated audio to push 2024 election misinformation

FILE - A booth is ready for a voter, Feb. 24, 2020, at City Hall in Cambridge, Mass., on the first morning of early voting in the state. (Elise Amendola / AP)

A recent social media post made what appeared to be a surprising announcement about the 2024 presidential race.

"Breaking news: Donald Trump has declared that he is pulling out of the run for president," the narrator said in a Feb. 22 TikTok video, which was viewed more than 161,000 times before TikTok removed it from the platform.

Of course, Trump hasn’t dropped out of the 2024 race. As of March 12, he’d secured enough delegates to clinch the Republican presidential nomination.

Besides the false claim, the video had another distinctive element: The audio appeared to be made with artificial intelligence.

PolitiFact identified several TikTok accounts spreading false narratives about the 2024 election and Trump through its partnership with TikTok to counter inauthentic, misleading or false content. (Read more about PolitiFact's partnership with TikTok.)

Videos on YouTube made similar false claims. These videos, which also appeared to have AI-generated audio, were reshared with lower engagement on Facebook and X.

Generative AI is a broad term that describes when computers create new content, such as text, photos, videos and audio, by identifying patterns in existing data. This year, the United States and more than 50 other countries are holding national elections, and generative AI is likely to play an outsized role. In February in Indonesia, AI-generated cartoons helped rehabilitate the image of a former military general linked to human rights abuses, who won the country’s presidential election.

Recent advances in generative AI have made it harder to determine whether online content is real or fake. We talked to experts about how this technology is changing the information landscape and how to spot AI-generated audio, which, unlike video, offers no visual abnormalities to hint at manipulation.

When contacted for comment, a TikTok spokesperson said, "To apply our harmful misinformation policies, we detect misleading content and send it to fact-checking partners for factual assessment." Once alerted to the content highlighted in this story, TikTok removed it from the platform.

Experts analyze videos’ use of AI-generated audio
We asked generative AI experts to analyze five TikTok videos from multiple accounts that made false and misleading claims, to determine whether we’d accurately surmised that the audio was AI-generated.

Hafiz Malik, a University of Michigan-Dearborn electrical and computer engineering professor who studies deepfakes, said his AI detection tool classified four of the videos as "synthetic," or AI-generated, audio. The fifth was flagged as "low confidence," which Malik said means some parts were labeled as "deepfake," while others weren’t.

Siwei Lyu, a University at Buffalo computer science and engineering professor who specializes in digital media forensics, said his AI detection algorithms also classified a majority of the five videos as using AI-generated audio.

TikTok videos make outrageous, false claims
Trump was a focus of these TikTok accounts; several of the accounts used photos of the former president as their profile photos.

(Screengrabs from TikTok)

These accounts followed a similar playbook: eye-catching headlines often displayed against red backgrounds; videos containing clips or still photos of famous figures, including Trump and Supreme Court Justice Clarence Thomas; and a disembodied narrator.

Some videos made false claims about the well-being of high-profile people — that Trump had a heart attack and New York Attorney General Letitia James was hospitalized for gunshot wounds. James sued Trump in 2022 accusing him of fraudulently inflating his net worth; a judge ruled in February that Trump must pay a $454 million penalty.

Another video displayed text reading, "Supreme Court Justice Clarence Thomas joins other justices to remove 2024 race candidate." This appears to misleadingly refer to the Supreme Court’s case on Trump’s ballot eligibility, in which Thomas and the other justices unanimously ruled that individual states cannot bar presidential candidates, including Trump, from the ballot.

(Screengrabs from TikTok)

All of these TikTok accounts were created within the past few months. The one that appeared the oldest had videos dating to November 2023; the newest began posting content March 1. Three of the accounts had TikTok’s default username of "user" followed by 13 numbers.

These accounts collectively posted hundreds of videos and garnered hundreds of thousands of views, likes and followers before TikTok removed the accounts and videos.

We also found videos making the same false claims as the TikTok videos on YouTube. These videos mimicked the TikTok videos’ format: sensational headlines, Trump photos, audio that sounded AI-generated.

(Screengrabs from YouTube)

Most of the YouTube videos were viewed hundreds or thousands of times. The two YouTube accounts that posted the videos were created in 2021 and 2022 and amassed tens of thousands of followers before we contacted YouTube for comment and the company removed the accounts from the platform.

Before YouTube removed the videos, they were reshared on other social media platforms, including Facebook and X, where very small numbers of people liked or viewed them. (Read more about our partnership with Meta, which owns Facebook and Instagram.)

A YouTube spokesperson did not provide comment for this story by our deadline.

How generative AI is contributing to more misinformation online

Misinformation experts say the newest generations of generative AI are making it easier for people to create and share misleading social media content. And AI-generated audio tends to be cheaper than its video counterparts.

"Now, anyone with access to the internet can have the power of thousands of writers, audio technicians and video producers, for free, and at the push of a button," said Jack Brewster, enterprise editor at NewsGuard, a company tracking online misinformation.

"In the right hands, that power can be used for good," Brewster said. "In the wrong hands, that power can be used to pollute our information ecosystem, destabilize democracies and undermine public trust in institutions."

NewsGuard reported in September 2023 that AI voice technology was being used to spread conspiracy theories en masse across TikTok. The report said NewsGuard "identified a network of 17 TikTok accounts using AI text-to-speech software to generate videos advancing false and unsubstantiated claims, with hundreds of millions of views."

A 2023 University of British Columbia study that used a dataset from TikTok found that AI text-to-speech technology simplified content creation, motivating content creators to produce more videos.

Study authors Xiaoke Zhang and Mi Zhou told PolitiFact that increased productivity means generative AI "can be deliberately exploited to generate misinformation at a low cost."

The technology can also help users conceal their identities, which can "diminish their sense of responsibility towards ensuring information accuracy," Zhang and Zhou said.

TikTok requires users to label content that contains AI-generated images, videos or audio "to help viewers contextualize the video and prevent the potential spread of misleading content." TikTok’s community guidelines bar "inaccurate, misleading, or false content that may cause significant harm to individuals or society."

No TikTok videos we reviewed had this generative AI label, although some included labels directing viewers to learn more about U.S. elections. Brewster said NewsGuard also observed many TikTok users bypassing this policy about identifying AI-generated content.

YouTube’s community guidelines don’t allow "misleading or deceptive content that poses a serious risk of egregious harm." YouTube requires disclosure for election advertising containing "digitally altered or generated materials." The company said in December that it plans to expand this generative AI disclosure to other content.

How to detect AI-generated audio
Experts say existing AI detection tools are imperfect. They add that as detection tools improve, so does generative AI technology.

AI-generated audio lacks the more obvious visual cues of AI-generated images or videos, such as mouth movements that aren’t synced to audio or distorted physical features.

However, there are ways people can identify AI-generated audio.

Malik, the University of Michigan-Dearborn professor, said to listen for abnormalities in vocal tone, articulation or pacing.

"(AI-generated voices) lack emotions. They lack the rise and fall in the audio that you typically have when you talk," Malik said. "They are pretty monotonic."
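Malik’s observation about flat, monotonic delivery can be turned into a rough heuristic: measure how much a voice’s pitch moves over time. The sketch below is a toy illustration, not any detection tool mentioned in this story; it uses synthetic tones in place of real speech, and the function names (`frame_pitch`, `pitch_variation`) are our own. A flat tone stands in for a "monotonic" synthetic voice, and a tone whose pitch swings between roughly 120 and 180 Hz stands in for the natural rise and fall of human speech.

```python
import numpy as np

def frame_pitch(frame, sr):
    """Crude pitch estimate for one frame via unnormalized autocorrelation."""
    frame = frame - frame.mean()
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = sr // 400, sr // 80  # search lags covering ~80-400 Hz (speech range)
    lag = lo + np.argmax(corr[lo:hi])
    return sr / lag

def pitch_variation(signal, sr, frame_len=2048, hop=1024):
    """Coefficient of variation of frame-level pitch; low values suggest monotone."""
    pitches = np.array([frame_pitch(signal[i:i + frame_len], sr)
                        for i in range(0, len(signal) - frame_len, hop)])
    return pitches.std() / pitches.mean()

sr = 16000
t = np.arange(2 * sr) / sr
# Flat 150 Hz tone: the "monotonic" case Malik describes.
monotone = np.sin(2 * np.pi * 150 * t)
# Pitch swinging 120-180 Hz at 0.5 Hz: the natural rise-and-fall case.
phase = 2 * np.pi * 150 * t - (30 / 0.5) * np.cos(2 * np.pi * 0.5 * t)
expressive = np.sin(phase)

print("monotone pitch variation:", round(pitch_variation(monotone, sr), 3))
print("expressive pitch variation:", round(pitch_variation(expressive, sr), 3))
```

The expressive tone scores markedly higher than the flat one. Real detectors such as those Malik and Lyu run are far more sophisticated, but the underlying intuition is the same: natural speech carries variation that flat synthetic narration often lacks.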

Brewster also advised that the "old tactics" are still the best way to avoid AI-generated misinformation. Those include cross-checking information with other sites; being attuned to grammatical errors and odd phrasing; and searching for the names of those who posted to see if they have shared false information in the past.

Sara Swann is a freelance journalist based in Washington, D.C.