Concerns about “deepfakes”—digitally altered videos and audio clips that mimic real people—are growing on both sides of the political spectrum ahead of the 2024 election.
The spread of disinformation online has already undermined public trust in recent elections, and many fear that artificial intelligence (AI) could make the problem worse.
Earlier this election season, 20,000 New Hampshire residents received a robocall impersonating President Joe Biden, urging them to skip the state’s primary. More recently, Elon Musk faced criticism for sharing an ad that used voice-cloning technology to mimic Vice President Kamala Harris.
In response to these concerns, a bipartisan bill to regulate AI-generated deepfake political ads was introduced in the U.S. Senate last fall. The Federal Election Commission has also proposed regulations for these ads on TV and radio, and at least 39 states have enacted or are considering similar legislation.
Should we worry?
According to research from Washington University in St. Louis, the answer depends on whether deepfakes are actually more convincing than other types of disinformation.
The study found that deepfakes can indeed deceive a significant share of the American public: more than 40% of a representative sample believed the fake scandals they depicted. However, deepfakes proved no more convincing than false information spread through text or audio.
The researchers also found that while deepfake videos do lead to more negative attitudes toward the targeted individual, their impact is similar to other forms of fake news and to negative campaign ads, which have been around for decades.
No superpowers
“Overall, our research shows that deepfake videos don’t have a unique ability to deceive voters or change their views of politicians,” the researchers explain. “Instead, people’s reactions are often driven by partisan biases—individuals are more likely to dismiss a scandal if it harms their own party, regardless of the evidence.”
The study involved two experiments conducted in fall 2020 with a nationally representative sample of 5,724 respondents.
In the first experiment, participants were shown a social media newsfeed mixing real stories about candidates in the 2020 Democratic primary with several types of fake content, including deepfake videos, audio clips, and text-based disinformation about candidate Elizabeth Warren. Deepfake videos deceived 42% of viewers, a rate comparable to fake audio (44%) and fake text (42%).
Deepfake videos did increase negative views of Warren, but only slightly more than the other types of fake content. Interestingly, the deepfakes were no more provocative than traditional campaign attack ads.
Under the influence
The study also found that older adults were more likely to be influenced by fake news, though they were just as good at identifying deepfakes as younger people. Meanwhile, people with higher political knowledge didn’t perform any better at detecting fake content.
In a second experiment, the same group was asked to spot deepfakes among a series of real and fake news videos, with some participants receiving media literacy training beforehand. Those with high digital literacy were significantly better at identifying deepfakes, and more sensational deepfakes were easier to detect. For instance, a deepfake of Hillary Clinton making a routine statement was recognized as fake only 21% of the time, while a deepfake of Donald Trump announcing his resignation was correctly identified by 89% of respondents.
“Our research suggests that as a deepfake becomes more controversial, its believability decreases, making it less likely to cause a political scandal,” the authors say.
An interesting finding was that people often misjudged real news as fake when it clashed with their partisan expectations, whether it portrayed their own party negatively or the opposing party favorably. For example, only 20% of Democrats accepted real footage of Obama discussing a post-election deal with the Russian president as authentic, compared with 50% of Republicans. Similarly, 58% of Democrats mistakenly believed a real clip of Trump urging caution during the COVID-19 pandemic was fake, while 81% of Republicans recognized it as genuine.
“Partisan biases heavily influence how we judge political news, and this applies to deepfakes as well,” the authors conclude.
“In summary, deepfakes aren’t uniquely deceptive, but their presence could undermine trust in real media. Our research also shows that people often mistake real news for fake, especially when it portrays their party negatively.”