Explainer: How AI and Deepfakes Are Impacting the 2024 Election


An increasing number of voters fear the way artificial intelligence (AI) could influence the U.S. presidential election this November, a new AI & Politics survey found. 

Nearly three-quarters of respondents said they fear the impact of political deepfakes: highly realistic videos, photos, or audio recordings that have been manipulated with generative AI technology to promote false narratives, incite civil unrest, influence public perception, or suppress voter turnout.

There have already been several incidents this year of voters being exposed to political deepfakes. In January, a fake robocall impersonating the voice of President Joe Biden encouraged New Hampshire Democrats to skip the primary so they could “save” their vote for November.

Former President Donald Trump has also reposted several deepfakes on his social media accounts to support his campaign, including a fake photo of Vice President Kamala Harris speaking at a communist rally in Chicago and a series of fake images implying Taylor Swift endorsed him for president.

The U.S. Department of Homeland Security warned state and local officials in May about the increasing threat of voter manipulation, as AI technology has advanced rapidly since the 2020 election. Cybercriminals now need fewer than three seconds of audio to clone someone’s voice with generative AI, and freely available AI tools make it easy for anyone to quickly produce convincing fake text, images, and audio.

Two-thirds of survey respondents said they were not confident that voters can identify AI-generated content, and research supports this concern. A 2021 study found that most people overestimated their ability to distinguish deepfake videos from real ones. A 2018 study found that the more often people are exposed to fake content, the more likely they are to remember it as real.

Researchers are also concerned about how AI can impact smaller, local elections. An AI-generated video of Utah Gov. Spencer Cox circulated on social media in June, falsely showing him admitting to fraudulently gathering signatures in the gubernatorial race. Several prominent state figures reposted this video.

Deepfakes could also impersonate a neighborhood political organizer, or infiltrate Listservs (electronic mailing lists) in certain cities to target minority communities with AI-generated text messages. For example, someone could scrape data about local polling locations and send messages falsely telling voters in that area that their polling place has changed.

Tim Harper, a senior policy analyst at the Center for Democracy and Technology, told the Washington State Standard that this strategy could be highly effective, since “people are less familiar with the idea of getting targeted disinformation directly sent to them.”

How can I protect myself from being manipulated by AI?

The best way to combat misinformation is to verify any claims with known reputable sources, including fact-checking services and government websites. CanIVote.org is an official website for U.S. voting information. Google’s reverse image search can sometimes be used to track the original source of a photo.

“Always try to find the original source of the material,” advises Anya Williams of the Poynter Institute for Media Studies. “Use a keyword search and do some lateral reading [compare information from multiple sources] to find out if the video has been altered before you believe what you’re watching.”

Don’t trust information seen only on social media, even if it appears to come from a legitimate source. Search tools and algorithms have been known to perpetuate misinformation. For example, several secretaries of state wrote a letter to Elon Musk in August urging him to improve the AI search assistant on X (formerly Twitter) after it falsely informed millions of users that Vice President Harris was ineligible to appear on the presidential ballot in nine states.

Outside politics, what are some other AI-related threats I should watch out for?

Phone scams and financial fraud are more widespread than ever. Never click on links from unrecognized emails or phone numbers, and avoid answering texts or phone calls from people you don’t know.

If you answer a call that turns out to be spam, hang up immediately; scammers will try to extract as much information from you as possible. If you get a suspicious-sounding phone call from someone you know, confirm that person’s identity by asking about the last time you spoke.

Cybercriminals can also spoof familiar numbers to leave voicemails or send texts. Even if you recognize the phone number a message comes from, call that person directly to verify any information before responding.