With the generative artificial intelligence (AI) industry booming, the 2024 election cycle is shaping up to be a watershed moment for technology’s role in political campaigning.
The proliferation of generative AI, which can produce text, images and video, raises concerns about the spread of disinformation and about how voters will react to artificially generated content in a politically polarized environment.
Already the presidential campaigns for former President Trump and Florida Governor Ron DeSantis (R) have produced high-profile videos with artificial intelligence.
Paul Barrett, deputy director of the NYU Stern Center for Business and Human Rights, said the proliferation of publicly available AI systems, awareness of how simple they are to use, and “the erosion of the sense that creating things like deepfakes is something good, honest people would never do” will make 2024 a “significant inflection point” for how AI is being used in campaigns.
“I think now, more and more, there’s an attitude that, ‘Well, it’s just the way it goes, you can’t tell what’s true anymore,'” Barrett said.
The use of AI-generated campaign videos is already becoming more normalized in the Republican primaries.
After DeSantis announced his campaign during a Twitter Spaces conversation with company CEO Elon Musk, Trump posted a deepfake video on Truth Social parodying the announcement. A deepfake is a digital representation created by artificial intelligence that fabricates realistic-looking images and sounds. Donald Trump Jr. posted a deepfake video of DeSantis edited into a scene from the TV show ‘The Office,’ and the former president shared AI-generated images of DeSantis.
Last week, the DeSantis campaign ran an ad that used apparently AI-generated images of Trump hugging Anthony Fauci, the former director of the National Institute of Allergy and Infectious Diseases.
“If I had pitched this 10 years ago, I think people would have said, ‘That’s crazy, it’s going to backfire,'” Barrett said. “But today it happens as if it were normal.”
Critics noted that the DeSantis campaign’s use of the generated image of Trump and Fauci was deceptive because the ad does not disclose its use of AI technology.
“Using AI to create a threatening background or strange imagery is categorically no different than what advertising has long been,” said Robert Weissman, chairman of the progressive consumer rights advocacy group Public Citizen. “It does not involve any deception of the voters.”
“[The DeSantis ad] is fundamentally deceptive,” he said. “That’s the big concern: that voters are being led to believe things that are not true.”
A person familiar with DeSantis’ operation noted that the governor’s presidential campaign is not the only one using AI in video.
“This wasn’t an announcement, it was a social media post,” the person with knowledge of the operation said. “If the Trump team is upset about this, I would ask them why they have continuously posted fake images and fake talking points to vilify the governor.”
While AI advocates acknowledge the technology’s risks, they argue that it will ultimately play a consequential role in election campaigning.
“I believe there will be new tools that will simplify content creation and distribution, and likely tools that will help with data-intensive tasks like understanding voter sentiment,” said Mike Nellis, founder and CEO of progressive agency Authentic.
Nellis partnered with Higher Ground Labs to create Quiller.ai, an AI tool that writes and sends campaign fundraising emails.
“At the end of the day, Quiller will help us write better content faster,” Nellis told The Hill. “What happens in a lot of campaigns is that they hire young people, teach them how to write fundraising emails and then ask them to write hundreds more, and that’s not sustainable. Tools like Quiller take us to a better place and improve the efficiency of our campaigns.”
As generative AI text and video become more common, and harder to identify as the generated content grows more plausible, there is also concern that voters will become skeptical of all content, AI-generated or not.
Sarah Kreps, director of the Cornell Tech Policy Institute, said people may begin to “assume nothing is true” or “just believe their biased signals.”
“None of these are really useful for democracy. If you don’t believe anything, the whole pillar of trust that we really have for democracy is eroded,” Kreps said.
ChatGPT, OpenAI’s AI-powered chatbot, has seen exponential growth in usage since its launch in November, alongside rival products such as Google’s Bard chatbot and various image- and video-generation tools. These products have the administration and Congress scrambling to decide how to approach the industry while remaining competitive on a global scale.
But as Congress ponders regulation, between scheduled Senate briefings and a series of hearings, industry has largely been left to create the rules of the road. On the campaign front, the rise of AI-generated content is amplifying already prevalent concerns about election disinformation spreading across social media.
Meta, the parent company of Facebook and Instagram, published a blog post in January 2020 saying it would remove “misleading manipulated media” that meet certain criteria, including content that is the “product of artificial intelligence or machine learning” and that “merges, replaces or superimposes content onto a video, making it appear to be authentic.”
Ultimately, though, Barrett said the burden of deciphering what is AI-generated or not will fall on voters.
“This type of material will be released, even if limited in some way; it’ll probably be out in the world for a while before it’s restricted or labeled, and people need to be wary,” he said.
Others point out that it is still too difficult to predict how AI will be integrated into campaigns and other organizations.
“I think the real story is that new technologies should be integrated into the business at a deliberate and careful pace, and the inappropriate/near-immoral uses are the ones that are going to get all the attention in the first inning, but it’s a long game, and most of the useful and productive integrations will evolve more slowly and are unlikely to even be noticed,” said Nick Everhart, a Republican policy adviser in Ohio and president of Content Creative Media.
Weissman noted that Public Citizen has asked the Federal Election Commission to issue a rule, to the extent of its authority, banning the use of deceptive deepfakes.
“We think the agency has authority with respect to candidates but not political or other committees,” Weissman said. “That would be nice, but it’s not enough.”
However, it’s unclear how quickly campaigns will adopt AI technology this cycle.
“A lot of people are saying this is going to be the election of AI,” Nellis said. “I’m not entirely sure that’s true. Smart and innovative campaigns will embrace AI, but many campaigns are often slow to adopt new and emerging technologies. I think 2026 will be the real election of AI.”
Copyright 2023 Nexstar Media Inc. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.