AI deepfakes poised to wreak havoc in the 2024 presidential election: Experts


June 14, 2023 | 5:11

An onslaught of high-quality, AI-generated political deepfakes has already begun ahead of the 2024 presidential election, and Big Tech companies aren’t prepared for the chaos, experts told The Post.

The rise of generative AI platforms like ChatGPT and the image-focused Midjourney has made it easy to create fake or misleading posts, images and even videos, ranging from doctored footage of politicians giving controversial speeches to fabricated images and videos of events that never happened.

Egregious examples of AI-generated disinformation have already been circulating on the web, including a deepfake video of President Biden verbally attacking transgender people, fake images of former President Donald Trump resisting arrest, and viral photos of Pope Francis wearing a Balenciaga puffer jacket.

The result, according to experts, is uncharted territory for tech companies like Facebook, Twitter, Google-owned YouTube and TikTok, which are set to face an unprecedented surge of high-quality deepfake content from everyday Americans and nefarious foreign actors alike.

So far, the companies have provided few details about their plans to keep users safe.

According to Bradley Tusk, policy advisor and CEO of Tusk Venture Partners, the Silicon Valley giants are unwilling to confront election-related deepfakes because they have no incentive to address the issue.

Advances in generative AI have prompted a wave of deepfake images.
Twitter/Eliot Higgins

“In fact, the incentives are basically nullified: if someone creates a Trump or Biden deepfake that ends up going viral, that’s more engagement and eyes on that social media platform,” Tusk told The Post.

“Platforms have been unable and unwilling to prevent the spread of harmful human-generated content. This problem gets exponentially worse as generative AI proliferates,” he added.

Candidates have also started using generative AI. Last month, Trump shared a deepfake video depicting CNN anchor Anderson Cooper crudely declaring that the former president had just finished ripping the network a new one.

GOP presidential contender and Florida Gov. Ron DeSantis’ campaign team shared an ad featuring manipulated images depicting Trump embracing Dr. Anthony Fauci during the COVID-19 pandemic.

AI photos of Pope Francis dressed in a Balenciaga jacket fooled millions of users.

Misleading AI-generated posts from political campaigns are only part of the problem.

The bigger problem, according to many experts, is the likelihood that foreign adversaries and rogue elements will use generative AI to manipulate voters or otherwise influence the integrity of US elections.

In May, a likely AI-generated photo of a fake explosion at the Pentagon went viral on Twitter, where it was shared by the Kremlin-backed RT news outlet and prompted a brief sell-off in the stock market.

Rapid advances in generative AI mean misinformation could be far more prevalent than in recent elections, according to Dan Hendrycks, director of the Center for AI Safety, whose nonprofit recently organized a letter comparing the AI threat to nuclear weapons and pandemics.

A fake video showed President Biden ranting against transgender people.

“They were creating content without today’s AI systems,” Hendrycks said of foreign disinformation operations. “Imagine how much more effective they will be when they have AI to help them generate stories, rewrite them to be more persuasive, and tailor them to specific audiences.”

Some of the tech world’s top figures, including Elon Musk and OpenAI CEO Sam Altman, have pointed to AI-generated disinformation as one of the gravest risks posed by the burgeoning technology.

In May, Altman told a Senate hearing that he was nervous about AI disrupting elections and called it a major area of concern requiring federal regulation.

Other experts, including “godfather of AI” Geoffrey Hinton and Microsoft chief economist Michael Schwarz, have also publicly warned against bad actors using AI to manipulate voters during elections.

When reached for comment, a Google representative pointed to recent remarks by CEO Sundar Pichai, who touted the company’s investments in tools to detect and label synthetic content.

An AI-generated photo of a fake Pentagon explosion triggered a brief stock sell-off in May.

Last month, the company said it would begin tagging AI-generated images with identifying metadata and watermarks.

YouTube’s content policies prohibit posting content manipulated to mislead other users, and the platform removes offending posts using machine learning and human reviewers.

A TikTok spokesperson noted that the ByteDance-owned app launched a synthetic media policy earlier this year, which requires any AI-generated or otherwise manipulated content depicting a realistic scene to be clearly labeled.

“We are firmly committed to developing guardrails for the safe and transparent use of AI, which is why we announced a new synthetic media policy in March 2023,” the TikTok spokesperson said in a statement. “Like most of our industry, we continue to work with experts, monitor the advancement of this technology and evolve our approach.”

A Snapchat rep said the company regularly “evaluate[s] our policies to ensure our protections keep pace with evolving technologies, including AI.”

Some of the fake photos showed Trump resisting arrest.
Twitter/Eliot Higgins

Representatives from other major tech platforms, including Twitter, Meta and Microsoft, did not respond to requests for comment.

Aside from the unprecedented technical difficulty of battling AI-generated content, tech companies must walk a fine line between blocking disinformation and deepening censorship, according to Sheldon Jacobson, a public policy consultant and computer science professor at the University of Illinois at Urbana-Champaign.

Efforts to stop AI deepfakes could be interpreted as political bias against a particular party or candidate, Jacobson said.

Furthermore, tech companies have very little control over the actions of foreign adversaries who decide to misuse the technology for nefarious reasons.

“We’re not in China, where we’re trying to control things,” Jacobson said. “This is a free communication system, but there are risks with it and there will be misinformation being communicated. And now that you introduce generative AI, that’s a whole new level.”

Earlier this year, a series of AI-generated photos featuring Donald Trump circulated online.
Twitter/Eliot Higgins

With the election still more than a year away, Jacobson said tech leaders at major companies are likely rushing to develop a strategy to combat AI-generated deepfakes.

“I don’t think they are saying anything because they don’t know what they can do. That’s the problem,” he added.

According to Tusk, Big Tech companies will not take decisive action to stem the flow of AI-generated misinformation unless lawmakers repeal Section 230, the controversial provision that shields companies from liability for harmful content published on their platforms.

In May, the Supreme Court left Section 230 intact in a pair of cases considered the most significant challenges to the liability shield to date. However, lawmakers on both sides of the aisle are still calling for Section 230 to be changed or repealed.

“If the financial repercussions of doing nothing are large enough, platforms will actually take action and help prevent harmful content that negatively impacts our democracy,” Tusk said.
