Morning Overview

AI-generated campaign ads are spreading, raising new election concerns

Federal regulators chose to clarify old rules rather than write new ones when it came to artificial intelligence in political advertising, leaving voters to sort real from fake as AI-generated campaign content spreads across U.S. elections. The Federal Election Commission decided in September 2024 not to open a formal rulemaking on AI in campaign ads, instead adopting a narrow interpretive rule that applies existing fraud and misrepresentation standards to AI-produced material. That decision, more than a year after a public petition drew thousands of comments urging stronger action, has sharpened a debate over whether current safeguards can keep pace with rapidly improving synthetic media tools.

The FEC’s Long Road to a Narrow Rule

The push for federal action began when the advocacy group Public Citizen petitioned the FEC on July 13, 2023, asking the commission to regulate deceptive AI-generated content in political ads. The petition triggered a public comment period that attracted over 2,000 responses, a sign of broad concern about the technology’s potential to mislead voters.

Yet the commission’s eventual response was restrained. On September 19, 2024, the FEC voted not to launch a full rulemaking process. Instead, it adopted an interpretive rule stating that existing prohibitions on fraudulent misrepresentation in campaign communications already cover AI-generated content. The commission’s formal disposition of the petition confirmed this approach, effectively telling campaigns that no new disclosure mandate would be coming from the federal level. For voters, this means no federal requirement forces campaigns to label ads as AI-generated, even when synthetic voices or images are used to depict real candidates.

Supporters of the narrow approach argue that the FEC is bound by its statute and that stretching existing law too far could invite legal challenges or partisan deadlock. They note that the commission already has authority to punish candidates and committees that falsely claim to speak on behalf of an opponent, whether the deception is executed through traditional editing or sophisticated AI tools. Critics counter that the current rule focuses on who is speaking, not what is being shown, leaving a loophole for deepfake videos and audio that do not explicitly misrepresent their sponsor but still fabricate words or actions.

The FEC’s interpretive rule also places heavy weight on enforcement after the fact. Complaints must be filed, evidence gathered, and votes taken before any sanction is imposed. In a fast-moving campaign, a viral AI-generated clip can reach millions before regulators determine whether it violates misrepresentation standards. That lag raises questions about whether deterrence alone can meaningfully curb abuses when the underlying technology continues to get cheaper, faster, and more accessible.

States Step In Where Federal Action Falls Short

The gap left by the FEC has pushed state legislatures to act on their own. California passed AB 2355 during the 2023–2024 legislative session, requiring certain political advertisements to carry a clear label reading “Ad generated or substantially altered using artificial intelligence.” The law represents one of the most direct state-level attempts to give voters a signal when they are watching or hearing content shaped by AI tools.

But California’s approach has limits. Enforcement depends on identifying violations after ads have already circulated, and campaigns that operate across state lines face a patchwork of rules rather than a single standard. A candidate running digital ads in multiple states might need to comply with California’s disclosure requirement in one market while facing no comparable obligation in another. This inconsistency creates practical confusion for campaigns and, more critically, for voters trying to evaluate what they see online. The result is a regulatory environment where the strongest protections depend entirely on geography.

Other states are watching closely, weighing whether to emulate California’s disclosure model or pursue stricter rules that might ban certain forms of deepfakes altogether. State election officials also face resource constraints: monitoring thousands of online ads, many micro-targeted to narrow audiences, is a daunting task. Without shared technical tools or federal coordination, the states that move first may find themselves struggling to enforce the standards they have set.

AI Robocalls Forced Faster Federal Action

While the FEC deliberated for over a year, a different federal agency moved faster after a concrete incident forced its hand. In January 2024, voters in New Hampshire received robocalls featuring a synthetic voice designed to sound like a sitting president, urging them not to vote in the state’s primary. The calls demonstrated how cheaply and convincingly AI voice-cloning tools could be weaponized to suppress turnout.

The Federal Communications Commission responded on February 8, 2024, by declaring AI-generated voices in robocalls illegal under the Telephone Consumer Protection Act. The FCC’s action was narrower than a broad ban on AI in political messaging; it targeted automated telephone calls specifically. Still, it marked the first time a federal agency drew a clear enforcement line around one category of AI-generated election content. The speed of the FCC’s response, compared to the FEC’s slower timeline, highlights how regulators tend to act only after a high-profile abuse makes inaction politically untenable.

The robocall episode also underscored how existing laws can sometimes be adapted quickly to new technologies. By treating AI-cloned voices as a prohibited kind of prerecorded message, the FCC avoided a lengthy rulemaking fight over new definitions. But that approach leaves many other channels (social media videos, manipulated images, synthetic radio spots) governed by older rules that may not anticipate the ease with which AI can fabricate convincing political speech.

Disinformation Without Borders

The challenge extends well beyond U.S. borders. Reporting from the Associated Press in March 2024 documented how artificial intelligence was supercharging election disinformation worldwide, making it easy for almost anyone with basic technical skills to produce convincing fake audio, video, and images of political figures. The investigation described AI-driven disinformation as one of the world’s biggest short-term threats to democratic processes.

What makes this threat different from older forms of political manipulation is the cost curve. Producing a convincing deepfake video or voice clone no longer requires a well-funded state actor or a sophisticated propaganda operation. Open-source AI models have driven the price of generating synthetic media close to zero, meaning local campaigns, independent groups, and foreign actors all have access to the same tools. That democratization of deception capacity is what distinguishes the current moment from earlier cycles of election misinformation.

Campaign strategists are already experimenting with these capabilities. In one account, local races have begun to feature AI-generated ads crafted by consultants with backgrounds in the tech industry, blurring the line between traditional messaging and algorithmically tailored persuasion. As tools become more accessible, the risk grows that low-level contests, far from national scrutiny, will serve as testing grounds for aggressive synthetic media tactics.

Awareness Campaigns and Their Limits

Some organizations have tried to get ahead of the problem through public education. In October 2023, a nonprofit group announced plans for an ad campaign that would use AI-generated misinformation as a teaching tool, showing voters what deceptive content looks like so they could better recognize it in the wild. “Our democracy is at risk if people do not understand the AI basics and how it is being used in this campaign,” said Gonzales, a leader of the effort, in comments reported by Politico.

The strategy relies on the idea that informed voters can develop a kind of media literacy shield, learning to question too-perfect audio, scrutinize suspicious visuals, and look for corroboration before sharing politically charged clips. Civic groups have paired these efforts with basic guidance on checking official election information at sites like USA.gov, which aggregates links to state and local authorities, and on using public tools such as the FEC’s campaign finance API to verify who is funding political messages.
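For readers curious what that kind of verification looks like in practice, the sketch below shows one way a researcher might query the FEC's public campaign finance API from Python. The endpoint path, query parameters, and the DEMO_KEY placeholder are assumptions based on the API's public documentation at api.open.fec.gov, not details drawn from this reporting; check the current documentation before relying on the results.

```python
# Minimal sketch: look up committees named in a political ad's "paid for by"
# disclaimer via the FEC's public campaign finance API. Endpoint names and
# parameters are assumptions based on the API's public documentation
# (api.open.fec.gov); verify them against the current docs before use.
import requests

BASE_URL = "https://api.open.fec.gov/v1"
API_KEY = "DEMO_KEY"  # placeholder; replace with a personal key from api.data.gov

def search_committees(name_fragment: str, limit: int = 5) -> list[dict]:
    """Return basic records for committees whose names match name_fragment."""
    resp = requests.get(
        f"{BASE_URL}/committees/",
        params={"q": name_fragment, "per_page": limit, "api_key": API_KEY},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("results", [])

if __name__ == "__main__":
    # Example: see which registered committees match the sponsor name shown
    # in an ad, and what type of committee each one is.
    for committee in search_committees("Example Committee"):
        print(committee.get("committee_id"), committee.get("name"),
              committee.get("committee_type_full"))
```

The DEMO_KEY placeholder is rate-limited, so any sustained use would need a free personal key issued through api.data.gov; the point of the sketch is simply that sponsor records behind a political message are a public, machine-readable resource voters and researchers can consult.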

Education alone, however, has clear limits. Not every voter will see or remember public-awareness campaigns, and even media-savvy audiences can be fooled by highly realistic deepfakes, especially when they confirm existing partisan beliefs. The sheer volume of content circulating in an election season makes it unrealistic to expect individuals to fact-check every clip or recording they encounter. That reality has fueled calls for stronger platform policies, more robust labeling standards, and, for some advocates, renewed federal rulemaking that goes beyond the FEC’s narrow interpretive step.

For now, the regulatory landscape around AI in political advertising remains fragmented and reactive. The FEC has signaled that it will treat the most egregious deceptions as violations of long-standing misrepresentation rules, but it has stopped short of imposing broad disclosure mandates or technology-specific bans. States like California are experimenting with their own solutions, even as federal agencies such as the FCC carve out targeted prohibitions in response to headline-grabbing abuses. Against that backdrop, voters are entering another election cycle in which the line between authentic and synthetic political speech is increasingly difficult to see, let alone regulate, before it shapes public opinion.


*This article was researched with the help of AI, with human editors creating the final content.