Legislation

Vermont mandates AI labels on election deepfakes—but research shows they might backfire

A nearly identical law in California has already been struck down by a federal court as unconstitutional, and the federal government is actively working to preempt state-level AI rules.

by Compass Vermont

Vermont has joined a growing number of states trying to get ahead of artificial intelligence in elections. Senate Bill 23, which passed both chambers with a Committee of Conference report adopted on February 11, 2026, requires anyone who distributes AI-manipulated images, audio, or video in political campaigns to slap a disclosure label on it.

The idea is straightforward: if a political ad uses AI to make a candidate appear to say or do something they didn’t, voters should know about it.

But the law arrives at a complicated moment. A nearly identical law in California has already been struck down by a federal court as unconstitutional. Academic research suggests the disclaimers the bill requires may actually backfire. And the federal government is actively working to preempt state-level AI rules. Here’s what Vermonters need to know.

How Vermont Got Here

The Legislature’s first attempt at AI regulation was far more ambitious. House Bill 710, introduced in January 2024, tried to regulate AI developers and deployers broadly, imposing duties of “reasonable care” across sectors including credit, criminal justice, education, employment, and healthcare. That bill died in the House Committee on Commerce and Economic Development without a vote.

The failure of H.710 led to a narrower approach. Rather than trying to regulate how AI systems are built, lawmakers zeroed in on what AI produces — specifically, when that output is used to influence voters. That pivot produced S.23.

A third bill, House Bill 846, introduced on January 30, 2026, would go further by requiring “high-traffic online platforms” to proactively block deceptive AI content during election windows. That bill was referred to the House Government Operations Committee and has not advanced.

What S.23 Actually Requires

The bill targets what it calls “deceptive and fraudulent synthetic media” — AI-created or AI-manipulated images, audio, or video of a real person that would appear realistic to a reasonable viewer and is distributed with the intent to injure a candidate’s reputation, influence an election, or deceive a voter.

Under the Conference Committee report, anyone distributing such media within 90 days of an election must include the statement: “This media has been created or intentionally manipulated by digital technology or artificial intelligence.”

The formatting rules are specific. For images and video, the text must be large enough for an average viewer to read, and in video it must appear for the full duration of the clip. For audio recordings, the disclosure must be clearly spoken aloud. If the audio runs longer than two minutes, the disclosure must be repeated at the beginning and end.

A knowing violation carries a fine of up to $1,000. If the violation is committed with intent to cause violence or bodily harm, the fine rises to $5,000. Repeat offenses or violations involving high-value ads can bring fines of $10,000 to $15,000. Candidates whose likenesses are misrepresented can also go to court to seek injunctions stopping further distribution.

The bill does include an exception for content clearly labeled as satire or parody.

Who Testified and What They Said

The House Committee on Government Operations and Military Affairs, chaired by Representative Matthew Birong, heard from a range of witnesses during the bill’s development.

Ilana Beller of Public Citizen told lawmakers that deepfakes like an AI-manipulated likeness of Vice President Kamala Harris showed how easily synthetic media could be weaponized in campaigns. She noted that 21 states had already passed similar legislation by early 2025.

Quinn Houston of the Vermont Public Interest Research Group argued the disclosure model was a necessary safeguard, drawing a parallel to Vermont’s 2024 law criminalizing non-consensual sexually explicit deepfakes.

The most operationally significant pushback came from the Vermont Association of Broadcasters. Executive Director Wendy Mays explained that under the Federal Communications Commission’s “No Censorship Rule,” broadcast stations are prohibited from modifying or editing political ads submitted by candidates. A Vermont law requiring stations to add a disclaimer that federal law forbids them from adding would put broadcasters in an impossible position.

The Legislature addressed this by including exemptions for broadcasters and cable operators acting as “neutral conduits” for candidate-sponsored content.

The ACLU of Vermont, represented by Advocacy Director Falko Schilling, raised concerns about vague definitions of “deceptive” media potentially chilling legitimate political expression. The organization has broadly urged lawmakers to adopt a risk-based approach that balances technology regulation against potential harm to civil liberties and marginalized communities.

Other witnesses included Deputy Secretary of State Lauren Hibbert, Assistant Attorney General Leslie Welts, John Kidder of Norwich University’s Applied Research Institutes, and Adam Kuckuk of the National Conference of State Legislatures.

The California Problem

Vermont’s law closely resembles California’s Assembly Bill 2839, signed into law in September 2024. That law lasted about two weeks before a federal judge blocked it — and it was permanently struck down in August 2025.

The case arose when content creator Christopher Kohls, who had produced an AI-generated video featuring a “voice swap” of Vice President Kamala Harris that he had clearly labeled as parody, sued on the day the law was signed.

Senior U.S. District Judge John A. Mendez found the California law failed on multiple constitutional grounds. Because it was a content-based restriction on political speech, it triggered strict scrutiny — the highest level of judicial review. The court found the law was neither narrowly tailored nor the least restrictive means available.

The judge emphasized what’s known as the “counter-speech doctrine”: the principle that the remedy for offensive or misleading political speech is more speech — fact-checking, public debate, rebuttal — not government censorship.

The court also found the law’s language around “materially deceptive” content to be unconstitutionally vague. And it rejected California’s argument that the law merely required disclaimers rather than banning speech, finding that compelled disclaimers would effectively kill satirical and parodic expression.

Vermont’s S.23 and California’s AB 2839 share core features: mandatory disclaimers, a “reasonable person” standard for what counts as deceptive, and exceptions for labeled satire. Vermont’s election window is shorter — 90 days before an election compared to California’s 120 days before and 60 days after — and its enforcement mechanism relies on candidate-sought injunctions and Attorney General fines rather than private lawsuits. But the fundamental legal architecture is similar enough that S.23 is likely to face a comparable constitutional challenge.

Research Says Disclaimers May Not Work — and Could Backfire

Separate from the legal questions, there is a growing body of research raising doubts about whether AI disclaimers accomplish what lawmakers intend.

The NYU Center on Technology Policy conducted experiments in 2024 and 2025 testing the impact of AI labels on political ads. The findings were not encouraging for the disclosure model.

AI labels hurt candidates who used generative AI, regardless of whether the content was actually deceptive. Respondents rated candidates as less trustworthy and less appealing when an ad carried a disclaimer. The damage was most pronounced among a candidate’s own party members — the voters who had previously been supportive became more skeptical. Labels had almost no impact on how people viewed opposing candidates, whose ratings were already low.

The research also found that many viewers simply don’t notice the labels. Effectiveness depended heavily on placement, wording, and design. A disclaimer at the beginning of an ad increased trust more than one placed at the end. Unless a label specifically mentioned “Generative AI,” many viewers assumed the content was made using standard video editing. And poorly implemented labels could decrease trust in the broader information system, fostering a cynicism where voters assume all content is manipulated.

In short, the research suggests disclaimers may primarily penalize candidates who follow the rules and use modern digital tools, while doing little to stop bad actors willing to ignore the law entirely.

Real-World Cases That Drove the Debate

Several national incidents have illustrated why lawmakers feel urgency around this issue.

In January 2026, the official White House social media account posted a digitally altered image of civil rights attorney Nekima Levy Armstrong following her arrest at a Minnesota church protest. The original photo showed her with a neutral expression; the altered version depicted her appearing to sob. The Guardian confirmed through overlay analysis that law enforcement agents and a badge visible in the background were in identical positions in both images, establishing it was the same photo with a manipulated facial expression.

In February 2026, an AI-generated video posted on Truth Social portrayed Barack and Michelle Obama as apes. The White House initially dismissed the outcry, calling it a “meme.” The incident illustrated the difficulty of enforcing S.23’s “knowingly” standard in a high-speed social media environment where responsibility can be deflected to staffers.

Federal Barriers Ahead

Even setting aside the constitutional questions, Vermont’s law faces federal headwinds.

A White House executive order titled “Ensuring a National Policy Framework for Artificial Intelligence” directs federal agencies to challenge state laws that “obstruct innovation.” It creates an AI Litigation Task Force and signals that the FCC and FTC will issue policy statements to preempt state laws requiring what the order characterizes as “onerous” disclosures. Vermont’s S.23, with its specific font-size and duration requirements for disclaimers on digital media, is a potential target.

On the technical side, witnesses including Norwich University’s John Kidder testified that identifying the origin of a deepfake remains an ongoing challenge. Proposed solutions like invisible watermarking can be bypassed by sophisticated actors, making enforcement of any AI disclosure law inherently difficult.

What Happens Next

S.23 was adopted by both chambers on February 11, 2026. It now goes to the Governor for signature or veto. If signed, the disclosure requirements would take effect for the next election cycle.

A legal challenge appears likely. The law’s reliance on compelled disclaimers and a “reasonable person” standard for deception — the same mechanisms that doomed California’s nearly identical law — makes it vulnerable to a First Amendment lawsuit on similar grounds.

Meanwhile, H.846, the broader bill that would require online platforms to proactively block deceptive AI content, remains in committee. If it advances, it would significantly expand the state’s regulatory reach — and almost certainly face its own legal battles.

At the federal level, the executive order directing agencies to scrutinize state AI regulations creates additional uncertainty. Vermont may find itself defending S.23 not only against private lawsuits but against federal preemption challenges as well.

For Vermonters, the practical question remains open: Can a small state’s disclosure law meaningfully protect voters from AI-generated deception in a borderless digital environment? Or does the law’s real value lie in signaling that the problem matters — even if the solution is, for now, incomplete?

