Elections

Bill on AI political ads advances to House, raises First Amendment concerns


Most witnesses have supported the bill, but some have raised concerns about free speech and news outlets’ liability.

by Noah Diedrich

A bill that would require political candidates in Vermont to disclose their use of generative AI in campaign advertisements advanced to the House in late March. The proposal is moving forward, though not without questions about its effects on free speech and news media.

The bill, S.23, seeks to regulate the use of “synthetic media” — an image, video or audio recording that creates a realistic yet false representation of a candidate — in political ads during Vermont’s election cycles. Failure to comply would result in a fine based on the severity of the violation.

The measure now sits in the House Committee on Government Operations and Military Affairs, which has been hearing testimony from media experts and affected parties over the past few weeks. While most witnesses support the bill, some have questioned its potential ramifications for free speech.

Falko Schilling, advocacy director for the American Civil Liberties Union of Vermont, told legislators April 16 that S.23 as it stands would prohibit political speech and potentially violate the First Amendment. 

“Government-mandated disclosures on political speech are highly suspect under the First Amendment because they compel the speaker to engage in speech that they would otherwise like to avoid,” Schilling said. 

But that very aspect of S.23, the disclosure, earned plaudits from others testifying.

“The disclosure of synthetic media in elections is essential to protecting our voters from misinformation that could affect our elections,” said Quinn Houston, democracy associate at Vermont Public Interest Research Group. “This framework just demonstrates Vermont’s willingness to take a stand against harmful AI-generated misinformation.”

A substantial line of questioning has come from Vermont’s television industry, which nevertheless supports the bill. 

Dylan Zwicky, a partner at Leonine Public Affairs who testified on behalf of the New England Connectivity and Telecommunications Association, which represents cable companies, said the organization is happy with the bill as is but sees room for improvement. 

The group is concerned about legal liability for spreading synthetic media, since television companies can’t always distinguish AI-generated media from authentic content with absolute certainty, Zwicky said in an April 3 meeting.

“The onus should be on the political candidate’s party and not on the news station or print media that is going to be printing that political advertising,” Zwicky said. 

The bill includes exceptions for journalists reporting on suspected synthetic media, but the language differs slightly between broadcast and print news. The exceptions for print media refer to “commentary of general interest,” while the broadcast exceptions mention only news coverage.

Zwicky asked lawmakers to align the two sets of language.

“That would cover, for example, ‘Vermont This Week,’ where there may be a discussion about a political ad that’s running,” Zwicky said. “(The bill) would not want to limit the ability for those types of programs to discuss, analyze and comment on those advertisements that originated those ads.” 

Sean Sheehan, director of elections and campaign finance at Vermont’s Office of the Secretary of State, said he supports the bill because of its nonpartisan nature. He also likes that it opts for a disclosure requirement rather than banning the use of AI, which could prompt First Amendment concerns. 

The secretary of state’s office believes there’s value in AI for candidates, Sheehan said. The emerging technology could level the playing field for candidates with fewer resources, allowing them to be more efficient in their outreach or to streamline processes that would otherwise be labor-intensive.

“However, we know there’s room for abuse and ambiguity,” Sheehan said. “We feel it’s important that Vermonters have trust in elections, that voters must have notice when election materials use AI. That will promote transparency during an election cycle.”

Schilling from the ACLU said a disclosure requirement leaves the bill vulnerable to being struck down in court unless it meets two criteria: the ads must be paid communications in mass media, and the disclosure must relate to the identity and source of the ad, not its content.

In its current form, S.23 doesn’t meet those two criteria, Schilling said. 

Some terms in the bill were far too vague for Schilling, who said the definitions fail to pin down generative AI as the sole culprit in harming election integrity. The ambiguity of a term like “digital media” casts too wide a net and risks sweeping tools like Adobe Photoshop or Lightroom under its definition, he said.

In Schilling’s eyes, political speech receives broad leeway because regulation that chills speech carries serious potential fallout. Such regulation could be weaponized against candidates, he said.

“Sometimes that is in the eye of the beholder,” he said. “By putting more legal means to chill the speech of another candidate or a candidate you disagree with, that could have implications on their ability to fully communicate with the electorate.”

Via Community News Service, a University of Vermont journalism internship



4 replies

  1. I haven’t given this much thought and I’m not closing my ears to the other side, but my first take on this bill is a simple one. How is it an infringement on free speech if all the bill asks is that a candidate tell potential voters whether they are using generative AI in a campaign advertisement? What’s wrong with demanding honesty and transparency in the contents of a campaign ad? I would be suspicious of anyone voting against this.

  2. I suppose that someone might find a way to hack into it and ruin a candidate.

  3. AI still can’t speak names or some words right, but yes, it can be used to mess with folks too. This shows how lazy some have gotten.

  4. If AI or other methods materially misrepresent an opponent’s statements, that may rise to the level of a tort, which is not protected under the First Amendment.