In the lead-up to the New Hampshire presidential primary, an AI-generated robocall that sounded eerily like President Joe Biden urged voters to stay home. “It’s important to save your vote for the November election. Voting this Tuesday only helps Republicans in their quest to elect Donald Trump again,” the Biden sound-alike said.

Three days later, on Jan. 24, 12 Georgia state senators filed Senate Bill 392, which would make it a felony to create a deepfake image, video, or audio recording with the intent of influencing an election, punishable by one to five years in prison and a fine of up to $50,000.

Georgia’s bill alarms some critics, who warn that it could infringe on people’s First Amendment rights. “Applying felony charges to the publication of online content sets an unnerving and excessive precedent,” Sarah Hunt-Blackwell of the American Civil Liberties Union (ACLU) of Georgia testified to the state Senate Judiciary Committee in a hearing last month.

Defining Deepfakes

Deepfakes have emerged as a tool of digital deception, using artificial intelligence (AI) to craft fabricated video, audio, or photo content that blurs the line between reality and fiction. They are widely seen as a major problem in this year’s election cycle. On Feb. 16, a group of 20 tech companies, including Microsoft, Meta, Google, and Amazon, announced a joint commitment to combat election misinformation generated by AI.

Georgia is far from the only state attempting to regulate deepfakes. As the 2024 election cycle gains momentum, state legislatures are scrambling to respond to the challenges posed by AI-generated deepfakes in political campaigns. As of Feb. 7, 407 AI-related bills were pending in more than 40 state legislatures, up from 67 bills a year earlier, Axios reported, including bills in 33 states targeting election-related AI material.

These bills tend to fall into two categories: bans and disclosure requirements. The disclosure bills would require campaigns to attach a disclaimer to deepfakes that reads, for example: “This (image/video/audio) has been manipulated or generated by artificial intelligence.”

Georgia’s SB 392 calls for a ban. It would criminalize, within 90 days of an election, the creation or dissemination of any manipulated video, sound recording, electronic image, or photograph that falsely depicts a real person engaged in actions or speech that never occurred, with the intent of deceiving electors. The bill explicitly labels election-related deepfakes “election interference,” which it would make a felony. Six Republican state representatives filed a similar bill in the Georgia House, House Bill 986, which has since been rerouted from that chamber’s Governmental Affairs committee to its Technology and Infrastructure Innovation committee.

“This is real-time. This is going to happen in this election cycle as we have never seen it before,” said state Sen. John Albers, R-Roswell, the Senate bill’s sponsor. “If something’s not illegal, you better believe people on that moral and ethical edge are going to use that to their perverted advantage.”

A First Amendment Concern?

But Clare Norins, a law professor at the University of Georgia who directs its First Amendment Clinic, told Atlanta Civic Circle that Senate Bill 392 is a well-intentioned bill that “presents real First Amendment concerns.”

“I think everyone agrees that election interference is a legitimate concern. But at the same time, we have First Amendment rights that have to be respected, and the current draft of this bill really does not do that,” said Norins. 

One flaw, she said, is that the bill overlooks the First Amendment’s protection of the right to speak and publish false information. That protection extends to a news program that replays a deepfake as part of reporting on it. “Under SB 392, as it currently stands, there’s no carve-out for the media.”

The bill also does not address political commentary or satire. If, for instance, “Saturday Night Live” made a deepfake of Biden or former President Donald Trump during the election year, couldn’t that be spun as intended to influence an election?

“You can see how this might seriously chill speech that is protected,” said Norins. “People may be afraid to create art, comedy, or commentary that could get challenged, and tech companies could be afraid to allow such content on their platforms.”

The Georgia Senate has read the deepfakes bill and referred it to committee, but no vote had been held as of Feb. 20.


Join the Conversation


  1. There is no First Amendment concern, as was dishonestly stated in the article. The exact example proposed (an SNL skit) already falls under protected speech guarded by existing case law (the “satire exemption”), and on top of that, Saturday Night Live has enormous resources that would guard it against any such issue. More explicitly, the First Amendment’s limits are ALSO clear, and not all false information is protected. The tired example of “screaming ‘fire!’ in a crowded building” has expanded over time to include metaphorically similar actions, which deepfakes are a part of.

    Make it a crime. Make it a crime. Make. It. A. Crime. The only people wanting to use these are people looking to explicitly and intentionally defraud, mislead, confuse, and disable what little working community we have. There’s no good reason to protect that.
