November 2024

Deepfake Crackdowns Threaten Free Speech


Brent Skorup

AI-generated media today is astonishingly high in quality, producing images and audio that are nearly indistinguishable from reality; video isn't far behind. But with this progress comes a new wave of legal and ethical battles.


Lawmakers are alarmed by deepfakes—synthetic media that mimic reality—fearing their potential to destroy reputations, especially in high-stakes election campaigns. Yet some of the new state deepfake laws raise serious First Amendment concerns.

While “political misinformation” has become a focus of the Democratic Party in the past few years, Republicans also object to AI-assisted media deployed opportunistically to harm their candidates’ reputations. Deepfake fears have sparked rare bipartisan action, with nearly one-third of states passing laws to regulate their use during elections.

Most laws targeting deepfakes stick to civil penalties, but at least two states, Texas and Minnesota, go further, criminalizing synthetic media intended to “influence an election.” Texas’ law resembles a criminal defamation statute, and violations can mean a year in jail.


Minnesota’s law is even harsher: simply “disseminating” a deepfake—resharing on social media might suffice—could land repeat offenders in prison for up to five years. Further, a government official or nominee found guilty of disseminating a deepfake can be removed from office.

From vague terms (“deepfake,” “disseminate”) to harsh criminal penalties, these laws clash with First Amendment protections, especially since they fail to exempt parodies or satire.

Fortunately, in September, a state appellate court declared Texas’ law facially unconstitutional. Regarding the overbreadth of the Texas law, the state court said, “Given that influencing elections is the essence of political speech, it is difficult to imagine what speech would not be included under the statute.”

But even the state laws with civil liability have many of the same problems. It’s worth examining California’s new deepfake law, AB 2839, which bans the distribution of altered political media that could mislead a “reasonable person,” provided it’s done “with malice.” The law sweeps broadly to include popular political content. California Governor Newsom has made clear, for instance, that prohibited media include commonplace memes and edited media.

California’s law requires the creators of parodies or satire to label their media as such. There are carve-outs for broadcasters and newspapers but no express carve-outs for social media companies. Social media firms “distribute” political memes and deepfakes, so it appears they could be liable for damages.

A controversial and shocking twist in AB 2839 is its “bounty hunter” provision, allowing any “recipient of materially deceptive content” to sue “the person, committee, or other entity” that distributed the content. The prevailing party can also recover attorneys’ fees, so this law creates a potential litigation frenzy over digital content.

The California law essentially invites millions of social media users to sue people who create political memes and edited videos. Even someone just sharing a post on social media could be liable because “distribution” is left undefined.

Like the Minnesota and Texas laws, the California law has serious First Amendment problems. It’s apparently designed to function as a prior restraint on online political media. As one nonprofit official who helped draft the law told TechCrunch:


The real goal is actually neither the damages nor the injunctive relief. It’s just to have people not do it in the first place. That actually would be the best outcome…to just have these deepfakes not fraudulently impact our elections.


AB 2839 was signed and went into effect in September. Christopher Kohls, the conservative meme creator whose edited satirical video was singled out by Governor Newsom, sued to block the law. In early October, a federal judge enjoined enforcement of almost all of the meme “bounty hunter” law in Kohls v. Bonta.

Some of these laws may survive, particularly if they only require clear and simple disclosures. The Minnesota and Texas laws, however, still raise serious First Amendment concerns because they criminalize election-related content.

In the words of a federal judge, these deepfake laws often act “as a hammer instead of a scalpel,” chilling far too much speech.