
Court Sees Through California’s ‘Protect The Children’ Ruse, Strikes Down Kids Code


Friday morning gave us a nice victory for free speech in the 9th Circuit, where the appeals court panel affirmed most of the district court’s ruling finding California’s “Age Appropriate Design Code” unconstitutional because it regulates speech.

There’s a fair bit of background here that’s worth going over, so bear with me. California’s Age Appropriate Design Code advanced through the California legislature somewhat quietly, with little opposition. Many of the bigger companies, like Meta and Google, were said to support it, mainly because they knew that, with buildings full of lawyers, they could easily comply, whereas smaller competitors would be screwed.

Indeed, for a period of time it felt like only Professor Eric Goldman and I were screaming about the problems with the law. The law was drafted in part by a British Baroness and Hollywood movie director who fell hard for the moral panic that the internet and mobile phones are obviously evil for kids. Despite the lack of actual evidence supporting this, she has been pushing for laws in the UK and America to suppress speech she finds harmful to kids.

In the US, some of us pointed out how this violates the First Amendment. I also pointed out that the law is literally impossible for smaller sites like Techdirt to comply with.

The Baroness and the California legislators (who seem oddly deferential to her) tried to get around the obvious First Amendment issues by insisting that the bill was about conduct and design and not about speech. But as we pointed out, that was obviously a smokescreen. The only way to truly comply with the law was to suppress speech that politicians might later deem harmful to children.

California Governor Gavin Newsom eagerly signed the bill into law, wanting to get some headlines about how he was “protecting the children.” When NetChoice challenged the law, Newsom sent them a very threatening letter, demanding they drop the lawsuit. Thankfully, they did not, and the court saw through the ruse and found the entire bill unconstitutional for the exact reasons we had warned the California government about.

The judge recognized that the bill required the removal of speech, despite California’s claim that it was about conduct and privacy. California (of course) appealed, and now we have the 9th Circuit’s ruling, which mostly (though not entirely) agrees with the district court.

The real wildcard in all of this was the Supreme Court’s decision last month in what is now called the Moody case, which also involved NetChoice challenging Florida’s and Texas’ social media laws. The Supreme Court said that those cases should be litigated differently, as “facial challenges” rather than “as-applied challenges” to the laws. And it seems that decision is shaking up a bunch of these cases.

But here, the 9th Circuit interpreted it to mean that it could send part of the case back down to the lower court to do a more thorough analysis on some parts of the AADC that weren’t as clearly discussed or considered. In a “facial challenge,” the courts are supposed to consider all aspects of the law, and whether or not they all violate the Constitution, or if some of them are salvageable.

On the key point, though, the 9th Circuit panel rightly found that the AADC violates the First Amendment. Because no matter how much California claims that it’s about conduct, design, or privacy, everyone knows it’s really about regulating speech.

Specifically, they call out the DPIA requirement. This is a major portion of the law, requiring certain online businesses to create and file a “Data Protection Impact Assessment” with the California Attorney General. As part of that DPIA, you have to explain how you plan to “mitigate the risk” that “potentially harmful content” will reach children (defined as anyone from age 0 to 18).

And we’d have to do that for every “feature” on the website. Might a high school student read Techdirt’s comments and come across something the AG finds harmful? I’d first need to explain our plans to “mitigate” that risk. That sure sounds like a push for censorship.

And the Court agrees this is a problem. First, it’s a problem because of the compelled speech part of it:

We agree with NetChoice that the DPIA report requirement, codified at §§ 1798.99.31(a)(1)–(2) of the California Civil Code, triggers review under the First Amendment. First, the DPIA report requirement clearly compels speech by requiring covered businesses to opine on potential harm to children. It is well-established that the First Amendment protects “the right to refrain from speaking at all.”

California argued that because the DPIA reports are not public, it’s not compelled speech, but the Court (rightly) says that’s… not a thing:

The State makes much of the fact that the DPIA reports are not public documents and retain their confidential and privileged status even after being disclosed to the State, but the State provides no authority to explain why that fact would render the First Amendment wholly inapplicable to the requirement that businesses create them in the first place. On the contrary, the Supreme Court has recognized the First Amendment may apply even when the compelled speech need only be disclosed to the government. See Ams. for Prosperity Found. v. Bonta, 594 U.S. 595, 616 (2021). Accordingly, the district court did not err in concluding that the DPIA report requirement triggers First Amendment scrutiny because it compels protected speech.

More importantly, though, the Court recognizes that the entire underlying purpose of the DPIA system is to encourage websites to remove First Amendment-protected content:

Second, the DPIA report requirement invites First Amendment scrutiny because it deputizes covered businesses into serving as censors for the State. The Supreme Court has previously applied First Amendment scrutiny to laws that deputize private actors into determining whether material is suitable for kids. See Interstate Cir., Inc. v. City of Dallas, 390 U.S. 676, 678, 684 (1968) (recognizing that a film exhibitor’s First Amendment rights were implicated by a law requiring it to inform the government whether films were “suitable” for children). Moreover, the Supreme Court recently affirmed “that laws curtailing [] editorial choices [by online platforms] must meet the First Amendment’s requirements.” Moody, 144 S. Ct. at 2393.

The state’s argument that this analysis is unrelated to the underlying content is easily dismissed:

At oral argument, the State suggested companies could analyze the risk that children would be exposed to harmful or potentially harmful material without opining on what material is potentially harmful to children. However, a business cannot assess the likelihood that a child will be exposed to harmful or potentially harmful materials on its platform without first determining what constitutes harmful or potentially harmful material. To take the State’s own example, data profiling may cause a student who conducts research for a school project about eating disorders to see additional content about eating disorders. Unless the business assesses whether that additional content is “harmful or potentially harmful” to children (and thus opines on what sort of eating disorder content is harmful), it cannot determine whether that additional content poses a “risk of material detriment to children” under the CAADCA. Nor can a business take steps to “mitigate” the risk that children will view harmful or potentially harmful content if it has not identified what content should be blocked.

Accordingly, the district court was correct to conclude that the CAADCA’s DPIA report requirement regulates the speech of covered businesses and thus triggers review under the First Amendment.

I’ll note that this is an issue that is coming up in lots of other laws as well. For example, KOSA has defenders who insist that it is only focused on design, and not content. But at the same time, it talks about preventing harms around eating disorders, which is fundamentally a content issue, not a design issue.

The Court says that the DPIA requirement triggers strict scrutiny. The district court had analyzed it under intermediate scrutiny (a lower bar), found that it failed even that, and concluded it would therefore also fail strict scrutiny. The appeals court basically says we can jump straight to strict scrutiny:

Accordingly, the court assumed for the purposes of the preliminary injunction “that only the lesser standard of intermediate scrutiny for commercial speech applies” because the outcome of the analysis would be the same under both intermediate commercial speech scrutiny and strict scrutiny. Id. at 947–48. While we understand the district court’s caution against prejudicing the merits of the case at the preliminary injunction stage, there is no question that strict scrutiny, as opposed to mere commercial speech scrutiny, governs our review of the DPIA report requirement.

And, of course, the DPIA requirement fails strict scrutiny, in part because it’s obviously not the least speech-restrictive means of accomplishing its goals:

The State could have easily employed less restrictive means to accomplish its protective goals, such as by (1) incentivizing companies to offer voluntary content filters or application blockers, (2) educating children and parents on the importance of using such tools, and (3) relying on existing criminal laws that prohibit related unlawful conduct.

In this section, the court also responds to the overhyped fears that finding the DPIAs unconstitutional here would mean that they are similarly unconstitutional in other laws, such as California’s privacy law. But the court says “um, guys, one of these is about speech, and one is not.”

Tellingly, the State compares the CAADCA’s DPIA report requirement with a supposedly “similar DPIA requirement” found in the CCPA, and proceeds to argue that the district court’s striking down of the DPIA report requirement in the CAADCA necessarily threatens the same requirement in the CCPA. But a plain reading of the relevant provisions of both laws reveals that they are not the same; indeed, they are vastly different in kind.

Under the CCPA, businesses that buy, receive, sell, or share the personal information of 10,000,000 or more consumers in a calendar year are required to disclose various metrics, including but not limited to the number of requests to delete, to correct, and to know consumers’ personal information, as well as the number of requests from consumers to opt out of the sale and sharing of their information. Cal. Code Regs. tit. 11, § 7102(a); see Cal Civ. Code § 1798.185(a)(15)(B) (requiring businesses to conduct regular risk assessments regarding how they process “sensitive personal information”). That obligation to collect, retain, and disclose purely factual information about the number of privacy-related requests is a far cry from the CAADCA’s vague and onerous requirement that covered businesses opine on whether their services risk “material detriment to children” with a particular focus on whether they may result in children witnessing harmful or potentially harmful content online. A DPIA report requirement that compels businesses to measure and disclose to the government certain types of risks potentially created by their services might not create a problem. The problem here is that the risk that businesses must measure and disclose to the government is the risk that children will be exposed to disfavored speech online.

Then, the 9th Circuit basically gives up on the other parts of the AADC. The court effectively says that since the briefing was so focused on the DPIA part of the law, and now (thanks to the Moody ruling) a facial challenge requires a full exploration of all aspects of the law, the rest should be sent back to the lower court:

As in Moody, the record needs further development to allow the district court to determine “the full range of activities the law[] cover[s].” Moody, 144 S. Ct. at 2397. But even for the remaining provision that is likely to trigger First Amendment scrutiny in every application because the plain language of the provision compels speech by covered businesses, see Cal. Civ. Code § 1798.99.31(a)(7), we cannot say, on this record, that a substantial majority of its applications are likely to fail First Amendment scrutiny.

For example, the Court notes that there’s a part of the law dealing with “dark patterns,” but there’s not enough information to know whether or not that could impact speech (spoiler alert: it absolutely can and will).

Still, the main news here is this: the law is still not going into effect. The Court recognizes that the DPIA part of the law is pretty clearly an unconstitutional violation of the First Amendment (just as some of us warned Newsom and the California legislature).

Maybe California should pay attention next time (he says sarcastically as a bunch of new bad bills are about to make their way to Newsom’s desk).