Clarence Thomas Learned Nothing From The Mess He Helped Create Regarding Section 230, Blogs Ignorantly About 230 Yet Again

Have we considered giving Supreme Court justices their own blogs in which they can vent their ill-informed brain farts, rather than leaving them to use official Supreme Court order lists as a form of blog?

Justice Clarence Thomas has been the absolute worst on this front, using various denials of certiorari on other topics to add in a bunch of anti-free speech, anti-Section 230 commentary, on topics he clearly does not understand.

Thomas started this weird practice of Order List blogging in 2019, when he used the denial of cert on a defamation case to muse unbidden on why we should get rid of the (incredibly important) actual malice standard for defamation cases involving public figures.

Over the last few years, however, his main focus on these Order List brain farts has been to attack Section 230, each time demonstrating the many ways he doesn’t understand Section 230 or how it works (and showing why justices probably shouldn’t be musing randomly on culture war topics on which they haven’t actually been briefed by any parties).

He started his Section 230 campaign in 2020, when he again chose to write his unbidden musings after the court decided not to hear a case that touched on Section 230. At that point, it became clear that he was doing this as a form of “please send me a case in which I can try to convince my fellow Justices to greatly limit the power of Section 230.”

Not having gotten what he wanted, he did it again in 2021, in a case that really didn’t touch on Section 230 at all, but where he started musing that maybe Section 230 itself was unconstitutional and violated the First Amendment.

He did it again a year later, citing his own previous blog posts.

Finally, later that year, the Supreme Court actually took on two cases that seemed to offer exactly what Thomas was asking for: the Gonzalez and Taamneh cases targeted internet companies over terrorist attacks, based on claims that the terrorists made use of those websites and that the sites could therefore be held civilly liable, at least in part, for the attacks.

When those cases were finally heard, it became pretty obvious pretty damn quickly how ridiculous the premise was, and that the Supreme Court Justices seemed to regret the decision to even hear the cases. Indeed, when the rulings finally came out, it was something of a surprise that the main ruling, in Taamneh, was written by Thomas himself, explaining why the entire premise of suing tech companies for unrelated terrorist attacks made no sense, but refusing to address specifically the Section 230 issue.

However, as we noted at the time, Thomas’ ruling in Taamneh reads like a pretty clear support for Section 230 (or at least a law like Section 230) to quickly kick out cases this stupid and misdirected. I mean, in Taamneh, he wrote (wisely):

The mere creation of those platforms, however, is not culpable. To be sure, it might be that bad actors like ISIS are able to use platforms like defendants’ for illegal—and sometimes terrible—ends. But the same could be said of cell phones, email, or the internet generally. Yet, we generally do not think that internet or cell service providers incur culpability merely for providing their services to the public writ large. Nor do we think that such providers would normally be described as aiding and abetting, for example, illegal drug deals brokered over cell phones—even if the provider’s conference-call or video-call features made the sale easier.

And, I mean, that’s exactly why we have Section 230: to get cases that try to turn these kinds of tenuous accusations into legal claims tossed out quickly.

But, it appears that Thomas has forgotten all of that. He’s forgotten how his own ruling in Taamneh explains why intermediary liability protections (of which 230 is the gold standard) are so important. And he’s forgotten how his lust for a “let’s kill Section 230” case resulted in the Court taking the utterly ridiculous Taamneh case in the first place.

So, now that the Court has rejected another absolutely ridiculous case, Thomas is blogging yet again about how bad 230 is and how he wishes the Court would hear a case that lets him strike it down.

This time, the case is Doe v. Snap, and it is beyond stupid. It may be even stupider than the Taamneh case. Eric Goldman had a brief description of the issues in this case:

A high school teacher allegedly used Snapchat to groom a sophomore student for a sexual relationship. (Atypically, the teacher was female and the victim was male, but the genders are irrelevant to this incident).

The teacher was sentenced to ten years in jail, so the legal system has already held the wrongdoer accountable. Nevertheless, the plaintiff has pursued additional defendants, including the school district (that lawsuit failed) and Snap.

In a new post, Goldman makes even clearer just how stupid this case is:

We should be precise about Snap’s role in this tragedy. The teacher and student exchanged private messages on Snap. Snap typically is not legally entitled to read or monitor the contents of those messages. Thus, any case predicated on the message contents runs squarely into Snap’s limitations to know those contents. To get around this, the plaintiff said that Snap should have found a way to keep the teacher and student from connecting on Snap. But these users already knew each other offline; it’s not like some stranger-to-stranger connection. Further, Snap can keep these individuals from connecting on its network only if it engages in invasive user authentication, like age authentication (to segregate minors from adults). However, the First Amendment has said for decades that services cannot be legally compelled to do age authentication online. The plaintiff also claimed Snapchat’s “ephemeral” message functionality is a flawed design, but the Constitution doesn’t permit legislatures to force messaging services to maintain private messages indefinitely. Indeed, Snapchat’s ephemerality enhances socially important privacy considerations. In other words, this case doesn’t succeed however it’s framed: either it’s based on message contents Snap can’t read, or it’s based on site design choices that aren’t subject to review due to the Constitution.

See? It’s just as stupid as the Taamneh case, if not more so. It’s yet another “Steve Dallas” lawsuit, in which civil lawsuits are filed against large companies that are only tangentially related to the issues at play, solely because they have deep pockets.

The procedural posture of this case is also bizarre. The lower courts also recognized it was a dumb case, sorta. The district court rejected the case on Section 230 grounds. The Fifth Circuit affirmed that decision but (bizarrely) suggested the plaintiff seek en banc review from the full contingent of Fifth Circuit judges. That happened, and while the Fifth Circuit refused to hear the case en banc, seven of the fifteen judges (just under half) wrote a “dissent,” citing Justice Thomas’s unbriefed musings and suggesting Section 230 should be destroyed.

Justice Thomas clearly noticed that. While the Supreme Court has now (thankfully) rejected the cert petition, Thomas has used the opportunity to renew his grievances regarding Section 230.

It’s as wrong and incoherent as his past musings, but somehow even worse, given what we had hoped he’d learned from the Taamneh mess. On top of that, it has a new bit of nuttery, which we’ll get to eventually.

First, he offers an account of what he believes happened that is far more generous to the plaintiff:

When petitioner John Doe was 15 years old, his science teacher groomed him for a sexual relationship. The abuse was exposed after Doe overdosed on prescription drugs provided by the teacher. The teacher initially seduced Doe by sending him explicit content on Snapchat, a social-media platform built around the feature of ephemeral, self-deleting messages. Snapchat is popular among teenagers. And, because messages sent on the platform are self-deleting, it is popular among sexual predators as well. Doe sued Snapchat for, among other things, negligent design under Texas law. He alleged that the platform’s design encourages minors to lie about their age to access the platform, and enables adults to prey upon them through the self-deleting message feature. See Pet. for Cert. 14–15. The courts below concluded that §230 of the Communications Decency Act of 1996 bars Doe’s claims.

Again, given his ruling in Taamneh, where he explicitly noted how silly it was to blame the tool for its misuse, you’d think he’d be aware that he’s literally describing the same scenario. Though, in this case it’s even worse, because as Goldman points out, Snap is prohibited by law from monitoring the private communications here.

Thomas then goes on to claim that there’s some sort of groundswell for reviewing Section 230… by pointing to each of his previous unasked-for, unbriefed musings as proof:

Notwithstanding the statute’s narrow focus, lower courts have interpreted §230 to “confer sweeping immunity” for a platform’s own actions. Malwarebytes, Inc. v. Enigma Software Group USA, LLC, 592 U. S. ___, ___ (2020) (statement of THOMAS, J., respecting denial of certiorari) (slip op., at 1). Courts have “extended §230 to protect companies from a broad array of traditional product-defect claims.” Id., at ___–___ (slip op., at 8–9) (collecting examples). Even when platforms have allegedly engaged in egregious, intentional acts—such as “deliberately structur[ing]” a website “to facilitate illegal human trafficking”—platforms have successfully wielded §230 as a shield against suit. Id., at ___ (slip op., at 8); see Doe v. Facebook, 595 U. S. ___, ___ (2022) (statement of THOMAS, J., respecting denial of certiorari) (slip op., at 2).

And it’s not like he’s forgotten the mess with Taamneh/Gonzalez, because he mentions it here, but somehow it doesn’t ever occur to him that this is the same sort of situation, or that his ruling in Taamneh is a perfect encapsulation of why 230 is so important. Instead, he bemoans that the Court didn’t have a chance to even get to the 230 issues in that case:

The question whether §230 immunizes platforms for their own conduct warrants the Court’s review. In fact, just last Term, the Court granted certiorari to consider whether and how §230 applied to claims that Google had violated the Antiterrorism Act by recommending ISIS videos to YouTube users. See Gonzalez v. Google LLC, 598 U. S. 617, 621 (2023). We were unable to reach §230’s scope, however, because the plaintiffs’ claims would have failed on the merits regardless. See id., at 622 (citing Twitter, Inc. v. Taamneh, 598 U. S. 471 (2023)). This petition presented the Court with an opportunity to do what it could not in Gonzalez and squarely address §230’s scope.

Except no. If the Taamneh/Gonzalez cases didn’t let you get to the 230 issue because the cases “would have failed on the merits regardless,” the same is doubly true here, where there is no earthly reason why Snap should be held liable.

Then, hilariously, Thomas whines that SCOTUS is taking too long to address this issue with which he is infatuated, even though all his begging has accomplished so far is getting really, really dumb cases sent to the Court:

Although the Court denies certiorari today, there will be other opportunities in the future. But, make no mistake about it—there is danger in delay. Social-media platforms have increasingly used §230 as a get-out-of-jail free card.

And that takes us to the “new bit of nuttery” I mentioned above. Thomas picks up on a point that Justice Gorsuch raised during oral arguments in the NetChoice cases, one I’ve now seen being pushed by grifters and nonsense peddlers: specifically, that the posture NetChoice took in fighting state content moderation laws is in conflict with the arguments made by companies relying on Section 230.

Here, we’ll let Thomas explain his argument before picking it apart to show just how wrong it is, and how this demonstrates the risks of unbriefed musings by an ideological and outcomes-motivated Justice.

Many platforms claim that users’ content is their own First Amendment speech. Because platforms organize users’ content into newsfeeds or other compilations, the argument goes, platforms engage in constitutionally protected speech. See Moody v. NetChoice, 603 U. S. ___, ___ (2024). When it comes time for platforms to be held accountable for their websites, however, they argue the opposite. Platforms claim that since they are not speakers under §230, they cannot be subject to any suit implicating users’ content, even if the suit revolves around the platform’s alleged misconduct. See Doe, 595 U. S., at ___–___ (statement of THOMAS, J.) (slip op., at 1–2). In the platforms’ world, they are fully responsible for their websites when it results in constitutional protections, but the moment that responsibility could lead to liability, they can disclaim any obligations and enjoy greater protections from suit than nearly any other industry. The Court should consider if this state of affairs is what §230 demands.

So, the short answer is, yes, this is exactly the state of affairs that Section 230 demands, and the authors of Section 230, Chris Cox and Ron Wyden, have said so repeatedly.

Where Thomas gets tripped up is in misunderstanding whose speech we’re talking about in which scenario. Section 230 is quite clear that sites cannot be held liable for the violative nature of third-party expression (i.e., the content created by users). But the argument in Moody was about the editorial discretion of social media companies to express themselves through the choices they make about what content they allow.

Two different things in two different scenarios. The platforms are not “arguing the opposite.” They are being specific and explicit where Thomas is being sloppy and confused.

Section 230 means no liability for third-party uses of the tool (which you’d think Thomas would understand, given his opinion in Taamneh). But Moody isn’t about liability for third-party content. It’s about whether the sites have the right to determine which content they will host and which they won’t, and whether those choices (not the underlying content) are themselves expressive. The Court answered (correctly) that they are.

But that doesn’t change the simple fact that the sites still should not be liable for any tort violation created by a user.

Thomas is right, certainly, that more such cases will be sent to the Supreme Court, given all the begging he’s been doing for them.

But he would be wise to actually learn a lesson or two from what happened with Taamneh and Gonzalez, and maybe recognize (1) that he shouldn’t spout off on topics that haven’t been fully briefed, (2) that there’s a reason why particularly stupid cases like this one and Taamneh are the ones that reach the Supreme Court, and (3) that what he said in Taamneh actually explains why Section 230 is so necessary.

And then we can start to work on why he’s conflating two different types of expression in trying to attack the (correct) position of the platforms with regard to their own editorial discretion and 230 protections.