Supreme Court vacates rulings on Texas and Florida social media laws

The Supreme Court of the United States in Washington, DC, in May 2023. Credit: Getty Images | NurPhoto

The US Supreme Court has avoided making a final decision on challenges to the Texas and Florida social media laws, but the majority opinion written by Justice Elena Kagan criticized the Texas law and made it clear that content moderation is protected by the First Amendment.

The Texas law “is unlikely to withstand First Amendment scrutiny,” the Supreme Court majority wrote. “Texas has thus far justified the law as necessary to balance the mix of speech on Facebook’s News Feed and similar platforms; and the record reflects that Texas officials passed it because they thought those feeds skewed against politically conservative voices. But this Court has many times held, in many contexts, that it is no job for government to decide what counts as the right balance of private expression—to ‘un-bias’ what it thinks biased, rather than to leave such judgments to speakers and their audiences. That principle works for social-media platforms as it does for others.”

A Big Tech lobby group that challenged the state laws said it was pleased by the ruling. “In a complex series of opinions that were unanimous in the outcome, but divided 6-3 in their reasoning, the Court sent the cases back to lower courts, making clear that a State may not interfere with private actors’ speech,” the Computer & Communications Industry Association said.

Today’s Supreme Court ruling vacated decisions by two courts. The US Court of Appeals for the 5th Circuit previously upheld the Texas state law that prohibits large social media companies from moderating posts based on a user’s “viewpoint.” By contrast, the US Court of Appeals for the 11th Circuit blocked a Florida law that prohibits large social media sites from banning politicians and requires platforms to “apply censorship, deplatforming, and shadow banning standards in a consistent manner among its users on the platform.”

Lower courts failed to do full analysis

The Supreme Court said it remanded the cases to the appeals courts because the courts didn’t do a full analysis of the laws’ effects. “Today, we vacate both decisions for reasons separate from the First Amendment merits, because neither Court of Appeals properly considered the facial nature of [tech industry lobby group] NetChoice’s challenge,” the court majority wrote.

Justices found that the lower courts focused too much on the biggest platforms, like Facebook and YouTube, without considering the wider effects of the laws. The majority wrote:

The courts mainly addressed what the parties had focused on. And the parties mainly argued these cases as if the laws applied only to the curated feeds offered by the largest and most paradigmatic social-media platforms—as if, say, each case presented an as-applied challenge brought by Facebook protesting its loss of control over the content of its News Feed. But argument in this Court revealed that the laws might apply to, and differently affect, other kinds of websites and apps. In a facial challenge, that could well matter, even when the challenge is brought under the First Amendment.

The courts need to examine ways in which the laws might affect “how an email provider like Gmail filters incoming messages, how an online marketplace like Etsy displays customer reviews, how a payment service like Venmo manages friends’ financial exchanges, or how a ride-sharing service like Uber runs,” justices wrote.


Kagan: Florida social media law seems like “classic First Amendment violation”


The US Supreme Court today heard oral arguments on Florida and Texas state laws that impose limits on how social media companies can moderate user-generated content.

The Florida law prohibits large social media sites like Facebook and Twitter (aka X) from banning politicians and says they must “apply censorship, deplatforming, and shadow banning standards in a consistent manner among its users on the platform.” The Texas statute prohibits large social media companies from moderating posts based on a user’s “viewpoint.” The laws were supported by Republican officials from 20 other states.

The tech industry says both laws violate the companies’ First Amendment right to use editorial discretion in deciding what kinds of user-generated content to allow on their platforms and how to present that content. The Supreme Court will decide whether the laws can be enforced while the industry lawsuits against Florida and Texas continue in lower courts.

How the Supreme Court rules at this stage in these two cases could give one side or the other a big advantage in the ongoing litigation. Paul Clement, a lawyer for Big Tech trade group NetChoice, today urged justices to reject the idea that content moderation conducted by private companies is censorship.

“I really do think that censorship is only something that the government can do to you,” Clement said. “And if it’s not the government, you really shouldn’t label it ‘censorship.’ It’s just a category mistake.”

Companies use editorial discretion to make websites useful for users and advertisers, he said, arguing that content moderation is an expressive activity protected by the First Amendment.

Justice Kagan talks anti-vaxxers, insurrectionists

Henry Whitaker, Florida’s solicitor general, said that social media platforms marketed themselves as neutral forums for free speech but now claim to be “editors of their users’ speech, rather like a newspaper.”

“They contend that they possess a broad First Amendment right to censor anything they host on their sites, even when doing so contradicts their own representations to consumers,” he said. Social media platforms should not be allowed to censor speech any more than phone companies are allowed to, he argued.

Contending that social networks don’t really act as editors, he said that “it is a strange kind of editor that does not actually look at the material” before it is posted. He also said that “upwards of 99 percent of what goes on the platforms is basically passed through without review.”

Justice Elena Kagan replied, “But that 1 percent seems to have gotten some people extremely angry.” Describing the platforms’ moderation practices, she said the 1 percent of content that is moderated is “like, ‘we don’t want anti-vaxxers on our site or we don’t want insurrectionists on our site.’ I mean, that’s what motivated these laws, isn’t it? And that’s what’s getting people upset about them is that other people have different views about what it means to provide misinformation as to voting and things like that.”

Later, Kagan said, “I’m taking as a given that YouTube or Facebook or whatever has expressive views. There are particular kinds of expression defined by content that they don’t want anywhere near their site.”

Pointing to moderation of hate speech, bullying, and misinformation about voting and public health, Kagan asked, “Why isn’t that a classic First Amendment violation for the state to come in and say, ‘we’re not going to allow you to enforce those sorts of restrictions?'”

Whitaker urged Kagan to “look at the objective activity being regulated, namely censoring and deplatforming, and ask whether that expresses a message. Because they [the social networks] host so much content, an objective observer is not going to readily attribute any particular piece of content that appears on their site to some decision to either refrain from or to censor or deplatform.”

Thomas: Who speaks when an algorithm moderates?

Justice Clarence Thomas expressed doubts about whether content moderation conveys an editorial message. “Tell me again what the expressive conduct is that, for example, YouTube engages in when it or Twitter deplatforms someone. What is the expressive conduct and to whom is it being communicated?” Thomas asked.

Clement said the platforms “are sending a message to that person and to their broader audience that that material” isn’t allowed. As a result, users are “not going to see material that violates the terms of use. They’re not going to see a bunch of material that glorifies terrorism. They’re not going to see a bunch of material that glorifies suicide,” Clement said.

Thomas asked who is doing the “speaking” when an algorithm performs content moderation, particularly when “it’s a deep-learning algorithm which teaches itself and has very little human intervention.”

“So who’s speaking then, the algorithm or the person?” Thomas asked.

Clement said that Facebook and YouTube are “speaking, because they’re the ones that are using these devices to run their editorial discretion across these massive volumes.” The need to use algorithms to automate moderation demonstrates “the volume of material on these sites, which just shows you the volume of editorial discretion,” he said.
