First Amendment

Facing ban next month, TikTok begs SCOTUS for help

TikTok: Ban is slippery slope to broad US censorship

According to TikTok, the government’s argument that the ban is needed to prevent China from wielding a “covert” influence over Americans is a farce invented by lawyers to cover up the true mission of censorship. If the lower court’s verdict stands, TikTok alleged, “then Congress will have free rein to ban any American from speaking simply by identifying some risk that the speech is influenced by a foreign entity.”

TikTok doesn’t want to post big disclaimers on the app warning of “covert” influence, claiming that the government relied on “secret evidence” to prove this influence occurs on TikTok. But if the Supreme Court agrees that the government needed to show more than “bare factual assertions” to back the national security claims that the lower court said justified any potential speech restrictions, TikTok suggested, then the court will likely also agree to reverse the lower court’s decision.

By January 6, it should be much clearer whether the January 19 ban will take effect; if it does, TikTok would shut down, booting all US users from the app. TikTok urged the Supreme Court to agree that it is in the public interest to delay the ban and review the constitutional claims to prevent any “extreme” harms to both TikTok and the US users who depend on the app for news, community, and income.

If SCOTUS doesn’t intervene, TikTok said that the lower court’s “flawed legal rationales would open the door to upholding content-based speech bans in contexts far different than this one.”

“Fearmongering about national security cannot obscure the threat that the Act itself poses to all Americans,” TikTok alleged, suggesting that even Congress would agree that a “modest delay” in enforcing the law wouldn’t pose any immediate risk to US national security. Congress is also aware that a sale would not be technically, commercially, or legally possible in the timeframe provided, TikTok said. A temporary injunction would prevent irreparable harm, TikTok said, including the harm that courts have long held results from restricting Americans’ speech for any length of time.

“An interim injunction is also appropriate because it will give the incoming Administration time to determine its position, as the President-elect and his advisors have voiced support for saving TikTok,” TikTok argued.

Ars could not immediately reach TikTok for comment.

Judge slams Florida for censoring political ad: “It’s the First Amendment, stupid”


Florida threatened TV stations over ad that criticized state’s abortion law.

Screenshot of political advertisement featuring a woman describing her experience having an abortion after being diagnosed with brain cancer. Credit: Floridians Protecting Freedom

US District Judge Mark Walker had a blunt message for the Florida surgeon general in an order halting the government official’s attempt to censor a political ad that opposes restrictions on abortion.

“To keep it simple for the State of Florida: it’s the First Amendment, stupid,” Walker, an Obama appointee who is chief judge in US District Court for the Northern District of Florida, wrote yesterday in a ruling that granted a temporary restraining order.

“Whether it’s a woman’s right to choose, or the right to talk about it, Plaintiff’s position is the same—’don’t tread on me,'” Walker wrote later in the ruling. “Under the facts of this case, the First Amendment prohibits the State of Florida from trampling on Plaintiff’s free speech.”

The Florida Department of Health recently sent a legal threat to broadcast TV stations over the airing of a political ad that criticized abortion restrictions in Florida’s Heartbeat Protection Act. The department in Gov. Ron DeSantis’ administration claimed the ad falsely described the abortion law, which could be weakened by a pending ballot question.

Floridians Protecting Freedom, the group that launched the TV ad and is sponsoring a ballot question to lift restrictions on abortion, sued Surgeon General Joseph Ladapo and Department of Health general counsel John Wilson. Wilson has since resigned.

Surgeon general blocked from further action

Walker’s order granting the group’s motion states that “Defendant Ladapo is temporarily enjoined from taking any further actions to coerce, threaten, or intimate repercussions directly or indirectly to television stations, broadcasters, or other parties for airing Plaintiff’s speech, or undertaking enforcement action against Plaintiff for running political advertisements or engaging in other speech protected under the First Amendment.”

The order expires on October 29 but could be replaced by a preliminary injunction that would remain in effect while litigation continues. A hearing on the motion for a preliminary injunction is scheduled for the morning of October 29.

The pending ballot question would amend the state Constitution to say, “No law shall prohibit, penalize, delay, or restrict abortion before viability or when necessary to protect the patient’s health, as determined by the patient’s healthcare provider. This amendment does not change the Legislature’s constitutional authority to require notification to a parent or guardian before a minor has an abortion.”

Walker’s ruling said that Ladapo “has the right to advocate for his own position on a ballot measure. But it would subvert the rule of law to permit the State to transform its own advocacy into the direct suppression of protected political speech.”

Federal Communications Commission Chairwoman Jessica Rosenworcel recently criticized state officials, writing that “threats against broadcast stations for airing content that conflicts with the government’s views are dangerous and undermine the fundamental principle of free speech.”

State threatened criminal proceedings

The Floridians Protecting Freedom advertisement features a woman who “recalls her decision to have an abortion in Florida in 2022,” and “states that she would not be able to have an abortion for the same reason under the current law,” Walker’s ruling said.

Caroline, the woman in the ad, states that “the doctors knew if I did not end my pregnancy, I would lose my baby, I would lose my life, and my daughter would lose her mom. Florida has now banned abortion even in cases like mine. Amendment 4 is going to protect women like me; we have to vote yes.”

The ruling described the state government response:

Shortly after the ad began running, John Wilson, then general counsel for the Florida Department of Health, sent letters on the Department’s letterhead to Florida TV stations. The letters assert that Plaintiff’s political advertisement is false, dangerous, and constitutes a “sanitary nuisance” under Florida law. The letter informed the TV stations that the Department of Health must notify the person found to be committing the nuisance to remove it within 24 hours pursuant to section 386.03(1), Florida Statutes. The letter further warned that the Department could institute legal proceedings if the nuisance were not timely removed, including criminal proceedings pursuant to section 386.03(2)(b), Florida Statutes. Finally, the letter acknowledged that the TV stations have a constitutional right to “broadcast political advertisements,” but asserted this does not include “false advertisements which, if believed, would likely have a detrimental effect on the lives and health of pregnant women in Florida.” At least one of the TV stations that had been running Plaintiff’s advertisement stopped doing so after receiving this letter from the Department of Health.

The Department of Health claimed the ad “is categorically false” because “Florida’s Heartbeat Protection Act does not prohibit abortion if a physician determines the gestational age of the fetus is less than 6 weeks.”

Floridians Protecting Freedom responded that the woman in the ad made true statements, saying that “Caroline was diagnosed with stage four brain cancer when she was 20 weeks pregnant; the diagnosis was terminal. Under Florida law, abortions may only be performed after six weeks gestation if ‘[t]wo physicians certify in writing that, in reasonable medical judgment, the termination of the pregnancy is necessary to save the pregnant woman’s life or avert a serious risk of substantial and irreversible physical impairment of a major bodily function of the pregnant woman other than a psychological condition.'”

Because “Caroline’s diagnosis was terminal… an abortion would not have saved her life, only extended it. Florida law would not allow an abortion in this instance because the abortion would not have ‘save[d] the pregnant woman’s life,’ only extended her life,” the group said.

Judge: State should counter with its own speech

Walker’s ruling said the government can’t censor the ad by claiming it is false:

Plaintiff’s argument is correct. While Defendant Ladapo refuses to even agree with this simple fact, Plaintiff’s political advertisement is political speech—speech at the core of the First Amendment. And just this year, the United States Supreme Court reaffirmed the bedrock principle that the government cannot do indirectly what it cannot do directly by threatening third parties with legal sanctions to censor speech it disfavors. The government cannot excuse its indirect censorship of political speech simply by declaring the disfavored speech is “false.”

State officials must show that their actions “were narrowly tailored to serve a compelling government interest,” Walker wrote. A “narrowly tailored solution” in this case would be counterspeech, not censorship, he wrote.

“For all these reasons, Plaintiff has demonstrated a substantial likelihood of success on the merits,” the ruling said. Walker wrote that a ruling in favor of the state would open the door to more censorship:

This case pits the right to engage in political speech against the State’s purported interest in protecting the health and safety of Floridians from “false advertising.” It is no answer to suggest that the Department of Health is merely flexing its traditional police powers to protect health and safety by prosecuting “false advertising”—if the State can rebrand rank viewpoint discriminatory suppression of political speech as a “sanitary nuisance,” then any political viewpoint with which the State disagrees is fair game for censorship.

Walker then noted that Ladapo “has ample, constitutional alternatives to mitigate any harm caused by an injunction in this case.” The state is already running “its own anti-Amendment 4 campaign to educate the public about its view of Florida’s abortion laws and to correct the record, as it sees fit, concerning pro-Amendment 4 speech,” Walker wrote. “The State can continue to combat what it believes to be ‘false advertising’ by meeting Plaintiff’s speech with its own.”

RFK Jr’s anti-vaccine group can’t sue Meta for agreeing with CDC, judge rules

Independent presidential candidate Robert F. Kennedy Jr.

The Children’s Health Defense (CHD), an anti-vaccine group founded by Robert F. Kennedy Jr., has once again failed to convince a court that Meta acted as a state agent when censoring the group’s posts and ads on Facebook and Instagram.

In his opinion affirming a lower court’s dismissal, US Ninth Circuit Court of Appeals Judge Eric Miller wrote that CHD failed to prove that Meta acted as an arm of the government in censoring posts. Concluding that Meta’s right to censor views that the platforms find “distasteful” is protected by the First Amendment, Miller denied CHD’s requested relief, which had included an injunction and civil monetary damages.

“Meta evidently believes that vaccines are safe and effective and that their use should be encouraged,” Miller wrote. “It does not lose the right to promote those views simply because they happen to be shared by the government.”

CHD told Reuters that the group “was disappointed with the decision and considering its legal options.”

The group first filed the complaint in 2020, arguing that Meta colluded with government officials to censor protected speech by labeling anti-vaccine posts as misleading or removing and shadowbanning CHD posts. This caused CHD’s traffic on the platforms to plummet, CHD claimed, and ultimately, its pages were removed from both platforms.

However, critically, Miller wrote, CHD did not allege that “the government was actually involved in the decisions to label CHD’s posts as ‘false’ or ‘misleading,’ the decision to put the warning label on CHD’s Facebook page, or the decisions to ‘demonetize’ or ‘shadow-ban.'”

“CHD has not alleged facts that allow us to infer that the government coerced Meta into implementing a specific policy,” Miller wrote.

Instead, Meta “was entitled to encourage” various “input from the government,” justifiably seeking vaccine-related information provided by the World Health Organization (WHO) and the US Centers for Disease Control and Prevention (CDC) as it navigated complex content moderation decisions throughout the pandemic, Miller wrote.

Therefore, Meta’s actions against CHD were due to “Meta’s own ‘policy of censoring,’ not any provision of federal law,” Miller concluded. “The evidence suggested that Meta had independent incentives to moderate content and exercised its own judgment in so doing.”

None of CHD’s theories that Meta coordinated with officials to deprive “CHD of its constitutional rights” were plausible, Miller wrote, whereas the “innocent alternative”—”that Meta adopted the policy it did simply because” CEO Mark Zuckerberg and Meta “share the government’s view that vaccines are safe and effective”—appeared “more plausible.”

Meta “does not become an agent of the government just because it decides that the CDC sometimes has a point,” Miller wrote.

Equally unpersuasive, Miller wrote, were CHD’s arguments that Section 230 immunity—which shields platforms from liability for third-party content—“‘removed all legal barriers’ to the censorship of vaccine-related speech,” such that “Meta’s restriction of that content should be considered state action.”

“That Section 230 operates in the background to immunize Meta if it chooses to suppress vaccine misinformation—whether because it shares the government’s health concerns or for independent commercial reasons—does not transform Meta’s choice into state action,” Miller wrote.

One judge dissented over Section 230 concerns

In his dissenting opinion, however, Judge Daniel Collins defended CHD’s Section 230 claim, suggesting that the appeals court erred and should have granted CHD injunctive and declaratory relief from the alleged censorship. CHD CEO Mary Holland told The Defender that the group was pleased the decision was not unanimous.

According to Collins, who like Miller is a Trump appointee, Meta could never have built its massive social platforms without Section 230 immunity, which grants platforms the ability to broadly censor viewpoints they disfavor.

It was “important to keep in mind” that “the vast practical power that Meta exercises over the speech of millions of others ultimately rests on a government-granted privilege to which Meta is not constitutionally entitled,” Collins wrote. And this power “makes a crucial difference in the state-action analysis.”

As Collins sees it, CHD could plausibly allege that Meta’s communications with government officials about vaccine-related misinformation targeted specific users, like the “disinformation dozen” that includes both CHD and Kennedy. In that case, Collins suggested, Section 230 could give the government an opening to target speech it disfavors through the moderation levers the platforms provide.

“Having specifically and purposefully created an immunized power for mega-platform operators to freely censor the speech of millions of persons on those platforms, the Government is perhaps unsurprisingly tempted to then try to influence particular uses of such dangerous levers against protected speech expressing viewpoints the Government does not like,” Collins warned.

He further argued that “Meta’s relevant First Amendment rights” do not “give Meta an unbounded freedom to work with the Government in suppressing speech on its platforms.” Disagreeing with the majority, he wrote that “in this distinctive scenario, applying the state-action doctrine promotes individual liberty by keeping the Government’s hands away from the tempting levers of censorship on these vast platforms.”

The majority agreed, however, that while Section 230 immunity “is undoubtedly a significant benefit to companies like Meta,” lawmakers’ threats to weaken Section 230 did not suggest that Meta’s anti-vaccine policy was coerced state action.

“Many companies rely, in one way or another, on a favorable regulatory environment or the goodwill of the government,” Miller wrote. “If that were enough for state action, every large government contractor would be a state actor. But that is not the law.”

Kids Online Safety Act passes Senate despite concerns it will harm kids

The Kids Online Safety Act (KOSA) easily passed the Senate today despite critics’ concerns that the bill may risk creating more harm than good for kids and perhaps censor speech for online users of all ages if it’s signed into law.

KOSA received broad bipartisan support in the Senate, passing with a 91–3 vote alongside the Children’s Online Privacy Protection Act (COPPA) 2.0. Both bills seek to control how much data can be collected from minors, as well as to regulate the platform features that could harm children’s mental health.

Only Senators Ron Wyden (D-Ore.), Rand Paul (R-Ky.), and Mike Lee (R-Utah) opposed the bills.

In an op-ed for The Courier-Journal, Paul argued that the “duty of care” KOSA imposes on platforms to mitigate harms to minors “will not only stifle free speech, but it will deprive Americans of the benefits of our technological advancements.”

“With the Internet, today’s children have the world at their fingertips,” Paul wrote, but if KOSA passes, even allegedly benign content like “pro-life messages” or discussion of a teen overcoming an eating disorder could be censored if platforms fear compliance issues.

“While doctors’ and therapists’ offices close at night and on weekends, support groups are available 24 hours a day, seven days a week for people who share similar concerns or have the same health problems. Any solution to protect kids online must ensure the positive aspects of the Internet are preserved,” Paul wrote.

During a KOSA critics’ press conference today, Dara Adkison—the executive director of a group providing resources for transgender youths called TransOhio—expressed concerns that lawmakers would target sites like TransOhio if the law also passed in the House, where the bill heads next.

“I’ve literally had legislators tell me to my face that they would love to see our website taken off the Internet because they don’t want people to have the kinds of vital community resources that we provide,” Adkison said.

Paul argued that what counts as harmful to kids is subjective, noting that a key flaw of the bill is that “KOSA does not explicitly define the term ‘mental health disorder.’” Instead, platforms are to refer to the definition in “the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders” or “the most current successor edition.”

“That means the scope of the bill could change overnight without any action from America’s elected representatives,” Paul warned, suggesting that “KOSA opens the door to nearly limitless content regulation because platforms will censor users rather than risk liability.”

Ahead of the vote, Senator Richard Blumenthal (D-Conn.)—who co-sponsored KOSA—denied that the bill strove to regulate content, The Hill reported. To Blumenthal and other KOSA supporters, its aim instead is to ensure that social media is “safe by design” for young users.

According to The Washington Post, KOSA and COPPA 2.0 passing “represent the most significant restrictions on tech platforms to clear a chamber of Congress in decades.” However, while President Joe Biden has indicated he would be willing to sign the bill into law, most seem to agree that KOSA will struggle to pass in the House of Representatives.

Todd O’Boyle, a senior tech policy director for Chamber of Progress—a progressive tech industry policy coalition—said there is currently “substantial opposition” in the House. O’Boyle said he expects the political divide will be enough to block KOSA’s passage and prevent giving “the power” to the Federal Trade Commission (FTC) or “the next president” to “crack down on online speech” or otherwise pose “a massive threat to our constitutional rights.”

“If there’s one thing the far-left and far-right agree on, it’s that the next chair of the FTC shouldn’t get to decide what online posts are harmful,” O’Boyle said.

Elon Musk’s X may succeed in blocking Calif. content moderation law on appeal

Elon Musk’s X previously failed to block the law on First Amendment grounds.

Elon Musk’s fight defending X’s content moderation decisions isn’t just with hate speech researchers and advertisers. He has also long been battling regulators, and this week, he seemed positioned to secure a potentially big win in California, where he’s hoping to permanently block a law that he claims unconstitutionally forces his platform to justify its judgment calls.

At a hearing Wednesday, three judges in the 9th US Circuit Court of Appeals seemed inclined to agree with Musk that a California law requiring disclosures from social media companies that clearly explain their content moderation choices likely violates the First Amendment.

Passed in 2022, AB-587 forces platforms like X to submit a “terms of service report” detailing how they moderate several categories of controversial content. Those categories include hate speech or racism, extremism or radicalization, disinformation or misinformation, harassment, and foreign political interference, which X’s lawyer, Joel Kurtzberg, told judges yesterday “are the most controversial categories of so-called awful but lawful speech.”

The law would seemingly require more transparency than ever from X, making it easy for users to track exactly how much controversial content X flags and removes—and perhaps most notably for advertisers, how many users viewed concerning content.

To block the law, X sued in 2023, arguing that California was trying to dictate its terms of service and force the company to make statements on content moderation that could generate backlash. X worried that the law “impermissibly” interfered with both “the constitutionally protected editorial judgments” of social media companies, as well as impacted users’ speech by requiring companies “to remove, demonetize, or deprioritize constitutionally protected speech that the state deems undesirable or harmful.”

Any companies found to be non-compliant could face stiff fines of up to $15,000 per violation per day, which X considered “draconian.” But last year, a lower court declined to block the law, prompting X to appeal, and yesterday, the appeals court seemed more sympathetic to X’s case.

At the hearing, Kurtzberg told judges that the law was “deeply threatening to the well-established First Amendment interests” of an “extraordinary diversity of” people, which is why X’s complaint was supported by briefs from reporters, freedom of the press advocates, First Amendment scholars, “conservative entities,” and people across the political spectrum.

All share “a deep concern about a statute that, on its face, is aimed at pressuring social media companies to change their content moderation policies, so as to carry less or even no expression that’s viewed by the state as injurious to its people,” Kurtzberg told judges.

When the court pointed out that the law seemingly just requires X to abide by the content moderation policies for each category defined in its own terms of service—and does not compel X to adopt any policy or position it did not choose—Kurtzberg pushed back.

“They don’t mandate us to define the categories in a specific way, but they mandate us to take a position on what the legislature makes clear are the most controversial categories to moderate and define,” Kurtzberg said. “We are entitled to respond to the statute by saying we don’t define hate speech or racism. But the report also asks about policies that are supposedly, quote, ‘intended’ to address those categories, which is a judgment call.”

“This is very helpful,” Judge Anthony Johnstone responded. “Even if you don’t yourself define those categories in the terms of service, you read the law as requiring you to opine or discuss those categories, even if they’re not part of your own terms,” and “you are required to tell California essentially your views on hate speech, extremism, harassment, foreign political interference, how you define them or don’t define them, and what you choose to do about them?”

“That is correct,” Kurtzberg responded, noting that X considered those categories the most “fraught” and “difficult to define.”

Meta tells court it won’t sue over Facebook feed-killing tool—yet

This week, Meta asked a US district court in California to toss a lawsuit filed by a professor, Ethan Zuckerman, who fears that Meta will sue him if he releases a tool that would give Facebook users an automated way to easily remove all content from their feeds.

Zuckerman has alleged that the imminent threat of a lawsuit from Meta has prevented him from releasing Unfollow Everything 2.0, suggesting that a cease-and-desist letter sent to the creator of the original Unfollow Everything substantiates his fears.

He’s hoping the court will find either that releasing his tool would not breach Facebook’s terms of use—which prohibit “accessing or collecting data from Facebook ‘using automated means’”—or that those terms conflict with public policy. Among the laws with which Facebook’s terms allegedly conflict are the First Amendment, Section 230 of the Communications Decency Act, the Computer Fraud and Abuse Act (CFAA), California’s Computer Data Access and Fraud Act (CDAFA), and state privacy laws.

But Meta claimed in its motion to dismiss that Zuckerman’s suit is too premature, mostly because the tool has not yet been built and Meta has not had a chance to review the “non-existent tool” to determine how Unfollow Everything 2.0 might impact its platform or its users.

“Besides bald assertions about how Plaintiff intends Unfollow Everything 2.0 to work and what he plans to do with it, there are no concrete facts that would enable this Court to adjudicate potential legal claims regarding this tool—which, at present, does not even operate in the real world,” Meta argued.

Meta wants all of Zuckerman’s claims to be dismissed, arguing that “adjudicating Plaintiff’s claims would require needless rulings on hypothetical applications of California law, would likely result in duplicative litigation, and would encourage forum shopping.”

At the heart of Meta’s defense is the claim that there’s no telling yet whether Zuckerman will ever be able to release the tool, although Zuckerman said he was prepared to finish the build within six weeks of a court win. Last May, Zuckerman told Ars that it made more sense to finish building the tool once the litigation is resolved, since Facebook’s design and functionality are always changing.

Meta claimed that Zuckerman can’t confirm if Unfollow Everything 2.0 would work as described in his suit precisely because his findings are based on Facebook’s current interface, and the “process for unfollowing has changed over time and will likely continue to change.”

Further, Meta argued that the original Unfollow Everything worked differently—logging in on behalf of users and automatically unfollowing everything, rather than performing the automated unfollowing when the users themselves log in—and that the new tool therefore may not prompt the same response from Meta.

A senior staff attorney at the Knight Institute who helped draft Zuckerman’s complaint, Ramya Krishnan, told Ars that the two tools operate nearly identically, however.

“Professor Zuckerman’s tool and the original Unfollow Everything work in essentially the same way,” Krishnan told Ars. “They automatically unfollow all of a user’s friends, groups, and pages after the user installs the tool and logs in to Facebook using their web browser.”

Ultimately, Meta claimed that there’s no telling if Meta would even sue over the tool’s automated access to user data, dismissing Zuckerman’s fears as unsubstantiated.

Only when the tool is out in the wild and Facebook is able to determine “actual, concrete facts about how it works in practice” that “may prove problematic” will Meta know if a legal response is needed, Meta claimed. Without reviewing the technical specs, Meta argued, Meta has no way to assess the damages or know if it would sue over a breach of contract, as alleged, or perhaps over other claims not alleged, such as trademark infringement.

SCOTUS nixes injunction that limited Biden admin contacts with social networks

On Wednesday, the Supreme Court tossed out claims that the Biden administration coerced social media platforms into censoring users by removing COVID-19 and election-related content.

Complaints alleging that high-ranking government officials were censoring conservatives had previously convinced a lower court to order an injunction limiting the Biden administration’s contacts with platforms. But now that injunction has been overturned, re-opening lines of communication just ahead of the 2024 elections—when officials will once again be closely monitoring the spread of misinformation online targeted at voters.

In a 6–3 vote, the majority ruled that none of the plaintiffs suing—including five social media users and Republican attorneys general in Louisiana and Missouri—had standing. They had alleged that the government had “pressured the platforms to censor their speech in violation of the First Amendment,” demanding an injunction to stop any future censorship.

Plaintiffs may have succeeded if they were instead seeking damages for past harms. But in her opinion, Justice Amy Coney Barrett wrote that partly because the Biden administration seemingly stopped influencing platforms’ content policies in 2022, none of the plaintiffs could show evidence of a “substantial risk that, in the near future, they will suffer an injury that is traceable” to any government official. Thus, they did not seem to face “a real and immediate threat of repeated injury,” Barrett wrote.

“Without proof of an ongoing pressure campaign, it is entirely speculative that the platforms’ future moderation decisions will be attributable, even in part,” to government officials, Barrett wrote, finding that an injunction would do little to prevent future censorship.

Instead, plaintiffs’ claims “depend on the platforms’ actions,” Barrett emphasized, “yet the plaintiffs do not seek to enjoin the platforms from restricting any posts or accounts.”

“It is a bedrock principle that a federal court cannot redress ‘injury that results from the independent action of some third party not before the court,'” Barrett wrote.

Barrett repeatedly noted “weak” arguments raised by plaintiffs, none of which could directly link their specific content removals with the Biden administration’s pressure campaign urging platforms to remove vaccine or election misinformation.

According to Barrett, the lower court initially granting the injunction “glossed over complexities in the evidence,” including the fact that “platforms began to suppress the plaintiffs’ COVID-19 content” before the government pressure campaign began. That’s an issue, Barrett said, because standing to sue “requires a threshold showing that a particular defendant pressured a particular platform to censor a particular topic before that platform suppressed a particular plaintiff’s speech on that topic.”

“While the record reflects that the Government defendants played a role in at least some of the platforms’ moderation choices, the evidence indicates that the platforms had independent incentives to moderate content and often exercised their own judgment,” Barrett wrote.

Barrett was similarly unconvinced by arguments that plaintiffs risk platforms removing future content based on stricter moderation policies that were previously coerced by officials.

“Without evidence of continued pressure from the defendants, the platforms remain free to enforce, or not to enforce, their policies—even those tainted by initial governmental coercion,” Barrett wrote.

Alito: SCOTUS “shirks duty” to defend free speech

Justices Clarence Thomas and Neil Gorsuch joined Samuel Alito in dissenting, arguing that “this is one of the most important free speech cases to reach this Court in years” and that the Supreme Court had an “obligation” to “tackle the free speech issue that the case presents.”

“The Court, however, shirks that duty and thus permits the successful campaign of coercion in this case to stand as an attractive model for future officials who want to control what the people say, hear, and think,” Alito wrote.

Alito argued that the evidence showed that while “downright dangerous” speech was suppressed, so was “valuable speech.” He agreed with the lower court that “a far-reaching and widespread censorship campaign” had been “conducted by high-ranking federal officials against Americans who expressed certain disfavored views about COVID-19 on social media.”

“For months, high-ranking Government officials placed unrelenting pressure on Facebook to suppress Americans’ free speech,” Alito wrote. “Because the Court unjustifiably refuses to address this serious threat to the First Amendment, I respectfully dissent.”

At least one plaintiff who opposed masking and vaccines, Jill Hines, was “indisputably injured,” Alito wrote, arguing that evidence showed that she was censored more frequently after officials pressured Facebook into changing its policies.

“Top federal officials continuously and persistently hectored Facebook to crack down on what the officials saw as unhelpful social media posts, including not only posts that they thought were false or misleading but also stories that they did not claim to be literally false but nevertheless wanted obscured,” Alito wrote.

While Barrett and the majority found that platforms were more likely responsible for injury, Alito disagreed, writing that with the threat of antitrust probes or Section 230 amendments, Facebook acted like “a subservient entity determined to stay in the good graces of a powerful taskmaster.”

Alito wrote that the majority was “applying a new and heightened standard” by requiring plaintiffs to “untangle Government-caused censorship from censorship that Facebook might have undertaken anyway.” In his view, it was enough that Hines showed that “one predictable effect of the officials’ action was that Facebook would modify its censorship policies in a way that affected her.”

“When the White House pressured Facebook to amend some of the policies related to speech in which Hines engaged, those amendments necessarily impacted some of Facebook’s censorship decisions,” Alito wrote. “Nothing more is needed. What the Court seems to want are a series of ironclad links.”

“That is regrettable,” Alito said.

Elon Musk’s X defeats Australia’s global takedown order of stabbing video

Australia’s safety regulator has ended a legal battle with X (formerly Twitter) after threatening fines of approximately $500,000 per day for X’s failure to remove 65 instances of a religiously motivated stabbing video from the platform globally.

Enforcing Australia’s Online Safety Act, eSafety commissioner Julie Inman-Grant had argued it would be dangerous for the videos to keep spreading on X, potentially inciting other acts of terror in Australia.

But X owner Elon Musk refused to comply with the global takedown order, arguing that it would be “unlawful and dangerous” to allow one country to control the global Internet. And Musk was not alone in this fight. The legal director of a nonprofit digital rights group called the Electronic Frontier Foundation (EFF), Corynne McSherry, backed up Musk, urging the court to agree that “no single country should be able to restrict speech across the entire Internet.”

“We welcome the news that the eSafety Commissioner is no longer pursuing legal action against X seeking the global removal of content that does not violate X’s rules,” X’s Global Government Affairs account posted late Tuesday night. “This case has raised important questions on how legal powers can be used to threaten global censorship of speech, and we are heartened to see that freedom of speech has prevailed.”

Inman-Grant was formerly Twitter’s director of public policy in Australia and used that experience to land what she told The Courier-Mail was her “dream role” as Australia’s eSafety commissioner in 2017. Since issuing the order to remove the video globally on X, Inman-Grant had traded barbs with Musk (along with other Australian lawmakers), responding to Musk labeling her a “censorship commissar” by calling him an “arrogant billionaire” for fighting the order.

On X, Musk arguably got the last word, posting, “Freedom of speech is worth fighting for.”

Safety regulator still defends takedown order

In a statement, Inman-Grant said early Wednesday that her decision to discontinue proceedings against X was part of an effort to “consolidate actions,” including “litigation across multiple cases.” She ultimately determined that dropping the case against X would be the “option likely to achieve the most positive outcome for the online safety of all Australians, especially children.”

“Our sole goal and focus in issuing our removal notice was to prevent this extremely violent footage from going viral, potentially inciting further violence and inflicting more harm on the Australian community,” Inman-Grant said, still defending the order despite dropping it.

In court, X’s lawyer Marcus Hoyne had pushed back on such logic, arguing that the eSafety regulator’s mission was “pointless” because “footage of the attack had now spread far beyond the few dozen URLs originally identified,” the Australian Broadcasting Corporation reported.

“I stand by my investigators and the decisions eSafety made,” Inman-Grant said.

Other Australian lawmakers agree the order was not out of line. According to AP News, Australian Minister for Communications Michelle Rowland shared a similar statement in parliament today, backing up the safety regulator while scolding X users who allegedly took up Musk’s fight by threatening Inman-Grant and her family. The safety regulator has said that Musk’s X posts incited a “pile-on” from his followers who allegedly sent death threats and exposed her children’s personal information, the BBC reported.

“The government backs our regulators and we back the eSafety Commissioner, particularly in light of the reprehensible threats to her physical safety and the threats to her family in the course of doing her job,” Rowland said.

TikTok vaguely disputes report that it’s making a US-only app

TikTok has spent months separating code for US-only algorithm, insiders claim.

TikTok is now disputing a Reuters report claiming that the short-video app is cloning its recommendation algorithm to potentially offer a different, US-only version of the app that might degrade over time.

Sources “with direct knowledge” of the project—granted anonymity because they’re not authorized to discuss it publicly—told Reuters that the TikTok effort began late last year. They said that the project will likely take a year to complete, requiring hundreds of engineers to separate millions of lines of code.

As these sources reported, TikTok’s tremendous undertaking could potentially help prepare its China-based owner ByteDance to appease US lawmakers who passed a law in April forcing TikTok to sell its US-based operations by January 19 or face a ban. But TikTok has maintained that the “qualified divestiture” required by the law would be impossible, and on Thursday, TikTok denied the accuracy of Reuters’ report while reiterating its stance that a sale is not in the cards.

“The Reuters story published today is misleading and factually inaccurate,” the TikTok Policy account posted on X (formerly Twitter). “As we said in our court filing, the ‘qualified divestiture’ demanded by the Act to allow TikTok to continue operating in the United States is simply not possible: not commercially, not technologically, not legally. And certainly not on the 270-day timeline required by the Act.”

It remains unclear precisely which parts of Reuters’ report are supposedly “misleading and factually inaccurate.” A Reuters spokesperson said that Reuters stands by its reporting.

A TikTok spokesperson told Ars that “while we have continued work in good faith to further safeguard the authenticity of the TikTok experience, it is simply false to suggest that this work would facilitate divestiture or that divestiture is even a possibility.”

TikTok is currently suing to block the US law on First Amendment grounds, and this week a court fast-tracked that legal challenge to attempt to resolve the matter before the law takes effect next year. Oral arguments are scheduled to start this September, with a ruling expected by December 6, which Reuters reported leaves time for a potential Supreme Court challenge to that ruling.

However, in the meantime, sources told Reuters that TikTok is seemingly exploring all its options to avoid running afoul of the US law, including separating its code base and even once considering open-sourcing parts of its algorithm to increase transparency.

Separating out the code is not an easy task, insiders told Reuters.

“Compliance and legal issues involved with determining what parts of the code can be carried over to TikTok are complicating the work,” one source told Reuters. “Each line of code has to be reviewed to determine if it can go into the separate code base.”

But creating a US-only content-recommendation algorithm could be worth it, as it could allow TikTok US to operate independently from the China-based TikTok app in a manner that could satisfy lawmakers worried about the Chinese government potentially spying on Americans through the app.

However, there’s no guaranteeing that the US-only version of TikTok’s content-recommendation algorithm will perform as well as the currently available app in the US, sources told Reuters. By potentially cutting off TikTok US from Chinese engineers who developed and maintain the algorithm, US users could miss out on code updates improving or maintaining desired functionality. That means TikTok’s popular For You Page recommendations may suffer, as “TikTok US may not be able to deliver the same level of performance as the existing TikTok,” sources told Reuters.

Judge halts Texas probe into Media Matters’ reporting on X

Texas Attorney General Ken Paxton speaks during the annual Conservative Political Action Conference (CPAC) meeting on February 23, 2024.

A judge has preliminarily blocked what Media Matters for America (MMFA) described as Texas Attorney General Ken Paxton’s attempt to “rifle through” confidential documents to prove that MMFA fraudulently manipulated X (formerly Twitter) data to ruin X’s advertising business, as Elon Musk has alleged.

After Musk accused MMFA of publishing reports designed to scare advertisers off X, Paxton promptly launched his own investigation into MMFA last November.

Investigating MMFA over alleged violations of Texas’ Deceptive Trade Practices Act—which prohibits “disparaging the goods, services, or business of another by false or misleading representation of facts”—Paxton sought a wide range of MMFA documents through a civil investigative demand (CID). Moving to block the CID, MMFA told the court that the demand violated the media organization’s First Amendment rights, providing evidence that Paxton’s investigation and CID had chilled MMFA’s speech.

Paxton had requested Media Matters’ financial records—including “direct and indirect sources of funding for all Media Matters operations involving X research or publications”—as well as “internal and external communications” on “Musk’s purchase of X” and X’s current CEO Linda Yaccarino. He also asked for all of Media Matters’ communications with X representatives and X advertisers.

But perhaps most invasive of all, Paxton wanted to see all the communications about Media Matters’ X reporting that triggered the lawsuits, a compelled disclosure that, as US District Judge Amit Mehta wrote in an opinion published Friday, “poses a serious threat to the vitality of the newsgathering process.”

Mehta was concerned that MMFA showed that “Media Matters’ editorial leaders have pared back reporting and publishing, particularly on any topics that could be perceived as relating to the Paxton investigation”—including two follow-ups on its X reporting. Because of Paxton’s alleged First Amendment retaliation, MMFA said it did not publish “two pieces concerning X’s placement of advertising alongside antisemitic, pro-Nazi accounts”—”not out of legitimate concerns about fairness or accuracy,” but “out of fear of harassment, threats, and retaliation.”

According to Mehta’s order, Paxton did not contest that Texas’ lawsuit had chilled MMFA’s speech. Further, Paxton had given at least one podcast interview where he called upon other state attorneys general to join him in investigating MMFA.

Because Paxton had “projected himself across state lines and asserted a pseudo-national executive authority,” Mehta wrote, and had repeatedly described MMFA as a “radical anti-free speech” or “radical left-wing organization,” the court saw sufficient “evidence of retaliatory intent.”

“Notably,” Mehta wrote, Paxton remained “silent” and never “submitted a sworn declaration that explains his reasons for opening the investigation.”

In his press release, Paxton justified the investigation by saying, “We are examining the issue closely to ensure that the public has not been deceived by the schemes of radical left-wing organizations who would like nothing more than to limit freedom by reducing participation in the public square.”

Ultimately, Mehta granted MMFA’s request for a preliminary injunction to block Paxton’s CID because the judge found that the investigation and the CID have caused MMFA “to self-censor when making research and publication decisions, adversely affected the relationships between editors and reporters, and restricted communications with sources and journalists.”

“Only injunctive relief will ‘prevent the [ongoing] deprivation of free speech rights,'” Mehta’s opinion said, deeming MMFA’s reporting as “core First Amendment activities.”

Mehta’s order also banned Paxton from taking any steps to further his investigation until the lawsuit is decided.

In a statement Friday, MMFA President and CEO Angelo Carusone celebrated the win as not just against Paxton but also against Musk.

“Elon Musk encouraged Republican state attorneys general to use their power to harass their critics and stifle reporting about X,” Carusone said. “Ken Paxton was one of those AGs that took up the call and he was defeated. Today’s decision is a victory for free speech.”

Paxton has not yet responded to the preliminary injunction, and his office did not respond to Ars’ request for comment.

Media Matters’ lawyer, Aria C. Branch, a partner at Elias Law Group, told Ars that “while Attorney General Paxton’s office has not yet responded to Friday’s ruling, the preliminary injunction should certainly put an end to these kind of lawless, politically motivated attempts to muzzle the press.”

X filing “thermonuclear lawsuit” in Texas should be “fatal,” Media Matters says

Ever since Elon Musk’s X Corp sued Media Matters for America (MMFA) over a pair of reports that X (formerly Twitter) claims caused an advertiser exodus in 2023, one big question has remained for onlookers: Why is this fight happening in Texas?

In a motion to dismiss filed in Texas’ northern district last month, MMFA argued that X’s lawsuit should be dismissed not just because of a “fatal jurisdictional defect” but also because “dismissal is also required for lack of venue.”

Notably, MMFA is based in Washington, DC, while “X is organized under Nevada law and maintains its principal place of business in San Francisco, California, where its own terms of service require users of its platform to litigate any disputes.”

“Texas is not a fair or reasonable forum for this lawsuit,” MMFA argued, suggesting that “the case must be dismissed or transferred” because “neither the parties nor the cause of action has any connection to Texas.”

Last Friday, X responded to the motion to dismiss, claiming that the lawsuit—which Musk has described as “thermonuclear”—was appropriately filed in Texas because MMFA “intentionally” targeted readers and at least two X advertisers located in Texas, Oracle and AT&T. According to X, because MMFA “identified Oracle, a Texas-based corporation, by name in its coverage,” MMFA “cannot claim surprise at being held to answer for its conduct in Texas.” X also claimed that Texas has jurisdiction because Musk resides in Texas and “makes numerous critical business decisions about X while in Texas.”

This so-called targeting of Texans caused a “substantial part” of alleged financial harms that X attributes to MMFA’s reporting, X alleged.

According to X, MMFA specifically targeted X in Texas by sending newsletters sharing its reports with “hundreds or thousands” of Texas readers and by allegedly soliciting donations from Texans to support MMFA’s reporting.

But MMFA pushed back, saying that “Texas subscribers comprise a disproportionately small percentage of Media Matters’ newsletter recipients” and that MMFA did “not solicit Texas donors to fund Media Matters’s journalism concerning X.” Because of this, X’s “efforts to concoct claim-related Texas contacts amount to a series of shots in the dark, uninformed guesses, and irrelevant tangents,” MMFA argued.

On top of that, MMFA argued that X could not attribute any financial harms allegedly caused by MMFA’s reports to either of the two Texas-based advertisers that X named in its court filings. Oracle, MMFA said, “by X’s own admission,… did not withdraw its ads” from X, and AT&T was not named in MMFA’s reporting, and thus, “any investigation AT&T did into its ad placement on X was of its own volition and is not plausibly connected to Media Matters.” MMFA has argued that advertisers, particularly sophisticated Fortune 500 companies, made their own decisions to stop advertising on X, perhaps due to widely reported increases in hate speech on X or even Musk’s own seemingly antisemitic posting.

Ars could not immediately reach X, Oracle, or AT&T for comment.

X’s suit allegedly designed to break MMFA

MMFA President Angelo Carusone, who is a defendant in X’s lawsuit, told Ars that X’s recent filing has continued to “expose” the lawsuit as a “meritless and vexatious effort to inflict maximum damage on critical research and reporting about the platform.”

“It’s solely designed to basically break us or stop us from doing the work that we were doing originally,” Carusone said, confirming that the lawsuit has negatively impacted MMFA’s hate speech research on X.

MMFA argued that Musk could have sued in other jurisdictions, such as Maryland, DC, or California, and MMFA would not have disputed the venue, but Carusone suggested that Musk sued in Texas in hopes that it would be “a more friendly jurisdiction.”

Bill that could ban TikTok passes in House despite constitutional concerns

On Wednesday, the US House of Representatives passed a bill with a vote of 352–65 that could block TikTok in the US. Fifteen Republicans and 50 Democrats voted in opposition, and one Democrat voted present, CNN reported.

TikTok is not happy. A spokesperson told Ars, “This process was secret and the bill was jammed through for one reason: it’s a ban. We are hopeful that the Senate will consider the facts, listen to their constituents, and realize the impact on the economy, 7 million small businesses, and the 170 million Americans who use our service.”

Lawmakers insist that the Protecting Americans from Foreign Adversary Controlled Applications Act is not a ban. Instead, they claim the law gives TikTok a choice: either divest from ByteDance’s China-based owners or face the consequences of TikTok being cut off in the US.

Under the law—which still must pass the Senate, a more significant hurdle, where less consensus is expected and a companion bill has not yet been introduced—app stores and hosting services would face steep consequences if they provide access to apps controlled by US foreign rivals. That includes allowing the app to be updated or maintained for US users who already have the app on their devices.

Violations subject app stores and hosting services to fines of $5,000 for each individual US user “determined to have accessed, maintained, or updated a foreign adversary-controlled application.” With 170 million Americans currently on TikTok, that could add up quickly to eye-popping fines.
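For a rough sense of scale—a back-of-the-envelope calculation based on the figures above, not a number from the bill itself—if every one of those 170 million users counted as a violation at $5,000 apiece, the theoretical maximum exposure would be $5,000 × 170,000,000 = $850 billion.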

If the bill becomes law, app stores and hosting services would have 180 days to limit access to foreign adversary-controlled apps. The bill specifically names TikTok and ByteDance as restricted apps, making it clear that lawmakers intend to quash the alleged “national security threat” that TikTok poses in the US.

House Energy and Commerce Committee Chair Cathy McMorris Rodgers (R-Wash.), a proponent of the bill, has said that “foreign adversaries like China pose the greatest national security threat of our time. With applications like TikTok, these countries are able to target, surveil, and manipulate Americans.” The proposed bill “ends this practice by banning applications controlled by foreign adversaries of the United States that pose a clear national security risk.”

McMorris Rodgers has also made it clear that “our goal is to get this legislation onto the president’s desk.” Joe Biden has indicated he will sign the bill into law, leaving the Senate as the final hurdle to clear. Senators told CNN that they were waiting to see what happened in the House before seeking a path forward in the Senate that would respect TikTok users’ civil liberties.

Attempts to ban TikTok have historically not fared well in the US, with a recent ban in Montana being reversed by a federal judge last December. Judge Donald Molloy granted TikTok’s request for a preliminary injunction, denouncing Montana’s ban as an unconstitutional infringement of Montana-based TikTok users’ rights.

More recently, the American Civil Liberties Union (ACLU) has slammed House lawmakers for rushing the bill through Congress, accusing lawmakers of attempting to stifle free speech. ACLU senior policy counsel Jenna Leventoff said in a press release that lawmakers were “once again attempting to trade our First Amendment rights for cheap political points during an election year.”

“Just because the bill sponsors claim that banning TikTok isn’t about suppressing speech, there’s no denying that it would do just that,” Leventoff said.
