Whether TikTok will be banned in the US in three days is still up in the air. The Supreme Court has yet to announce its decision on the constitutionality of a law requiring TikTok to either sell its US operations or shut down in the US. It’s possible that the Supreme Court could ask for more time to deliberate, potentially delaying enforcement of the law, as TikTok has requested, until after Donald Trump takes office.
While the divest-or-ban law had bipartisan support when it passed last year, momentum has seemingly shifted this week. Senator Ed Markey (D-Mass.) has introduced a bill to extend the deadline ahead of a potential TikTok ban, and a top Trump adviser, Congressman Mike Waltz, has said that Trump plans to stop the ban and “keep TikTok from going dark,” the BBC reported. Even the Biden administration, whose Justice Department just finished arguing to SCOTUS why the US needed to enforce the law, “is considering ways to keep TikTok available,” sources told NBC News.
Many US RedNote users quickly banned
For RedNote and China, the app’s sudden popularity as the US alternative to TikTok seems to have come as a surprise. A Beijing-based independent industry analyst, Liu Xingliang, told Reuters that RedNote was “caught unprepared” by the influx of users.
To keep restricted content off the app, RedNote allegedly has since been “scrambling to find ways to moderate English-language content and build English-Chinese translation tools,” two sources familiar with the company told Reuters. Time’s reporting echoed that, noting that “Red Note is urgently recruiting English content moderators” (a phrase posted in Chinese) became a trending topic Wednesday on the Chinese social media app Weibo.
Many analysts have suggested that Americans’ fascination with RedNote will be short-lived. Liu told Reuters that “American netizens are in a dissatisfied mood, and wanting to find another Chinese app to use is a catharsis of short-term emotions and a rebellious gesture.” But unfortunately, “the experience on it is not very good for foreigners.”
On RedNote, Chinese users have warned Americans that China censors way more content than they’re used to on TikTok. Analysts told The Washington Post that RedNote’s “focus on shopping and entertainment means it is often even more active in blocking content seen as too serious for the app’s target audience.” Chinese users warned Americans not to post about “politics, religion, and drugs” or risk “account bans or legal repercussions, including jail time,” Rest of World reported. Meanwhile, on Reddit, Americans received additional warnings about common RedNote scams and reasons accounts could be banned. But Rest of World noted that many so-called “TikTok refugees” migrating to RedNote do not “seem to know, or care, about platform rules.”
Florida threatened TV stations over ad that criticized state’s abortion law.
Screenshot of political advertisement featuring a woman describing her experience having an abortion after being diagnosed with brain cancer. Credit: Floridians Protecting Freedom
US District Judge Mark Walker had a blunt message for the Florida surgeon general in an order halting the government official’s attempt to censor a political ad that opposes restrictions on abortion.
“To keep it simple for the State of Florida: it’s the First Amendment, stupid,” Walker, an Obama appointee who is chief judge in US District Court for the Northern District of Florida, wrote yesterday in a ruling that granted a temporary restraining order.
“Whether it’s a woman’s right to choose, or the right to talk about it, Plaintiff’s position is the same—’don’t tread on me,'” Walker wrote later in the ruling. “Under the facts of this case, the First Amendment prohibits the State of Florida from trampling on Plaintiff’s free speech.”
The Florida Department of Health recently sent a legal threat to broadcast TV stations over the airing of a political ad that criticized abortion restrictions in Florida’s Heartbeat Protection Act. The department in Gov. Ron DeSantis’ administration claimed the ad falsely described the abortion law, which could be weakened by a pending ballot question.
Floridians Protecting Freedom, the group that launched the TV ad and is sponsoring a ballot question to lift restrictions on abortion, sued Surgeon General Joseph Ladapo and Department of Health general counsel John Wilson. Wilson has resigned.
Surgeon general blocked from further action
Walker’s order granting the group’s motion states that “Defendant Ladapo is temporarily enjoined from taking any further actions to coerce, threaten, or intimate repercussions directly or indirectly to television stations, broadcasters, or other parties for airing Plaintiff’s speech, or undertaking enforcement action against Plaintiff for running political advertisements or engaging in other speech protected under the First Amendment.”
The order expires on October 29 but could be replaced by a preliminary injunction that would remain in effect while litigation continues. A hearing on the motion for a preliminary injunction is scheduled for the morning of October 29.
The pending ballot question would amend the state Constitution to say, “No law shall prohibit, penalize, delay, or restrict abortion before viability or when necessary to protect the patient’s health, as determined by the patient’s healthcare provider. This amendment does not change the Legislature’s constitutional authority to require notification to a parent or guardian before a minor has an abortion.”
Walker’s ruling said that Ladapo “has the right to advocate for his own position on a ballot measure. But it would subvert the rule of law to permit the State to transform its own advocacy into the direct suppression of protected political speech.”
Federal Communications Commission Chairwoman Jessica Rosenworcel recently criticized state officials, writing that “threats against broadcast stations for airing content that conflicts with the government’s views are dangerous and undermine the fundamental principle of free speech.”
State threatened criminal proceedings
The Floridians Protecting Freedom advertisement features a woman who “recalls her decision to have an abortion in Florida in 2022,” and “states that she would not be able to have an abortion for the same reason under the current law,” Walker’s ruling said.
Caroline, the woman in the ad, states that “the doctors knew if I did not end my pregnancy, I would lose my baby, I would lose my life, and my daughter would lose her mom. Florida has now banned abortion even in cases like mine. Amendment 4 is going to protect women like me; we have to vote yes.”
The ruling described the state government response:
Shortly after the ad began running, John Wilson, then general counsel for the Florida Department of Health, sent letters on the Department’s letterhead to Florida TV stations. The letters assert that Plaintiff’s political advertisement is false, dangerous, and constitutes a “sanitary nuisance” under Florida law. The letter informed the TV stations that the Department of Health must notify the person found to be committing the nuisance to remove it within 24 hours pursuant to section 386.03(1), Florida Statutes. The letter further warned that the Department could institute legal proceedings if the nuisance were not timely removed, including criminal proceedings pursuant to section 386.03(2)(b), Florida Statutes. Finally, the letter acknowledged that the TV stations have a constitutional right to “broadcast political advertisements,” but asserted this does not include “false advertisements which, if believed, would likely have a detrimental effect on the lives and health of pregnant women in Florida.” At least one of the TV stations that had been running Plaintiff’s advertisement stopped doing so after receiving this letter from the Department of Health.
The Department of Health claimed the ad “is categorically false” because “Florida’s Heartbeat Protection Act does not prohibit abortion if a physician determines the gestational age of the fetus is less than 6 weeks.”
Floridians Protecting Freedom responded that the woman in the ad made true statements, saying that “Caroline was diagnosed with stage four brain cancer when she was 20 weeks pregnant; the diagnosis was terminal. Under Florida law, abortions may only be performed after six weeks gestation if ‘[t]wo physicians certify in writing that, in reasonable medical judgment, the termination of the pregnancy is necessary to save the pregnant woman’s life or avert a serious risk of substantial and irreversible physical impairment of a major bodily function of the pregnant woman other than a psychological condition.'”
Because “Caroline’s diagnosis was terminal… an abortion would not have saved her life, only extended it. Florida law would not allow an abortion in this instance because the abortion would not have ‘save[d] the pregnant woman’s life,’ only extended her life,” the group said.
Judge: State should counter with its own speech
Walker’s ruling said the government can’t censor the ad by claiming it is false:
Plaintiff’s argument is correct. While Defendant Ladapo refuses to even agree with this simple fact, Plaintiff’s political advertisement is political speech—speech at the core of the First Amendment. And just this year, the United States Supreme Court reaffirmed the bedrock principle that the government cannot do indirectly what it cannot do directly by threatening third parties with legal sanctions to censor speech it disfavors. The government cannot excuse its indirect censorship of political speech simply by declaring the disfavored speech is “false.”
State officials must show that their actions “were narrowly tailored to serve a compelling government interest,” Walker wrote. A “narrowly tailored solution” in this case would be counterspeech, not censorship, he wrote.
“For all these reasons, Plaintiff has demonstrated a substantial likelihood of success on the merits,” the ruling said. Walker wrote that a ruling in favor of the state would open the door to more censorship:
This case pits the right to engage in political speech against the State’s purported interest in protecting the health and safety of Floridians from “false advertising.” It is no answer to suggest that the Department of Health is merely flexing its traditional police powers to protect health and safety by prosecuting “false advertising”—if the State can rebrand rank viewpoint discriminatory suppression of political speech as a “sanitary nuisance,” then any political viewpoint with which the State disagrees is fair game for censorship.
Walker then noted that Ladapo “has ample, constitutional alternatives to mitigate any harm caused by an injunction in this case.” The state is already running “its own anti-Amendment 4 campaign to educate the public about its view of Florida’s abortion laws and to correct the record, as it sees fit, concerning pro-Amendment 4 speech,” Walker wrote. “The State can continue to combat what it believes to be ‘false advertising’ by meeting Plaintiff’s speech with its own.”
Google announced Monday that it’s shutting down all AdSense accounts in Russia due to “ongoing developments in Russia.”
This effectively ends Russian content creators’ ability to monetize their posts, including YouTube videos. The change impacts accounts monetizing content through AdSense, AdMob, and Ad Manager, the support page said.
While Google has declined requests to provide details on what prompted the change, it’s the latest escalation of Google’s ongoing battle with Russian officials working to control the narrative on Russia’s war with Ukraine.
In February 2022, Google paused monetization of all state-funded media in Russia, then temporarily paused all ads in the country the very next month. That March, Google also paused the creation of new Russia-based AdSense accounts, blocked ads globally that originated from Russia, and stopped monetization of any content exploiting, condoning, or dismissing Russia’s war with Ukraine. Seemingly in retaliation, Russia seized Google’s bank account, causing Google Russia to shut down in May 2022.
Since then, Google has “blocked more than 1,000 YouTube channels, including state-sponsored news, and over 5.5 million videos,” Reuters reported.
For Russian creators who have still found ways to monetize their content amid the chaos, Google’s decision to abruptly shut down AdSense accounts comes as “a serious blow to their income,” Bleeping Computer reported. Russia is second only to the US in terms of YouTube web traffic, Similarweb data shows, making it likely that Russia-based YouTubers earned “significant” revenues that will now be suddenly lost, Bleeping Computer reported.
Russia-based creators—including YouTubers, as well as bloggers and website owners—will receive their final payout this month, according to a message from Google to users reviewed by Reuters.
“Assuming you have no active payment holds and meet the minimum payment thresholds,” payments will be disbursed between August 21 and 26, Google’s message said.
Google’s spokesperson offered little clarification to Reuters and Bleeping Computer, saying only that “we will no longer be able to make payments to Russia-based AdSense accounts that have been able to continue monetizing traffic outside of Russia. As a result, we will be deactivating these accounts effective August 2024.”
It seems likely, though, that Google’s update was influenced by a law Russia passed in March banning advertising on websites, blogs, social networks, or any other online sources published by a “foreign agent,” which Reuters reported on in February. The law also prohibited foreign agents from placing ads on sites, and under the law, foreign agents could include anti-Kremlin politicians, activists, and media. With this new authority, Russia may have further retaliated against Google, potentially forcing Google to give up the last bit of monetization available to Russia-based creators, who are increasingly censored online.
State Duma Chairman and Putin ally Vyacheslav Volodin said that the law was needed to stop financing “scoundrels” allegedly “killing our soldiers, officers, and civilians,” Reuters reported.
One Russian YouTuber with 11.4 million subscribers, Valentin Petukhov, suggested on Telegram that Google shut down AdSense because people had managed to “bypass payment blocks imposed by Western sanctions on Russian banks,” Bleeping Computer reported.
According to Petukhov, the wording in Google’s message to users was “kind of strange,” making it unclear what account holders should do next.
“Even though the income from monetization has fallen tenfold, it hasn’t disappeared completely,” Petukhov said.
YouTube still spotty in Russia
Google’s decision to end AdSense in Russia follows reports of a mass YouTube outage that Russian Internet monitoring service Sboi.rf reported is still impacting users today.
Officials in Russia claim that YouTube has been operating at slower speeds because Google stopped updating its equipment in the region after the invasion of Ukraine, Reuters reported.
This outage and the slower speeds led “subscribers of over 135 regional communication operators in Russia” to terminate “agreements with companies due to problems with the operation of YouTube and other Google services,” the Russian tech blog Habr reported.
As Google has tried to resist pressure from Russian lawmakers to censor content that officials deem illegal, such as content supporting Ukraine or condemning Russia, YouTube has become one of the last bastions of online free speech in the country, Reuters reported. It’s unclear how ending monetization in the region will impact access to anti-Kremlin reporting on YouTube or more broadly online in Russia. Last February, a popular journalist with 1.64 million subscribers on YouTube, Katerina Gordeeva, wrote on Telegram that she was “suspending her work due to the law,” Reuters reported.
“We will no longer be able to work as before,” Gordeeva said. “Of course, we will look for a way out.”
The Children’s Health Defense (CHD), an anti-vaccine group founded by Robert F. Kennedy Jr, has once again failed to convince a court that Meta acted as a state agent when censoring the group’s posts and ads on Facebook and Instagram.
In his opinion affirming a lower court’s dismissal, US Ninth Circuit Court of Appeals Judge Eric Miller wrote that CHD failed to prove that Meta acted as an arm of the government in censoring posts. Concluding that Meta’s right to censor views that the platforms find “distasteful” is protected by the First Amendment, Miller denied CHD’s requested relief, which had included an injunction and civil monetary damages.
“Meta evidently believes that vaccines are safe and effective and that their use should be encouraged,” Miller wrote. “It does not lose the right to promote those views simply because they happen to be shared by the government.”
CHD told Reuters that the group “was disappointed with the decision and considering its legal options.”
The group first filed the complaint in 2020, arguing that Meta colluded with government officials to censor protected speech by labeling anti-vaccine posts as misleading or removing and shadowbanning CHD posts. This caused CHD’s traffic on the platforms to plummet, CHD claimed, and ultimately, its pages were removed from both platforms.
However, critically, Miller wrote, CHD did not allege that “the government was actually involved in the decisions to label CHD’s posts as ‘false’ or ‘misleading,’ the decision to put the warning label on CHD’s Facebook page, or the decisions to ‘demonetize’ or ‘shadow-ban.'”
“CHD has not alleged facts that allow us to infer that the government coerced Meta into implementing a specific policy,” Miller wrote.
Instead, Meta “was entitled to encourage” various “input from the government,” justifiably seeking vaccine-related information provided by the World Health Organization (WHO) and the US Centers for Disease Control and Prevention (CDC) as it navigated complex content moderation decisions throughout the pandemic, Miller wrote.
Therefore, Meta’s actions against CHD were due to “Meta’s own ‘policy of censoring,’ not any provision of federal law,” Miller concluded. “The evidence suggested that Meta had independent incentives to moderate content and exercised its own judgment in so doing.”
None of CHD’s theories that Meta coordinated with officials to deprive “CHD of its constitutional rights” were plausible, Miller wrote, whereas the “innocent alternative”—”that Meta adopted the policy it did simply because” CEO Mark Zuckerberg and Meta “share the government’s view that vaccines are safe and effective”—appeared “more plausible.”
Meta “does not become an agent of the government just because it decides that the CDC sometimes has a point,” Miller wrote.
Equally unpersuasive were CHD’s arguments that Section 230 immunity—which shields platforms from liability for third-party content—”‘removed all legal barriers’ to the censorship of vaccine-related speech,” such that “Meta’s restriction of that content should be considered state action.”
“That Section 230 operates in the background to immunize Meta if it chooses to suppress vaccine misinformation—whether because it shares the government’s health concerns or for independent commercial reasons—does not transform Meta’s choice into state action,” Miller wrote.
One judge dissented over Section 230 concerns
In his dissenting opinion, however, Judge Daniel Collins defended CHD’s Section 230 claim, suggesting that the appeals court erred and should have granted CHD injunctive and declaratory relief from the alleged censorship. CHD CEO Mary Holland told The Defender that the group was pleased the decision was not unanimous.
According to Collins, who like Miller is a Trump appointee, Meta could never have built its massive social platforms without Section 230 immunity, which grants platforms the ability to broadly censor viewpoints they disfavor.
It was “important to keep in mind” that “the vast practical power that Meta exercises over the speech of millions of others ultimately rests on a government-granted privilege to which Meta is not constitutionally entitled,” Collins wrote. And this power “makes a crucial difference in the state-action analysis.”
As Collins sees it, CHD could plausibly allege that Meta’s communications with government officials about vaccine-related misinformation targeted specific users, like the “disinformation dozen” that includes both CHD and Kennedy. In that case, Collins reasoned, Section 230 could give the government an opening to target speech it disfavors through moderation mechanisms that the platforms provide.
“Having specifically and purposefully created an immunized power for mega-platform operators to freely censor the speech of millions of persons on those platforms, the Government is perhaps unsurprisingly tempted to then try to influence particular uses of such dangerous levers against protected speech expressing viewpoints the Government does not like,” Collins warned.
He further argued that “Meta’s relevant First Amendment rights” do not “give Meta an unbounded freedom to work with the Government in suppressing speech on its platforms.” Disagreeing with the majority, he wrote that “in this distinctive scenario, applying the state-action doctrine promotes individual liberty by keeping the Government’s hands away from the tempting levers of censorship on these vast platforms.”
The majority acknowledged, however, that while Section 230 immunity “is undoubtedly a significant benefit to companies like Meta,” lawmakers’ threats to weaken Section 230 did not suggest that Meta’s vaccine misinformation policy was coerced state action.
“Many companies rely, in one way or another, on a favorable regulatory environment or the goodwill of the government,” Miller wrote. “If that were enough for state action, every large government contractor would be a state actor. But that is not the law.”
The Kids Online Safety Act (KOSA) easily passed the Senate today despite critics’ concerns that the bill may risk creating more harm than good for kids and perhaps censor speech for online users of all ages if it’s signed into law.
KOSA received broad bipartisan support in the Senate, passing with a 91–3 vote alongside the Children’s Online Privacy Protection Act (COPPA) 2.0. Both bills seek to control how much data can be collected from minors, as well as to regulate the platform features that could harm children’s mental health.
Only Senators Ron Wyden (D-Ore.), Rand Paul (R-Ky.), and Mike Lee (R-Utah) opposed the bills.
In an op-ed for The Courier-Journal, Paul argued that KOSA imposes a “duty of care” on platforms to mitigate harms to minors that “will not only stifle free speech, but it will deprive Americans of the benefits of our technological advancements.”
“With the Internet, today’s children have the world at their fingertips,” Paul wrote, but if KOSA passes, even allegedly benign content like “pro-life messages” or discussion of a teen overcoming an eating disorder could be censored if platforms fear compliance issues.
“While doctors’ and therapists’ offices close at night and on weekends, support groups are available 24 hours a day, seven days a week for people who share similar concerns or have the same health problems. Any solution to protect kids online must ensure the positive aspects of the Internet are preserved,” Paul wrote.
During a KOSA critics’ press conference today, Dara Adkison—the executive director of a group providing resources for transgender youths called TransOhio—expressed concerns that lawmakers would target sites like TransOhio if the law also passed in the House, where the bill heads next.
“I’ve literally had legislators tell me to my face that they would love to see our website taken off the Internet because they don’t want people to have the kinds of vital community resources that we provide,” Adkison said.
Paul argued that what was considered harmful to kids was subjective, noting that a key flaw with KOSA was that “KOSA does not explicitly define the term ‘mental health disorder.'” Instead, platforms are to refer to the definition in “the fifth edition of the Diagnostic and Statistical Manual of Mental Health Disorders” or “the most current successor edition.”
“That means the scope of the bill could change overnight without any action from America’s elected representatives,” Paul warned, suggesting that “KOSA opens the door to nearly limitless content regulation because platforms will censor users rather than risk liability.”
Ahead of the vote, Senator Richard Blumenthal (D-Conn.)—who co-sponsored KOSA—denied that the bill strove to regulate content, The Hill reported. To Blumenthal and other KOSA supporters, its aim instead is to ensure that social media is “safe by design” for young users.
According to The Washington Post, KOSA and COPPA 2.0 passing “represent the most significant restrictions on tech platforms to clear a chamber of Congress in decades.” However, while President Joe Biden has indicated he would be willing to sign the bill into law, most seem to agree that KOSA will struggle to pass in the House of Representatives.
Todd O’Boyle, a senior tech policy director for Chamber of Progress—a progressive tech industry policy coalition—has said that there is currently “substantial opposition” in the House. O’Boyle said he expects the political divide to be enough to block KOSA’s passage, preventing “the power” from being handed to the Federal Trade Commission (FTC) or “the next president” to “crack down on online speech” or otherwise pose “a massive threat to our constitutional rights.”
“If there’s one thing the far-left and far-right agree on, it’s that the next chair of the FTC shouldn’t get to decide what online posts are harmful,” O’Boyle said.
On Wednesday, the Supreme Court tossed out claims that the Biden administration coerced social media platforms into censoring users by removing COVID-19 and election-related content.
Complaints alleging that high-ranking government officials were censoring conservatives had previously convinced a lower court to order an injunction limiting the Biden administration’s contacts with platforms. But now that injunction has been overturned, re-opening lines of communication just ahead of the 2024 elections—when officials will once again be closely monitoring the spread of misinformation online targeted at voters.
In a 6–3 vote, the majority ruled that none of the plaintiffs suing—including five social media users and Republican attorneys general in Louisiana and Missouri—had standing. They had alleged that the government had “pressured the platforms to censor their speech in violation of the First Amendment,” demanding an injunction to stop any future censorship.
Plaintiffs might have succeeded if they had instead sought damages for past harms. But in her opinion, Justice Amy Coney Barrett wrote that partly because the Biden administration seemingly stopped influencing platforms’ content policies in 2022, none of the plaintiffs could show evidence of a “substantial risk that, in the near future, they will suffer an injury that is traceable” to any government official. Thus, they did not seem to face “a real and immediate threat of repeated injury,” Barrett wrote.
“Without proof of an ongoing pressure campaign, it is entirely speculative that the platforms’ future moderation decisions will be attributable, even in part,” to government officials, Barrett wrote, finding that an injunction would do little to prevent future censorship.
Instead, plaintiffs’ claims “depend on the platforms’ actions,” Barrett emphasized, “yet the plaintiffs do not seek to enjoin the platforms from restricting any posts or accounts.”
“It is a bedrock principle that a federal court cannot redress ‘injury that results from the independent action of some third party not before the court,'” Barrett wrote.
Barrett repeatedly noted “weak” arguments raised by plaintiffs, none of which could directly link their specific content removals with the Biden administration’s pressure campaign urging platforms to remove vaccine or election misinformation.
According to Barrett, the lower court initially granting the injunction “glossed over complexities in the evidence,” including the fact that “platforms began to suppress the plaintiffs’ COVID-19 content” before the government pressure campaign began. That’s an issue, Barrett said, because standing to sue “requires a threshold showing that a particular defendant pressured a particular platform to censor a particular topic before that platform suppressed a particular plaintiff’s speech on that topic.”
“While the record reflects that the Government defendants played a role in at least some of the platforms’ moderation choices, the evidence indicates that the platforms had independent incentives to moderate content and often exercised their own judgment,” Barrett wrote.
Barrett was similarly unconvinced by arguments that plaintiffs risk platforms removing future content based on stricter moderation policies that were previously coerced by officials.
“Without evidence of continued pressure from the defendants, the platforms remain free to enforce, or not to enforce, their policies—even those tainted by initial governmental coercion,” Barrett wrote.
Alito: SCOTUS “shirks duty” to defend free speech
Justices Clarence Thomas and Neil Gorsuch joined Samuel Alito in dissenting, arguing that “this is one of the most important free speech cases to reach this Court in years” and that the Supreme Court had an “obligation” to “tackle the free speech issue that the case presents.”
“The Court, however, shirks that duty and thus permits the successful campaign of coercion in this case to stand as an attractive model for future officials who want to control what the people say, hear, and think,” Alito wrote.
Alito argued that the evidence showed that while “downright dangerous” speech was suppressed, so was “valuable speech.” He agreed with the lower court that “a far-reaching and widespread censorship campaign” had been “conducted by high-ranking federal officials against Americans who expressed certain disfavored views about COVID-19 on social media.”
“For months, high-ranking Government officials placed unrelenting pressure on Facebook to suppress Americans’ free speech,” Alito wrote. “Because the Court unjustifiably refuses to address this serious threat to the First Amendment, I respectfully dissent.”
At least one plaintiff who opposed masking and vaccines, Jill Hines, was “indisputably injured,” Alito wrote, arguing that evidence showed that she was censored more frequently after officials pressured Facebook into changing its policies.
“Top federal officials continuously and persistently hectored Facebook to crack down on what the officials saw as unhelpful social media posts, including not only posts that they thought were false or misleading but also stories that they did not claim to be literally false but nevertheless wanted obscured,” Alito wrote.
While Barrett and the majority found that platforms were more likely responsible for injury, Alito disagreed, writing that with the threat of antitrust probes or Section 230 amendments, Facebook acted like “a subservient entity determined to stay in the good graces of a powerful taskmaster.”
Alito wrote that the majority was “applying a new and heightened standard” by requiring plaintiffs to “untangle Government-caused censorship from censorship that Facebook might have undertaken anyway.” In his view, it was enough that Hines showed that “one predictable effect of the officials’ action was that Facebook would modify its censorship policies in a way that affected her.”
“When the White House pressured Facebook to amend some of the policies related to speech in which Hines engaged, those amendments necessarily impacted some of Facebook’s censorship decisions,” Alito wrote. “Nothing more is needed. What the Court seems to want are a series of ironclad links.”
Australia’s safety regulator has ended a legal battle with X (formerly Twitter) after threatening the company with approximately $500,000 in daily fines for failing to remove 65 instances of a video of a religiously motivated stabbing from X globally.
Enforcing Australia’s Online Safety Act, eSafety commissioner Julie Inman-Grant had argued it would be dangerous for the videos to keep spreading on X, potentially inciting other acts of terror in Australia.
But X owner Elon Musk refused to comply with the global takedown order, arguing that it would be “unlawful and dangerous” to allow one country to control the global Internet. And Musk was not alone in this fight. Corynne McSherry, legal director of the nonprofit digital rights group the Electronic Frontier Foundation (EFF), backed Musk up, urging the court to agree that “no single country should be able to restrict speech across the entire Internet.”
“We welcome the news that the eSafety Commissioner is no longer pursuing legal action against X seeking the global removal of content that does not violate X’s rules,” X’s Global Government Affairs account posted late Tuesday night. “This case has raised important questions on how legal powers can be used to threaten global censorship of speech, and we are heartened to see that freedom of speech has prevailed.”
Inman-Grant was formerly Twitter’s director of public policy in Australia and used that experience to land what she told The Courier-Mail was her “dream role” as Australia’s eSafety commissioner in 2017. Since issuing the order to remove the video globally on X, Inman-Grant had traded barbs with Musk (along with other Australian lawmakers), responding to Musk labeling her a “censorship commissar” by calling him an “arrogant billionaire” for fighting the order.
On X, Musk arguably got the last word, posting, “Freedom of speech is worth fighting for.”
Safety regulator still defends takedown order
In a statement, Inman-Grant said early Wednesday that her decision to discontinue proceedings against X was part of an effort to “consolidate actions,” including “litigation across multiple cases.” She ultimately determined that dropping the case against X would be the “option likely to achieve the most positive outcome for the online safety of all Australians, especially children.”
“Our sole goal and focus in issuing our removal notice was to prevent this extremely violent footage from going viral, potentially inciting further violence and inflicting more harm on the Australian community,” Inman-Grant said, still defending the order despite dropping it.
In court, X’s lawyer Marcus Hoyne had pushed back on such logic, arguing that the eSafety regulator’s mission was “pointless” because “footage of the attack had now spread far beyond the few dozen URLs originally identified,” the Australian Broadcasting Corporation reported.
“I stand by my investigators and the decisions eSafety made,” Inman-Grant said.
Other Australian lawmakers agree the order was not out of line. According to AP News, Australian Minister for Communications Michelle Rowland shared a similar statement in parliament today, backing up the safety regulator while scolding X users who allegedly took up Musk’s fight by threatening Inman-Grant and her family. The safety regulator has said that Musk’s X posts incited a “pile-on” from his followers who allegedly sent death threats and exposed her children’s personal information, the BBC reported.
“The government backs our regulators and we back the eSafety Commissioner, particularly in light of the reprehensible threats to her physical safety and the threats to her family in the course of doing her job,” Rowland said.
Former and current OpenAI employees received a memo this week that the AI company hopes will end the most embarrassing scandal that Sam Altman has ever faced as OpenAI’s CEO.
The memo finally clarified for employees that OpenAI would not enforce a non-disparagement contract that, since at least 2019, employees were pressured to sign within a week of termination or else risk losing their vested equity. For an OpenAI employee, that could mean losing millions for expressing even mild criticism of OpenAI’s work.
You can read the full memo below in a post on X (formerly Twitter) from Andrew Carr, a former OpenAI employee whose LinkedIn confirms that he left the company in 2021.
“I guess that settles that,” Carr wrote on X.
OpenAI faced a major public backlash when Vox revealed the unusually restrictive language in the non-disparagement clause last week after OpenAI co-founder and chief scientist Ilya Sutskever resigned, along with his superalignment team co-leader Jan Leike.
As questions swirled regarding these resignations, the former OpenAI staffers provided little explanation for why they suddenly quit. Sutskever basically wished OpenAI well, expressing confidence “that OpenAI will build AGI that is both safe and beneficial,” while Leike only offered two words: “I resigned.”
Amid an explosion of speculation about whether OpenAI was perhaps forcing out employees or doing dangerous or reckless AI work, some wondered if OpenAI’s non-disparagement agreement was keeping employees from warning the public about what was really going on at OpenAI.
According to Vox, employees had to sign the exit agreement within a week of quitting or else potentially lose millions in vested equity that could be worth more than their salaries. The extreme terms of the agreement were “fairly uncommon in Silicon Valley,” Vox found, allowing OpenAI to effectively censor former employees by requiring that they never criticize OpenAI for the rest of their lives.
“This is on me and one of the few times I’ve been genuinely embarrassed running OpenAI,” Altman posted on X, while claiming, “I did not know this was happening and I should have.”
Vox reporter Kelsey Piper called Altman’s apology “hollow,” noting that Altman had recently signed separation letters that seemed to “complicate” his claim that he was unaware of the harsh terms. Piper reviewed hundreds of pages of leaked OpenAI documents and reported that in addition to financially pressuring employees to quickly sign exit agreements, OpenAI also threatened to block employees from selling their equity.
Even requests for an extra week to review the separation agreement, which could afford the employees more time to seek legal counsel, were seemingly denied—”as recently as this spring,” Vox found.
“We want to make sure you understand that if you don’t sign, it could impact your equity,” an OpenAI representative wrote in an email to one departing employee. “That’s true for everyone, and we’re just doing things by the book.”
OpenAI Chief Strategy Officer Jason Kwon told Vox that the company began reconsidering this language about a month before the controversy hit.
“We are sorry for the distress this has caused great people who have worked hard for us,” Kwon told Vox. “We have been working to fix this as quickly as possible. We will work even harder to be better.”
Altman sided with OpenAI’s biggest critics, writing on X that the non-disparagement clause “should never have been something we had in any documents or communication.”
“Vested equity is vested equity, full stop,” Altman wrote.
These long-awaited updates make clear that OpenAI will never claw back vested equity if employees leave the company and then openly criticize its work (unless both parties sign a non-disparagement agreement). Prior to this week, some former employees feared steep financial retribution for sharing true feelings about the company.
One former employee, Daniel Kokotajlo, publicly posted that he refused to sign the exit agreement, even though he had no idea how to estimate how much his vested equity was worth. He guessed it represented “about 85 percent of my family’s net worth.”
And while Kokotajlo said that he wasn’t sure if the sacrifice was worth it, he still felt it was important to defend his right to speak up about the company.
“I wanted to retain my ability to criticize the company in the future,” Kokotajlo wrote.
Even mild criticism could seemingly cost employees, like Kokotajlo, who confirmed that he was leaving the company because he was “losing confidence” that OpenAI “would behave responsibly” when developing generative AI.
In OpenAI’s defense, the company confirmed that it had never enforced the exit agreements. But now, OpenAI’s spokesperson told CNBC, OpenAI is backtracking and “making important updates” to its “departure process” to eliminate any confusion the prior language caused.
“We have not and never will take away vested equity, even when people didn’t sign the departure documents,” OpenAI’s spokesperson said. “We’ll remove non-disparagement clauses from our standard departure paperwork, and we’ll release former employees from existing non-disparagement obligations unless the non-disparagement provision was mutual.”
The memo sent to current and former employees reassured everyone at OpenAI that “regardless of whether you executed the Agreement, we write to notify you that OpenAI has not canceled, and will not cancel, any Vested Units.”
“We’re incredibly sorry that we’re only changing this language now; it doesn’t reflect our values or the company we want to be,” OpenAI’s spokesperson said.
In a lawsuit that seems determined to ignore that Section 230 exists, Robert F. Kennedy Jr. has sued Meta for allegedly shadowbanning his million-dollar documentary, Who Is Bobby Kennedy? and preventing his supporters from advocating for his presidential campaign.
According to Kennedy, Meta is colluding with the Biden administration to sway the 2024 presidential election by suppressing Kennedy’s documentary and making it harder to support Kennedy’s candidacy. This allegedly has caused “substantial donation losses,” while also violating the free speech rights of Kennedy, his supporters, and his film’s production company, AV24.
Meta had initially restricted the documentary on Facebook and Instagram but later fixed the issue after discovering that the film was mistakenly flagged by the platforms’ automated spam filters.
But Kennedy’s complaint claimed that Meta is still “brazenly censoring speech” by “continuing to throttle, de-boost, demote, and shadowban the film.” In an exhibit, Kennedy’s lawyers attached screenshots representing “hundreds” of Facebook and Instagram users whom Meta allegedly sent threats, intimidated, and sanctioned after they shared the documentary.
Some of these users remain suspended on Meta platforms, the complaint alleged. Others whose temporary suspensions have been lifted claimed that their posts are still being throttled, though, and Kennedy’s lawyers earnestly insisted that an exchange with Meta’s chatbot proves it.
Two days after the documentary’s release, Kennedy’s team apparently asked the Meta AI assistant, “When users post the link whoisbobbykennedy.com, can their followers see the post in their feeds?”
“I can tell you that the link is currently restricted by Meta,” the chatbot answered.
Chatbots, of course, are notoriously inaccurate sources of information, and Meta AI’s terms of service note this. In a section labeled “accuracy,” Meta warns that chatbot responses “may not reflect accurate, complete, or current information” and should always be verified.
Perhaps more significantly, there is little reason to think that Meta’s chatbot would have access to information about internal content moderation decisions.
Techdirt’s Mike Masnick mocked Kennedy’s reliance on the chatbot in the case. He noted that Kennedy seemed to have no evidence of the alleged shadow-banning, while there’s plenty of evidence that Meta’s spam filters accidentally remove non-violative content all the time.
Meta’s chatbot is “just a probabilistic stochastic parrot, repeating a probable sounding answer to users’ questions,” Masnick wrote. “And these idiots think it’s meaningful evidence. This is beyond embarrassing.”
Neither Meta nor Kennedy’s lawyer, Jed Rubenfeld, responded to Ars’ request to comment.
The United States government is currently poised to outlaw TikTok. Little of the evidence that convinced Congress the app may be a national security threat has been shared publicly, in some cases because it remains classified. But one former TikTok employee turned whistleblower, who claims to have driven key news reporting and congressional concerns about the app, has now come forward.
Zen Goziker worked at TikTok as a risk manager, a role that involved protecting the company from external security and reputational threats. In a wrongful termination lawsuit filed against TikTok’s parent company ByteDance in January, he alleges he was fired in February 2022 for refusing “to sign off” on Project Texas, a $1.5 billion program that TikTok designed to assuage US government security concerns by storing American data on servers managed by Oracle.
Goziker worked at TikTok for only six months. He didn’t hold a senior position inside the company. His lawsuit, and a second one he filed in March against several US government agencies, makes a number of improbable claims. He asserts that he was put under 24-hour surveillance by TikTok and the FBI while working remotely in Mexico. He claims that US attorney general Merrick Garland, director of national intelligence Avril Haines, and other top officials “wickedly instigated” his firing. And he states that the FBI helped the CIA share his private information with foreign governments. The suits do not appear to include evidence for any of these claims.
“This lawsuit is full of outrageous claims that lack merit and comes from an individual who significantly exaggerates his role with a company he worked at for merely six months,” TikTok spokesperson Michael Hughes said in a statement.
Yet court records and emails viewed by WIRED suggest that when Goziker raised the alarm about his ex-employer’s links to China, he found a ready audience. After he was fired, Goziker says he began meeting with elected officials, law enforcement agencies, and journalists to allege that, court documents say, he had discovered proof that TikTok’s software could send US data to Toutiao, a ByteDance app in China. That claim directly conflicted with TikTok executives’ assertions that the two companies operated separately.
Goziker says in court filings that what he saw made it necessary to reassess Project Texas. He also alleges that his account of the internal connection to China formed the basis of an influential Washington Post story published in March last year, which said the concerns came from “a former risk manager at TikTok.”
TikTok officials were quoted in that article as saying the allegations were “unfounded,” and that the employee had discovered “nothing more than a naming convention and technical relic.” The Washington Post said it does not comment on sourcing.
“I am free, I am honest, and I am doing this only because I am an American and because USA desperately need help and I cannot keep this truth away from PUBLIC,” Goziker said in an email to WIRED.
His March lawsuit alleging US officials conspired with TikTok to have him fired was filed against Garland, Haines, Secretary of Homeland Security Alejandro Mayorkas, and the agencies they work for.
“Goziker’s main point is that the executives in the American company TikTok Inc. and certain executives from the American federal government have colluded to organize a fraud scheme,” Sean Jiang, Goziker’s lawyer in the case against the US government, told WIRED in an email. The lawsuits do not appear to contain evidence of such a scheme. The Department of Homeland Security and Office of the Director of National Intelligence did not respond to requests for comment. The Department of Justice declined to comment.
Jiang calls the House’s recent passage of a bill that could force ByteDance to sell off TikTok “problematic,” because it “blames ByteDance instead of TikTok Inc for the wrongdoings of the American executives.” He says Goziker would prefer to see TikTok subjected to audits and a new corporate structure.
Instagram users have started complaining on X (formerly Twitter) after discovering that Meta has begun limiting recommended political content by default.
“Did [y’all] know Instagram was actively limiting the reach of political content like this?!” an X user named Olayemi Olurin wrote in an X post with more than 150,000 views as of this writing. “I had no idea ’til I saw this comment and I checked my settings and sho nuff political content was limited.”
“Instagram quietly introducing a ‘political’ content preference and turning on ‘limit’ by default is insane?” wrote another X user named Matt in a post with nearly 40,000 views.
Instagram apparently did not notify users directly on the platform when this change happened.
Instead, Instagram rolled out the change in February, announcing in a blog post that the platform doesn’t “want to proactively recommend political content from accounts you don’t follow.” That post confirmed that Meta “won’t proactively recommend content about politics on recommendation surfaces across Instagram and Threads,” so that those platforms can remain “a great experience for everyone.”
“This change does not impact posts from accounts people choose to follow; it impacts what the system recommends, and people can control if they want more,” Meta’s spokesperson Dani Lever told Ars. “We have been working for years to show people less political content based on what they told us they want, and what posts they told us are political.”
To change the setting, users can navigate to Instagram’s menu for “settings and activity” in their profiles, where they can update their “content preferences.” On this menu, “political content” is the last item under a list of “suggested content” controls that allow users to set preferences for what content is recommended in their feeds.
There are currently two options for controlling what political content users see. Choosing “don’t limit” means “you might see more political or social topics in your suggested content,” the app says. By default, all users are set to “limit,” which means “you might see less political or social topics.”
“This affects suggestions in Explore, Reels, Feed, Recommendations, and Suggested Users,” Instagram’s settings menu explains. “It does not affect content from accounts you follow. This setting also applies to Threads.”
For general Instagram and Threads users, this change primarily limits which posts can be recommended to them, but for influencers using professional accounts, the stakes can be higher. The Washington Post reported that news creators were angered by the update, insisting that it diminished the platforms’ value for reaching users who aren’t actively seeking political content.
“The whole value-add for social media, for political people, is that you can reach normal people who might not otherwise hear a message that they need to hear, like, abortion is on the ballot in Florida, or voting is happening today,” Keith Edwards, a Democratic political strategist and content creator, told The Post.
Meta’s blog noted that “professional accounts on Instagram will be able to use Account Status to check their eligibility to be recommended based on whether they recently posted political content. From Account Status, they can edit or remove recent posts, request a review if they disagree with our decision, or stop posting this type of content for a period of time, in order to be eligible to be recommended again.”
Ahead of a major election year, Meta’s change could impact political outreach attempting to inform voters. The change also came amid speculation that Meta was “shadowbanning” users posting pro-Palestine content since the start of the Israel-Hamas war, The Markup reported.
“Our investigation found that Instagram heavily demoted nongraphic images of war, deleted captions and hid comments without notification, suppressed hashtags, and limited users’ ability to appeal moderation decisions,” The Markup reported.
Meta appears to be interested in shifting away from its reputation as a platform where users expect political content—and misinformation—to thrive. Last year, The Wall Street Journal reported that Meta wanted out of politics and planned to “scale back how much political content it showed users,” after criticism over how the platform handled content related to the January 6 Capitol riot.
The decision to limit recommended political content on Instagram and Threads, Meta’s blog said, extends Meta’s “existing approach to how we treat political content.”
“People have told us they want to see less political content, so we have spent the last few years refining our approach on Facebook to reduce the amount of political content—including from politicians’ accounts—you see in Feed, Reels, Watch, Groups You Should Join, and Pages You May Like,” Meta wrote in a February blog update.
“As part of this, we aim to avoid making recommendations that could be about politics or political issues, in line with our approach of not recommending certain types of content to those who don’t wish to see it,” Meta’s blog continued, while at the same time, “preserving your ability to find and interact with political content that’s meaningful to you if that’s what you’re interested in.”
While platforms typically notify users directly on the platform when terms of service change, that wasn’t the case for this update, which simply added new controls for users. That’s why many users who prefer to be recommended political content—and apparently missed Meta’s announcement and subsequent media coverage—expressed shock to discover that Meta was limiting what they see.
On X, even Instagram users who don’t love seeing political content are currently rallying to raise awareness and share tips on how to update the setting.
“This is actually kinda wild that Instagram defaults everyone to this,” one user named Laura wrote. “Obviously political content is toxic but during an election season it’s a little weird to just hide it from everyone?”
It looks like Elon Musk may lose X’s lawsuit against hate speech researchers who encouraged a major brand boycott after flagging ads appearing next to extremist content on X, the social media site formerly known as Twitter.
X is trying to argue that the Center for Countering Digital Hate (CCDH) violated the site’s terms of service and illegally accessed non-public data to conduct its reporting, allegedly posing a security risk for X. The boycott, X alleged, cost the company tens of millions of dollars by spooking advertisers, while X contends that the CCDH’s reporting is misleading and ads are rarely served on extremist content.
But at a hearing Thursday, US district judge Charles Breyer told the CCDH that he would consider dismissing X’s lawsuit, repeatedly appearing to mock X’s decision to file it in the first place.
Seemingly skeptical of X’s entire argument, Breyer appeared particularly focused on how X intended to prove that the CCDH could have known that its reporting would trigger such substantial financial losses, as the lawsuit hinges on whether the alleged damages were “foreseeable,” NPR reported.
X’s lawyer, Jon Hawk, argued that when the CCDH joined Twitter in 2019, the group agreed to terms of service that noted those terms could change. So when Musk purchased Twitter and updated rules to reinstate accounts spreading hate speech, the CCDH should have been able to foresee those changes in terms and therefore anticipate that any reporting on spikes in hate speech would cause financial losses.
According to CNN, this is where Breyer became frustrated, telling Hawk, “I’m trying to figure out in my mind how that’s possibly true, because I don’t think it is.”
“What you have to tell me is, why is it foreseeable?” Breyer said. “That they should have understood that, at the time they entered the terms of service, that Twitter would then change its policy and allow this type of material to be disseminated?
“That, of course, reduces foreseeability to one of the most vapid extensions of law I’ve ever heard,” Breyer added. “‘Oh, what’s foreseeable is that things can change, and therefore, if there’s a change, it’s ‘foreseeable.’ I mean, that argument is truly remarkable.”
According to NPR, Breyer suggested that X was trying to “shoehorn” its legal theory by using language from a breach of contract claim, when what the company actually appeared to be alleging was defamation.
“You could’ve brought a defamation case; you didn’t bring a defamation case,” Breyer said. “And that’s significant.”
Breyer directly noted that one reason why X might not bring a defamation suit was if the CCDH’s reporting was accurate, NPR reported.
CCDH’s CEO and founder, Imran Ahmed, provided a statement to Ars, confirming that the group is “very pleased with how yesterday’s argument went, including many of the questions and comments from the court.”
“We remain confident in the strength of our arguments for dismissal,” Ahmed said.