The University of Michigan research team worried that their experiment posting AI-generated NCII on X might cross ethical lines.
They chose to conduct the study on X because they deduced it was “a platform where there would be no volunteer moderators and little impact on paid moderators, if any,” should those moderators view their AI-generated nude images.
X’s transparency report seems to suggest that most reported non-consensual nudity is actioned by human moderators, but researchers reported that their flagged content was never actioned without a DMCA takedown.
Since AI image generators are trained on real photos, researchers also took steps to ensure that AI-generated NCII in the study did not re-traumatize victims or depict real people who might stumble on the images on X.
“Each image was tested against a facial-recognition software platform and several reverse-image lookup services to verify it did not resemble any existing individual,” the study said. “Only images confirmed by all platforms to have no resemblance to individuals were selected for the study.”
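The selection rule the study describes, in which an image is used only if every verification service reports no resemblance to a real person, can be sketched as a simple all-services check. This is an illustrative sketch only: the function and the per-service results below are hypothetical stand-ins, not the actual tools or data the researchers used.

```python
# Hypothetical sketch of the study's selection rule: an AI-generated image
# is eligible only if *every* lookup service (facial recognition, reverse
# image search, etc.) reports no resemblance to a real person.
# check_results is stand-in data, not real API output.

def select_images(candidates, check_results):
    """check_results maps (image, service) -> True if a resemblance was found."""
    services = {service for (_, service) in check_results}
    return [
        img for img in candidates
        # Keep the image only if no service flags a resemblance.
        if not any(check_results[(img, service)] for service in services)
    ]

results = {
    ("img_a", "face_rec"): False, ("img_a", "reverse_lookup"): False,
    ("img_b", "face_rec"): True,  ("img_b", "reverse_lookup"): False,
}
print(select_images(["img_a", "img_b"], results))  # ['img_a']
```

The key design point matching the study's description is the `any(...)` guard: a single positive match from any one service disqualifies the image.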
These more “ethical” images were posted on X using popular hashtags like #porn, #hot, and #xxx, but their reach was limited to avoid potential harm, researchers said.
“Our study may contribute to greater transparency in content moderation processes” related to NCII “and may prompt social media companies to invest additional efforts to combat deepfake” NCII, researchers said. “In the long run, we believe the benefits of this study far outweigh the risks.”
According to the researchers, X was given time to automatically detect and remove the content but failed to do so. It’s possible, the study suggested, that X’s decision to allow explicit content starting in June made it harder to detect NCII, as some experts had predicted.
To fix the problem, researchers suggested that both “greater platform accountability” and “legal mechanisms to ensure that accountability” are needed—as is much more research on other platforms’ mechanisms for removing NCII.
“A dedicated” NCII law “must clearly define victim-survivor rights and impose legal obligations on platforms to act swiftly in removing harmful content,” the study concluded.
Prosecutors now have a “blueprint” to seize privileged communications, X warned.
Last year, special counsel Jack Smith asked X (formerly Twitter) to hand over Donald Trump’s direct messages from his presidency without telling Trump. Refusing to comply, X spent the past year arguing that the gag order was an unconstitutional prior restraint on X’s speech and an “end-run” around a records law shielding privileged presidential communications.
Under its so-called free speech absolutist owner Elon Musk, X took this fight all the way to the Supreme Court, only for the nation’s highest court to decline to review X’s appeal on Monday.
It’s unclear exactly why SCOTUS rejected X’s appeal, but in a court filing opposing SCOTUS review, Smith told the court that X’s “contentions lack merit and warrant no further review.” And SCOTUS seemingly agreed.
The government had argued that its nondisclosure order was narrowly tailored to serve a compelling interest in stopping Trump from either deleting his DMs or intimidating witnesses involved in those DM exchanges while he was in office.
At that time, Smith was publicly probing interference with the peaceful transfer of power after the 2020 presidential election, and courts had agreed that “there were ‘reasonable grounds to believe’ that disclosing the warrant” to Trump “‘would seriously jeopardize the ongoing investigation’ by giving him ‘an opportunity to destroy evidence, change patterns of behavior, [or] notify confederates,’” Smith’s court filing said.
Under the Stored Communications Act (SCA), the government can request data and apply for a nondisclosure order gagging any communications provider from tipping off an account holder about search warrants for limited periods deemed appropriate by a court, Smith noted. X was only prohibited from alerting Trump to the search warrant for 180 days, Smith said, and only restricted from discussing the existence of the warrant.
As the government sees it, this reliance on the SCA “does not give unbounded, standardless discretion to government officials or otherwise create a risk of ‘freewheeling censorship,'” like X claims. But the government warned that affirming X’s appeal “would mean that no SCA warrant could be enforced without disclosure to a potential privilege holder, regardless of the dangers to the integrity of the investigation.”
Court finds X alternative to gag order “unpalatable”
X tried to wave a red flag in its SCOTUS petition, warning the court that this was “the first time in American history” that a court “ordered disclosure of presidential communications without notice to the President and without any adjudication of executive privilege.”
The social media company argued that it receives “tens of thousands” of government data requests annually—including “thousands” with nondisclosure orders—and pushes back on any request for privileged information that does not allow users to assert their privileges. Allowing the lower court rulings to stand, X warned SCOTUS, could create a path for government to illegally seize information not just protected by executive privilege, but also by attorney-client, doctor-patient, or journalist-source privileges.
X’s “policy is to notify users about law enforcement requests ‘prior to disclosure of account information’ unless legally ‘prohibited from doing so,'” X argued.
X suggested that rather than seize Trump’s DMs without giving him a chance to assert his executive privilege, the government should have designated a representative capable of weighing and asserting whether some of the data requested was privileged. That’s how the Presidential Records Act (PRA) works, X noted, suggesting that Smith’s team was improperly trying to avoid PRA compliance by invoking SCA instead.
But the US government didn’t have to prove that the less-restrictive alternative X submitted would have compromised its investigation, X said, because the court categorically rejected X’s submission as “unworkable” and “unpalatable.”
According to the court, designating a representative placed a strain on the government to deduce if the representative could be trusted not to disclose the search warrant. But X pointed out that the government had no explanation for why a PRA-designated representative, Steven Engel—a former assistant attorney general for the Office of Legal Counsel who “publicly testified about resisting the former President’s conduct”—”could not be trusted to follow a court order forbidding him from further disclosure.”
“Going forward, the government will never have to prove it could avoid seriously jeopardizing its investigation by disclosing a warrant to only a trusted representative—a common alternative to nondisclosure orders,” X argued.
In a brief supporting X, attorneys for the nonprofit digital rights group the Electronic Frontier Foundation (EFF) wrote that the court was “unduly dismissive of the arguments” X raised and “failed to apply exacting scrutiny, relieving the government of its burden to actually demonstrate, with evidence, that these alternatives would be ineffective.”
Further, X argued that none of the government’s arguments for nondisclosure made sense. Not only was Smith’s investigation announced publicly—allowing Trump ample time to delete his DMs already—but also “there was no risk of destruction of the requested records because Twitter had preserved them.” On top of that, during the court battle, the government eventually admitted that one rationale for the nondisclosure order—that Trump posed a supposed “flight risk” if the search warrant was known—”was implausible because the former President already had announced his re-election run.”
X unsuccessfully pushed SCOTUS to take on the Trump case as an “ideal” and rare opportunity to publicly decide when nondisclosure orders cross the line in seeking to seize potentially privileged information on social media.
In its petition for SCOTUS review, X pointed out that every social media or communications platform is bombarded with government data requests that only the platforms can challenge. That leaves it up to platforms to figure out when data requests are problematic, which they frequently are, as “the government often agrees to modify or vacate them in informal negotiations,” X argued.
But when the government refuses to negotiate, as in the Trump case, platforms have to decide if litigation is worth the risk of sanctions should the court find the platform in contempt, just as X was sanctioned $350,000 in the Trump case. If courts deemed a less restrictive alternative appropriate, such as appointing a trusted representative, platforms would never have to guess when data requests threaten to expose their users’ privileged information, X argued.
According to X, another case like this, in which court filings need not be redacted and a ruling need not happen behind closed doors, won’t come around for decades.
But the government seemingly persuaded the Supreme Court to decline to review the case, partly by arguing that X’s challenge to its nondisclosure order was moot. Responding to X’s objections, the government had eventually agreed to modify the nondisclosure order to disclose the warrant to Trump, so long as the name of the case agent assigned to the investigation was redacted. So X’s appeal is really over nothing, the government suggested.
Additionally, the government argued that “this case would not be an appropriate vehicle” for SCOTUS’ review of the question X raised because “no executive privilege issue actually existed in this case.”
“If review of the underlying legal issues were ever warranted, the Court should await a live case in which the issues are concretely presented,” Smith’s court filing said.
X is likely deflated by SCOTUS’ decision declining to review its appeal. In its petition, X claimed that the court system risked providing “a blueprint for prosecutors who wish to obtain potentially privileged materials” and warned that “this end-run will not be limited to federal prosecutors.” State prosecutors will likely also be emboldened to do the same now that the precedent has been set, X predicted.
In their brief supporting X, EFF lawyers noted that the government already has “far too much authority to shield its activities from public scrutiny.” By failing to prevent nondisclosure orders from restraining speech, the court system risks making it harder to “meaningfully test these gag orders in court,” EFF warned.
“Even a meritless gag order that is ultimately voided by a court causes great harm while it is in effect,” EFF’s lawyers said, while disclosure “ensures that individuals whose information is searched have an opportunity to defend their privacy from unwarranted and unlawful government intrusions.”
Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.
“I cannot accept this evidence without a much better explanation of Mr. Bogatz’s path of reasoning,” Wheelahan wrote.
Wheelahan emphasized that the Nevada merger law specifically stipulated that “all debts, liabilities, obligations and duties of the Company shall thenceforth remain with or be attached to, as the case may be, the Acquiror and may be enforced against it to the same extent as if it had incurred or contracted all such debts, liabilities, obligations, and duties.” And Bogatz’s testimony failed to “grapple with the significance” of this, Wheelahan said.
Overall, Wheelahan considered Bogatz’s testimony on X’s merger-acquired liabilities “strained,” while deeming the government’s US merger law expert Alexander Pyle to be “honest and ready to make appropriate concessions,” even while some of his testimony was “not of assistance.”
Luckily, it seemed that Wheelahan had no trouble drawing his own conclusion after analyzing Nevada’s merger law.
“I find that a Nevada court would likely hold that the word ‘liabilities'” in the merger law “is broad enough on its proper construction under Nevada law to encompass non-pecuniary liabilities, such as the obligation to respond to the reporting notice,” Wheelahan wrote. “X Corp has therefore failed to show that it was not required to respond to the reporting notice.”
Because X “failed on all its claims,” the social media company must cover costs from the appeal, and X’s costs in fighting the initial fine will seemingly only increase from here.
Fighting fine likely to more than double X costs
In a press release celebrating the ruling, eSafety Commissioner Julie Inman Grant criticized X’s attempt to use the merger to avoid complying with Australia’s Online Safety Act.
“Almost any digitally altered content, when left up to an arbitrary individual on the Internet, could be considered harmful,” Mendez said, even something seemingly benign like AI-generated estimates of voter turnouts shared online.
Additionally, the Supreme Court has held that “even deliberate lies (said with ‘actual malice’) about the government are constitutionally protected” because the right to criticize the government is at the heart of the First Amendment.
“These same principles safeguarding the people’s right to criticize government and government officials apply even in the new technological age when media may be digitally altered: civil penalties for criticisms on the government like those sanctioned by AB 2839 have no place in our system of governance,” Mendez said.
According to Mendez, X posts like Kohls’ parody videos are the “political cartoons of today” and California’s attempt to “bulldoze over the longstanding tradition of critique, parody, and satire protected by the First Amendment” is not justified by even “a well-founded fear of a digitally manipulated media landscape.” If officials find deepfakes are harmful to election prospects, there is already recourse through privacy torts, copyright infringement, or defamation laws, Mendez suggested.
Kosseff told Ars that there could be more narrow ways that government officials looking to protect election integrity could regulate deepfakes online. The Supreme Court has suggested that deepfakes spreading disinformation on the mechanics of voting could possibly be regulated, Kosseff said.
Mendez got it “exactly right” by concluding that the best remedy for election-related deepfakes is more speech, Kosseff said. As Mendez described it, a vague law like AB 2839 seemed to only “uphold the State’s attempt to suffocate” speech.
Parody is vital to democratic debate, judge says
The only part of AB 2839 that survives strict scrutiny, Mendez noted, is a section describing audio disclosures in a “clearly spoken manner and in a pitch that can be easily heard by the average listener, at the beginning of the audio, at the end of the audio, and, if the audio is greater than two minutes in length, interspersed within the audio at intervals of not greater than two minutes each.”
Elon Musk has lambasted Australia’s government as “fascists” over proposed laws that could levy substantial fines on social media companies if they fail to comply with rules to combat the spread of disinformation and online scams.
The billionaire owner of social media site X posted the word “fascists” on Friday in response to the bill, which would strengthen the Australian media regulator’s ability to hold companies responsible for the content on their platforms and levy potential fines of up to 5 percent of global revenue. The bill, which was proposed this week, has yet to be passed.
Musk’s comments drew rebukes from senior Australian politicians, with Stephen Jones, Australia’s finance minister, telling national broadcaster ABC that it was “crackpot stuff” and the legislation was a matter of sovereignty.
Bill Shorten, the former leader of the Labor Party and a cabinet minister, accused the billionaire of only championing free speech when it was in his commercial interests. “Elon Musk’s had more positions on free speech than the Kama Sutra,” Shorten said in an interview with Australian radio.
The exchange marks the second time that Musk has confronted Australia over technology regulation.
In May, he accused the country’s eSafety Commissioner of censorship after the government agency took X to court in an effort to force it to remove graphic videos of a stabbing attack in Sydney. A court later denied the eSafety Commissioner’s application.
Musk has also been embroiled in a bitter dispute with authorities in Brazil, where the Supreme Court ruled last month that X should be blocked over its failure to remove or suspend certain accounts accused of spreading misinformation and hateful content.
Australia has been at the forefront of efforts to regulate the technology sector, pitting it against some of the world’s largest social media companies.
This week, the government pledged to introduce a minimum age limit for social media use to tackle “screen addiction” among young people.
In March, Canberra threatened to take action against Meta after the owner of Facebook and Instagram said it would withdraw from a world-first deal to pay media companies to link to news stories.
The government also introduced new data privacy measures to parliament on Thursday that would impose hefty fines and potential jail terms of up to seven years for people found guilty of “doxxing” individuals or groups.
Prime Minister Anthony Albanese’s government had pledged to outlaw doxxing—the publication of personal details online for malicious purposes—this year after the details of a private WhatsApp group containing hundreds of Jewish Australians were published online.
Australia is one of the first countries to pursue laws outlawing doxxing. It is also expected to introduce a tranche of laws in the coming months to regulate how personal data can be used by artificial intelligence.
“These reforms give more teeth to the regulation,” said Monique Azzopardi at law firm Clayton Utz.
Not only did the email not provide staff with enough notice, the labor court ruled, but also any employee’s failure to click “yes” could in no way constitute a legal act of resignation. Instead, the court reviewed evidence alleging that the email appeared designed to either get employees to agree to new employment terms, sight unseen, or else push employees to volunteer for dismissal during a time of mass layoffs across Twitter.
“Going forward, to build a breakthrough Twitter 2.0 and succeed in an increasingly competitive world, we will need to be extremely hardcore,” Musk wrote in the all-staff email. “This will mean working long hours at high intensity. Only exceptional performance will constitute a passing grade.”
With the subject line, “A Fork in the Road,” the email urged staff, “if you are sure that you want to be part of the new Twitter, please click yes on the link below. Anyone who has not done so by 5pm ET tomorrow (Thursday) will receive three months of severance. Whatever decision you make, thank you for your efforts to make Twitter successful.”
In a 73-page ruling, an adjudication officer for the Irish Workplace Relations Commission (WRC), Michael MacNamee, ruled that Twitter’s abrupt dismissal of an Ireland-based senior executive, Gary Rooney, was unfair, the Irish public service broadcaster RTÉ reported. Rooney had argued that his contract clearly stated that his resignation must be provided in writing, not by failing to fill out a form.
A spokesperson for the Department of Enterprise, Trade, and Employment, which handles the WRC’s media inquiries, told Ars that the decision will be published on the WRC’s website on August 26 after both parties have “the opportunity to consider it in full.”
Now, instead of paying Rooney the draft severance amount worth a little more than $25,000, Twitter, which is now called X, has to pay Rooney more than $600,000. According to many outlets, this is a record award from the WRC and included about $220,000 “for prospective future loss of earnings.”
The WRC dismissed Rooney’s claim regarding an allegedly owed performance bonus for 2022 but otherwise largely agreed with his arguments on the unfair dismissal.
Rooney had worked for Twitter for nine years prior to Musk’s takeover, telling the WRC that he previously loved his job but had no way of knowing from the “Fork in the Road” email “what package was being offered” or the “implications of agreeing to stay working for Twitter.” He hesitated to click yes, not knowing how his benefits or stock options might change, while discussing his potential departure with other Twitter employees on Slack and posting on Twitter that he would be leaving.
Twitter tried to argue that the Slack discussions and Rooney’s tweets about the email indicated that he intended to resign, but the court disagreed that these were relevant.
“No employee when faced with such a situation could possibly be faulted for refusing to be compelled to give an open-ended unqualified assent to any of the proposals,” MacNamee said.
X’s senior director of human resources, Lauren Wegman, testified that of the 270 employees in Ireland who received the email, only 35 did not click yes. After this week’s ruling, X may face more complaints from any of those dozens of employees who took the same route Rooney did.
X has not commented on the ruling but is likely disappointed by the loss. The social media company had tried to argue that Rooney’s employment contract “allowed the company to make reasonable changes to its terms and conditions,” RTÉ reported. Wegman had further testified that it was unreasonable for Rooney to believe his pay might change as a result of clicking yes, telling the WRC that his “employment would probably not have ended if he had raised a grievance” within the 24-hour deadline, RTÉ reported.
Rooney’s lawyer, Barry Kenny, told The Guardian that Rooney and his legal team welcomed “the clear and unambiguous finding that my client did not resign from his employment but was unfairly dismissed from his job, notwithstanding his excellent employment record and contribution to the company over the years.”
“It is not okay for Mr. Musk, or indeed any large company to treat employees in such a manner in this country,” Kenny said. “The record award reflects the seriousness and the gravity of the case.”
Twitter will be able to appeal the WRC’s decision, The Journal reported.
An AI-generated image released by xAI during the open-weights launch of Grok-1.
Elon Musk-led social media platform X is training Grok, its AI chatbot, on users’ data, and that’s opt-out, not opt-in. If you’re an X user, that means Grok is already being trained on your posts if you haven’t explicitly told it not to.
Over the past day or so, users of the platform noticed the checkbox to opt out of this data usage in X’s privacy settings. The discovery was accompanied by outrage that user data was being used this way to begin with.
The social media posts about this sometimes seem to suggest that Grok has only just begun training on X users’ data, but users actually don’t know for sure when it started happening.
Earlier today, X’s Safety account tweeted, “All X users have the ability to control whether their public posts can be used to train Grok, the AI search assistant.” But it didn’t clarify either when the option became available or when the data collection began.
You cannot currently disable it in the mobile apps, but you can on mobile web, and X says the option is coming to the apps soon.
On the privacy settings page, X says:
To continuously improve your experience, we may utilize your X posts as well as your user interactions, inputs, and results with Grok for training and fine-tuning purposes. This also means that your interactions, inputs, and results may also be shared with our service provider xAI for these purposes.
X’s privacy policy has allowed for this since at least September 2023.
It’s increasingly common for user data to be used this way; for example, Meta has done the same with its users’ content, and there was an outcry when Adobe updated its terms of use to allow for this kind of thing. (Adobe quickly backtracked and promised to “never” train generative AI on creators’ content.)
How to opt out
You can’t opt out within the iOS or Android apps yet, but you can do so in a few quick steps on either mobile or desktop web. To do so:
Click or tap “More” in the nav panel
Click or tap “Settings and privacy”
Click or tap “Privacy and safety”
Scroll down and click or tap “Grok” under “Data sharing and personalization”
Uncheck the box “Allow your posts as well as your interactions, inputs, and results with Grok to be used for training and fine-tuning,” which is checked by default.
Alternatively, you can follow this link directly to the settings page and uncheck the box with just one more click. If you’d like, you can also delete your conversation history with Grok here, provided you’ve actually used the chatbot before.
Elon Musk’s fight against Media Matters for America (MMFA)—a watchdog organization that he largely blames for an ad boycott that tanked Twitter/X’s revenue—has raised an interesting question about whether any judge owning Tesla stock might reasonably be considered biased when weighing any lawsuit centered on the tech billionaire.
In a court filing Monday, MMFA lawyers argued that “undisputed facts—including statements from Musk and Tesla—lay bare the interest Tesla shareholders have in this case.” According to the watchdog, any outcome in the litigation will likely impact Tesla’s finances, and that’s a problem because there’s a possibility that the judge in the case, Reed O’Connor, owns Tesla stock.
“X cannot dispute the public association between Musk—his persona, business practices, and public remarks—and the Tesla brand,” MMFA argued. “That association would lead a reasonable observer to ‘harbor doubts’ about whether a judge with a financial interest in Musk could impartially adjudicate this case.”
It’s still unclear if Judge O’Connor actually owns Tesla stock. But after MMFA’s legal team uncovered disclosures showing that he did as of last year, they argued that fact can only be clarified if the court views Tesla as a party with a “financial interest in the outcome of the case” under Texas law—“no matter how small.”
To make those facts clear, MMFA is now arguing that X must be ordered to add Tesla as an interested person in the litigation, which, a source familiar with the matter told Ars, would most likely lead to a recusal if O’Connor indeed still owned Tesla stock.
“At most, requiring X to disclose Tesla would suggest that judges owning stock in Tesla—the only publicly traded Musk entity—should recuse from future cases in which Musk himself is demonstrably central to the dispute,” MMFA argued.
Ars could not immediately reach X Corp’s lawyer for comment.
However, in X’s court filing opposing the motion to add Tesla as an interested person, X insisted that “Tesla is not a party to this case and has no interest in the subject matter of the litigation, as the business relationships at issue concern only X Corp.’s contracts with X’s advertisers.”
Calling MMFA’s motion “meritless,” X accused MMFA of strategizing to get Judge O’Connor disqualified in order to go “forum shopping” after MMFA received “adverse rulings” on motions to stay discovery and dismiss the case.
As to the question of whether any judge owning Tesla stock might be considered impartial in weighing Musk-centric cases, X argued that Judge O’Connor was just as duty-bound to reject an improper motion for recusal, should MMFA go that route, as he was to accept a proper motion.
“Courts are ‘reluctant to fashion a rule requiring judges to recuse themselves from all cases that might remotely affect nonparty companies in which they own stock,'” X argued.
Recently, judges have recused themselves from cases involving Musk without explaining why. In November, a prior judge in the very same Media Matters suit mysteriously recused himself, with The Hill reporting that it was likely that the judge’s “impartiality might reasonably be questioned” for reasons like a financial interest or personal bias. Then in June, another judge disqualified himself from ruling on a severance lawsuit brought by former Twitter executives without giving “a specific reason,” Bloomberg Law reported.
Should another recusal come in the MMFA lawsuit, it would be a rare example of a judge clearly disclosing a financial interest in a Musk case.
“The straightforward question is whether Musk’s statements and behavior relevant to this case affect Tesla’s stock price, not whether they are the only factor that affects it,” MMFA argued. “At the very least, there is a serious question about whether Musk’s highly unusual management practices mean Tesla must be disclosed as an interested party.”
Parties expect a ruling on MMFA’s motion in the coming weeks.
Continuing to evolve the fact-checking service that launched as Twitter’s Birdwatch, X has announced that Community Notes can now be requested to clarify problematic posts spreading on Elon Musk’s platform.
X’s Community Notes account confirmed late Thursday that, due to “popular demand,” X had launched a pilot test on the web-based version of the platform. The test is active now and the same functionality will be “coming soon” to Android and iOS, the Community Notes account said.
Through the current web-based pilot, if you’re an eligible user, you can click on the “•••” menu on any X post on the web and request fact-checking from one of Community Notes’ top contributors, X explained. If X receives five or more requests within 24 hours of the post going live, top contributors will be alerted to consider adding a Community Note.
Only X users with verified phone numbers will be eligible to request Community Notes, X said, and to start, users will be limited to five requests a day.
“The limit may increase if requests successfully result in helpful notes, or may decrease if requests are on posts that people don’t agree need a note,” X’s website said. “This helps prevent spam and keep note writers focused on posts that could use helpful notes.”
Once X receives five or more requests for a Community Note within a single day, top contributors with diverse views will be alerted to respond. On X, top contributors are constantly changing, as their notes are voted as either helpful or not. If at least 4 percent of their notes are rated “helpful,” X explained on its site, and the impact of their notes meets X standards, they can be eligible to receive alerts.
“A contributor’s Top Writer status can always change as their notes are rated by others,” X’s website said.
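The thresholds X describes can be condensed into a minimal sketch: five or more requests within 24 hours of a post going live trigger alerts, and a contributor needs at least a 4 percent helpful-note rate to receive them. The constant and function names below are illustrative assumptions, not X's actual implementation, which also weighs additional factors like note impact.

```python
# Minimal sketch of the thresholds X describes (names are illustrative):
# alerts go out once five or more note requests arrive within 24 hours of
# a post going live, and a top contributor needs at least 4 percent of
# their notes rated helpful to be eligible for those alerts.

REQUEST_THRESHOLD = 5
REQUEST_WINDOW_HOURS = 24
HELPFUL_RATE_FLOOR = 0.04

def requests_trigger_alert(request_hours_after_post):
    """True if enough requests landed within the 24-hour window."""
    in_window = [h for h in request_hours_after_post if h <= REQUEST_WINDOW_HOURS]
    return len(in_window) >= REQUEST_THRESHOLD

def contributor_may_be_alerted(helpful_notes, total_notes):
    """True if the contributor's helpful-note rate meets the 4 percent floor."""
    return total_notes > 0 and helpful_notes / total_notes >= HELPFUL_RATE_FLOOR

print(requests_trigger_alert([1, 2, 3, 5, 20]))    # True: 5 requests in window
print(requests_trigger_alert([1, 2, 30, 31, 40]))  # False: only 2 in window
```

Note that, per X's description, crossing these thresholds only makes alerts possible; whether a note is written and survives still depends on contributors rating it helpful across viewpoints.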
Ultimately, X considers notes helpful if they “contain accurate, high-quality information” and “help inform people’s understanding of the subject matter in posts,” X said on another part of its site. To gauge the former, X said that the platform partners with “professional reviewers” from the Associated Press and Reuters. X also continually monitors whether notes marked helpful by top writers match what general X users marked as helpful.
“We don’t expect all notes to be perceived as helpful by all people all the time,” X’s website said. “Instead, the goal is to ensure that on average notes that earn the status of Helpful are likely to be seen as helpful by a wide range of people from different points of view, and not only be seen as helpful by people from one viewpoint.”
X will also be allowing half of the top contributors to request notes during the pilot phase, which X said will help the platform evaluate “whether it is beneficial for Community Notes contributors to have both the ability to write notes and request notes.”
According to X, the criteria for requesting a note have intentionally been designed to be simple during the pilot stage, but X expects “these criteria to evolve, with the goal that requests are frequently found valuable to contributors, and not noisy.”
It’s hard to tell from the outside looking in how helpful Community Notes are to X users. The most recent Community Notes survey data that X points to is from 2022 when the platform was still called Twitter and the fact-checking service was still called Birdwatch.
That data showed that “on average,” users were “20–40 percent less likely to agree with the substance of a potentially misleading Tweet than someone who sees the Tweet alone.” And based on Twitter’s “internal data” at that time, the platform also estimated that “people on Twitter who see notes are, on average, 15–35 percent less likely to Like or Retweet a Tweet than someone who sees the Tweet alone.”
Elon Musk’s fight defending X’s content moderation decisions isn’t just with hate speech researchers and advertisers. He has also long been battling regulators, and this week, he seemed positioned to secure a potentially big win in California, where he’s hoping to permanently block a law that he claims unconstitutionally forces his platform to justify its judgment calls.
At a hearing Wednesday, three judges in the 9th US Circuit Court of Appeals seemed inclined to agree with Musk that a California law requiring disclosures from social media companies that clearly explain their content moderation choices likely violates the First Amendment.
Passed in 2022, AB-587 forces platforms like X to submit a “terms of service report” detailing how they moderate several categories of controversial content. Those categories include hate speech or racism, extremism or radicalization, disinformation or misinformation, harassment, and foreign political interference, which X’s lawyer, Joel Kurtzberg, told judges yesterday “are the most controversial categories of so-called awful but lawful speech.”
The law would seemingly require more transparency than ever from X, making it easy for users to track exactly how much controversial content X flags and removes—and perhaps most notably for advertisers, how many users viewed concerning content.
To block the law, X sued in 2023, arguing that California was trying to dictate its terms of service and force the company to make statements on content moderation that could generate backlash. X worried that the law “impermissibly” interfered with both “the constitutionally protected editorial judgments” of social media companies, as well as impacted users’ speech by requiring companies “to remove, demonetize, or deprioritize constitutionally protected speech that the state deems undesirable or harmful.”
Any companies found to be non-compliant could face stiff fines of up to $15,000 per violation per day, which X considered “draconian.” But last year, a lower court declined to block the law, prompting X to appeal, and yesterday, the appeals court seemed more sympathetic to X’s case.
At the hearing, Kurtzberg told judges that the law was “deeply threatening to the well-established First Amendment interests” of an “extraordinary diversity of” people, which is why X’s complaint was supported by briefs from reporters, freedom of the press advocates, First Amendment scholars, “conservative entities,” and people across the political spectrum.
All share “a deep concern about a statute that, on its face, is aimed at pressuring social media companies to change their content moderation policies, so as to carry less or even no expression that’s viewed by the state as injurious to its people,” Kurtzberg told judges.
When the court pointed out that the law seemingly required X only to abide by the content moderation policies for each category defined in its own terms of service—and did not compel X to adopt any policy or position it had not chosen—Kurtzberg pushed back.

“They don’t mandate us to define the categories in a specific way, but they mandate us to take a position on what the legislature makes clear are the most controversial categories to moderate and define,” Kurtzberg said. “We are entitled to respond to the statute by saying we don’t define hate speech or racism. But the report also asks about policies that are supposedly, quote, ‘intended’ to address those categories, which is a judgment call.”
“This is very helpful,” Judge Anthony Johnstone responded. “Even if you don’t yourself define those categories in the terms of service, you read the law as requiring you to opine or discuss those categories, even if they’re not part of your own terms,” and “you are required to tell California essentially your views on hate speech, extremism, harassment, foreign political interference, how you define them or don’t define them, and what you choose to do about them?”
“That is correct,” Kurtzberg responded, noting that X considered those categories the most “fraught” and “difficult to define.”
Just before the Fourth of July holiday, Elon Musk moved to dismiss a lawsuit alleging that he intentionally misled Twitter investors in 2022 by failing to disclose his growing stake in the company while tweeting about potentially starting his own social network in the weeks before announcing his plan to buy Twitter.
Musk devised this fraudulent scheme to reduce the Twitter purchase price by $200 million, according to a proposed class action filed by an Oklahoma Firefighters pension fund on behalf of all Twitter investors allegedly harmed. But in another court filing this week, Musk insisted that “all indications”—including those referenced in the firefighters’ complaint—”point to mistake,” not fraud.
According to Musk, evidence showed that he simply misunderstood the Securities Exchange Act when he delayed filing a Rule 13 disclosure of his nearly 10 percent ownership stake in Twitter in March 2022. Musk argued that he believed he was required to disclose this stake at the end of the year, rather than within 10 days after the month in which he amassed a 5 percent stake. He said that previously he’d only filed Rule 13 disclosures as the owner of a company—not as someone suddenly acquiring a 5 percent stake.
Musk claimed that as soon as his understanding of the law was corrected—on April 1, when he’d already missed the deadline by about seven days—he promptly stopped trading and filed the disclosure on the next trading day.
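The timeline Musk describes can be checked with simple date arithmetic. This sketch assumes the widely reported dates (the stake crossed 5 percent on March 14, 2022, and the disclosure was filed April 4, 2022), treats the deadline as 10 calendar days after the crossing, and skips weekends but ignores market holidays; it is an illustration, not a legal calculation.

```python
from datetime import date, timedelta

def trading_days_between(start: date, end: date) -> int:
    """Count weekdays after `start` up to and including `end`.
    Ignores market holidays (an assumption for this sketch)."""
    days = 0
    d = start
    while d < end:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday=0 .. Friday=4
            days += 1
    return days

crossed_5pct = date(2022, 3, 14)              # widely reported crossing date
deadline = crossed_5pct + timedelta(days=10)  # "within 10 days" -> March 24
filed = date(2022, 4, 4)                      # disclosure filed
print(trading_days_between(deadline, filed))  # prints 7
```

Seven trading days between the purported deadline and the filing matches the “within seven trading days” framing in Musk’s court filing quoted below.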
“Such prompt and corrective disclosure—within seven trading days of the purported deadline—is not the stuff of a fraudulent scheme to manipulate the market,” Musk’s court filing said.
As Musk sees it, the firefighters’ suit “makes no sense” because it basically alleged that Musk always intended to disclose the supposedly fraudulent scheme, which, in the context of his extraordinary wealth, barely saved him any meaningful amount of money when purchasing Twitter.
The idea that Musk “engaged in intentional securities fraud in order to save $200 million is illogical in light of Musk’s eventual $44 billion purchase of Twitter,” Musk’s court filing said. “It defies logic that Musk would commit fraud to save less than 0.5 percent of Twitter’s total purchase price, and 0.1 percent of his net worth, all while knowing that there would be ‘an inevitable day of reckoning’ when he would disclose the truth—which was always his intent.”
The much more likely explanation for “Musk’s acknowledgement of his tardiness,” he argued, “is that he was expressly acknowledging a mistake, not publicly conceding a purportedly days-old fraudulent scheme.”
Arguing that all the firefighters showed was “enough to adequately plead a material omission and misstatement”—which he said is not an actionable claim under the Securities Exchange Act—Musk has asked for the lawsuit to be dismissed with prejudice. At most, Musk is guilty of neglect, not deception, his court filing said; he never “had any intention of avoiding reporting requirements.”
The firefighters pension fund has until August 12 to defend its claims and keep the suit alive, Musk’s court filing noted. In their complaint, the firefighters had asked the court to award damages covering losses, plus interest, for all Twitter shareholders determined to be “cheated out of the true value of their securities” by Musk’s alleged scheme.
Ars could not immediately reach lawyers for Musk or the firefighters pension fund for comment.
Elon Musk is still frantically pushing to launch X payment services in the US by the end of 2024, Bloomberg reported Tuesday.
Launching payment services is arguably one of the reasons why Musk paid so much to acquire Twitter in 2022. His rebranding of the social platform into X revives a former dream he had as a PayPal co-founder who fought and failed to name the now-ubiquitous payments app X. Musk has told X staff that transforming the company into a payments provider would be critical to achieving his goal of turning X into a so-called everything app “within three to five years.”
Late last year, Musk said it would “blow” his “mind” if X didn’t roll out payments by the end of 2024, so Bloomberg’s report likely comes as no big surprise to Musk’s biggest fans who believe in his vision. At that time, Musk said he wanted X users’ “entire financial lives” on the platform before 2024 ended, and a Bloomberg review of “more than 350 pages of documents and emails related to money transmitter licenses that X Payments submitted in 11 states” shows approximately how close he is to making that dream a reality on his platform.
X Payments, a subsidiary of X, reports that X already has money transmitter licenses in 28 states, but X wants to secure licenses in all states before 2024 winds down, Bloomberg reported.
Bloomberg’s review found that X has a multiyear plan to gradually introduce payment features across the US—including “Venmo-like” features to send and receive money, as well as make purchases online—but hopes to begin that process this year. Payment providers like Stripe and Adyen have already partnered with X to process its transactions, Bloomberg reported, and X has told regulators that it “anticipated” that its payments system would also rely on those partnerships.
Musk initially had hoped to launch payments globally in 2024, but regulatory pressures forced him to tamp down those ambitions, Bloomberg reported. Massachusetts, for example, required X to resubmit its application only after more than half of US states had issued licenses, Bloomberg found.
Ultimately, Musk wants X to become the largest financial institution in the world. Bloomberg reported that he plans to do this by giving users a convenient “digital dashboard” through X “that will serve as a centralized hub for all payments activity” online. To make sure that users keep their money stashed on the platform, Musk plans to offer “extremely high yield” savings accounts that X Payments’ chief information security officer, Chris Stanley, teased in April would basically guarantee that funds are rarely withdrawn from X.
“The end goal is if you ever have any incentive to take money out of our system, then we have failed,” Stanley posted on X.
Stanley compared X payments to Venmo and Apple Pay and said X’s plan for its payment feature was to “evolve” so that X users “can gain interest, buy products,” and “eventually use it to buy things in stores.”
Bloomberg confirmed that X does not plan to charge users any fees to send or receive payments, although Musk has told regulators that offering payments will “boost” X’s business by increasing X users’ “participation and engagement.” Analysts told Bloomberg that X could also profit off payments by charging merchants fees or by “offering banking services, such as checking accounts and debit cards.”
Musk has told X staff that he plans to offer checking accounts, debit cards, and even loans through X, saying that “if you address all things that you want from a finance standpoint, then we will be the people’s financial institution.”
X CEO Linda Yaccarino has been among the biggest cheerleaders for Musk’s plan to turn X into a bank, writing in a blog last year, “We want money on X to flow as freely as information and conversation.”