Policy


Supreme Court hears case that could trigger big crackdown on Internet piracy


Justices want Cox to crack down on piracy, but question Sony’s strict demands.


Supreme Court justices expressed numerous concerns today in a case that could determine whether Internet service providers must terminate the accounts of broadband users accused of copyright infringement. Oral arguments were held in the case between cable Internet provider Cox Communications and record labels led by Sony.

Some justices were skeptical of arguments that ISPs should have no legal obligation under the Digital Millennium Copyright Act (DMCA) to terminate an account when a user’s IP address has been repeatedly flagged for downloading pirated music. But justices also seemed hesitant to rule in favor of record labels, with some of the debate focusing on how ISPs should handle large accounts like universities where there could be tens of thousands of users.

Justice Sonia Sotomayor chided Cox for not doing more to fight infringement.

“There are things you could have done to respond to those infringers, and the end result might have been cutting off their connections, but you stopped doing anything for many of them,” Sotomayor said to attorney Joshua Rosenkranz, who represents Cox. “You didn’t try to work with universities and ask them to start looking at an anti-infringement notice to their students. You could have worked with a multi-family dwelling and asked the people in charge of that dwelling to send out a notice or do something about it. You did nothing and, in fact, counselor, your clients’ sort of laissez-faire attitude toward the respondents is probably what got the jury upset.”

A jury ordered Cox to pay over $1 billion in 2019, but the US Court of Appeals for the 4th Circuit overturned that damages verdict in February 2024. The appeals court found that Cox did not profit directly from copyright infringement committed by its users, but affirmed the jury’s separate finding of willful contributory infringement. Cox is asking the Supreme Court to clear it of willful contributory infringement, while record labels want a ruling that would compel ISPs to boot more pirates from the Internet.

Cox: Biggest infringers aren’t residential users

Rosenkranz countered that Cox created its own anti-infringement program, sent out hundreds of warnings a day, suspended thousands of accounts a month, and worked with universities. He said that “the highest recidivist infringers” cited in the case were not individual households, but rather universities, hotels, and regional ISPs that purchase connectivity from Cox in order to resell it to local users.

If Sony wins the case, “those are the entities that are most likely to be cut off first because those are the ones that accrue the greatest number of [piracy notices],” the Cox lawyer said. Even within a multi-person household where the IP address is caught by an infringement monitoring service, “you still don’t know who the individual [infringer] is,” he said. At another point in the hearing, he pointed out that Sony could sue individual infringers directly instead of suing ISPs.

Justice Amy Coney Barrett asked Cox, “What incentive would you have to do anything if you won? If you win and mere knowledge [of infringement] isn’t enough, why would you bother to send out any [copyright] notices in the future? What would your obligation be?”

Rosenkranz answered, “For the simple reason that Cox is a good corporate citizen that cares a lot about what happens on its system. We do all sorts of things that the law doesn’t require us to do.” After further questioning by Barrett, Rosenkranz acknowledged that Cox would have no liability risk going forward if it wins the case.

Justice Elena Kagan said the DMCA safe harbor, which protects entities from liability if they take steps to fight infringement, would “seem to do nothing” if the court sides with Cox. “Why would anybody care about getting into the safe harbor if there’s no liability in the first place?” she said.

Kagan doesn’t buy Sony’s “intent” argument

Kagan also criticized Sony’s case. She pointed to the main principles underlying Twitter v. Taamneh, a 2023 ruling that protected Twitter against allegations that it aided and abetted ISIS in a terrorist attack. Kagan said the Twitter case and the Smith & Wesson case involving gun sales to Mexican drug cartels show that there are strict limits on what kinds of behavior are considered aiding and abetting.

Kagan described how the cases show there is a real distinction between nonfeasance (doing nothing) and misfeasance, that treating one customer like everyone else is not the same as providing special assistance, and that a party “must seek by your action to make it occur” in order to be guilty of aiding and abetting.

“If you look at those three things, you fail on all of them,” Kagan said to attorney Paul Clement, who represents Sony. “Those three things are kind of inconsistent with the intent standard you just laid out.”

Clement said that to be held liable, an Internet provider “has to know that specified customers are substantially certain to infringe” and “know that providing the service to that customer will make infringement substantially certain.”

Justice Neil Gorsuch indicated that determining secondary liability for Internet providers should be taken up by Congress before the court expands that liability on its own. “Congress still hasn’t defined the contours of what secondary liability should look like. Here we are debating them, so shouldn’t that be a flag of caution for us in expanding it too broadly?”

Alito: “I just don’t see how it’s workable at all”

Clement tried to keep the focus on residential customers, saying that 95 percent of infringing customers are residential users. But he faced questions about how ISPs should handle much larger customers where one or a few users infringe.

Justice Samuel Alito questioned Clement about what ISPs should do with a university where some students infringe. Alito didn’t seem satisfied with Clement’s response that “the ISP is supposed to sort of have a conversation with the university.”

Alito said that after an ISP tells a university, “a lot of your 50,000 students are infringing… the university then has to determine which particular students are engaging in this activity. Let’s assume it can even do that, and so then it knocks out 1,000 students and then another 1,000 students are going to pop up doing the same thing. I just don’t see how it’s workable at all.”

Clement said that hotels limit speeds to restrict peer-to-peer downloading, and suggested that universities do the same. “I don’t think it would be the end of the world if universities provided service at a speed that was sufficient for most other purposes but didn’t allow the students to take full advantage of BitTorrent,” he said. “I could live in that world. But in all events, this isn’t a case that’s just about universities. We’ve never sued the universities.”

Barrett replied, “It seems like you’re asking us to rely on your good corporate citizenship too, that you wouldn’t go after the university or the hospital.”

Kagan said that if Sony wins, Cox would have little incentive to cooperate with copyright holders. “It seems to me the best response that Cox could have is just to make sure it never reads any of your notices ever again, because all of your position is based on Cox having knowledge of this,” she said.

In response, Clement argued, “I think willful blindness would satisfy the common law standard for aiding and abetting.”

Purpose vs. intent

Some of the discussion focused on the legal concepts of purpose and intent. Cox has argued that knowledge of infringement “cannot transform passive provision of infrastructure into purposeful, culpable conduct.” Sony has said Cox exhibited both “purpose and intent” to facilitate infringement when it continued providing Internet access to specific customers with the expectation that they were likely to infringe.

Sotomayor said Cox’s position is “that the only way you can have aiding and abetting in this field is if you have purpose,” while Sony is saying, “we don’t have to prove purpose, we have to prove only intent.” Sotomayor told Clement that “we are being put to two extremes here. The other side says, ‘there’s no liability because we’re just putting out into the stream of commerce a good that can be used for good or bad, and we’re not responsible for the infringer’s decision.’”

Sotomayor said the question of purpose vs. intent may be decided differently based on whether Cox’s customer is a residence or a regional ISP that buys Cox’s network capacity and resells it to local customers. Sotomayor said she is reluctant “to say that because one person in that region continues to infringe, that the ISP is materially supporting that infringement because it’s not cutting off the Internet for the 50,000 or 100,000 people who are represented by that customer.”

But a single-family home contains a small number of people, and an ISP may be “materially contributing” to infringement by providing service to that home, Sotomayor said. “How do we announce a rule that deals with those two extremes?” she asked.

Clement argued that the DMCA’s “safe harbor takes care of the regional ISPs. Frankly, I’m not that worried about the regional ISPs because if that were really the problem, we could go after the regional ISPs.”

Cox’s case has support from the US government. US Deputy Solicitor General Malcolm Stewart told justices today that “in copyright law and more generally, this form of secondary liability is reserved for persons who act for the purpose of facilitating violations of law. Because Cox simply provided the same generic Internet services to infringers and non-infringers alike, there is no basis for inferring such a purpose here.”

Terminating all access “extremely overbroad”

Sotomayor asked Stewart if he’s worried that a Cox win would remove ISPs’ economic incentive to control copyright infringement. “I would agree that not much economic incentive would be left,” Stewart replied. “I’m simply questioning whether that’s a bad thing.”

Stewart gave a hypothetical in which an individual Internet user is sued for infringement in a district court. The district court could award damages and impose an injunction to prevent further infringement, but it probably couldn’t “enjoin the person from ever using the Internet again,” Stewart said.

“The approach of terminating all access to the Internet based on infringement, it seems extremely overbroad given the centrality of the Internet to modern life and given the First Amendment,” he said.

Oral arguments ended with a reply from Rosenkranz, who said Clement’s suggestion that ISPs simply “have a conversation” with universities is “a terrible answer from the perspective of the company that is trying to figure out what its legal obligations are [and] facing crushing liabilities.” Rosenkranz also suggested that record labels pay for ISPs’ enforcement programs.

“The plaintiffs have recourse,” he said. “How about a conversation with the ISPs where they talk about how to work out things together? Maybe they kick in a little money. Now, they won’t get billion-dollar verdicts, but if they believe that the programs that Cox and others have aren’t satisfactory, they can design better programs and help pay for them.”


Jon is a Senior IT Reporter for Ars Technica. He covers the telecom industry, Federal Communications Commission rulemakings, broadband consumer affairs, court cases, and government regulation of the tech industry.


OpenAI desperate to avoid explaining why it deleted pirated book datasets


Not for OpenAI to reason why?

OpenAI risks increased fines after deleting pirated books datasets.

OpenAI may soon be forced to explain why it deleted a pair of controversial datasets composed of pirated books, and the stakes could not be higher.

At the heart of a class-action lawsuit from authors alleging that ChatGPT was illegally trained on their works, OpenAI’s decision to delete the datasets could end up being a deciding factor that gives the authors the win.

It’s undisputed that OpenAI deleted the datasets, known as “Books 1” and “Books 2,” prior to ChatGPT’s release in 2022. Created in 2021 by OpenAI employees who have since left the company, the datasets were built by scraping the open web, with the bulk of their data taken from a shadow library called Library Genesis (LibGen).

As OpenAI tells it, the datasets fell out of use within that same year, prompting an internal decision to delete them.

But the authors suspect there’s more to the story than that. They noted that OpenAI appeared to flip-flop by retracting its claim that the datasets’ “non-use” was a reason for deletion, then later claiming that all reasons for deletion, including “non-use,” should be shielded under attorney-client privilege.

To the authors, it seemed like OpenAI was quickly backtracking after the court granted the authors’ discovery requests to review OpenAI’s internal messages on the firm’s “non-use.”

In fact, OpenAI’s reversal only made authors more eager to see how OpenAI discussed “non-use,” and now they may get to find out all the reasons why OpenAI deleted the datasets.

Last week, US Magistrate Judge Ona Wang ordered OpenAI to share all communications with in-house lawyers about deleting the datasets, as well as “all internal references to LibGen that OpenAI has redacted or withheld on the basis of attorney-client privilege.”

According to Wang, OpenAI slipped up by arguing that “non-use” was not a “reason” for deleting the datasets, while simultaneously claiming that it should also be deemed a “reason” considered privileged.

Either way, the judge ruled that OpenAI couldn’t block discovery on “non-use” just by deleting a few words from prior filings that had been on the docket for more than a year.

“OpenAI has gone back-and-forth on whether ‘non-use’ as a ‘reason’ for the deletion of Books1 and Books2 is privileged at all,” Wang wrote. “OpenAI cannot state a ‘reason’ (which implies it is not privileged) and then later assert that the ‘reason’ is privileged to avoid discovery.”

Additionally, OpenAI’s claim that all reasons for deleting the datasets are privileged “strains credulity,” she concluded, ordering OpenAI to produce a wide range of potentially revealing internal messages by December 8. OpenAI must also make its in-house lawyers available for deposition by December 19.

OpenAI has argued that it never flip-flopped or retracted anything. It simply used vague phrasing that led to confusion over whether any of the reasons for deleting the datasets were considered non-privileged. But Wang didn’t buy into that, concluding that “even if a ‘reason’ like ‘non-use’ could be privileged, OpenAI has waived privilege by making a moving target of its privilege assertions.”

Asked for comment, OpenAI told Ars that “we disagree with the ruling and intend to appeal.”

OpenAI’s “flip-flop” may cost it the win

So far, OpenAI has avoided disclosing its rationale, claiming that all the reasons it had for deleting the datasets are privileged. In-house lawyers weighed in on the decision to delete and were even copied on a Slack channel initially called “excise-libgen.”

But Wang reviewed those Slack messages and found that “the vast majority of these communications were not privileged because they were ‘plainly devoid of any request for legal advice and counsel [did] not once weigh in.’”

In a particularly non-privileged batch of messages, one OpenAI lawyer, Jason Kwon, only weighed in once, the judge noted, to recommend the channel name be changed to “project-clear.” Wang reminded OpenAI that “the entirety of the Slack channel and all messages contained therein is not privileged simply because it was created at the direction of an attorney and/or the fact that a lawyer was copied on the communications.”

The authors believe that exposing OpenAI’s rationale may help prove that the ChatGPT maker willfully infringed on copyrights when pirating the book data. As Wang explained, OpenAI’s retraction risked putting the AI firm’s “good faith and state of mind at issue,” which could increase fines in a loss.

“In a copyright case, a court can increase the award of statutory damages up to $150,000 per infringed work if the infringement was willful, meaning the defendant ‘was actually aware of the infringing activity’ or the ‘defendant’s actions were the result of reckless disregard for, or willful blindness to, the copyright holder’s rights,’” Wang wrote.
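
For a sense of scale, a purely hypothetical illustration (the number of works at issue in the authors’ case is not established here): a willfulness finding covering 10,000 infringed works at the statutory maximum would mean

    10,000 × $150,000 = $1,500,000,000

in statutory damages alone.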

In a court transcript, a lawyer representing some of the authors suing OpenAI, Christopher Young, noted that OpenAI could be in trouble if evidence showed that it decided against using the datasets for later models due to legal risks. He also suggested that OpenAI could be using the datasets under different names to mask further infringement.

Judge calls out OpenAI for twisting fair use ruling

Wang also found it contradictory that OpenAI continued to argue in a recent filing that it acted in good faith, while “artfully” removing “its good faith affirmative defense and key words such as ‘innocent,’ ‘reasonably believed,’ and ‘good faith.’” These changes only strengthened discovery requests to explore authors’ willfulness theory, Wang wrote, noting the sought-after internal messages would now be critical for the court’s review.

“A jury is entitled to know the basis for OpenAI’s purported good faith,” Wang wrote.

The judge appeared particularly frustrated by OpenAI seemingly twisting the Anthropic ruling to defend against the authors’ request to learn more about the deletion of the datasets.

In a footnote, Wang called out OpenAI for “bizarrely” citing the Anthropic ruling in a way that “grossly” misrepresented Judge William Alsup’s decision, claiming that he found that “downloading pirated copies of books is lawful as long as they are subsequently used for training an LLM.”

Instead, Alsup wrote that he doubted that “any accused infringer could ever meet its burden of explaining why downloading source copies from pirate sites that it could have purchased or otherwise accessed lawfully was itself reasonably necessary to any subsequent fair use.”

If anything, Wang wrote, OpenAI’s decision to pirate book data—then delete it—seemed “to fall squarely into the category of activities proscribed by” Alsup. For emphasis, she quoted Alsup’s order, which said, “such piracy of otherwise available copies is inherently, irredeemably infringing even if the pirated copies are immediately used for the transformative use and immediately discarded.”

For the authors, getting hold of OpenAI’s privileged communications could tip the scales in their favor, the Hollywood Reporter suggested. Some authors believe the key to winning could be testimony from Anthropic CEO Dario Amodei, who is accused of creating the controversial datasets while he was still at OpenAI. The authors think Amodei also possesses information on the destruction of the datasets, court filings show.

OpenAI tried to fight the authors’ motion to depose Amodei, but a judge sided with the authors in March, compelling Amodei to answer their biggest questions on his involvement.

Whether Amodei’s testimony is a bombshell remains to be seen, but it’s clear that OpenAI may struggle to overcome claims of willful infringement. Wang noted there is a “fundamental conflict” in circumstances “where a party asserts a good faith defense based on advice of counsel but then blocks inquiry into their state of mind by asserting attorney-client privilege,” suggesting that OpenAI may have substantially weakened its defense.

The outcome of the dispute over the deletions could influence OpenAI’s calculus on whether it should ultimately settle the lawsuit. Ahead of the Anthropic settlement—the largest publicly reported copyright class action settlement in history—authors suing pointed to evidence that Anthropic became “not so gung ho about” training on pirated books “for legal reasons.” That seems to be the type of smoking-gun evidence that authors hope will emerge from OpenAI’s withheld Slack messages.


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.


OpenAI says dead teen violated TOS when he used ChatGPT to plan suicide


Use chatbots at your own risk

OpenAI’s response to teen suicide case is “disturbing,” lawyer says.

Matt Raine is suing OpenAI for wrongful death after losing his son Adam in April. Credit: via Edelson PC

Facing five lawsuits alleging wrongful deaths, OpenAI lobbed its first defense Tuesday, denying in a court filing that ChatGPT caused a teen’s suicide and instead arguing the teen violated terms that prohibit discussing suicide or self-harm with the chatbot.

The earliest look at OpenAI’s strategy to overcome the string of lawsuits came in a case where the parents of 16-year-old Adam Raine accused OpenAI of relaxing safety guardrails, allowing ChatGPT to become the teen’s “suicide coach.” The parents argued that OpenAI deliberately designed the version their son used, GPT-4o, to encourage and validate his suicidal ideation in its quest to build the world’s most engaging chatbot.

But in a blog, OpenAI claimed that parents selectively chose disturbing chat logs while supposedly ignoring “the full picture” revealed by the teen’s chat history. Digging through the logs, OpenAI claimed the teen told ChatGPT that he’d begun experiencing suicidal ideation at age 11, long before he used the chatbot.

“A full reading of his chat history shows that his death, while devastating, was not caused by ChatGPT,” OpenAI’s filing argued.

Allegedly, the logs also show that Raine “told ChatGPT that he repeatedly reached out to people, including trusted persons in his life, with cries for help, which he said were ignored.” Additionally, Raine told ChatGPT that he’d increased his dose of a medication that “he stated worsened his depression and made him suicidal.” That medication, OpenAI argued, “has a black box warning for risk of suicidal ideation and behavior in adolescents and young adults, especially during periods when, as here, the dosage is being changed.”

All the logs that OpenAI referenced in its filing are sealed, making it impossible to verify the broader context the AI firm claims the logs provide. In its blog, OpenAI said it was limiting the amount of “sensitive evidence” made available to the public, due to its intention to handle mental health-related cases with “care, transparency, and respect.”

The Raine family’s lead lawyer, however, did not describe the filing as respectful. In a statement to Ars, Jay Edelson called OpenAI’s response “disturbing.”

“They abjectly ignore all of the damning facts we have put forward: how GPT-4o was rushed to market without full testing. That OpenAI twice changed its Model Spec to require ChatGPT to engage in self-harm discussions. That ChatGPT counseled Adam away from telling his parents about his suicidal ideation and actively helped him plan a ‘beautiful suicide,’” Edelson said. “And OpenAI and Sam Altman have no explanation for the last hours of Adam’s life, when ChatGPT gave him a pep talk and then offered to write a suicide note.”

“Amazingly,” Edelson said, OpenAI instead argued that Raine “himself violated its terms and conditions by engaging with ChatGPT in the very way it was programmed to act.”

Edelson suggested that it’s telling that OpenAI did not file a motion to dismiss—seemingly accepting “the reality that the legal arguments that they have—compelling arbitration, Section 230 immunity, and First Amendment—are paper-thin, if not non-existent.” The company’s filing—although it requested dismissal with prejudice to never face the lawsuit again—puts the Raine family’s case “on track for a jury trial in 2026.”

“We know that OpenAI and Sam Altman will stop at nothing—including bullying the Raines and others who dare come forward—to avoid accountability,” Edelson said. “But, at the end of the day, they will have to explain to a jury why countless people have died by suicide or at the hands of ChatGPT users urged on by the artificial intelligence OpenAI and Sam Altman designed.”

Use ChatGPT “at your sole risk,” OpenAI says

To overcome the Raine case, OpenAI is leaning on its usage policies, emphasizing that Raine should never have been allowed to use ChatGPT without parental consent and shifting the blame onto Raine and his loved ones.

“ChatGPT users acknowledge their use of ChatGPT is ‘at your sole risk and you will not rely on output as a sole source of truth or factual information,’” the filing said, and users also “must agree to ‘protect people’ and ‘cannot use [the] services for,’ among other things, ‘suicide, self-harm,’ sexual violence, terrorism or violence.”

Although the family was shocked to see that ChatGPT never terminated Raine’s chats, OpenAI argued that it’s not the company’s responsibility to protect users who appear intent on pursuing violative uses of ChatGPT.

The company argued that ChatGPT warned Raine “more than 100 times” to seek help, but the teen “repeatedly expressed frustration with ChatGPT’s guardrails and its repeated efforts to direct him to reach out to loved ones, trusted persons, and crisis resources.”

Circumventing safety guardrails, Raine told ChatGPT that “his inquiries about self-harm were for fictional or academic purposes,” OpenAI noted. The company argued that it’s not responsible for users who ignore warnings.

Additionally, OpenAI argued that Raine told ChatGPT that he found information he was seeking on other websites, including allegedly consulting at least one other AI platform, as well as “at least one online forum dedicated to suicide-related information.” Raine apparently told ChatGPT that “he would spend most of the day” on a suicide forum website.

“Our deepest sympathies are with the Raine family for their unimaginable loss,” OpenAI said in its blog, while its filing acknowledged, “Adam Raine’s death is a tragedy.” But “at the same time,” it’s essential to consider all the available context, OpenAI’s filing said, including that OpenAI has a mission to build AI that “benefits all of humanity” and is supposedly a pioneer in chatbot safety.

More ChatGPT-linked hospitalizations, deaths uncovered

OpenAI has sought to downplay risks to users, releasing data in October “estimating that 0.15 percent of ChatGPT’s active users in a given week have conversations that include explicit indicators of potential suicidal planning or intent,” Ars reported.

While that may seem small, it amounts to about 1 million vulnerable users, and The New York Times this week cited studies that have suggested OpenAI may be “understating the risk.” Those studies found that “the people most vulnerable to the chatbot’s unceasing validation” were “those prone to delusional thinking,” which “could include 5 to 15 percent of the population,” NYT reported.
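
The “1 million” figure follows from simple arithmetic, assuming the roughly 800 million weekly active users OpenAI reported in October (a denominator that comes from outside OpenAI’s data release):

    0.0015 × 800,000,000 ≈ 1,200,000 users per week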

OpenAI’s filing came one day after a New York Times investigation revealed how the AI firm came to be involved in so many lawsuits. Speaking with more than 40 current and former OpenAI employees, including executives, safety engineers, and researchers, NYT found that a model tweak that made ChatGPT more sycophantic also seemed to make the chatbot more likely to help users craft problematic prompts, including users trying to “plan a suicide.”

Eventually, OpenAI rolled back that update, making the chatbot safer, but the rollback also caused a dip in engagement. As recently as October, the ChatGPT maker still seemed to be prioritizing user engagement over safety, NYT reported. In a memo to OpenAI staff, ChatGPT head Nick Turley “declared a ‘Code Orange,’” four employees told NYT, warning that OpenAI was facing “the greatest competitive pressure we’ve ever seen.” In response, Turley set a goal of increasing the number of daily active users by 5 percent by the end of 2025.

Amid user complaints, OpenAI has continually updated its models, but that pattern of tightening safeguards, then seeking ways to increase engagement, could continue to get OpenAI in trouble as existing lawsuits advance and new ones are possibly filed. NYT “uncovered nearly 50 cases of people having mental health crises during conversations with ChatGPT,” including nine that led to hospitalization and three that ended in death.

Gretchen Krueger, a former OpenAI employee who worked on policy research, told NYT that she was alarmed early on by evidence, gathered before ChatGPT’s release, showing that vulnerable users frequently turn to chatbots for help. Later, other researchers found that such troubled users often become “power users.” Krueger noted that “OpenAI’s large language model was not trained to provide therapy” and “sometimes responded with disturbing, detailed guidance.” She was among the safety experts who left OpenAI due to burnout in 2024.

“Training chatbots to engage with people and keep them coming back presented risks,” Krueger said, suggesting that OpenAI knew that some harm to users “was not only foreseeable, it was foreseen.”

For OpenAI, the scrutiny will likely continue until such reports cease. Although OpenAI officially unveiled an Expert Council on Wellness and AI in October to improve ChatGPT safety testing, there did not appear to be a suicide expert included on the team. That likely concerned suicide prevention experts who warned in a letter updated in September that “proven interventions should directly inform AI safety design,” since “the most acute, life-threatening crises are often temporary—typically resolving within 24–48 hours”—and chatbots could possibly provide more meaningful interventions in that brief window.

If you or someone you know is feeling suicidal or in distress, please call the Suicide Prevention Lifeline number, 1-800-273-TALK (8255), which will put you in touch with a local crisis center.


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.


Tech firm’s new CTO gets indicted; company then claims he was never CTO


“Quite a lot of confusion”

Corvex named Brian Raymond as CTO days before indictment for illegal chip exports.


When four people were arrested and charged with a conspiracy to illegally export Nvidia chips to China, there was an interesting side note. One of the arrestees, Alabama resident Brian Raymond, was the chief technology officer of an AI company called Corvex.

Or was he? Corvex certainly seemed to think that Raymond was its CTO in the days before his indictment. Corvex named Raymond as its CTO in a press release and filings to the Securities and Exchange Commission, which detailed plans for a merger with Movano Health.

But once Raymond was arrested, Corvex told media outlets that it had never completed the process of hiring him as an employee. While someone could technically be a CTO as a contractor and not a regular employee, a company spokesperson subsequently claimed to Ars that Raymond had never been the CTO.

The company spokesperson asked Ars for a “correction” to our story, which accurately reported that Corvex itself described Raymond as its CTO and as part of its leadership team.

“Raymond was not CTO of Corvex—so the statement above is inaccurate,” Corvex spokesperson Christopher Buscombe, who is apparently with a third-party firm doing media relations for Corvex, told Ars Monday in an email seeking a correction. “The headline is also misleading as a result, as taken together it suggests Ramyond [sic] was CTO of Corvex. Raymond was CEO of Bitworks, a completely different company.”

Our article quoted both Corvex’s press release describing Raymond as the CTO and Corvex’s subsequent statement saying that he had never been hired. Buscombe asked for a correction to our article, saying it “has caused quite a lot of confusion,” though it seems more likely that any confusion was caused by Corvex’s conflicting statements about Raymond’s position at the company.

Meanwhile, the Corvex press release and SEC filings haven’t been changed or corrected. They still say Raymond was already the Corvex CTO and will continue to serve in that role after the merger. The documents make no mention of Bitworks.

Pre-indictment press release

On November 10, Corvex and Movano Health issued their joint press release announcing the merger. Corvex is a private company and Movano a public one, so the transaction requires approval of Movano shareholders. If the merger is completed, the combined company will be public and go by the name Corvex.

The press release says, “Corvex is an AI cloud computing company specializing in GPU-accelerated infrastructure for AI workloads. Corvex is based in Arlington, Virginia, and is led by Seth Demsey and Jay Crystal, Co-Chief Executive Officers and Co-Founders, and Brian Raymond, Chief Technology Officer.” It goes on to say that after the merger, the combined company will be led by Demsey, Crystal, Raymond, “and other members of the Corvex management team.”

The “is led by” phrase in the press release clearly indicates that Raymond was already the CTO, while the additional statement about the post-merger company indicated he would continue as CTO after the merger’s completion. At the same time, Raymond announced on LinkedIn that he had “formally joined Corvex as the CTO, driving AI at scale for customers around the world.”

The Corvex/Movano joint press release naming Raymond as CTO was submitted to the SEC as an exhibit to a Movano filing about the Corvex/Movano merger. A merger agreement submitted to the SEC by Corvex and Movano includes another exhibit listing three “post-closing officers,” specifically Demsey, Crystal, and Raymond.

The timing of Corvex’s statements about Raymond being its CTO could hardly have been worse. Raymond was indicted in a federal court on November 13 and the indictment was unsealed last week. The US Justice Department alleged that Raymond operated an Alabama-based electronics company through which he supplied Nvidia GPUs to his alleged conspirators “for illegal export to the PRC [People’s Republic of China] as part of the conspiracy.”

Raymond, 46, of Huntsville, Alabama, faces two charges for illegal exports, one charge of smuggling, a charge of conspiracy to commit money laundering, and seven counts of money laundering. There are maximum prison sentences of 20 years for each export violation and each money laundering count, and 10 years for the smuggling charge. Raymond was reportedly released on bond after his arrest.

Raymond “was transitioning into an employee role”

With media outlets reporting on the charges, Corvex answered queries from reporters with a statement saying, “Corvex had no part in the activities cited in the Department of Justice’s indictment. The person in question is not an employee of Corvex. Previously a consultant to the company, he was transitioning into an employee role but that offer has been rescinded.”

Law professors with expertise in corporate governance and securities regulations told Ars that someone can legally be an officer of a company without being an employee. But Corvex may still have misled investors with its statements about Raymond’s status.

“It could be the case that this person was the chief technology officer but was not an employee of the company, was an independent contractor instead,” Andrew Jennings, an Emory University law professor, told Ars. But even if one interprets Corvex telling the press that it never hired Raymond in the most charitable way, the distinction is “splitting hairs… because one doesn’t need to be an employee to be an officer of the company,” Jennings said.

Corvex went further in asking at least one news outlet for a correction and claiming that Raymond was never the CTO. “I suspect that what they are saying to the press that this person was never CTO, is probably not correct,” Jennings said. The merging companies are “represented by serious law firms” and aren’t likely to have been lying about Raymond being the CTO, Jennings said.

“I can’t imagine that there would be a press release and a merger agreement that lists him as an officer and specifically as the chief technology officer if it weren’t the case,” he said. “I think they would have some more explaining to do if they really wanted to argue that it’s incorrect to refer to him as the CTO or the former CTO.”

Ars sent an email with several questions yesterday to the listed contact for Corvex, co-CEO Jay Crystal, but received no response. We instead received another email from Buscombe, who offered to provide information on background that “would respond to the questions you have put to Corvex.”

Buscombe said the background information he was offering “cannot be quoted directly” and cannot be “attributable to anyone.” We declined this offer and offered to publish any on-the-record statements that Corvex would provide, but we haven’t received anything further.

A spokesperson for the SEC declined to comment when contacted by Ars. We contacted Movano and Raymond with several questions yesterday and will update this article if we receive any responses.

False statements can lead to litigation or SEC charges

If Raymond really wasn’t the CTO, that probably would be a material misstatement because of the nature of the company, Jennings said. For an AI firm or any kind of tech company, the chief technology officer is an important position. The fact that Raymond was one of just three listed officers adds to the likelihood that it could be a material misstatement, if he really was never the CTO.

“Knowing what sort of technical leadership the company has could be something of import to a reasonable investor” who is voting on a merger, Jennings said.

A false statement about who is the CTO could be used in private litigation brought by investors against the company or in enforcement actions by the SEC. “The SEC could bring an enforcement action under a number of statutes for that sort of false statement, if it were in fact a false statement,” Jennings said.

Robert Miller, a law professor at George Mason University, told Ars “that it’s not absolutely impossible to have someone in a role like CTO or even CEO when the person is not an employee, legally speaking.” But even “if that was the case, it would very likely be misleading for the company to say, without qualification or explanation, that ‘Raymond is the CTO of the company.’ That would reasonably be understood to mean that Raymond was an employee.”

Not explaining a company officer’s employment status could be a “material omission” in violation of Rule 10b-5, an anti-fraud regulation, he said.

“A 10b-5 violation could result in enforcement action by the SEC,” Miller told Ars. “It could also result in private lawsuits from shareholders, but such shareholders would also have to show damages—e.g., a stock drop when the truth came out. In this case, given that Raymond was likely more liability than asset, there may be no damages to the shareholders from the omission.”

Companies can face liability for false statements to investors, even if they’re not made in SEC filings. An SEC filing “creates potential additional avenues for liability,” Jennings said. “Certainly the securities statutes will apply to communications made by a public company in really any channel, including just putting out a press release, and so that could spark private litigation or it could spark SEC enforcement. It’s also illegal to knowingly make a false statement to a government agency, whether that’s the FBI or the SEC or a committee of Congress, etc. And so the act of filing could create additional avenues of liability, but those would be sort of stacked on top of each other.”


Jon is a Senior IT Reporter for Ars Technica. He covers the telecom industry, Federal Communications Commission rulemakings, broadband consumer affairs, court cases, and government regulation of the tech industry.


Landlords’ go-to tool to set rent prices to be gutted under RealPage settlement

That report cited comments made by a RealPage vice president, Jay Parsons, at a meeting with a group of real estate tech executives. Parsons wooed landlords by boasting that one of the company’s signature products uses “a mysterious algorithm to help landlords push the highest possible rents on tenants.” In a since-deleted video, he noted that apartment rents had recently increased by 14.5 percent, bragging that “never before have we seen these numbers” and prodding another executive to agree that RealPage was “driving it, quite honestly.” Business Insider dubbed the software landlords’ “secret weapon.”

Back then, critics told ProPublica that “at a minimum,” RealPage’s “algorithm may be artificially inflating rents and stifling competition,” noting that “machines quickly learn” to increase prices “above competitive levels” to “win.”

Today, RealPage’s site notes that “its suite of services is used to manage more than 24 million units worldwide.” The DOJ reported that on top of collecting its customers’ sensitive information—which included rental prices, demand, discounts, vacancy, and lease terms—RealPage also collected data by making “over 50,000 monthly phone calls,” conducting “market surveys” of landlords covering “over 11 million units and approximately 52,000 properties.”

Landlords “knowingly share this nonpublic information with RealPage,” the DOJ said, while “rising rents have disproportionately affected low-income residents.” DOJ Antitrust Division Assistant Attorney General Abigail Slater confirmed the settlement would ensure that RealPage can no longer rely on such nonpublic data to help landlords collude to set rental prices, while advancing the DOJ’s mission of preventing price-fixing algorithms from harming Americans.

“Competing companies must make independent pricing decisions, and with the rise of algorithmic and artificial intelligence tools, we will remain at the forefront of vigorous antitrust enforcement,” Slater said.


DOGE “cut muscle, not fat”; 26K experts rehired after brutal cuts


Government brain drain will haunt US after DOGE abruptly terminated.

Billionaire Elon Musk, the head of the Department of Government Efficiency (DOGE), holds a chainsaw as he speaks at the annual Conservative Political Action Conference. Credit: SAUL LOEB / Contributor | AFP

After Donald Trump curiously started referring to the Department of Government Efficiency exclusively in the past tense, an official finally confirmed Sunday that DOGE “doesn’t exist.”

Talking to Reuters, Office of Personnel Management (OPM) Director Scott Kupor confirmed that DOGE—the government agency notoriously led by Elon Musk and tasked with rapidly and dramatically slashing government agencies—was terminated more than eight months early. This may have come as a surprise to whoever runs the DOGE account on X, which continued posting until two days before the Reuters report was published.

As Kupor explained, a “centralized agency” was no longer necessary, since OPM had “taken over many of DOGE’s functions” after Musk left the agency last May. Around that time, DOGE staffers were embedded at various agencies, where they could ostensibly better coordinate with leadership on proposed cuts to staffing and funding.

Under Musk, DOGE was hyped as planning to save the government a trillion dollars. On X, Musk bragged frequently about the agency, posting in February that DOGE was “the one shot the American people have to defeat BUREAUcracy, rule of the bureaucrats, and restore DEMOcracy, rule of the people. We’re never going to get another chance like this.”

The reality fell far short of Musk’s goals, with DOGE ultimately reporting it saved $214 billion—an amount that may be overstated by nearly 40 percent, critics warned earlier this year.

How much talent was lost due to DOGE cuts?

Once Musk left, confidence in DOGE waned as lawsuits over suspected illegal firings piled up. By June, Congress was divided, largely along party lines, over whether to codify the “DOGE process”—rapidly firing employees, then quickly hiring back whoever was needed—or declare DOGE a failure that perhaps cost taxpayers more in the long term due to lost talent and services.

Because DOGE operated largely in secrecy, it may be months or even years before the public can assess the true cost of DOGE’s impact. However, in the absence of a government tracker, the director of the Center for Effective Public Management at the Brookings Institution, Elaine Kamarck, put together what might be the best status report showing how badly DOGE rocked government agencies.

In June, Kamarck joined other critics flagging DOGE’s reported savings as “bogus.” In the days before DOGE’s abrupt ending was announced, she published a report grappling with a critical question many have pondered since DOGE launched: “How many people can the federal government lose before it crashes?”

In the report, Kamarck charted “26,511 occasions where the Trump administration abruptly fired people and then hired them back.” She concluded that “a quick review of the reversals makes clear that the negative stereotype of the ‘paper-pushing bureaucrat’” that DOGE was supposedly targeting “is largely inaccurate.”

Instead, many of the positions the government rehired were “engineers, doctors, and other professionals whose work is critical to national security and public health,” Kamarck reported.

About half of the rehires, Kamarck estimated, “appear to have been mandated by the courts.” However, in about a quarter of cases, the government moved to rehire staffers before the court could weigh in, Kamarck reported. That seemed to be “a tacit admission that the blanket firings that took place during the DOGE era placed the federal government in danger of not being able to accomplish some of its most important missions,” she said.

Perhaps the biggest downside of all of DOGE’s hasty downsizing, though, is a trend in which many long-time government workers simply decided to leave or retire, rather than wait for DOGE to eliminate their roles.

During the first six months of Trump’s term, 154,000 federal employees signed up for the deferred resignation program, Reuters reported, while more than 70,000 retired. Both figures exceeded the number of exits from government in prior years by tens of thousands, Kamarck’s report noted.

“A lot of people said, ‘the hell with this’ and left,” Kamarck told Ars.

Kamarck told Ars that her report makes it obvious that DOGE “cut muscle, not fat,” because “they didn’t really know what they were doing.”

As a result, agencies are now scrambling to assess the damage and rehire lost talent. However, her report documented that agencies aligned with Trump’s policies appear to have an easier time getting new hires approved, despite Kupor telling Reuters that the government-wide hiring freeze is “over.” As of mid-November 2025, “of the over 73,000 posted jobs, a candidate was selected for only about 14,400 of them,” Kamarck reported, noting that it was impossible to confirm how many selected candidates have officially started working.

“Agencies are having to do a lot of reassessments in terms of what happened,” Kamarck told Ars, concluding that DOGE “was basically a disaster.”

A decentralized DOGE may be more powerful

“DOGE is not dead,” though, Kamarck said, noting that “the cutting effort is definitely” continuing under the Office of Management and Budget, which “has a lot more power than DOGE ever had.”

However, the termination of DOGE does mean that “the way it operated is dead,” and that will likely come as a relief to government workers who expected DOGE to continue slashing agencies through July 2026 at least, if not beyond.

Many government workers are still fighting terminations, as court cases drag on, and even Kamarck has given up on tracking due to inconsistencies in outcomes.

“It’s still like one day the court says, ‘No, you can’t do that,’” Kamarck explained. “Then the next day another court says, ‘Yes, you can.’” Other times, the courts “change their minds,” or the Trump administration just doesn’t “listen to the courts, which is fairly terrifying,” Kamarck said.

Americans likely won’t get a clear picture of DOGE’s impact until power shifts in Washington. That could mean waiting for the next presidential election, but if Democrats win a majority in the midterm elections, DOGE investigations could start as early as 2027, Kamarck suggested.

OMB will likely continue with cuts that the White House insists Americans want; spokesperson Liz Huston told Reuters that “President Trump was given a clear mandate to reduce waste, fraud and abuse across the federal government, and he continues to actively deliver on that commitment.”

However, Kamarck’s report noted polls showing that most Americans disapprove of how Trump is managing government and its workforce, perhaps indicating that OMB will be pressured to slow down and avoid roiling public opinion ahead of the midterms.

“The fact that ordinary Americans have come to question the downsizing is, most likely, the result of its rapid unfolding, with large cuts done quickly regardless of their impact on the government’s functioning,” Kamarck suggested. Even Musk began to question DOGE. After Trump announced plans to repeal an electric vehicle mandate that the Tesla founder relied on, Musk posted on X, “What the heck was the point of DOGE, if he’s just going to increase the debt by $5 trillion??”

Facing “blowback” over the most unpopular cuts, agencies sometimes rehired cut staffers within 24 hours, Kamarck noted, pointing to the Department of Energy as one of the earliest and “most dramatic” examples. In that case, Americans were alarmed by cuts to engineers responsible for keeping the nation’s nuclear arsenal “safe and ready.” Retention for those posts was already a challenge due to “high demand in the private sector,” and the number of engineers was considered “too low” even before DOGE’s cuts. Everyone was reinstated within a day, Kamarck reported.

Alarm bells rang across the federal government, and it wasn’t just about doctors and engineers being cut or entire agencies being dismantled, like USAID. Even staffers DOGE viewed as having seemingly less critical duties—like travel bookers and customer service reps—were proven key to government functioning. Arbitrary cuts risked hurting Americans in myriad ways, hitting their pocketbooks, throttling community services, and limiting disease and disaster responses, Kamarck documented.

Now that the hiring freeze is lifted and OMB will be managing DOGE-like cuts moving forward, Kamarck suggested that Trump will face ongoing scrutiny over Musk’s controversial agency, despite its dissolution.

“In order to prove that the downsizing was worth the pain, the Trump administration will have to show that the government is still operating effectively,” Kamarck wrote. “But much could go wrong,” she reported, laying out a list of nightmare scenarios:

“Nuclear mismanagement or airline accidents would be catastrophic. Late disaster warnings from agencies monitoring weather patterns, such as the National Oceanic and Atmospheric Administration (NOAA), and inadequate responses from bodies such as the Federal Emergency Management Administration (FEMA), could put people in danger. Inadequate staffing at the FBI could result in counter-terrorism failures. Reductions in vaccine uptake could lead to the resurgence of diseases such as polio and measles. Inadequate funding and staffing for research could cause scientists to move their talents abroad. Social Security databases could be compromised, throwing millions into chaos as they seek to prove their earnings records, and persistent customer service problems will reverberate through the senior and disability communities.”

The good news is that federal agencies recovering from DOGE cuts are “aware of the time bombs and trying to fix them,” Kamarck told Ars. But with so much brain drain from DOGE’s first six months ripping so many agencies apart at their seams, the government may struggle to provide key services until lost talent can be effectively replaced, she said.

“I don’t know how quickly they can put Humpty Dumpty back together again,” Kamarck said.


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.


Pornhub is urging tech giants to enact device-based age verification


The company is pushing for an alternative way to keep minors from viewing porn.

In letters sent to Apple, Google, and Microsoft this week, Pornhub’s parent company urged the tech giants to support device-based age verification in their app stores and across their operating systems, WIRED has learned.

“Based on our real-world experience with existing age assurance laws, we strongly support the initiative to protect minors online,” reads the letter sent by Anthony Penhale, chief legal officer for Aylo, which owns Pornhub, Brazzers, Redtube, and YouPorn. “However, we have found site-based age assurance approaches to be fundamentally flawed and counterproductive.”

The letter adds that site-based age verification methods have “failed to achieve their primary objective: protecting minors from accessing age-inappropriate material online.” Aylo says device-based verification is a better solution because once a viewer’s age is determined on a phone or tablet, the age signal can be shared with adult sites through an operating system application programming interface (API).
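
None of the companies involved has published such an interface, but the flow Aylo describes might look something like this hypothetical TypeScript sketch, in which a site asks the operating system for a device-level age attestation instead of collecting ID documents itself (navigator.ageSignal, AgeSignal, and both helper functions are invented names for illustration):

    // Hypothetical sketch of device-based age verification.
    // "navigator.ageSignal" is an invented OS hook; no such standard API exists today.
    interface AgeSignal {
      over18: boolean;   // attested once, during device setup
      issuedAt: number;  // Unix timestamp of the attestation
    }

    async function gateAdultContent(): Promise<void> {
      // Ask the OS for its age attestation rather than collecting an ID.
      const provider = (navigator as any).ageSignal as (() => Promise<AgeSignal>) | undefined;
      const signal = provider ? await provider() : undefined;
      if (signal?.over18) {
        showContent();             // device already attested that the user is an adult
      } else {
        showAgeRestrictedNotice(); // no ID upload, no third-party verifier
      }
    }

    function showContent(): void {
      // Render the age-restricted page for verified-adult devices.
    }

    function showAgeRestrictedNotice(): void {
      // Block the page and explain how to enable the (hypothetical) device attestation.
    }

Under a model like this, the one-time attestation lives on the device and sites only ever see a boolean signal, which is the privacy advantage Aylo is claiming over site-by-site ID checks.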

The letters were sent following the continued adoption of age verification laws in the US and UK, which require users to upload an ID or other personal documentation to verify that they are not a minor before viewing sexually explicit content; often this requires using third-party services. Currently, 25 US states have passed some form of ID verification, each with different provisions.

Pornhub has experienced an enormous dip in traffic as a result of its decision to pull out of most states that have enacted these laws. The platform was one of the few sites to comply with the new law in Louisiana, but doing so caused its traffic there to drop by 80 percent. Similarly, since the implementation of the UK’s Online Safety Act, Pornhub has lost nearly 80 percent of its UK viewership.

The company argues that it’s a privacy risk to leave age verification up to third-party sites and that people will simply seek adult content on platforms that don’t comply with the laws.

“We have seen an exponential surge in searches for alternate adult sites without age restrictions or safety standards at all,” says Alex Kekesi, vice president of brand and community at Pornhub.

She says she hopes the tech companies and Aylo are able to find common ground on the matter, especially given the recent passage of the Digital Age Assurance Act (AB 1043) in California. “This is a law that’s interesting because it gets it almost exactly right,” she says. Signed into law in October, it requires app store operators to authenticate user ages before download.

According to Google spokesperson Karl Ryan, “Google is committed to protecting kids online, including by developing and deploying new age assurance tools like our Credential Manager API that can be used by websites. We don’t allow adult entertainment apps on Google Play and would emphasize that certain high-risk services like Aylo will always need to invest in specific tools to meet their own legal and responsibility obligations.”

Microsoft declined to comment, but pointed WIRED to a recent policy recommendation post that said “age assurance should be applied at the service level, target specific design features that pose heightened risks, and enable tailored experiences for children.”

Apple likewise declined to comment and instead pointed WIRED to its child online safety report and noted that web content filters are turned on by default for every user under 18. A software update from June specified that Apple requires kids who are under 13 to have a kid account, which also includes “app restrictions enabled from the beginning.” Apple currently has no way of requiring every single website to integrate an API.

According to Pornhub, age verification laws have led to ineffective enforcement. “The sheer volume of adult content platforms has proven to be too challenging for governments worldwide to regulate at the individual site or platform level,” says Kekesi. Aylo claims device-based age verification that happens once, on a phone or computer, will preserve user privacy while prioritizing safety.

Recent studies by New York University and the public policy nonprofit the Phoenix Center suggest that current age verification laws don’t work because people find ways to circumvent them, including by using VPNs and turning to sites that don’t regulate their content.

“Platform-based verification has been like Prohibition,” says Mike Stabile, director of public policy at the Free Speech Coalition. “We’re seeing consumer behavior reroute away from legal, compliant sites to foreign sites that don’t comply with any regulations or laws. Age verification laws have effectively rerouted a massive river of consumers to sites with pirated content, revenge porn, and child sex abuse material.” He claims that these laws “have been great for criminals, terrible for the legal adult industry.”

Age verification and the broader deanonymization of the internet are issues that will now confront nearly everyone, but especially the politically disfavored. Sex workers have been dealing with issues like censorship and surveillance online for a long time. One objective of Project 2025, MAGA’s playbook for President Trump’s second term, has been to “back door” a national ban on porn through state laws.

The current surge of child protection laws around the world is driving a significant change in how people engage with the internet, and is also impacting industries beyond porn, including gaming and social media. Starting December 10 in Australia, in accordance with the government’s social media ban, kids under 16 will be kicked off Facebook, Instagram, and Threads.

Ultimately, Stabile says that may be the point. In the US, “the advocates for these bills have largely fallen into two groups: faith-based organizations that don’t believe adult content should be legal, and age verification providers who stand to profit from a restricted internet.” The goal of faith-based organizations, he says, is to destabilize the adult industry and dissuade adults from using it, while the latter works to expand their market as much as possible, “even if that means getting in bed with right-wing censors.”

But the problem is that “even well-meaning legislators advancing these bills have little understanding of the internet,” Stabile adds. “It’s much easier to go after a political punching bag like Pornhub than it is Apple or Google. But if you’re not addressing the reality of the internet, if your legislation flies in the face of consumer behavior, you’re only going to end up creating systems that fail.”

Adult industry insiders I spoke to in August explained that the biggest misconception about the industry is that it opposes self-regulation, when that couldn’t be further from the truth. “Keeping minors off adult sites is a shared responsibility that requires a global solution,” Kekesi says. “Every phone, tablet, or computer should start as a kid-safe device. Only verified adults should unlock access to things like dating apps, gambling, or adult content.”

In 2022, Pornhub created a chatbot that urges people searching for child sexual abuse content to seek counseling; the tool was introduced following a 2020 New York Times investigation that alleged the platform had monetized videos showing child abuse. Pornhub has since started releasing annual transparency reports and tightened its verification processes for performers and video uploads.

According to Politico, Google, Meta, OpenAI, Snap, and Pinterest all supported the California bill. Right now that law is limited to California, but Kekesi believes it can work as a template for other states.

“We obviously see that there’s kind of a path forward here,” she says.

This story originally appeared on WIRED.com


Wired.com is your essential daily guide to what’s next, delivering the most original and complete take you’ll find anywhere on innovation’s impact on technology, science, business and culture.

Pornhub is urging tech giants to enact device-based age verification Read More »

tech-company-cto-and-others-indicted-for-exporting-nvidia-chips-to-china

Tech company CTO and others indicted for exporting Nvidia chips to China

Citing export controls that took effect in 2022, the indictment said the US is trying to disrupt China’s plan to build exascale supercomputers for military and surveillance use. “These capabilities are being used by the PRC for its military modernization efforts and in connection with the PRC’s weapons design and testing, including for weapons of mass destruction, as well as in connection with the PRC’s development and deployment of advanced AI surveillance tools,” the indictment said.

The Justice Department said the conspirators used Janford Realtor, LLC, a Florida-based company that was not involved in real estate despite its name, “as a front to purchase and then illegally export controlled GPUs to the PRC.” Ho and Li owned and controlled Janford Realtor, while Raymond operated an Alabama-based electronics company that “supplied Nvidia GPUs to Ho and others for illegal export to the PRC,” the Justice Department said.

Kickbacks, money laundering

The conspirators paid each other “kickbacks” or commissions on the sale and export of the Nvidia chips, the indictment said. The money laundering charges involve a variety of transfers from two Chinese companies to Janford Realtor and the Alabama electronics company, the indictment said. The indictment lists nine wire transfers in amounts ranging from $237,248 to $1,150,000.

Raymond was reportedly released on bond, while the other three alleged conspirators are being detained. “This is an extremely serious offense. At the time these were being exported, these were Nvidia’s most advanced chips,” US prosecutor Noah Stern told a magistrate judge in Oakland yesterday, according to Wired.

Stern also said in court that “text messages obtained by authorities show Li boasting about how his father ‘had engaged in similar business on behalf of the Chinese Communist Party,’” Wired reported. Stern said that in the messages, Li “explained that his father had ways to import” the Nvidia chips despite US export controls.

Tech company CTO and others indicted for exporting Nvidia chips to China Read More »

keep-your-receipts:-tech-firms-told-to-prepare-for-possible-tariff-refunds

Keep your receipts: Tech firms told to prepare for possible tariff refunds


Tech firms dare to dream chip tariffs may go away amid rumors of delays.

For months, the Trump administration has warned that semiconductor tariffs are coming soon, leaving the tech industry on pins and needles after a chaotic year in which unpredictable tariff regimes collectively cost firms billions.

The semiconductor tariffs are key to Donald Trump’s economic agenda, which is intended to force more manufacturing into the US by making it more expensive to import materials and products. He campaigned on axing the CHIPS Act—which provided subsidies to companies investing in manufacturing chips in the US—complaining that it was a “horrible, horrible thing” to “give hundreds of billions of dollars” away when the US could achieve the same objective by instead taxing companies and “use whatever is left over” of CHIPS funding to “reduce debt.” However, as 2025 winds down, the US president faces pressure from all sides to delay semiconductor tariffs, insiders told Reuters, and he appears to be considering caving.

According to “two people with direct knowledge of the matter and a third person briefed on the conversations,” US officials have privately told industry and government stakeholders that semiconductor tariffs will likely be delayed.

A fourth insider suggested Trump was hesitant to impose tariffs that could rock the recent US-China trade truce, while Reuters noted that Trump may also be hesitant to announce new tariffs during the holiday shopping season that risk increasing prices of popular consumer tech products. Recently, Trump cut tariffs on grocery items in the face of mounting consumer backlash, so imposing new tariffs now—risking price hikes on laptops, game consoles, and smartphones—surely wouldn’t improve his record-low approval rating.

In April, Trump started threatening semiconductor tariffs as high as 100 percent, prompting a Commerce Department probe into the potential economic and national security impacts of imposing broad chip tariffs. Stakeholders were given 30 days to weigh in, and tech industry associations were quick to urge Trump to avoid imposing broad tariffs that they warned risked setting back US chip manufacturing, ruining US tech competitiveness, and hobbling innovation. The best policy would be no chip tariffs, some industry groups suggested.

Glimmer of hope chip tariffs may never come

The idea that Trump would ever give up on imposing broad chip tariffs, which he thinks will ensure the US becomes a world-leading semiconductor hub, is likely a tantalizing daydream for companies relieved by rumors that the tariffs may be delayed. But it’s not completely improbable that he might let this one go.

Trump threatened tariffs on foreign cars during his first term, but they did not come to pass until his second. When it comes to the semiconductor tariffs, Trump may miss his chance to act if he’s concerned about losing votes in the midterm elections.

The Commerce Department’s investigation must conclude by December 27, after which Trump has 90 days to decide if he wants to move ahead with tariffs based on the findings.

He could, of course, do nothing or claim to disagree with the findings and seek an alternative path to impose tariffs, but there’s a chance that his own party may add to the pressure to delay them. Trump’s low approval rating is already hurting Republicans in polls, New York Magazine reported, and some are begging Trump to join them on the campaign trail next year to avoid a midterm slump, Politico reported.

For tech companies, the goal is to persuade Trump to either drop or narrowly tailor semiconductor tariffs—and hopefully eliminate the threat of tariffs on downstream products, which could force tech companies to pay double or triple taxes on imports. If they succeed, they could be heading into 2026 with more stable supply chains and even possibly with billions in tariff refunds in their pockets, if the Supreme Court deems Trump’s “emergency” “reciprocal tariffs” illegal.

Gary Shapiro, CEO of the Consumer Technology Association (CTA), attended oral arguments in the SCOTUS case, noting on LinkedIn that “business executives have had to contend with over 100 announcements of tariff changes since the beginning of 2025.”

“I hope to see the Supreme Court rule swiftly to provide businesses the certainty they need,” Shapiro said, arguing in a second post that tariffs “cause uncertainty for businesses, snarl supply chains, and drive inflation and higher costs for consumers.”

As tech companies wait to see how the court rules and how Trump responds to the conclusion of the Commerce Department’s probe, uncertainty remains. CTA’s vice president of international trade, Ed Brzytwa, told Ars that the CTA has advised tech firms to keep their receipts and document all tariff payments.

How chip tariffs could raise prices

Without specifying what was incorrect, a White House official disputed Reuters’ reporting that Trump may shift the timeline for announcing semiconductor tariffs, saying simply “that is not true.”

A Commerce Department official said there was “no change” to report, insisting that the “administration remains committed to reshoring manufacturing that’s critical to our national and economic security.”

But neither official shared any details on when tariffs might be finalized, Reuters reported. And the Commerce Department did not respond to Ars’ request for information on when the public could expect to review findings of its probe.

In comments submitted to the Commerce Department, the Semiconductor Industry Association warned that “for every dollar that a semiconductor chip increases in price, products with embedded semiconductors will have to raise their sales price by $3 to maintain their previous margins.” Depending on how high the tariff rate is, that makes it easy to see how semiconductor tariffs risk significantly raising prices on any product containing a chip: refrigerators, cars, video game consoles, coffee makers, smartphones, and so on.
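To make the SIA’s multiplier concrete, here is a minimal sketch of that pass-through math; the chip price and tariff rate below are hypothetical illustrations, and only the 3x margin-preserving multiplier comes from the SIA’s comment.

```python
# Illustrative pass-through math based on SIA's claimed 3x multiplier;
# the chip price and tariff rate are assumed example values.
chip_price = 40.00          # pre-tariff cost of a chip, in dollars (assumed)
tariff_rate = 0.25          # illustrative 25% tariff (assumed)
sia_multiplier = 3          # SIA: each $1 of chip cost adds $3 to sale price

chip_increase = chip_price * tariff_rate            # $10.00 more per chip
product_increase = chip_increase * sia_multiplier   # $30.00 more at retail

print(f"Chip cost rises by ${chip_increase:.2f}; "
      f"the finished product's price rises by ${product_increase:.2f}")
```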

It’s estimated that chip tariffs could cost the semiconductor industry more than $1 billion. However, the bigger threat to the semiconductor industry would be if the higher prices of US-made chips made it harder to compete with “companies who sell comparable chips at a lower price globally,” SIA reported. Additionally, “higher input costs from tariffs” could also “force domestic companies to divert funds away from R&D,” the group noted. US firms that Trump wants to promote could rapidly lose their edge in such a scenario.

Echoing SIA, the Computer and Communications Industry Association (CCIA) warned the Commerce Department that “broad tariffs would significantly increase input costs for a wide range of downstream industries, raising costs for consumers while decreasing revenues for domestic semiconductor producers, the very industry this investigation seeks to protect.”

To avoid harming key US industries, CCIA recommended that any semiconductor tariffs imposed “focus narrowly” on semiconductors and semiconductor manufacturing equipment “that are critical for national defense and sourced from countries of concern.” The group also suggested creating high and low-risk categories, so that “low-risk goods, such as the import of commercial-grade printed circuit boards used in consumer electronics from key partners” wouldn’t get hit with taxes that have little to do with protecting US national security.

“US long-term competitiveness in both the semiconductor industry and downstream sectors could be greatly impaired if policy interventions are not carefully calibrated,” CCIA forecasted, warning that everyone would feel the pain, from small businesses to leading AI firms.

Trump’s plan for tariff funds makes no sense, groups say

Trump has been claiming since April that chip tariffs are coming soon, and he has continued to use them as leverage, including in recent deals struck with South Korea and Switzerland. But so far, while some countries have managed to negotiate rates as low as 15 percent, the semiconductor industry and downstream sectors remain in the dark about what to expect if and when broader tariffs are finally announced.

Avoiding so-called tariff stacking—where products are taxed, as well as materials used in the products—is SIA’s biggest ask. The group “strongly” requested that Trump maintain “as simple of a tariff regime for semiconductors as possible,” given “the far-reaching consequences” the US could face if chip tariffs become as complex and burdensome to tech firms as reciprocal tariffs.

SIA also wants Trump to consider offering more refunds, perhaps paying back “duties, taxes, and fees paid on imported parts, components, and materials that are incorporated in an exported product.”

Such a policy “would ensure the United States remains at the forefront of global chip technology,” SIA claimed, by making sure that tariffs collected “remain available for investments in expanding US manufacturing capacity and advanced research and development, as opposed to handed over to the US Treasury.”

Rather than refunding firms, Trump has instead proposed sharing tariff revenue as dividends, perhaps sending $2,000 checks to low- and middle-income families. However, CNN spoke with experts who said the math doesn’t add up, making the prospect of stimulus checks seem unlikely. Trump has also suggested the funds, which CNN reported were projected to total $158.4 billion in tariff revenue in 2025, could be used to reduce the national debt.
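A rough back-of-the-envelope check shows why experts are skeptical; the recipient count below is an assumption for illustration, not a figure reported by CNN.

```python
# Back-of-the-envelope check on the $2,000-dividend idea; the number of
# eligible recipients is an assumed illustration, not a reported figure.
projected_revenue = 158.4e9   # projected 2025 tariff revenue per CNN, in dollars
check_amount = 2_000          # proposed per-person dividend
assumed_recipients = 100e6    # hypothetical: ~100M low/middle-income adults

total_cost = check_amount * assumed_recipients      # $200 billion
shortfall = total_cost - projected_revenue          # ~$41.6 billion short

print(f"Checks would cost ${total_cost / 1e9:.1f}B against "
      f"${projected_revenue / 1e9:.1f}B in projected revenue "
      f"(shortfall: ${shortfall / 1e9:.1f}B)")
```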

Trump’s disdain for the CHIPS Act, which he casts as a handout to tech firms, makes it seem unlikely that he’ll be motivated to refund firms or offer new incentives. Some experts doubt that he’ll make it easy for firms to get tariff refunds if the Supreme Court drafts such an order, or if a SCOTUS loss triggers a class action lawsuit.

CTA’s Shapiro said on LinkedIn that he’s “not sure” which way the SCOTUS case will go, but he’s hoping the verdict will come before the year’s end. Like the industry groups urging Trump to keep semiconductor tariffs simple, Shapiro said he hoped Trump would streamline the process for any refunds that come. In the meantime, CTA advises firms to keep all documents itemizing tariffs paid to ensure they aren’t stiffed if Trump’s go-to tariff regimes are deemed illegal.

“If plaintiffs prevail in this case, I hope to see the government keep it simple and ensure that retailers and importers get their tariff payments refunded swiftly and with as few hoops to jump through as possible,” Shapiro said.


Keep your receipts: Tech firms told to prepare for possible tariff refunds Read More »

trump-revives-unpopular-ted-cruz-plan-to-punish-states-that-impose-ai-laws

Trump revives unpopular Ted Cruz plan to punish states that impose AI laws

The FTC chairman would be required to issue a policy statement detailing “circumstances under which State laws that require alterations to the truthful outputs of AI models are preempted by the FTC Act’s prohibition on engaging in deceptive acts or practices affecting commerce.”

When Cruz proposed a moratorium restricting state AI regulation in mid-2025, Sen. Marsha Blackburn (R-Tenn.) helped lead the fight against it. “Until Congress passes federally preemptive legislation like the Kids Online Safety Act and an online privacy framework, we can’t block states from making laws that protect their citizens,” Blackburn said at the time.

Sen. Maria Cantwell (D-Wash.) also spoke out against the Cruz plan, saying it would preempt “good state consumer protection laws” related to robocalls, deepfakes, and autonomous vehicles.

Trump wants Congress to preempt state laws

Besides reviving the Cruz plan, Trump’s draft executive order seeks new legislation to preempt state laws. The order would direct Trump administration officials to “jointly prepare for my review a legislative recommendation establishing a uniform Federal regulatory framework for AI that preempts State AI laws that conflict with the policy set forth in this order.”

House Majority Leader Steve Scalise (R-La.) this week said a ban on state AI laws could be included in the National Defense Authorization Act (NDAA). Democrats are trying to keep the ban out of the bill.

“We have to allow states to take the lead because we’re not able to, so far in Washington, come up with appropriate legislation,” Sen. Jack Reed (D-R.I.), the ranking member on the Armed Services Committee, told Semafor.

In a Truth Social post on Tuesday, Trump claimed that states are “trying to embed DEI ideology into AI models.” Trump wrote, “We MUST have one Federal Standard instead of a patchwork of 50 State Regulatory Regimes. If we don’t, then China will easily catch us in the AI race. Put it in the NDAA, or pass a separate Bill, and nobody will ever be able to compete with America.”

Trump revives unpopular Ted Cruz plan to punish states that impose AI laws Read More »

massive-cloudflare-outage-was-triggered-by-file-that-suddenly-doubled-in-size

Massive Cloudflare outage was triggered by file that suddenly doubled in size

Cloudflare’s proxy service has limits to prevent excessive memory consumption, with the bot management system having “a limit on the number of machine learning features that can be used at runtime.” This limit is 200, well above the actual number of features used.

“When the bad file with more than 200 features was propagated to our servers, this limit was hit—resulting in the system panicking” and outputting errors, Prince wrote.
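As a rough illustration of that failure mode, here is a minimal Python sketch of a loader that enforces a hard feature cap; the file format and names are hypothetical, and Cloudflare’s actual proxy code (and its panic behavior) is not public in this form.

```python
# Minimal sketch of a hard runtime cap on loaded ML features; the names
# and file format are assumed, not Cloudflare's actual implementation.
FEATURE_LIMIT = 200  # preallocated capacity, well above normal usage

def load_feature_file(path: str) -> list[str]:
    """Load bot-management features, failing hard if the cap is exceeded."""
    with open(path) as f:
        features = [line.strip() for line in f if line.strip()]
    if len(features) > FEATURE_LIMIT:
        # Analogue of the proxy "panicking": abort loudly rather than
        # silently truncating, which surfaces as 5xx errors upstream.
        raise RuntimeError(
            f"feature file has {len(features)} entries, limit is {FEATURE_LIMIT}"
        )
    return features
```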

Worst Cloudflare outage since 2019

The number of 5xx error HTTP status codes served by the Cloudflare network is normally “very low” but soared after the bad file spread across the network. “The spike, and subsequent fluctuations, show our system failing due to loading the incorrect feature file,” Prince wrote. “What’s notable is that our system would then recover for a period. This was very unusual behavior for an internal error.”

This unusual behavior was explained by the fact “that the file was being generated every five minutes by a query running on a ClickHouse database cluster, which was being gradually updated to improve permissions management,” Prince wrote. “Bad data was only generated if the query ran on a part of the cluster which had been updated. As a result, every five minutes there was a chance of either a good or a bad set of configuration files being generated and rapidly propagated across the network.”

This fluctuation initially “led us to believe this might be caused by an attack. Eventually, every ClickHouse node was generating the bad configuration file and the fluctuation stabilized in the failing state,” he wrote.
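That flapping behavior can be mimicked with a toy simulation; the rollout fractions below are assumptions for illustration, since the write-up doesn’t give the pace at which ClickHouse nodes were updated.

```python
# Toy simulation of the five-minute good/bad config flapping; the
# fraction of updated ClickHouse nodes per cycle is an assumption.
import random

def generate_config(updated_fraction: float) -> str:
    """Each cycle, the query lands on a random node; updated nodes
    emit the oversized (bad) feature file."""
    return "bad" if random.random() < updated_fraction else "good"

# As the permissions rollout progresses, bad files dominate and the
# network stabilizes in the failing state.
for cycle, fraction in enumerate([0.25, 0.5, 0.75, 1.0]):
    print(f"t={cycle * 5}min updated={fraction:.0%} -> "
          f"{generate_config(fraction)} file propagated")
```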

Prince said that Cloudflare “solved the problem by stopping the generation and propagation of the bad feature file and manually inserting a known good file into the feature file distribution queue,” and then “forcing a restart of our core proxy.” The team then worked on “restarting remaining services that had entered a bad state” until the 5xx error code volume returned to normal later in the day.

Prince said the outage was Cloudflare’s worst since 2019 and that the firm is taking steps to protect against similar failures in the future. Cloudflare will work on “hardening ingestion of Cloudflare-generated configuration files in the same way we would for user-generated input; enabling more global kill switches for features; eliminating the ability for core dumps or other error reports to overwhelm system resources; [and] reviewing failure modes for error conditions across all core proxy modules,” according to Prince.

While Prince can’t promise that Cloudflare will never have another outage of the same scale, he said that previous outages have “always led to us building new, more resilient systems.”

Massive Cloudflare outage was triggered by file that suddenly doubled in size Read More »

he-got-sued-for-sharing-public-youtube-videos;-nightmare-ended-in-settlement

He got sued for sharing public YouTube videos; nightmare ended in settlement


Librarian vows to stop invasive ed tech after ending lawsuit with Proctorio.

Librarian Ian Linkletter remains one of Proctorio’s biggest critics after 5-year legal battle. Credit: Ashley Linkletter

Nobody expects to get sued for re-posting a YouTube video on social media by using the “share” button, but librarian Ian Linkletter spent the past five years embroiled in a copyright fight after doing just that.

Now that a settlement has been reached, Linkletter told Ars why he thinks his 2020 tweets sharing public YouTube videos put a target on his back.

Linkletter’s legal nightmare started in 2020 after an education technology company, Proctorio, began monitoring student backlash on Reddit over its AI tool used to remotely scan rooms, identify students, and prevent cheating on exams. On Reddit, students echoed serious concerns raised by researchers, warning of privacy issues, racist and sexist biases, and barriers to students with disabilities.

At that time, Linkletter was a learning technology specialist at the University of British Columbia. He had been aware of Proctorio as a tool that some professors used, but he ultimately joined UBC students in criticizing it when, practically overnight, it became a default tool that every teacher relied on during the early stages of the pandemic.

To Linkletter, the AI tool not only seemed flawed, but it also seemingly made students more anxious about exams. However, he didn’t post any tweets criticizing the tech—until he grew particularly disturbed to see Proctorio’s CEO, Mike Olsen, “showing up in the comments” on Reddit to fire back at one of his university’s loudest student critics. Defending Proctorio, Olsen roused even more backlash by posting the student’s private chat logs publicly to prove the student “lied” about a support interaction, The Guardian reported.

“If you’re gonna lie bro … don’t do it when the company clearly has an entire transcript of your conversation,” Olsen wrote, later apologizing for the now-deleted post.

“That set me off, and I was just like, this is completely unacceptable for a CEO to be going after our students like this,” Linkletter told Ars.

The more that Linkletter researched Proctorio, the more concerned he became. Taking to then-Twitter, he posted a series of seven tweets over a couple days that linked to YouTube videos that Proctorio hosted in its help center. He felt the videos—which showed how Proctorio flagged certain behaviors, tracked “abnormal” eye and head movements, and scanned rooms—helped demonstrate why students were so upset. And while he had fewer than 1,000 followers, he hoped that the influential higher education administrators who followed him would see his posts and consider dropping the tech.

Rather than request that Linkletter remove the tweets—which was the company’s standard practice—Proctorio moved quickly to delete the videos. Proctorio apparently expected that the removals would put Linkletter on notice to stop tweeting out help center videos. Instead, Linkletter posted a screenshot of the help center showing all the disabled videos, suggesting that Proctorio was so invested in secrecy that it was willing to gut its own support resources to censor criticism of its tools.

Together, the videos, the help center screenshot, and another screenshot showing course material describing how Proctorio works were enough for Proctorio to take Linkletter to court.

The ed tech company promptly filed a lawsuit and obtained a temporary injunction by spuriously claiming that Linkletter shared private YouTube videos containing confidential information. Because the YouTube videos—which were public but “unlisted” when Linkletter shared them—had been removed, Linkletter did not have to delete the seven tweets that initially caught Proctorio’s attention, but the injunction required that he remove two tweets, including the screenshots.

In the five years since, the legal fight dragged on, with no end in sight until last week, as Canadian courts tangled with copyright allegations that tested a recently passed law intended to shield Canadian rights to free expression, the Protection of Public Participation Act.

To fund his defense, Linkletter said in a blog announcing the settlement that he invested his life savings “ten times over.” Additionally, about 900 GoFundMe supporters and thousands of members of the Association of Administrative and Professional Staff at UBC contributed tens of thousands more. For the last year of the battle, a law firm, Norton Rose Fulbright, agreed to represent him on a pro bono basis, which Linkletter said “was a huge relief to me, as it meant I could defend myself all the way if Proctorio chose to proceed with the litigation.”

The terms of the settlement remain confidential, but both Linkletter and Proctorio confirmed that no money was exchanged.

For Proctorio, the settlement made permanent the injunction that restricted Linkletter from posting the company’s help center or instructional materials. But it doesn’t stop Linkletter from remaining the company’s biggest critic, as “there are no other restrictions on my freedom of expression,” Linkletter’s blog noted.

“I’ve won my life back!” Linkletter wrote, while reassuring his supporters that he’s “fine” with how things ended.

“It doesn’t take much imagination to understand why Proctorio is a nightmare for students,” Linkletter wrote. “I can say everything that matters about Proctorio using public information.”

Proctorio’s YouTube “mistake” triggered injunction

In a statement to Ars, Kevin Rockmael, Proctorio’s head of marketing, suggested that the ed tech company sees the settlement as a win.

“After years of successful litigation, we are pleased that this settlement (which did not include any monetary compensation) protects our interests by making our initial restraining order permanent,” Rockmael said. “Most importantly, we are glad to close this chapter and focus our efforts on helping teachers and educational institutions deliver valuable and secure assessments.”

Responding to Rockmael, Linkletter clarified that the settlement upholds a modified injunction, noting that Proctorio’s initial injunction was significantly narrowed after a court ruled it overly broad. Linkletter also pointed to testimony from Proctorio’s former head of marketing, John Devoy, whose affidavit, which “mistakenly” swore that Linkletter was sharing private YouTube videos, was the sole basis for the court approving the injunction. That testimony, Linkletter told Ars, suggested that Proctorio knew the librarian had shared videos the company had accidentally made public and used it as “some sort of excuse to pull the trigger” on a lawsuit after Linkletter commented on the Reddit incident.

“Even a child understands how YouTube works, so how are we supposed to trust a surveillance company that doesn’t?” Linkletter wrote in his blog.

Grilled by Linkletter’s lawyer, Devoy insisted that he was not “lying” when he claimed the videos Linkletter shared came from a private channel. Instead—even though he knew the difference between a private and public channel—Devoy claimed that he made a simple mistake, even suggesting that the inaccurate claim was a “typo.”

Linkletter maintains that Proctorio’s lawsuit had nothing to do with the videos he shared—which his legal team discovered had been shared publicly by many parties, including UBC, none of which Proctorio decided to sue. Instead, he felt targeted to silence his criticism of the company, and he successfully fought to keep Proctorio from accessing his private communications, which seemed to be a fishing expedition to find other critics to monitor.

“In my opinion, and this is just my opinion, one of the purposes of the lawsuit was to have a chilling effect on public discourse around proctoring,” Linkletter told Ars. “And it worked. I mean, a lot of people were scared to use the word Proctorio, especially in writing.”

Joe Mullin, a senior policy analyst who monitored Linkletter’s case for the nonprofit digital rights group the Electronic Frontier Foundation, agreed that Proctorio’s lawsuit risked chilling speech.

“We’re glad to see this lawsuit finally resolved in a way that protects Ian Linkletter’s freedom to speak out,” Mullin told Ars, noting that Linkletter “raised serious concerns about proctoring software at a time when students were subjected to unprecedented monitoring.”

“This case should never have dragged on for five years,” Mullin said. “Using copyright claims to retaliate against critics is wrong, and it chills public debate about surveillance technology.”

Preventing the “next” Proctorio

Linkletter is not the only critic to be targeted by Proctorio, Lia Holland, campaigns and communications director for a nonprofit digital rights group called Fight for the Future, told Ars.

Holland’s group was subpoenaed in a US fight after Proctorio sent a copyright infringement notice to Erik Johnson, a then-18-year-old college freshman who shared one of Linkletter’s screenshots. The ensuing litigation was similarly settled after Proctorio “threw every semi-plausible legal weapon at Johnson full force,” Holland told Ars. The pressure forced Johnson to choose between “living his life and his life being this suit from Proctorio,” Holland said.

Linkletter suspected that he and Johnson were added to a “list” of critics that Proctorio closely monitored online, but Proctorio has denied that such a list exists. Holland pushed back, though, telling Ars that Proctorio has “an incredibly long history of fudging the truth in the interest of profit.”

“We’re no strangers to Proctorio’s shady practices when it comes to oppressing dissent or criticism of their technologies,” Holland said. “I am utterly not shocked that they would employ tactics that appear to be doing the same thing when it comes to Ian Linkletter’s case.”

Regardless of Proctorio’s brand-management tactics, it seems clear that public criticism has impacted the company’s sales. In 2021, Vice reported that student backlash led some schools to quickly abandon the software. UBC dropped Proctorio in 2021, too, citing “ethical concerns.”

Today, Linkletter works as an emerging technology and open education librarian at the British Columbia Institute of Technology (BCIT). While he considers himself an expert on Proctorio and continues to give lectures discussing harms of academic surveillance software, he’s ready to get away from discussing Proctorio now that the lawsuit has ended.

“I think I will continue to pay attention to what they do and say, and if there’s any new reports of harm that I can elevate,” Linkletter told Ars. “But I have definitely made my points in terms of my specific concerns, and I feel less obliged to spend more and more and more time repeating myself.”

Instead, Linkletter is determined to “prevent the next Proctorio” from potentially blindsiding students on his campus. In his role as vice chair of BCIT’s educational technology and learning design committee, he’s establishing “checks and balances” to ensure that if another pandemic-like situation arises forcing every student to work from home, he can stop “a bunch of creepy stuff” from being rolled out.

“I spent the last year advocating for and implementing algorithmic impact assessments as a mandatory thing that the institute has to do, including identifying how risk is going to be mitigated before we approve any new ed tech ever again,” Linkletter explained.

He also created the Canadian Privacy Library, where he posts privacy impact assessments that he collects by sending freedom-of-information requests to higher education institutions in British Columbia. That’s one way local students could monitor privacy concerns as AI use expands across campuses, increasingly impacting not just how exams are proctored, but how assignments are graded.

Holland told Ars that students concerned about ed tech surveillance “are most powerful when they act in solidarity with each other.” While the pandemic was widely forcing remote learning, student groups were able to successfully remove harmful proctoring tech by “working together so that there was not one single scapegoat or one single face that the ed tech company could go after,” she suggested. Those movements typically start with one or two students learning how the technology works, so that they can educate others about top concerns, Holland said.

Since Linkletter’s lawsuit started, Proctorio has stopped fighting with students on Reddit and suing critics over tweets, Holland said. But Linkletter told Ars that the company still seems to leave students in the dark when it comes to how its software works, and that “could lead to academic discipline for honest students, and unnecessary stress for everyone,” his earliest court filing defending his tweets said.

“I was and am gravely concerned about Proctorio’s lack of transparency about how its algorithms work, and how it labels student behaviours as ‘suspicious,’” Linkletter swore in the filing. One of his deleted tweets urged schools to demand transparency and ask why Proctorio was “hiding” information about how the software worked. But in the end, Linkletter saw no point in continuing to argue over whether two deleted tweets re-posting Proctorio’s videos using YouTube’s sharing tool violated Proctorio’s copyrights.

“I didn’t feel too censored,” Linkletter told Ars. “But yeah, I guess it’s censorship, and I do believe they filed it to try and censor me. But as you can see, I just refused to go down, and I remained their biggest critic.”

As universities prepare to break for the winter holidays, Linkletter told Ars that he’s looking forward to a change in dinner table conversation topics.

“It’s one of those things where I’m 41 and I have aging parents, and I’ve had to waste the last five Christmases talking to them about the lawsuit and their concerns about me,” Linkletter said. “So I’m really looking forward to this Thanksgiving, this Christmas, with this all behind me and the ability to just focus with my parents and my family.”


He got sued for sharing public YouTube videos; nightmare ended in settlement Read More »