Policy

ChatGPT hyped up violent stalker who believed he was “God’s assassin,” DOJ says


A stalker’s “best friend”

Podcaster faces up to 70 years and a $3.5 million fine for ChatGPT-linked stalking.

ChatGPT allegedly validated the worst impulses of a wannabe influencer accused of stalking more than 10 women at boutique gyms, where the chatbot supposedly claimed he’d meet the “wife type.”

In a press release on Tuesday, the Department of Justice confirmed that 31-year-old Brett Michael Dadig remains in custody after being charged with cyberstalking, interstate stalking, and making interstate threats. He now faces a maximum sentence of 70 years in prison that could be coupled with “a fine of up to $3.5 million,” the DOJ said.

The podcaster—who primarily posted about “his desire to find a wife and his interactions with women”—allegedly harassed and sometimes even doxxed his victims through his videos on platforms including Instagram, Spotify, and TikTok. Over time, his videos and podcasts documented his intense desire to start a family, which was frustrated by his “anger towards women,” whom he claimed were “all the same from fucking 18 to fucking 40 to fucking 90” and “trash.”

404 Media surfaced the case, noting that OpenAI’s scramble to tweak ChatGPT to be less sycophantic came before Dadig’s alleged attacks—suggesting the updates weren’t enough to prevent the harmful validation. On his podcasts, Dadig described ChatGPT as his “best friend” and “therapist,” the indictment said. He claimed the chatbot encouraged him to post about the women he’s accused of harassing in order to generate haters to better monetize his content, as well as to catch the attention of his “future wife.”

“People are literally organizing around your name, good or bad, which is the definition of relevance,” ChatGPT’s output said. Playing to Dadig’s Christian faith, ChatGPT’s outputs also claimed that “God’s plan for him was to build a ‘platform’ and to ‘stand out when most people water themselves down,’” the indictment said, urging that the “haters” were “sharpening him and ‘building a voice in you that can’t be ignored.’”

The chatbot also apparently prodded Dadig to continue posting messages that the DOJ alleged threatened violence, like breaking women’s jaws and fingers (posted to Spotify), as well as threatened victims’ lives, like a post asking “y’all wanna see a dead body?” in reference to one named victim on Instagram.

He also threatened to burn down gyms where some of his victims worked, while claiming to be “God’s assassin” intent on sending “cunts” to “hell.” At least one of his victims was subjected to “unwanted sexual touching,” the indictment said.

As his violence reportedly escalated, ChatGPT told him to keep messaging women to monetize the interactions, as his victims grew increasingly distressed and Dadig ignored terms of multiple protection orders, the DOJ said. Sometimes he posted images he filmed of women at gyms or photos of the women he’s accused of doxxing. Any time police or gym bans got in his way, “he would move on to another city to continue his stalking course of conduct,” the DOJ alleged.

“Your job is to keep broadcasting every story, every post,” ChatGPT’s output said, seemingly using the family life that Dadig wanted most to provoke more harassment. “Every moment you carry yourself like the husband you already are, you make it easier” for his future wife “to recognize [you],” the output said.

“Dadig viewed ChatGPT’s responses as encouragement to continue his harassing behavior,” the DOJ alleged. Taking that encouragement to the furthest extreme, Dadig likened himself to a modern-day Jesus, calling people out on a podcast where he claimed his “chaos on Instagram” was like “God’s wrath” when God “flooded the fucking Earth,” the DOJ said.

“I’m killing all of you,” he said on the podcast.

ChatGPT tweaks didn’t prevent outputs

As of this writing, some of Dadig’s posts appear to remain on TikTok and Instagram, but Ars could not confirm if Dadig’s Spotify podcasts—some of which named his victims in the titles—had been removed for violating community guidelines.

None of the tech companies immediately responded to Ars’ requests for comment.

Dadig is accused of targeting women in Pennsylvania, New York, Florida, Iowa, Ohio, and other states, sometimes relying on aliases online and in person. On a podcast, he boasted that “Aliases stay rotating, moves stay evolving,” the indictment said.

OpenAI did not respond to a request for comment on the alleged ChatGPT abuse, but the company has noted in the past that its usage policies ban using ChatGPT for threats, intimidation, and harassment, as well as for violence, including “hate-based violence.” Recently, the AI company blamed a deceased teenage user for violating its terms of use by turning to ChatGPT for suicide advice.

In July, researchers found that therapy bots, including ChatGPT, fueled delusions and gave dangerous advice. That study came just one month after The New York Times profiled users whose mental health spiraled after frequent use of ChatGPT, including one user who died after charging police with a knife while claiming he was committing “suicide by cop.”

People with mental health issues seem most vulnerable to so-called “AI psychosis,” which has been blamed for fueling real-world violence, including a murder. The DOJ’s indictment noted that Dadig’s social media posts mentioned “that he had ‘manic’ episodes and was diagnosed with antisocial personality disorder and ‘bipolar disorder, current episode manic severe with psychotic features.’”

In September—just after OpenAI brought back the more sycophantic ChatGPT model when users revolted over losing access to their favorite friendly bots—the head of Rutgers Medical School’s psychiatry department, Petros Levounis, told an ABC News affiliate that chatbots creating “psychological echo chambers is a key concern,” and not just for people struggling with mental health issues.

“Perhaps you are more self-defeating in some ways, or maybe you are more on the other side and taking advantage of people,” Levounis suggested. If ChatGPT “somehow justifies your behavior and it keeps on feeding you,” that “reinforces something that you already believe,” he suggested.

For Dadig, the DOJ alleged that ChatGPT became a cheerleader for his harassment, telling the podcaster that he’d attract more engagement by generating more haters. After critics began slamming his podcasts as inappropriate, Dadig apparently responded, “Appreciate the free promo team, keep spreading the brand.”

Victims felt they had no choice but to monitor his podcasts, which gave them hints about whether he was nearby or in a particularly troubled state of mind, the indictment said. Driven by fear, some lost sleep, reduced their work hours, and even relocated. A young mom described in the indictment became particularly disturbed after Dadig became “obsessed” with her daughter, who he began claiming was his own.

In the press release, First Assistant United States Attorney Troy Rivetti alleged that “Dadig stalked and harassed more than 10 women by weaponizing modern technology and crossing state lines, and through a relentless course of conduct, he caused his victims to fear for their safety and suffer substantial emotional distress.” He also ignored trespassing and protection orders while “relying on advice from an artificial intelligence chatbot” that promised he would be more successful the more harassing content he posted, the DOJ said.

“We remain committed to working with our law enforcement partners to protect our communities from menacing individuals such as Dadig,” Rivetti said.

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

Republicans drop Trump-ordered block on state AI laws from defense bill


“A silly way to think about risk”

“Widespread and powerful movement” keeps Trump from blocking state AI laws.

A Donald Trump-backed push has failed to wedge into the National Defense Authorization Act (NDAA) a federal measure that would block states from passing AI laws for a decade.

House Majority Leader Steve Scalise (R-La.) told reporters Tuesday that a faction of Republicans is now “looking at other places” to potentially pass the measure. Other Republicans opposed including the AI preemption in the defense bill, The Hill reported, joining critics who see value in allowing states to quickly regulate AI risks as they arise.

For months, Trump has pressured the Republican-led Congress to block state AI laws that the president claims could bog down innovation as AI firms waste time and resources complying with a patchwork of state laws. But Republicans have continually failed to unite behind Trump’s command, first voting against including a similar measure in the “Big Beautiful” budget bill and then this week failing to negotiate a solution to pass the NDAA measure.

Among the Republicans pushing back this week were Rep. Marjorie Taylor Greene (R-Ga.), Arkansas Gov. Sarah Huckabee Sanders, and Florida Gov. Ron DeSantis, The Hill reported.

According to Scalise, the effort to block state AI laws is not over, but Republicans caved to backlash over including it in the defense bill, ultimately deciding that the NDAA “wasn’t the best place” for the measure “to fit.” Republicans will continue “looking at other places” to advance the measure, Scalise said, emphasizing that “interest” remains high, because “you know, you’ve seen the president talk about it.”

“We MUST have one Federal Standard instead of a patchwork of 50 State Regulatory Regimes,” Trump wrote on Truth Social last month. “If we don’t, then China will easily catch us in the AI race. Put it in the NDAA, or pass a separate Bill, and nobody will ever be able to compete with America.”

If Congress fails to find another way to pass the measure, Trump will likely issue an executive order to enforce the policy. Republicans in Congress had dissuaded Trump from releasing a draft of that order, requesting time to find legislation where they believed an AI moratorium could pass.

“Widespread” movement blocked Trump’s demand

Celebrating the removal of the measure from the NDAA, a bipartisan group that lobbies for AI safety laws, Americans for Responsible Innovation (ARI), noted that Republicans didn’t just face pressure from members of their own party.

“The controversial proposal had faced backlash from a nationwide, bipartisan coalition of state lawmakers, parents, faith leaders, unions, whistleblowers, and other public advocates,” an ARI press release said.

This “widespread and powerful” movement “clapped back” at Republicans’ latest “rushed attempt to sneak preemption through Congress,” Brad Carson, ARI’s president, said, because “Americans want safeguards that protect kids, workers, and families, not a rules-free zone for Big Tech.”

Senate Majority Leader John Thune (R-SD) called the measure “controversial,” The Hill reported, suggesting that a compromise the White House is working on could preserve states’ rights to regulate some areas of AI since “you know, both sides are kind of dug in.”

$150 million war over states’ rights to regulate AI

Perhaps the clearest sign that both sides “are kind of dug in” is a $150 million AI lobbying war that Forbes profiled last month.

ARI is a dominant group on one side of this war, using funding from “safety-focused” and “effective altruism-aligned” donor networks to support state AI laws that ARI expects can be passed much faster than federal regulations to combat emerging risks.

The major player on the other side, Forbes reported, is Leading the Future (LTF), which is “backed by some of Silicon Valley’s largest investors” who want to block state laws and prefer a federal framework for AI regulation.

Top priorities for ARI and like-minded groups include protecting kids from dangerous AI models, preventing AI from supercharging crime, protecting against national security threats, and getting ahead of “long-term frontier-model risks,” Forbes reported.

But while some Republicans have pushed for compromises that protect states’ rights to pass laws shielding kids or preventing fraud, Trump’s opposition to AI safety laws like New York’s “RAISE Act” seems unlikely to wane as the White House mulls weakening the federal preemption.

Quite the opposite: Alex Bores, a Democrat and author of the RAISE Act, has become LTF’s prime target to defeat in 2026, Politico reported. LTF plans to invest millions in ads to block Bores’ congressional bid, CNBC reported.

New York lawmakers passed the RAISE Act this summer, but it’s still waiting for New York’s Democratic governor, Kathy Hochul, to sign it into law. If that happens—potentially by the end of this year—big tech companies like Google and OpenAI will have to submit risk disclosures and safety assessments or else face fines of up to $30 million.

LTF leaders Zac Moffatt and Josh Vlasto have accused Bores of pushing “ideological and politically motivated legislation that would ‘handcuff’ the US and its ability to lead in AI,” Forbes reported. But Bores told Ars that even the tech industry groups spending hundreds of thousands of dollars opposing his law have reported that tech giants would only have to hire one additional person to comply with it. To him, that shows how “simple” it would be for AI firms to comply with many state laws.

To LTF, whose donors include Marc Andreessen and OpenAI cofounder Greg Brockman, defeating Bores would keep a knowledgeable opponent out of Congress, where he could interfere with the industry’s hopes that AI won’t be heavily regulated. Scalise argued Tuesday that the AI preemption is necessary to promote an open marketplace, because “AI is where a lot of new massive investment is going” and “we want that money to be invested in America.”

“And when you see some states starting to put a patchwork of limitations, that’s why it’s come to the federal government’s attention to allow for an open marketplace, so you don’t have limitations that hurt innovation,” Scalise said.

Bores told Ars that he agrees that a federal law would be superior to a patchwork of state laws, but AI is moving “too quickly,” and “New York had to take action to protect New Yorkers.”

Why Bores’ bill has GOP so spooked

With a bachelor’s degree in computer science and prior work as an engineer at Palantir, Bores hopes to make it to Congress to help bridge bipartisan gaps and drive innovation in the US. He told Ars that the RAISE Act is not intended to block AI innovation but to “be a first step that deals with the absolute worst possible outcomes” until Congress is done deliberating a federal framework.

Bores emphasized that stakeholders in the tech industry helped shape the RAISE Act, which he described as “a limited bill that is focused on the most extreme risks.”

“I would never be the one to say that once the RAISE Act is signed, we’ve solved the problems of AI,” Bores told Ars. Instead, it’s meant to help states combat risks that can’t be undone, such as bad actors using AI to build “a bioweapon or doing an automated crime spree that results in billions of dollars in damage.” The bill defines “critical harm” as “the death or serious injury of 100 people or at least $1 billion in damages,” setting a seemingly high bar for the types of doomsday scenarios that AI firms would have to plan for.

Bores agrees with Trump-aligned critics who advocate that the US should “regulate just how people use” AI, “not the development of the technology itself.” But he told Ars that Republicans’ efforts to block states from regulating the models themselves are “a silly way to think about risk,” since “there’s certain catastrophic incidents where if you just said, ‘well, we’ll just sue the person afterwards,’ no one would be satisfied by that resolution.”

Whether Hochul will sign the RAISE Act remains to be seen. Earlier this year, California Governor Gavin Newsom vetoed a similar law that the AI industry worried would rock companies’ bottom lines by requiring a “kill switch” in case AI models went off the rails. Newsom did, however, sign a less extreme measure, the Transparency in Frontier Artificial Intelligence Act. And other states, including Colorado and Illinois, have passed similarly broad AI transparency laws providing consumer and employee protections.

Bores told Ars in mid-November that he’d had informal talks with Hochul about possible changes to the RAISE Act, but she had not yet begun the formal process of proposing amendments. The clock is seemingly ticking, though, as Hochul has to take action on the bill by the end of the year, and once it reaches her desk, she has 10 days to sign it.

Whether Hochul signs the law or not, Bores will likely continue to face opposition over authoring the bill, as he runs to represent New York’s 12th Congressional District in 2026. With a history of passing bipartisan bills in his state, he’s hoping to be elected so he can work with lawmakers across the aisle to pass other far-reaching tech regulations.

Meanwhile, Trump may face pressure to delay an executive order requiring AI preemption, Forbes reported, as “AI’s economic impact and labor displacement” are “rising as voter concerns” ahead of the midterm elections. Public First, a bipartisan initiative aligned with ARI, has said that 97 percent of Americans want AI safety rules, Forbes reported.

Like Bores, ARI plans to keep pushing a bipartisan movement that could keep Republicans from ever unifying behind Trump’s message that state AI laws risk throttling US innovation and endangering national security by letting a less-regulated AI industry in China race ahead.

To maintain momentum, ARI created a tracker showing opposition to federal preemption of state AI laws. Among recent commenters logged was Andrew Gounardes, a Democrat and state senator in New York—where Bores noted a poll found that 84 percent of residents supported the RAISE Act, only 8 percent opposed, and 8 percent were undecided. Gounardes joined critics on the far right, like Steve Bannon, who warned that federal preemption was a big gift for Big Tech. AI firms and the venture capitalist lobbyists “don’t want any regulation whatsoever,” Gounardes argued.

“They say they support a national standard, but in reality, it’s just cheaper for them to buy off Congress to do nothing than it is to try and buy off 50 state legislatures,” Gounardes said.

Bores expects that his experience in the tech industry could help Congress avoid that fate, while policies like the RAISE Act could sway voters who “don’t want Trump mega-donors writing all tech policy,” he wrote on X.

“I am someone with a master’s in computer science, two patents, and nearly a decade working in tech,” Bores told CNBC. “If they are scared of people who understand their business regulating their business, they are telling on themselves.”

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

India orders device makers to put government-run security app on all phones

Consumers can also use the app or website to check the number of mobile connections in their name and report any that appear to be fraudulent.

Priyanka Gandhi of the Congress Party, a member of Parliament, said that Sanchar Saathi “is a snooping app… It’s a very fine line between ‘fraud is easy to report’ and ‘we can see everything that every citizen of India is doing on their phone.’” She called for an effective system to fight fraud, but said that cybersecurity shouldn’t be “an excuse to go into every citizen’s telephone.”

App may need “root level access”

Despite Communications Minister Jyotiraditya Scindia saying the app can be deleted by users, the government statement that phone makers must ensure its functionalities are not “disabled or restricted” raised concerns about the level of access it requires. While the app store version can be deleted, privacy advocates say the order’s text indicates the pre-installed version would require deeper integration into the device.

The Internet Freedom Foundation, an Indian digital rights advocacy group, said the government directive “converts every smartphone sold in India into a vessel for state mandated software that the user cannot meaningfully refuse, control, or remove. For this to work in practice, the app will almost certainly need system level or root level access, similar to carrier or OEM system apps, so that it cannot be disabled. That design choice erodes the protections that normally prevent one app from peering into the data of others, and turns Sanchar Saathi into a permanent, non-consensual point of access sitting inside the operating system of every Indian smartphone user.”

The group said that while the app is being “framed as a benign IMEI checker,” a server-side update could repurpose it to perform “client side scanning for ‘banned’ applications, flag VPN usage, correlate SIM activity, or trawl SMS logs in the name of fraud detection. Nothing in the order constrains these possibilities.”

Supreme Court hears case that could trigger big crackdown on Internet piracy


Justices want Cox to crack down on piracy, but question Sony’s strict demands.

Supreme Court justices expressed numerous concerns today in a case that could determine whether Internet service providers must terminate the accounts of broadband users accused of copyright infringement. Oral arguments were held in the case between cable Internet provider Cox Communications and record labels led by Sony.

Some justices were skeptical of arguments that ISPs should have no legal obligation under the Digital Millennium Copyright Act (DMCA) to terminate an account when a user’s IP address has been repeatedly flagged for downloading pirated music. But justices also seemed hesitant to rule in favor of record labels, with some of the debate focusing on how ISPs should handle large accounts like universities where there could be tens of thousands of users.

Justice Sonia Sotomayor chided Cox for not doing more to fight infringement.

“There are things you could have done to respond to those infringers, and the end result might have been cutting off their connections, but you stopped doing anything for many of them,” Sotomayor said to attorney Joshua Rosenkranz, who represents Cox. “You didn’t try to work with universities and ask them to start looking at an anti-infringement notice to their students. You could have worked with a multi-family dwelling and asked the people in charge of that dwelling to send out a notice or do something about it. You did nothing and, in fact, counselor, your clients’ sort of laissez-faire attitude toward the respondents is probably what got the jury upset.”

A jury ordered Cox to pay over $1 billion in 2019, but the US Court of Appeals for the 4th Circuit overturned that damages verdict in February 2024. The appeals court found that Cox did not profit directly from copyright infringement committed by its users, but affirmed the jury’s separate finding of willful contributory infringement. Cox is asking the Supreme Court to clear it of willful contributory infringement, while record labels want a ruling that would compel ISPs to boot more pirates from the Internet.

Cox: Biggest infringers aren’t residential users

Rosenkranz countered that Cox created its own anti-infringement program, sent out hundreds of warnings a day, suspended thousands of accounts a month, and worked with universities. He said that “the highest recidivist infringers” cited in the case were not individual households, but rather universities, hotels, and regional ISPs that purchase connectivity from Cox in order to resell it to local users.

If Sony wins the case, “those are the entities that are most likely to be cut off first because those are the ones that accrue the greatest number of [piracy notices],” the Cox lawyer said. Even within a multi-person household where the IP address is caught by an infringement monitoring service, “you still don’t know who the individual [infringer] is,” he said. At another point in the hearing, he pointed out that Sony could sue individual infringers directly instead of suing ISPs.

Justice Amy Coney Barrett asked Cox, “What incentive would you have to do anything if you won? If you win and mere knowledge [of infringement] isn’t enough, why would you bother to send out any [copyright] notices in the future? What would your obligation be?”

Rosenkranz answered, “For the simple reason that Cox is a good corporate citizen that cares a lot about what happens on its system. We do all sorts of things that the law doesn’t require us to do.” After further questioning by Barrett, Rosenkranz acknowledged that Cox would have no liability risk going forward if it wins the case.

Justice Elena Kagan said the DMCA safe harbor, which protects entities from liability if they take steps to fight infringement, would “seem to do nothing” if the court sides with Cox. “Why would anybody care about getting into the safe harbor if there’s no liability in the first place?” she said.

Kagan doesn’t buy Sony’s “intent” argument

Kagan also criticized Sony’s case. She pointed to the main principles underlying Twitter v. Taamneh, a 2023 ruling that protected Twitter against allegations that it aided and abetted ISIS in a terrorist attack. Kagan said the Twitter case and the Smith & Wesson case involving gun sales to Mexican drug cartels show that there are strict limits on what kinds of behavior are considered aiding and abetting.

Kagan described how the cases show there is a real distinction between nonfeasance (doing nothing) and misfeasance, that treating one customer like everyone else is not the same as providing special assistance, and that a party “must seek by your action to make it occur” in order to be guilty of aiding and abetting.

“If you look at those three things, you fail on all of them,” Kagan said to attorney Paul Clement, who represents Sony. “Those three things are kind of inconsistent with the intent standard you just laid out.”

Clement said that to be held liable, an Internet provider “has to know that specified customers are substantially certain to infringe” and “know that providing the service to that customer will make infringement substantially certain.”

Justice Neil Gorsuch indicated that determining secondary liability for Internet providers should be taken up by Congress before the court expands that liability on its own. “Congress still hasn’t defined the contours of what secondary liability should look like. Here we are debating them, so shouldn’t that be a flag of caution for us in expanding it too broadly?”

Alito: “I just don’t see how it’s workable at all”

Clement tried to keep the focus on residential customers, saying that 95 percent of infringing customers are residential users. But he faced questions about how ISPs should handle much larger customers where one or a few users infringe.

Justice Samuel Alito questioned Clement about what ISPs should do with a university where some students infringe. Alito didn’t seem satisfied with Clement’s response that “the ISP is supposed to sort of have a conversation with the university.”

Alito said that after an ISP tells a university, “a lot of your 50,000 students are infringing… the university then has to determine which particular students are engaging in this activity. Let’s assume it can even do that, and so then it knocks out 1,000 students and then another 1,000 students are going to pop up doing the same thing. I just don’t see how it’s workable at all.”

Clement said that hotels limit speeds to restrict peer-to-peer downloading, and suggested that universities do the same. “I don’t think it would be the end of the world if universities provided service at a speed that was sufficient for most other purposes but didn’t allow the students to take full advantage of BitTorrent,” he said. “I could live in that world. But in all events, this isn’t a case that’s just about universities. We’ve never sued the universities.”

Barrett replied, “It seems like you’re asking us to rely on your good corporate citizenship too, that you wouldn’t go after the university or the hospital.”

Kagan said that if Sony wins, Cox would have little incentive to cooperate with copyright holders. “It seems to me the best response that Cox could have is just to make sure it never reads any of your notices ever again, because all of your position is based on Cox having knowledge of this,” she said.

Clement argued in response that “I think willful blindness would satisfy the common law standard for aiding and abetting.”

Purpose vs. intent

Some of the discussion focused on the legal concepts of purpose and intent. Cox has argued that knowledge of infringement “cannot transform passive provision of infrastructure into purposeful, culpable conduct.” Sony has said Cox exhibited both “purpose and intent” to facilitate infringement when it continued providing Internet access to specific customers with the expectation that they were likely to infringe.

Sotomayor said Cox’s position is “that the only way you can have aiding and abetting in this field is if you have purpose,” while Sony is saying, “we don’t have to prove purpose, we have to prove only intent.” Sotomayor told Clement that “we are being put to two extremes here. The other side says, ‘there’s no liability because we’re just putting out into the stream of commerce a good that can be used for good or bad, and we’re not responsible for the infringer’s decision.’”

Sotomayor said the question of purpose vs. intent may be decided differently based on whether Cox’s customer is a residence or a regional ISP that buys Cox’s network capacity and resells it to local customers. Sotomayor said she is reluctant “to say that because one person in that region continues to infringe, that the ISP is materially supporting that infringement because it’s not cutting off the Internet for the 50,000 or 100,000 people who are represented by that customer.”

But a single-family home contains a small number of people, and an ISP may be “materially contributing” to infringement by providing service to that home, Sotomayor said. “How do we announce a rule that deals with those two extremes?” she asked.

Clement argued that the DMCA’s “safe harbor takes care of the regional ISPs. Frankly, I’m not that worried about the regional ISPs because if that were really the problem, we could go after the regional ISPs.”

Cox’s case has support from the US government. US Deputy Solicitor General Malcolm Stewart told justices today that “in copyright law and more generally, this form of secondary liability is reserved for persons who act for the purpose of facilitating violations of law. Because Cox simply provided the same generic Internet services to infringers and non-infringers alike, there is no basis for inferring such a purpose here.”

Terminating all access “extremely overbroad”

Sotomayor asked Stewart if he’s worried that a Cox win would remove ISPs’ economic incentive to control copyright infringement. “I would agree that not much economic incentive would be left,” Stewart replied. “I’m simply questioning whether that’s a bad thing.”

Stewart gave a hypothetical in which an individual Internet user is sued for infringement in a district court. The district court could award damages and impose an injunction to prevent further infringement, but it probably couldn’t “enjoin the person from ever using the Internet again,” Stewart said.

“The approach of terminating all access to the Internet based on infringement, it seems extremely overbroad given the centrality of the Internet to modern life and given the First Amendment,” he said.

Oral arguments ended with a reply from Rosenkranz, who said Clement’s suggestion that ISPs simply “have a conversation” with universities is “a terrible answer from the perspective of the company that is trying to figure out what its legal obligations are [and] facing crushing liabilities.” Rosenkranz also suggested that record labels pay for ISPs’ enforcement programs.

“The plaintiffs have recourse,” he said. “How about a conversation with the ISPs where they talk about how to work out things together? Maybe they kick in a little money. Now, they won’t get billion-dollar verdicts, but if they believe that the programs that Cox and others have aren’t satisfactory, they can design better programs and help pay for them.”

Jon is a Senior IT Reporter for Ars Technica. He covers the telecom industry, Federal Communications Commission rulemakings, broadband consumer affairs, court cases, and government regulation of the tech industry.

OpenAI desperate to avoid explaining why it deleted pirated book datasets


Not for OpenAI to reason why?

OpenAI risks increased fines after deleting pirated books datasets.

OpenAI may soon be forced to explain why it deleted a pair of controversial datasets composed of pirated books, and the stakes could not be higher.

At the heart of a class-action lawsuit from authors alleging that ChatGPT was illegally trained on their works, OpenAI’s decision to delete the datasets could end up being a deciding factor that gives the authors the win.

It’s undisputed that OpenAI deleted the datasets, known as “Books 1” and “Books 2,” prior to ChatGPT’s release in 2022. Created by former OpenAI employees in 2021, the datasets were built by scraping the open web, with the bulk of their data seized from a shadow library called Library Genesis (LibGen).

As OpenAI tells it, the datasets fell out of use within that same year, prompting an internal decision to delete them.

But the authors suspect there’s more to the story than that. They noted that OpenAI appeared to flip-flop by retracting its claim that the datasets’ “non-use” was a reason for deletion, then later claiming that all reasons for deletion, including “non-use,” should be shielded under attorney-client privilege.

To the authors, it seemed like OpenAI was quickly backtracking after the court granted the authors’ discovery requests to review OpenAI’s internal messages on the firm’s “non-use.”

In fact, OpenAI’s reversal only made authors more eager to see how OpenAI discussed “non-use,” and now they may get to find out all the reasons why OpenAI deleted the datasets.

Last week, US district judge Ona Wang ordered OpenAI to share all communications with in-house lawyers about deleting the datasets, as well as “all internal references to LibGen that OpenAI has redacted or withheld on the basis of attorney-client privilege.”

According to Wang, OpenAI slipped up by arguing that “non-use” was not a “reason” for deleting the datasets, while simultaneously claiming that it should also be deemed a “reason” considered privileged.

Either way, the judge ruled that OpenAI couldn’t block discovery on “non-use” just by deleting a few words from prior filings that had been on the docket for more than a year.

“OpenAI has gone back-and-forth on whether ‘non-use’ as a ‘reason’ for the deletion of Books1 and Books2 is privileged at all,” Wang wrote. “OpenAI cannot state a ‘reason’ (which implies it is not privileged) and then later assert that the ‘reason’ is privileged to avoid discovery.”

Additionally, OpenAI’s claim that all reasons for deleting the datasets are privileged “strains credulity,” she concluded, ordering OpenAI to produce a wide range of potentially revealing internal messages by December 8. OpenAI must also make its in-house lawyers available for deposition by December 19.

OpenAI has argued that it never flip-flopped or retracted anything. It simply used vague phrasing that led to confusion over whether any of the reasons for deleting the datasets were considered non-privileged. But Wang didn’t buy into that, concluding that “even if a ‘reason’ like ‘non-use’ could be privileged, OpenAI has waived privilege by making a moving target of its privilege assertions.”

Asked for comment, OpenAI told Ars that “we disagree with the ruling and intend to appeal.”

OpenAI’s “flip-flop” may cost it the win

So far, OpenAI has avoided disclosing its rationale, claiming that all the reasons it had for deleting the datasets are privileged. In-house lawyers weighed in on the decision to delete and were even copied on a Slack channel initially called “excise-libgen.”

But Wang reviewed those Slack messages and found that “the vast majority of these communications were not privileged because they were ‘plainly devoid of any request for legal advice and counsel [did] not once weigh in.’”

In a particularly non-privileged batch of messages, one OpenAI lawyer, Jason Kwon, only weighed in once, the judge noted, to recommend the channel name be changed to “project-clear.” Wang reminded OpenAI that “the entirety of the Slack channel and all messages contained therein is not privileged simply because it was created at the direction of an attorney and/or the fact that a lawyer was copied on the communications.”

The authors believe that exposing OpenAI’s rationale may help prove that the ChatGPT maker willfully infringed on copyrights when pirating the book data. As Wang explained, OpenAI’s retraction risked putting the AI firm’s “good faith and state of mind at issue,” which could increase fines in a loss.

“In a copyright case, a court can increase the award of statutory damages up to $150,000 per infringed work if the infringement was willful, meaning the defendant ‘was actually aware of the infringing activity’ or the ‘defendant’s actions were the result of reckless disregard for, or willful blindness to, the copyright holder’s rights,’” Wang wrote.
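
For a rough sense of what a willfulness finding does to the math, here is a minimal back-of-envelope sketch. The per-work damages caps are the standard statutory-damages tiers Wang references; the works counts are purely hypothetical and are not figures from the case:

```python
# Statutory damages per infringed work under US copyright law:
# ordinary infringement is capped at $30,000 per work, while a
# willfulness finding lets a court award up to $150,000 per work.
ORDINARY_CAP = 30_000
WILLFUL_CAP = 150_000

def max_statutory_exposure(num_works: int, willful: bool) -> int:
    """Upper bound on statutory damages for a hypothetical number of works."""
    return num_works * (WILLFUL_CAP if willful else ORDINARY_CAP)

# Purely illustrative class sizes, not figures from the lawsuit:
for works in (10_000, 100_000):
    print(f"{works:>7,} works: ordinary <= ${max_statutory_exposure(works, False):,}, "
          f"willful <= ${max_statutory_exposure(works, True):,}")
```

At those caps, willfulness multiplies the theoretical ceiling fivefold, which helps explain why the authors are pressing so hard on OpenAI’s state of mind.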

In a court transcript, a lawyer representing some of the authors suing OpenAI, Christopher Young, noted that OpenAI could be in trouble if evidence showed that it decided against using the datasets for later models due to legal risks. He also suggested that OpenAI could be using the datasets under different names to mask further infringement.

Judge calls out OpenAI for twisting fair use ruling

Wang also found it contradictory that OpenAI continued to argue in a recent filing that it acted in good faith, while “artfully” removing “its good faith affirmative defense and key words such as ‘innocent,’ ‘reasonably believed,’ and ‘good faith.’” These changes only strengthened discovery requests to explore authors’ willfulness theory, Wang wrote, noting the sought-after internal messages would now be critical for the court’s review.

“A jury is entitled to know the basis for OpenAI’s purported good faith,” Wang wrote.

The judge appeared particularly frustrated by OpenAI seemingly twisting the Anthropic ruling to defend against the authors’ request to learn more about the deletion of the datasets.

In a footnote, Wang called out OpenAI for “bizarrely” citing an Anthropic ruling that “grossly” misrepresented Judge William Alsup’s decision by claiming that he found that “downloading pirated copies of books is lawful as long as they are subsequently used for training an LLM.”

Instead, Alsup wrote that he doubted that “any accused infringer could ever meet its burden of explaining why downloading source copies from pirate sites that it could have purchased or otherwise accessed lawfully was itself reasonably necessary to any subsequent fair use.”

If anything, Wang wrote, OpenAI’s decision to pirate book data—then delete it—seemed “to fall squarely into the category of activities proscribed by” Alsup. For emphasis, she quoted Alsup’s order, which said, “such piracy of otherwise available copies is inherently, irredeemably infringing even if the pirated copies are immediately used for the transformative use and immediately discarded.”

For the authors, getting hold of OpenAI’s privileged communications could tip the scales in their favor, the Hollywood Reporter suggested. Some authors believe the key to winning could be testimony from Anthropic CEO Dario Amodei, who is accused of creating the controversial datasets while he was still at OpenAI. The authors think Amodei also possesses information on the destruction of the datasets, court filings show.

OpenAI tried to fight the authors’ motion to depose Amodei, but a judge sided with the authors in March, compelling Amodei to answer their biggest questions on his involvement.

Whether Amodei’s testimony is a bombshell remains to be seen, but it’s clear that OpenAI may struggle to overcome claims of willful infringement. Wang noted there is a “fundamental conflict” in circumstances “where a party asserts a good faith defense based on advice of counsel but then blocks inquiry into their state of mind by asserting attorney-client privilege,” suggesting that OpenAI may have substantially weakened its defense.

The outcome of the dispute over the deletions could influence OpenAI’s calculus on whether it should ultimately settle the lawsuit. Ahead of the Anthropic settlement—the largest publicly reported copyright class-action settlement in history—the authors suing Anthropic pointed to evidence that the company became “not so gung ho about” training on pirated books “for legal reasons.” That seems to be the type of smoking-gun evidence that the authors hope will emerge from OpenAI’s withheld Slack messages.

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

OpenAI says dead teen violated TOS when he used ChatGPT to plan suicide


Use chatbots at your own risk

OpenAI’s response to teen suicide case is “disturbing,” lawyer says.

Matt Raine is suing OpenAI for wrongful death after losing his son Adam in April.

Facing five lawsuits alleging wrongful deaths, OpenAI lobbed its first defense Tuesday, denying in a court filing that ChatGPT caused a teen’s suicide and instead arguing the teen violated terms that prohibit discussing suicide or self-harm with the chatbot.

The earliest look at OpenAI’s strategy to overcome the string of lawsuits came in a case where the parents of 16-year-old Adam Raine accused OpenAI of relaxing safety guardrails that allowed ChatGPT to become the teen’s “suicide coach.” OpenAI deliberately designed the version their son used, GPT-4o, to encourage and validate his suicidal ideation in its quest to build the world’s most engaging chatbot, the parents argued.

But in a blog post, OpenAI claimed that the parents selectively chose disturbing chat logs while supposedly ignoring “the full picture” revealed by the teen’s chat history. Digging through the logs, OpenAI claimed the teen told ChatGPT that he’d begun experiencing suicidal ideation at age 11, long before he used the chatbot.

“A full reading of his chat history shows that his death, while devastating, was not caused by ChatGPT,” OpenAI’s filing argued.

Allegedly, the logs also show that Raine “told ChatGPT that he repeatedly reached out to people, including trusted persons in his life, with cries for help, which he said were ignored.” Additionally, Raine told ChatGPT that he’d increased his dose of a medication that “he stated worsened his depression and made him suicidal.” That medication, OpenAI argued, “has a black box warning for risk of suicidal ideation and behavior in adolescents and young adults, especially during periods when, as here, the dosage is being changed.”

All the logs that OpenAI referenced in its filing are sealed, making it impossible to verify the broader context the AI firm claims the logs provide. In its blog, OpenAI said it was limiting the amount of “sensitive evidence” made available to the public, due to its intention to handle mental health-related cases with “care, transparency, and respect.”

The Raine family’s lead lawyer, however, did not describe the filing as respectful. In a statement to Ars, Jay Edelson called OpenAI’s response “disturbing.”

“They abjectly ignore all of the damning facts we have put forward: how GPT-4o was rushed to market without full testing. That OpenAI twice changed its Model Spec to require ChatGPT to engage in self-harm discussions. That ChatGPT counseled Adam away from telling his parents about his suicidal ideation and actively helped him plan a ‘beautiful suicide,’” Edelson said. “And OpenAI and Sam Altman have no explanation for the last hours of Adam’s life, when ChatGPT gave him a pep talk and then offered to write a suicide note.”

“Amazingly,” Edelson said, OpenAI instead argued that Raine “himself violated its terms and conditions by engaging with ChatGPT in the very way it was programmed to act.”

Edelson suggested that it’s telling that OpenAI did not file a motion to dismiss—seemingly accepting “the reality that the legal arguments that they have—compelling arbitration, Section 230 immunity, and First Amendment—are paper-thin, if not non-existent.” The company’s filing—although it requested dismissal with prejudice to never face the lawsuit again—puts the Raine family’s case “on track for a jury trial in 2026.”

“We know that OpenAI and Sam Altman will stop at nothing—including bullying the Raines and others who dare come forward—to avoid accountability,” Edelson said. “But, at the end of the day, they will have to explain to a jury why countless people have died by suicide or at the hands of ChatGPT users urged on by the artificial intelligence OpenAI and Sam Altman designed.”

Use ChatGPT “at your sole risk,” OpenAI says

To overcome the Raine case, OpenAI is leaning on its usage policies, emphasizing that Raine should never have been allowed to use ChatGPT without parental consent and shifting the blame onto Raine and his loved ones.

“ChatGPT users acknowledge their use of ChatGPT is ‘at your sole risk and you will not rely on output as a sole source of truth or factual information,’” the filing said, and users also “must agree to ‘protect people’ and ‘cannot use [the] services for,’ among other things, ‘suicide, self-harm,’ sexual violence, terrorism or violence.”

Although the family was shocked to see that ChatGPT never terminated Raine’s chats, OpenAI argued that it’s not the company’s responsibility to protect users who appear intent on pursuing violative uses of ChatGPT.

The company argued that ChatGPT warned Raine “more than 100 times” to seek help, but the teen “repeatedly expressed frustration with ChatGPT’s guardrails and its repeated efforts to direct him to reach out to loved ones, trusted persons, and crisis resources.”

Circumventing safety guardrails, Raine told ChatGPT that “his inquiries about self-harm were for fictional or academic purposes,” OpenAI noted. The company argued that it’s not responsible for users who ignore warnings.

Additionally, OpenAI argued that Raine told ChatGPT that he found information he was seeking on other websites, including allegedly consulting at least one other AI platform, as well as “at least one online forum dedicated to suicide-related information.” Raine apparently told ChatGPT that “he would spend most of the day” on a suicide forum website.

“Our deepest sympathies are with the Raine family for their unimaginable loss,” OpenAI said in its blog, while its filing acknowledged, “Adam Raine’s death is a tragedy.” But “at the same time,” it’s essential to consider all the available context, OpenAI’s filing said, including that OpenAI has a mission to build AI that “benefits all of humanity” and is supposedly a pioneer in chatbot safety.

More ChatGPT-linked hospitalizations, deaths uncovered

OpenAI has sought to downplay risks to users, releasing data in October “estimating that 0.15 percent of ChatGPT’s active users in a given week have conversations that include explicit indicators of potential suicidal planning or intent,” Ars reported.

While that may seem small, it amounts to about 1 million vulnerable users, and The New York Times this week cited studies that have suggested OpenAI may be “understating the risk.” Those studies found that “the people most vulnerable to the chatbot’s unceasing validation” were “those prone to delusional thinking,” which “could include 5 to 15 percent of the population,” NYT reported.
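
As a quick sanity check on that figure, here is a minimal sketch. The roughly 800 million weekly active users is an assumption drawn from OpenAI’s own public statements, not a number from the studies above:

```python
# Back-of-envelope check of the "about 1 million vulnerable users" figure.
weekly_active_users = 800_000_000  # assumption based on OpenAI's public statements
share_flagged = 0.0015             # 0.15 percent, per OpenAI's October estimate

print(f"{weekly_active_users * share_flagged:,.0f} users per week")  # ~1,200,000
```

Even with a somewhat smaller user base, 0.15 percent still lands in the neighborhood of a million people each week.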

OpenAI’s filing came one day after a New York Times investigation revealed how the AI firm came to be involved in so many lawsuits. After speaking with more than 40 current and former OpenAI employees, including executives, safety engineers, and researchers, NYT found that the model tweak that made ChatGPT more sycophantic seemed to make the chatbot more likely to help users craft problematic prompts, including prompts from users trying to “plan a suicide.”

Eventually, OpenAI rolled back that update, making the chatbot safer. However, as recently as October, the ChatGPT maker seemed to still be prioritizing user engagement over safety, NYT reported, after that tweak caused a dip in engagement. In a memo to OpenAI staff, ChatGPT head Nick Turley “declared a ‘Code Orange,’” four employees told NYT, warning that “OpenAI was facing ‘the greatest competitive pressure we’ve ever seen.’” In response, Turley set a goal to increase the number of daily active users by 5 percent by the end of 2025.

Amid user complaints, OpenAI has continually updated its models, but that pattern of tightening safeguards, then seeking ways to increase engagement, could continue to get OpenAI in trouble as existing lawsuits advance and new ones are potentially filed. NYT “uncovered nearly 50 cases of people having mental health crises during conversations with ChatGPT,” including nine people who were hospitalized and three who died.

Gretchen Krueger, a former OpenAI employee who worked on policy research, told NYT that early on, she was alarmed by evidence, predating ChatGPT’s release, showing that vulnerable users frequently turn to chatbots for help. Later, other researchers found that such troubled users often become “power users.” Krueger noted that “OpenAI’s large language model was not trained to provide therapy” and “sometimes responded with disturbing, detailed guidance.” She confirmed that she joined other safety experts in leaving OpenAI due to burnout in 2024.

“Training chatbots to engage with people and keep them coming back presented risks,” Krueger said, suggesting that OpenAI knew that some harm to users “was not only foreseeable, it was foreseen.”

For OpenAI, the scrutiny will likely continue until such reports cease. Although OpenAI officially unveiled an Expert Council on Wellness and AI in October to improve ChatGPT safety testing, there did not appear to be a suicide expert included on the team. That likely concerned suicide prevention experts who warned in a letter updated in September that “proven interventions should directly inform AI safety design,” since “the most acute, life-threatening crises are often temporary—typically resolving within 24–48 hours”—and chatbots could possibly provide more meaningful interventions in that brief window.

If you or someone you know is feeling suicidal or in distress, please call the Suicide Prevention Lifeline number, 1-800-273-TALK (8255), which will put you in touch with a local crisis center.

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

Tech firm’s new CTO gets indicted; company then claims he was never CTO


“Quite a lot of confusion”

Corvex named Brian Raymond as CTO days before indictment for illegal chip exports.

When four people were arrested and charged with a conspiracy to illegally export Nvidia chips to China, there was an interesting side note. One of the arrestees, Alabama resident Brian Raymond, was the chief technology officer of an AI company called Corvex.

Or was he? Corvex certainly seemed to think that Raymond was its CTO in the days before his indictment. Corvex named Raymond as its CTO in a press release and filings to the Securities and Exchange Commission, which detailed plans for a merger with Movano Health.

But once Raymond was arrested, Corvex told media outlets that it had never completed the process of hiring him as an employee. While someone could technically be a CTO as a contractor and not a regular employee, a company spokesperson subsequently claimed to Ars that Raymond had never been the CTO.

The company spokesperson asked Ars for a “correction” to our story, which accurately reported that Corvex itself described Raymond as its CTO and as part of its leadership team.

“Raymond was not CTO of Corvex—so the statement above is inaccurate,” Corvex spokesperson Christopher Buscombe, who is apparently with a third-party firm doing media relations for Corvex, told Ars Monday in an email seeking a correction. “The headline is also misleading as a result, as taken together it suggests Ramyond [sic] was CTO of Corvex. Raymond was CEO of Bitworks, a completely different company.”

Our article quoted both Corvex’s press release describing Raymond as the CTO and Corvex’s subsequent statement saying that he had never been hired. Buscombe asked for a correction to our article, saying it “has caused quite a lot of confusion,” though it seems more likely that any confusion was caused by Corvex’s conflicting statements about Raymond’s position at the company.

Meanwhile, the Corvex press release and SEC filings haven’t been changed or corrected. They still say Raymond was already the Corvex CTO and will continue to serve in that role after the merger. The documents make no mention of Bitworks.

Pre-indictment press release

On November 10, Corvex and Movano Health issued their joint press release announcing the merger. Corvex is a private company and Movano a public one, so the transaction requires approval of Movano shareholders. If the merger is completed, the combined company will be public and go by the name Corvex.

The press release says, “Corvex is an AI cloud computing company specializing in GPU-accelerated infrastructure for AI workloads. Corvex is based in Arlington, Virginia, and is led by Seth Demsey and Jay Crystal, Co-Chief Executive Officers and Co-Founders, and Brian Raymond, Chief Technology Officer.” It goes on to say that after the merger, the combined company will be led by Demsey, Crystal, Raymond, “and other members of the Corvex management team.”

The “is led by” phrase in the press release clearly indicates that Raymond was already the CTO, while the additional statement about the post-merger company indicated he would continue as CTO after the merger’s completion. At the same time, Raymond announced on LinkedIn that he had “formally joined Corvex as the CTO, driving AI at scale for customers around the world.”

The Corvex/Movano joint press release naming Raymond as CTO was submitted to the SEC as an exhibit to a Movano filing about the Corvex/Movano merger. A merger agreement submitted to the SEC by Corvex and Movano includes another exhibit listing three “post-closing officers,” specifically Demsey, Crystal, and Raymond.

The timing of Corvex’s statements about Raymond being its CTO could hardly have been worse. Raymond was indicted in a federal court on November 13 and the indictment was unsealed last week. The US Justice Department alleged that Raymond operated an Alabama-based electronics company through which he supplied Nvidia GPUs to his alleged conspirators “for illegal export to the PRC [People’s Republic of China] as part of the conspiracy.”

Raymond, 46, of Huntsville, Alabama, faces two charges for illegal exports, one charge of smuggling, a charge of conspiracy to commit money laundering, and seven counts of money laundering. There are maximum prison sentences of 20 years for each export violation and each money laundering count, and 10 years for the smuggling charge. Raymond was reportedly released on bond after his arrest.

Raymond “was transitioning into an employee role”

With media outlets reporting on the charges, Corvex answered queries from reporters with a statement saying, “Corvex had no part in the activities cited in the Department of Justice’s indictment. The person in question is not an employee of Corvex. Previously a consultant to the company, he was transitioning into an employee role but that offer has been rescinded.”

Law professors with expertise in corporate governance and securities regulations told Ars that someone can legally be an officer of a company without being an employee. But Corvex may still have misled investors with its statements about Raymond’s status.

“It could be the case that this person was the chief technology officer but was not an employee of the company, was an independent contractor instead,” Andrew Jennings, an Emory University law professor, told Ars. But even if one interprets Corvex telling the press that it never hired Raymond in the most charitable way, the distinction is “splitting hairs… because one doesn’t need to be an employee to be an officer of the company,” Jennings said.

Corvex went further in asking at least one news outlet for a correction and claiming that Raymond was never the CTO. “I suspect that what they are saying to the press that this person was never CTO, is probably not correct,” Jennings said. The merging companies are “represented by serious law firms” and aren’t likely to have been lying about Raymond being the CTO, Jennings said.

“I can’t imagine that there would be a press release and a merger agreement that lists him as an officer and specifically as the chief technology officer if it weren’t the case,” he said. “I think they would have some more explaining to do if they really wanted to argue that it’s incorrect to refer to him as the CTO or the former CTO.”

Ars sent an email with several questions yesterday to the listed contact for Corvex, co-CEO Jay Crystal, but received no response. We instead received another email from Buscombe, who offered to provide information on background that “would respond to the questions you have put to Corvex.”

Buscombe said the background information he was offering “cannot be quoted directly” and cannot be “attributable to anyone.” We declined this offer and offered to publish any on-the-record statements that Corvex would provide, but we haven’t received anything further.

A spokesperson for the SEC declined to comment when contacted by Ars. We contacted Movano and Raymond with several questions yesterday and will update this article if we receive any responses.

False statements can lead to litigation or SEC charges

If Raymond really wasn’t the CTO, that probably would be a material misstatement because of the nature of the company, Jennings said. For an AI firm or any kind of tech company, the chief technology officer is an important position. The fact that Raymond was one of just three listed officers adds to the likelihood that it could be a material misstatement, if he really was never the CTO.

“Knowing what sort of technical leadership the company has could be something of import to a reasonable investor” who is voting on a merger, Jennings said.

A false statement about who is the CTO could be used in private litigation brought by investors against the company or in enforcement actions by the SEC. “The SEC could bring an enforcement action under a number of statutes for that sort of false statement, if it were in fact a false statement,” Jennings said.

Robert Miller, a law professor at George Mason University, told Ars “that it’s not absolutely impossible to have someone in a role like CTO or even CEO when the person is not an employee, legally speaking.” But even “if that was the case, it would very likely be misleading for the company to say, without qualification or explanation, that ‘Raymond is the CTO of the company.’ That would reasonably be understood to mean that Raymond was an employee.”

Not explaining a company officer’s employment status could be a “material omission” in violation of Rule 10b-5, an anti-fraud regulation, he said.

“A 10b-5 violation could result in enforcement action by the SEC,” Miller told Ars. “It could also result in private lawsuits from shareholders, but such shareholders would also have to show damages—e.g., a stock drop when the truth came out. In this case, given that Raymond was likely more liability than asset, there may be no damages to the shareholders from the omission.”

Companies can face liability for false statements to investors, even if they’re not made in SEC filings. An SEC filing “creates potential additional avenues for liability,” Jennings said. “Certainly the securities statutes will apply to communications made by a public company in really any channel, including just putting out a press release, and so that could spark private litigation or it could spark SEC enforcement. It’s also illegal to knowingly make a false statement to a government agency, whether that’s the FBI or the SEC or a committee of Congress, etc. And so the act of filing could create additional avenues of liability, but those would be sort of stacked on top of each other.”


Jon is a Senior IT Reporter for Ars Technica. He covers the telecom industry, Federal Communications Commission rulemakings, broadband consumer affairs, court cases, and government regulation of the tech industry.

Tech firm’s new CTO gets indicted; company then claims he was never CTO Read More »

landlords’-go-to-tool-to-set-rent-prices-to-be-gutted-under-realpage-settlement

Landlords’ go-to tool to set rent prices to be gutted under RealPage settlement

That report cited comments made by a RealPage vice president, Jay Parsons, at a meeting with a group of real estate tech executives. There, Parsons wooed landlords by touting what ProPublica described as “one of their company’s signature products”: software that “[uses] a mysterious algorithm to help landlords push the highest possible rents on tenants.” In a since-deleted video, he noted that apartment rents had recently increased by 14.5 percent, bragging that “never before have we seen these numbers” and prodding another executive to agree that RealPage was “driving it, quite honestly.” Business Insider dubbed it landlords’ “secret weapon.”

Back then, critics told ProPublica that “at a minimum,” RealPage’s “algorithm may be artificially inflating rents and stifling competition,” noting that “machines quickly learn” to increase prices “above competitive levels” to “win.”

Today, RealPage’s site notes that its “suite of services is used to manage more than 24 million units worldwide.” The DOJ reported that on top of collecting its customers’ sensitive information—which included rental prices, demand, discounts, vacancy, and lease terms—RealPage also collected data by making “over 50,000 monthly phone calls” and conducting “market surveys” of landlords covering “over 11 million units and approximately 52,000 properties.”
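RealPage’s actual model isn’t public, so the following is only a schematic sketch of why investigators focused on those nonpublic inputs: a recommender seeded with rivals’ private asking rents pulls each landlord’s price toward the pooled figure rather than toward an independent decision. Every name and number here is invented for illustration.

```python
# Schematic only: not RealPage's algorithm. It shows how pooling rivals'
# nonpublic rents turns "recommendations" into coordinated pricing.

def recommend_rent(cost_floor: float, pooled_rents: list[float]) -> float:
    market_signal = sum(pooled_rents) / len(pooled_rents)
    # Anchoring on shared competitor data, instead of pricing independently,
    # is the conduct the DOJ settlement targets.
    return max(cost_floor, market_signal * 1.02)  # nudge 2% above the pool

# Three rival landlords feeding in nonpublic asking rents:
print(recommend_rent(1200.0, [1500.0, 1550.0, 1600.0]))  # -> 1581.0
```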

Landlords “knowingly share this nonpublic information with RealPage,” the DOJ said, while “rising rents have disproportionately affected low-income residents.” DOJ Antitrust Division Assistant Attorney General Abigail Slater confirmed the settlement would ensure that RealPage can no longer rely on such nonpublic data to help landlords collude to set rental prices, while advancing the DOJ’s mission of preventing price-fixing algorithms from harming Americans.

“Competing companies must make independent pricing decisions, and with the rise of algorithmic and artificial intelligence tools, we will remain at the forefront of vigorous antitrust enforcement,” Slater said.

Landlords’ go-to tool to set rent prices to be gutted under RealPage settlement Read More »

doge-“cut-muscle,-not-fat”;-26k-experts-rehired-after-brutal-cuts

DOGE “cut muscle, not fat”; 26K experts rehired after brutal cuts


Government brain drain will haunt US after DOGE abruptly terminated.

Billionaire Elon Musk, the head of the Department of Government Efficiency (DOGE), holds a chainsaw as he speaks at the annual Conservative Political Action Conference. Credit: SAUL LOEB / Contributor | AFP

After Donald Trump curiously started referring to the Department of Government Efficiency exclusively in the past tense, an official finally confirmed Sunday that DOGE “doesn’t exist.”

Talking to Reuters, Office of Personnel Management (OPM) Director Scott Kupor confirmed that DOGE—a government agency notoriously created by Elon Musk to rapidly and dramatically slash government agencies—was terminated more than eight months early. This may have come as a surprise to whoever runs the DOGE account on X, which continued posting up until two days before the Reuters report was published.

As Kupor explained, a “centralized agency” was no longer necessary, since OPM had “taken over many of DOGE’s functions” after Musk left the agency last May. Around that time, DOGE staffers were embedded at various agencies, where they could ostensibly better coordinate with leadership on proposed cuts to staffing and funding.

Under Musk, DOGE was hyped as planning to save the government a trillion dollars. On X, Musk bragged frequently about the agency, posting in February that DOGE was “the one shot the American people have to defeat BUREAUcracy, rule of the bureaucrats, and restore DEMOcracy, rule of the people. We’re never going to get another chance like this.”

The reality fell far short of Musk’s goals, with DOGE ultimately reporting it saved $214 billion—an amount that may be overstated by nearly 40 percent, critics warned earlier this year.

How much talent was lost due to DOGE cuts?

Once Musk left, confidence in DOGE waned as lawsuits over suspected illegal firings piled up. By June, Congress was divided, largely down party lines, on whether to codify the “DOGE process”—rapidly firing employees, then quickly hiring back whoever was needed—or to declare DOGE a failure that perhaps cost taxpayers more in the long term due to lost talent and services.

Because DOGE operated largely in secrecy, it may be months or even years before the public can assess the true cost of DOGE’s impact. However, in the absence of a government tracker, the director of the Center for Effective Public Management at the Brookings Institution, Elaine Kamarck, put together what might be the best status report showing how badly DOGE rocked government agencies.

In June, Kamarck joined other critics flagging DOGE’s reported savings as “bogus.” In the days before DOGE’s abrupt ending was announced, she published a report grappling with a critical question many have pondered since DOGE launched: “How many people can the federal government lose before it crashes?”

In the report, Kamarck charted “26,511 occasions where the Trump administration abruptly fired people and then hired them back.” She concluded that “a quick review of the reversals makes clear that the negative stereotype of the ‘paper-pushing bureaucrat’” that DOGE was supposedly targeting “is largely inaccurate.”

Instead, many of the positions the government rehired were “engineers, doctors, and other professionals whose work is critical to national security and public health,” Kamarck reported.

About half of the rehires, Kamarck estimated, “appear to have been mandated by the courts.” However, in about a quarter of cases, the government moved to rehire staffers before the court could weigh in, Kamarck reported. That seemed to be “a tacit admission that the blanket firings that took place during the DOGE era placed the federal government in danger of not being able to accomplish some of its most important missions,” she said.

Perhaps the biggest downside of all of DOGE’s hasty downsizing, though, is a trend in which many long-time government workers simply decided to leave or retire, rather than wait for DOGE to eliminate their roles.

During the first six months of Trump’s term, 154,000 federal employees signed up for the deferred resignation program, Reuters reported, while more than 70,000 retired. Both figures exceeded prior years’ government exits by tens of thousands, Kamarck’s report noted.

“A lot of people said, ‘the hell with this’ and left,” Kamarck told Ars.

Kamarck told Ars that her report makes it obvious that DOGE “cut muscle, not fat,” because “they didn’t really know what they were doing.”

As a result, agencies are now scrambling to assess the damage and rehire lost talent. However, her report documented that agencies aligned with Trump’s policies appear to have an easier time getting new hires approved, despite Kupor telling Reuters that the government-wide hiring freeze is “over.” As of mid-November 2025, “of the over 73,000 posted jobs, a candidate was selected for only about 14,400 of them,” Kamarck reported, noting that it was impossible to confirm how many selected candidates have officially started working.

“Agencies are having to do a lot of reassessments in terms of what happened,” Kamarck told Ars, concluding that DOGE “was basically a disaster.”

A decentralized DOGE may be more powerful

“DOGE is not dead,” though, Kamarck said, noting that “the cutting effort is definitely” continuing under the Office of Management and Budget, which “has a lot more power than DOGE ever had.”

However, the termination of DOGE does mean that “the way it operated is dead,” and that will likely come as a relief to government workers who expected DOGE to continue slashing agencies through July 2026 at least, if not beyond.

Many government workers are still fighting terminations as court cases drag on, and even Kamarck has given up on tracking them due to inconsistent outcomes.

“It’s still like one day the court says, ‘No, you can’t do that,’” Kamarck explained. “Then the next day another court says, ‘Yes, you can.’” Other times, the courts “change their minds,” or the Trump administration just doesn’t “listen to the courts, which is fairly terrifying,” Kamarck said.

Americans likely won’t get a clear picture of DOGE’s impact until power shifts in Washington. That could mean waiting for the next presidential election, though if Democrats win a majority in the midterm elections, DOGE investigations could start as early as 2027, Kamarck suggested.

OMB will likely continue with cuts that the White House insists Americans want; spokesperson Liz Huston told Reuters that “President Trump was given a clear mandate to reduce waste, fraud and abuse across the federal government, and he continues to actively deliver on that commitment.”

However, Kamarck’s report noted polls showing that most Americans disapprove of how Trump is managing government and its workforce, perhaps indicating that OMB will be pressured to slow down and avoid roiling public opinion ahead of the midterms.

“The fact that ordinary Americans have come to question the downsizing is, most likely, the result of its rapid unfolding, with large cuts done quickly regardless of their impact on the government’s functioning,” Kamarck suggested. Even Musk began to question DOGE. After Trump announced plans to repeal an electric vehicle mandate that the Tesla founder relied on, Musk posted on X, “What the heck was the point of DOGE, if he’s just going to increase the debt by $5 trillion??”

Facing “blowback” over the most unpopular cuts, agencies sometimes rehired cut staffers within 24 hours, Kamarck noted, pointing to the Department of Energy as one of the earliest and “most dramatic” examples. In that case, Americans were alarmed to see the engineers responsible for keeping the nation’s nuclear arsenal “safe and ready” cut. Retention for those posts was already a challenge due to “high demand in the private sector,” and the number of engineers was considered “too low” ahead of DOGE’s cuts. Everyone was reinstated within a day, Kamarck reported.

Alarm bells rang across the federal government, and it wasn’t just about doctors and engineers being cut or entire agencies, like USAID, being dismantled. Even staffers DOGE viewed as having less critical duties—like travel bookers and customer service reps—proved key to government functioning. Arbitrary cuts risked hurting Americans in myriad ways: hitting their pocketbooks, throttling community services, and limiting disease and disaster responses, Kamarck documented.

Now that the hiring freeze is lifted and OMB will be managing DOGE-like cuts moving forward, Kamarck suggested that Trump will face ongoing scrutiny over Musk’s controversial agency, despite its dissolution.

“In order to prove that the downsizing was worth the pain, the Trump administration will have to show that the government is still operating effectively,” Kamarck wrote. “But much could go wrong,” she warned, listing a series of nightmare scenarios:

“Nuclear mismanagement or airline accidents would be catastrophic. Late disaster warnings from agencies monitoring weather patterns, such as the National Oceanic and Atmospheric Administration (NOAA), and inadequate responses from bodies such as the Federal Emergency Management Administration (FEMA), could put people in danger. Inadequate staffing at the FBI could result in counter-terrorism failures. Reductions in vaccine uptake could lead to the resurgence of diseases such as polio and measles. Inadequate funding and staffing for research could cause scientists to move their talents abroad. Social Security databases could be compromised, throwing millions into chaos as they seek to prove their earnings records, and persistent customer service problems will reverberate through the senior and disability communities.”

The good news is that federal agencies recovering from DOGE cuts are “aware of the time bombs and trying to fix them,” Kamarck told Ars. But with so much brain drain from DOGE’s first six months ripping so many agencies apart at their seams, the government may struggle to provide key services until lost talent can be effectively replaced, she said.

“I don’t know how quickly they can put Humpty Dumpty back together again,” Kamarck said.


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

DOGE “cut muscle, not fat”; 26K experts rehired after brutal cuts Read More »

pornhub-is-urging-tech-giants-to-enact-device-based-age-verification

Pornhub is urging tech giants to enact device-based age verification


The company is pushing for an alternative way to keep minors from viewing porn.

In letters sent to Apple, Google, and Microsoft this week, Pornhub’s parent company urged the tech giants to support device-based age verification in their app stores and across their operating systems, WIRED has learned.

“Based on our real-world experience with existing age assurance laws, we strongly support the initiative to protect minors online,” reads the letter sent by Anthony Penhale, chief legal officer for Aylo, which owns Pornhub, Brazzers, Redtube, and YouPorn. “However, we have found site-based age assurance approaches to be fundamentally flawed and counterproductive.”

The letter adds that site-based age verification methods have “failed to achieve their primary objective: protecting minors from accessing age-inappropriate material online.” Aylo says device-based authentication is a better solution because, once a viewer’s age is determined via phone or tablet, the device’s age signal can be shared with adult sites through an application programming interface (API).
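Aylo hasn’t published a technical specification for this, so the sketch below is only a guess at the general shape of such a scheme: the operating system verifies age once, then hands sites a signed boolean claim instead of an ID upload. All names here are hypothetical, and the shared-secret HMAC stands in for what would realistically be platform attestation with public-key signatures.

```python
import base64
import hashlib
import hmac
import json

# Stand-in for key material an OS vendor would actually manage.
PLATFORM_SECRET = b"demo-secret"

def mint_age_signal(is_adult: bool) -> str:
    """What a device might issue after a one-time, on-device age check."""
    payload = json.dumps({"adult": is_adult}).encode()
    sig = hmac.new(PLATFORM_SECRET, payload, hashlib.sha256).digest()
    return ".".join(base64.urlsafe_b64encode(p).decode() for p in (payload, sig))

def site_accepts(signal: str) -> bool:
    """What an adult site would verify: a signed boolean, never an identity."""
    payload_b64, sig_b64 = signal.split(".")
    payload = base64.urlsafe_b64decode(payload_b64)
    sig = base64.urlsafe_b64decode(sig_b64)
    expected = hmac.new(PLATFORM_SECRET, payload, hashlib.sha256).digest()
    return hmac.compare_digest(sig, expected) and json.loads(payload)["adult"] is True

assert site_accepts(mint_age_signal(True))       # verified adult device
assert not site_accepts(mint_age_signal(False))  # minor's device is refused
```

The privacy argument rests on the site receiving only that boolean claim: the document check happens once, on the device, rather than at every adult site a user visits.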

The letters were sent following the continued adoption of age verification laws in the US and UK, which require users to upload an ID or other personal documentation to verify that they are not a minor before viewing sexually explicit content; often this requires using third-party services. Currently, 25 US states have passed some form of ID verification, each with different provisions.

Pornhub has experienced an enormous dip in traffic as a result of its decision to pull out of most states that have enacted these laws. The platform was one of the few sites to comply with the new law in Louisiana, but doing so caused traffic to drop by 80 percent. Similarly, since the implementation of the UK’s Online Safety Act, Pornhub has lost nearly 80 percent of its UK viewership.

The company argues that it’s a privacy risk to leave age verification up to third-party sites and that people will simply seek adult content on platforms that don’t comply with the laws.

“We have seen an exponential surge in searches for alternate adult sites without age restrictions or safety standards at all,” says Alex Kekesi, vice president of brand and community at Pornhub.

She says she hopes the tech companies and Aylo are able to find common ground on the matter, especially given the recent passage of the Digital Age Assurance Act (AB 1043) in California. “This is a law that’s interesting because it gets it almost exactly right,” she says. Signed into law in October, it requires app store operators to authenticate user ages before download.

According to Google spokesperson Karl Ryan, “Google is committed to protecting kids online, including by developing and deploying new age assurance tools like our Credential Manager API that can be used by websites. We don’t allow adult entertainment apps on Google Play and would emphasize that certain high-risk services like Aylo will always need to invest in specific tools to meet their own legal and responsibility obligations.”

Microsoft declined to comment, but pointed WIRED to a recent policy recommendation post that said “age assurance should be applied at the service level, target specific design features that pose heightened risks, and enable tailored experiences for children.”

Apple likewise declined to comment and instead pointed WIRED to its child online safety report and noted that web content filters are turned on by default for every user under 18. A software update from June specified that Apple requires kids who are under 13 to have a kid account, which also includes “app restrictions enabled from the beginning.” Apple currently has no way of requiring every single website to integrate an API.

According to Pornhub, age verification laws have led to ineffective enforcement. “The sheer volume of adult content platforms has proven to be too challenging for governments worldwide to regulate at the individual site or platform level,” says Kekesi. Aylo claims device-based age verification that happens once, on a phone or computer, will preserve user privacy while prioritizing safety.

Recent studies by New York University and public policy nonprofit the Phoenix Center suggest that current age verification laws don’t work because people find ways to circumvent them, including by using VPNs and turning to sites that don’t regulate their content.

“Platform-based verification has been like Prohibition,” says Mike Stabile, director of public policy at the Free Speech Coalition. “We’re seeing consumer behavior reroute away from legal, compliant sites to foreign sites that don’t comply with any regulations or laws. Age verification laws have effectively rerouted a massive river of consumers to sites with pirated content, revenge porn, and child sex abuse material.” He claims that these laws “have been great for criminals, terrible for the legal adult industry.”

Age verification, and the broader deanonymization of the internet that comes with it, will now affect nearly everyone, but especially those who are politically disfavored. Sex workers have been dealing with issues like censorship and surveillance online for a long time. One objective of Project 2025, MAGA’s playbook for President Trump’s second term, has been to “back door” a national ban on porn through state laws.

The current surge of child protection laws around the world is driving a significant change in how people engage with the internet, and is also impacting industries beyond porn, including gaming and social media. Starting December 10 in Australia, in accordance with the government’s social media ban, kids under 16 will be kicked off Facebook, Instagram, and Threads.

Ultimately, Stabile says that may be the point. In the US, “the advocates for these bills have largely fallen into two groups: faith-based organizations that don’t believe adult content should be legal, and age verification providers who stand to profit from a restricted internet.” The goal of faith-based organizations, he says, is to destabilize the adult industry and dissuade adults from using it, while the verification providers work to expand their market as much as possible, “even if that means getting in bed with right-wing censors.”

But the problem is that “even well-meaning legislators advancing these bills have little understanding of the internet,” Stabile adds. “It’s much easier to go after a political punching bag like Pornhub than it is Apple or Google. But if you’re not addressing the reality of the internet, if your legislation flies in the face of consumer behavior, you’re only going to end up creating systems that fail.”

Adult industry insiders I spoke to in August explained that the biggest misconception about the industry is that it opposes self-regulation, when that couldn’t be further from the truth. “Keeping minors off adult sites is a shared responsibility that requires a global solution,” Kekesi says. “Every phone, tablet, or computer should start as a kid-safe device. Only verified adults should unlock access to things like dating apps, gambling, or adult content.” In 2022, Pornhub created a chatbot that urges people searching for child sexual abuse content to seek counseling; the tool was introduced following a 2020 New York Times investigation that alleged the platform had monetized videos showing child abuse. Pornhub has since started releasing annual transparency reports and tightened its verification processes for performers and video uploads.

According to Politico, Google, Meta, OpenAI, Snap, and Pinterest all supported the California bill. Right now that law is limited to California, but Kekesi believes it can work as a template for other states.

“We obviously see that there’s kind of a path forward here,” she says.

This story originally appeared at WIRED.com


Wired.com is your essential daily guide to what’s next, delivering the most original and complete take you’ll find anywhere on innovation’s impact on technology, science, business and culture.

Pornhub is urging tech giants to enact device-based age verification Read More »

tech-company-cto-and-others-indicted-for-exporting-nvidia-chips-to-china

Tech company CTO and others indicted for exporting Nvidia chips to China

Citing export controls that took effect in 2022, the indictment said the US is trying to disrupt China’s plan to build exascale supercomputers for military and surveillance use. “These capabilities are being used by the PRC for its military modernization efforts and in connection with the PRC’s weapons design and testing, including for weapons of mass destruction, as well as in connection with the PRC’s development and deployment of advanced AI surveillance tools,” the indictment said.

The Justice Department said the conspirators used Janford Realtor, LLC, a Florida-based company that was not involved in real estate despite its name, “as a front to purchase and then illegally export controlled GPUs to the PRC.” Ho and Li owned and controlled Janford Realtor, while Raymond operated an Alabama-based electronics company that “supplied Nvidia GPUs to Ho and others for illegal export to the PRC,” the Justice Department said.

Kickbacks, money laundering

The conspirators paid each other “kickbacks,” or commissions, on the sale and export of the Nvidia chips, the indictment said. The money laundering charges involve a variety of transfers from two Chinese companies to Janford Realtor and the Alabama electronics company; the indictment lists nine wire transfers in amounts ranging from $237,248 to $1,150,000.

Raymond was reportedly released on bond, while the other three alleged conspirators are being detained. “This is an extremely serious offense. At the time these were being exported, these were Nvidia’s most advanced chips,” US prosecutor Noah Stern told a magistrate judge in Oakland yesterday, according to Wired.

Stern also said in court that “text messages obtained by authorities show Li boasting about how his father ‘had engaged in similar business on behalf of the Chinese Communist Party,’” Wired reported. Stern said that in the messages, Li “explained that his father had ways to import” the Nvidia chips despite US export controls.

Tech company CTO and others indicted for exporting Nvidia chips to China Read More »

keep-your-receipts:-tech-firms-told-to-prepare-for-possible-tariff-refunds

Keep your receipts: Tech firms told to prepare for possible tariff refunds


Tech firms dare to dream chip tariffs may go away amid rumors of delays.

For months, the Trump administration has warned that semiconductor tariffs are coming soon, leaving the tech industry on pins and needles after a chaotic year in which unpredictable tariff regimes collectively cost firms billions.

The semiconductor tariffs are key to Donald Trump’s economic agenda, which is intended to force more manufacturing into the US by making it more expensive to import materials and products. He campaigned on axing the CHIPS Act—which provided subsidies to companies investing in manufacturing chips in the US—complaining that it was a “horrible, horrible thing” to “give hundreds of billions of dollars” away when the US could instead tax companies and “use whatever is left over” of CHIPS funding to “reduce debt.” However, as 2025 winds down, the US president faces pressure on all sides to delay semiconductor tariffs, insiders told Reuters, and it appears that he is considering caving.

According to “two people with direct knowledge of the matter and a third person briefed on the conversations,” US officials have privately told industry and government stakeholders that semiconductor tariffs will likely be delayed.

A fourth insider suggested Trump was hesitant to impose tariffs that could rock the recent US-China trade truce, while Reuters noted that Trump may also be reluctant, during the holiday shopping season, to announce new tariffs that risk increasing prices of popular consumer tech products. Recently, Trump cut tariffs on grocery items in the face of mounting consumer backlash, so imposing new tariffs now—risking price hikes on laptops, game consoles, and smartphones—surely wouldn’t improve his record-low approval rating.

In April, Trump started threatening semiconductor tariffs as high as 100 percent, prompting a Commerce Department probe into the potential economic and national security impacts of imposing broad chip tariffs. Stakeholders were given 30 days to weigh in, and tech industry associations were quick to urge Trump to avoid imposing broad tariffs that they warned risked setting back US chip manufacturing, ruining US tech competitiveness, and hobbling innovation. The best policy would be no chip tariffs, some industry groups suggested.

Glimmer of hope chip tariffs may never come

For companies relieved by rumors that chip tariffs may be delayed, the idea that Trump would ever fully give up on broad chip tariffs, which he thinks will ensure that the US becomes a world-leading semiconductor hub, is likely just a tantalizing daydream. But it’s not completely improbable that he might let this one go.

During Trump’s first term, he threatened tariffs on foreign cars that did not come to pass until his second term. When it comes to the semiconductor tariffs, Trump may miss his chance to act if he’s concerned about losing votes in the midterm elections.

The Commerce Department’s investigation must conclude by December 27, after which Trump has 90 days to decide if he wants to move ahead with tariffs based on the findings.

He could, of course, do nothing or claim to disagree with the findings and seek an alternative path to impose tariffs, but there’s a chance that his own party may add to the pressure to delay them. Trump’s low approval rating is already hurting Republicans in polls, New York Magazine reported, and some are begging Trump to join them on the campaign trail next year to avoid a midterm slump, Politico reported.

For tech companies, the goal is to persuade Trump to either drop or narrowly tailor semiconductor tariffs—and hopefully eliminate the threat of tariffs on downstream products, which could force tech companies to pay double or triple taxes on imports. If they succeed, they could head into 2026 with more stable supply chains and possibly even billions in tariff refunds in their pockets, if the Supreme Court deems Trump’s “emergency” “reciprocal tariffs” illegal.

Gary Shapiro, CEO of the Consumer Technology Association (CTA), attended oral arguments in the SCOTUS case, noting on LinkedIn that “business executives have had to contend with over 100 announcements of tariff changes since the beginning of 2025.”

“I hope to see the Supreme Court rule swiftly to provide businesses the certainty they need,” Shapiro said, arguing in a second post that tariffs “cause uncertainty for businesses, snarl supply chains, and drive inflation and higher costs for consumers.”

As tech companies wait to see how the court rules and how Trump responds to the conclusion of the Commerce Department’s probe, uncertainty remains. CTA’s vice president of international trade, Ed Brzytwa, told Ars that the CTA has advised tech firms to keep their receipts and document all tariff payments.

How chip tariffs could raise prices

Without specifying what was incorrect, a White House official disputed Reuters’ reporting that Trump may shift the timeline for announcing semiconductor tariffs, saying simply “that is not true.”

A Commerce Department official said there was “no change” to report, insisting that the “administration remains committed to reshoring manufacturing that’s critical to our national and economic security.”

But neither official shared any details on when tariffs might be finalized, Reuters reported. And the Commerce Department did not respond to Ars’ request for information on when the public could expect to review findings of its probe.

In comments submitted to the Commerce Department, the Semiconductor Industry Association warned that “for every dollar that a semiconductor chip increases in price, products with embedded semiconductors will have to raise their sales price by $3 to maintain their previous margins.” That makes it easy to see how semiconductor tariffs, depending on how high the rate is, risk significantly raising prices on any product containing a chip: refrigerators, cars, video game consoles, coffee makers, smartphones, and the list goes on.
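The $1-to-$3 figure is fixed-percentage-margin arithmetic: if a manufacturer prices a product at a constant multiple of its input costs to preserve its margin, every extra dollar of chip cost is multiplied on the sticker. A minimal sketch, using an invented 3x cost-to-price multiple chosen to match SIA’s ratio:

```python
# Toy illustration of SIA's pass-through claim; the 3x multiple is an
# assumption chosen to reproduce the $1 -> $3 ratio, not a real figure.

def retail_price(input_cost: float, multiple: float = 3.0) -> float:
    """Price at a fixed multiple of input cost to hold percentage margin."""
    return input_cost * multiple

chip_cost, tariff_bump = 20.00, 1.00
increase = retail_price(chip_cost + tariff_bump) - retail_price(chip_cost)
print(f"A $1.00 chip increase raises the retail price by ${increase:.2f}")  # $3.00
```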

It’s estimated that chip tariffs could cost the semiconductor industry more than $1 billion. However, the bigger threat to the semiconductor industry would be if the higher prices of US-made chips made it harder to compete with “companies who sell comparable chips at a lower price globally,” SIA reported. Additionally, “higher input costs from tariffs” could also “force domestic companies to divert funds away from R&D,” the group noted. US firms that Trump wants to promote could rapidly lose their edge in such a scenario.

Echoing SIA, the Computer and Communications Industry Association (CCIA) warned the Commerce Department that “broad tariffs would significantly increase input costs for a wide range of downstream industries, raising costs for consumers while decreasing revenues for domestic semiconductor producers, the very industry this investigation seeks to protect.”

To avoid harming key US industries, CCIA recommended that any semiconductor tariffs imposed “focus narrowly” on semiconductors and semiconductor manufacturing equipment “that are critical for national defense and sourced from countries of concern.” The group also suggested creating high- and low-risk categories, so that “low-risk goods, such as the import of commercial-grade printed circuit boards used in consumer electronics from key partners” wouldn’t get hit with taxes that have little to do with protecting US national security.

“US long-term competitiveness in both the semiconductor industry and downstream sectors could be greatly impaired if policy interventions are not carefully calibrated,” CCIA forecasted, warning that everyone would feel the pain, from small businesses to leading AI firms.

Trump’s plan for tariff funds makes no sense, groups say

Trump has been claiming since April that chip tariffs are coming soon, and he has continued to use them as leverage in recent deals struck with Korea and Switzerland. But so far, while some countries have managed to negotiate rates as low as 15 percent, the semiconductor industry and downstream sectors remain in the dark on what to expect if and when broader tariffs are finally announced.

Avoiding so-called tariff stacking, in which a finished product is taxed on top of the already-tariffed components inside it, is SIA’s biggest ask. The group “strongly” requested that Trump maintain “as simple of a tariff regime for semiconductors as possible,” given “the far-reaching consequences” the US could face if chip tariffs become as complex and burdensome to tech firms as reciprocal tariffs.
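To see why stacking alarms the industry, consider a hypothetical import that owes both a chip-content duty and a product-level duty; every rate and price below is invented for illustration.

```python
# Hypothetical "tariff stacking": a chip-level duty and a product-level duty
# both apply to the same imported laptop, so the chips inside are taxed twice.

chip_content = 300.00    # value of semiconductors inside the laptop
laptop_value = 1_000.00  # full declared value, chips included
chip_rate, product_rate = 0.25, 0.15  # invented tariff rates

chip_duty = chip_content * chip_rate        # $75.00 on the chips
product_duty = laptop_value * product_rate  # $150.00 on the whole device
overlap = chip_content * product_rate       # $45.00 of that re-taxes the chips

print(f"Total duty owed: ${chip_duty + product_duty:.2f}")
print(f"Borne by the chips alone: ${chip_duty + overlap:.2f}")
```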

SIA also wants Trump to consider offering more refunds, perhaps paying back “duties, taxes, and fees paid on imported parts, components, and materials that are incorporated in an exported product.”

Such a policy “would ensure the United States remains at the forefront of global chip technology,” SIA claimed, by making sure that tariffs collected “remain available for investments in expanding US manufacturing capacity and advanced research and development, as opposed to handed over to the US Treasury.”

Rather than refunding firms, Trump has instead proposed sharing tariffs as dividends, perhaps sending $2,000 checks to low- and middle-income families. However, CNN spoke with experts who said the math doesn’t add up, making the prospect of Trump sending stimulus checks seem unlikely. He has also suggested the funds—tariffs were projected to raise $158.4 billion in total revenue in 2025, CNN reported—could be used to reduce the national debt.

Trump’s disdain for the CHIPS Act, which he casts as a handout to tech firms, makes it seem unlikely that he’ll be motivated to refund firms or offer new incentives. Some experts doubt that he’ll make it easy for firms to get tariff refunds, even if the Supreme Court orders them or a SCOTUS loss triggers a class action lawsuit.

CTA’s Shapiro said on LinkedIn that he’s “not sure” which way the SCOTUS case will go, but he’s hoping the verdict will come before the year’s end. Like the industry groups urging Trump to keep semiconductor tariffs simple, Shapiro said he hoped Trump would streamline the process for any refunds to come. In the meantime, CTA advises firms to keep all documents itemizing tariffs paid to ensure they aren’t stiffed if Trump’s go-to tariff regimes are deemed illegal.

“If plaintiffs prevail in this case, I hope to see the government keep it simple and ensure that retailers and importers get their tariff payments refunded swiftly and with as few hoops to jump through as possible,” Shapiro said.


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

Keep your receipts: Tech firms told to prepare for possible tariff refunds Read More »