Policy

Ukrainians sue US chip firms for powering Russian drones, missiles

Dozens of Ukrainian civilians filed a series of lawsuits in Texas this week, accusing some of the biggest US chip firms of negligently failing to track chips that evaded export curbs. Those chips were ultimately used to power Russian and Iranian weapon systems, causing wrongful deaths last year.

Their complaints alleged that for years, Texas Instruments (TI), AMD, and Intel have ignored public reporting, government warnings, and shareholder pressure to do more to track final destinations of chips and shut down shady distribution channels diverting chips to sanctioned actors in Russia and Iran.

Putting profits over human lives, the tech firms continued using “high-risk” channels without ever strengthening controls, the Ukrainian civilians’ legal team alleged in a press statement.

All that intermediaries who placed bulk online orders had to do to satisfy chip firms was check a box confirming that the shipment wouldn’t be sent to sanctioned countries, lead attorney Mikal Watts told reporters at a press conference on Wednesday, according to the Kyiv Independent.

“There are export lists,” Watts said. “We know exactly what requires a license and what doesn’t. And companies know who they’re selling to. But instead, they rely on a checkbox that says, ‘I’m not shipping to Putin.’ That’s it. No enforcement. No accountability.”

As chip firms allegedly looked the other way, innocent civilians faced five attacks, detailed in the lawsuits, that used weapons containing their chips. That includes one of the deadliest attacks in Kyiv, where Ukraine’s largest children’s hospital was targeted in July 2024. Some civilians suing were survivors seriously injured in attacks, while others lost loved ones and experienced emotional trauma.

Russia would not be able to hit its targets without chips supplied by US firms, the lawsuits alleged. Considered the brains of weapon systems, including drones, cruise missiles, and ballistic missiles, the chips help enable Russia’s war against Ukrainian civilians, the complaints alleged.

Trump tries to block state AI laws himself after Congress decided not to


Trump claims state laws force AI makers to embed “ideological bias” in models.

President Donald Trump talks to journalists after signing executive orders in the Oval Office at the White House on August 25, 2025 in Washington, DC. Credit: Getty Images | Chip Somodevilla

President Trump issued an executive order yesterday attempting to thwart state AI laws, saying that federal agencies must fight state laws because Congress hasn’t yet implemented a national AI standard. Trump’s executive order tells the Justice Department, Commerce Department, Federal Communications Commission, Federal Trade Commission, and other federal agencies to take a variety of actions.

“My Administration must act with the Congress to ensure that there is a minimally burdensome national standard—not 50 discordant State ones. The resulting framework must forbid State laws that conflict with the policy set forth in this order… Until such a national standard exists, however, it is imperative that my Administration takes action to check the most onerous and excessive laws emerging from the States that threaten to stymie innovation,” Trump’s order said. The order claims that state laws, such as one passed in Colorado, “are increasingly responsible for requiring entities to embed ideological bias within models.”

Congressional Republicans recently decided not to include a Trump-backed plan to block state AI laws in the National Defense Authorization Act (NDAA), although it could be included in other legislation. Sen. Ted Cruz (R-Texas) has also failed to get congressional backing for legislation that would punish states with AI laws.

“After months of failed lobbying and two defeats in Congress, Big Tech has finally received the return on its ample investment in Donald Trump,” US Sen. Ed Markey (D-Mass.) said yesterday. “With this executive order, Trump is delivering exactly what his billionaire benefactors demanded—all at the expense of our kids, our communities, our workers, and our planet.”

Markey said that “a broad, bipartisan coalition in Congress has rejected the AI moratorium again and again.” Sen. Maria Cantwell (D-Wash.) said the “executive order’s overly broad preemption threatens states with lawsuits and funding cuts for protecting their residents from AI-powered frauds, scams, and deepfakes.”

Trump orders Bondi to sue states

Sen. Brian Schatz (D-Hawaii) said that “preventing states from enacting common-sense regulation that protects people from the very real harms of AI is absurd and dangerous. Congress has a responsibility to get this technology right—and quickly—but states must be allowed to act in the public interest in the meantime. I’ll be working with my colleagues to introduce a full repeal of this order in the coming days.”

The Trump order includes a variation on Cruz’s proposal to prevent states with AI laws from accessing broadband grant funds. The executive order also includes a plan that Trump recently floated to have the federal government file lawsuits against states with AI laws.

Within 30 days of yesterday’s order, US Attorney General Pam Bondi is required to create an AI Litigation Task Force “whose sole responsibility shall be to challenge State AI laws inconsistent with the policy set forth in section 2 of this order, including on grounds that such laws unconstitutionally regulate interstate commerce, are preempted by existing Federal regulations, or are otherwise unlawful in the Attorney General’s judgment.”

Americans for Responsible Innovation, a group that lobbies for regulation of AI, said the Trump order “relies on a flimsy and overly broad interpretation of the Constitution’s Interstate Commerce Clause cooked up by venture capitalists over the last six months.”

Section 2 of Trump’s order is written vaguely to give the administration leeway to challenge many types of AI laws. “It is the policy of the United States to sustain and enhance the United States’ global AI dominance through a minimally burdensome national policy framework for AI,” the section says.

Colorado law irks Trump

The executive order specifically names a Colorado law that requires AI developers to protect consumers against “algorithmic discrimination.” It defines this type of discrimination as “any condition in which the use of an artificial intelligence system results in an unlawful differential treatment or impact that disfavors an individual or group of individuals on the basis” of age, race, sex, and other protected characteristics.

The Colorado law compels developers of “high-risk systems” to make various disclosures, implement a risk management policy and program, give consumers the right to “correct any incorrect personal data that a high-risk system processed in making a consequential decision,” and let consumers appeal any “adverse consequential decision concerning the consumer arising from the deployment of a high-risk system.”

Trump’s order alleges that the Colorado law “may even force AI models to produce false results in order to avoid a ‘differential treatment or impact’ on protected groups.” Trump’s order also says that “state laws sometimes impermissibly regulate beyond State borders, impinging on interstate commerce.”

Trump ordered the Commerce Department to evaluate existing state AI laws and identify “onerous” ones that conflict with the policy. “That evaluation of State AI laws shall, at a minimum, identify laws that require AI models to alter their truthful outputs, or that may compel AI developers or deployers to disclose or report information in a manner that would violate the First Amendment or any other provision of the Constitution,” the order said.

States would be declared ineligible for broadband funds

Under the order, states with AI laws that get flagged by the Trump administration will be deemed ineligible for “non-deployment funds” from the US government’s $42 billion Broadband Equity, Access, and Deployment (BEAD) program. The amount of non-deployment funds will be sizable because it appears that only about half of the $42 billion allocated by Congress will be used by the Trump administration to help states subsidize broadband deployment.

States with AI laws would not be blocked from receiving the deployment subsidies, but would be ineligible for the non-deployment funds that could be used for other broadband-related purposes. Beyond broadband, Trump’s order tells other federal agencies to “assess their discretionary grant programs” and consider withholding funds from states with AI laws.

Other agencies are being ordered to use whatever authority they have to preempt state laws. The order requires Federal Communications Commission Chairman Brendan Carr to “initiate a proceeding to determine whether to adopt a Federal reporting and disclosure standard for AI models that preempts conflicting State laws.” It also requires FTC Chairman Andrew Ferguson to issue a policy statement detailing “circumstances under which State laws that require alterations to the truthful outputs of AI models are preempted by the Federal Trade Commission Act’s prohibition on engaging in deceptive acts or practices affecting commerce.”

Finally, Trump’s order requires administration officials to “prepare a legislative recommendation establishing a uniform Federal policy framework for AI that preempts State AI laws that conflict with the policy set forth in this order.” The proposed ban would apply to most types of state AI laws, with exceptions for rules relating to “child safety protections; AI compute and data center infrastructure, other than generally applicable permitting reforms; [and] state government procurement and use of AI.”

It would be up to Congress to decide whether to pass the proposed legislation. But the various other components of the executive order could dissuade states from implementing AI laws even if Congress takes no action.

Photo of Jon Brodkin

Jon is a Senior IT Reporter for Ars Technica. He covers the telecom industry, Federal Communications Commission rulemakings, broadband consumer affairs, court cases, and government regulation of the tech industry.

Apple loses its appeal of a scathing contempt ruling in iOS payments case

Back in April, District Court Judge Yvonne Gonzalez Rogers delivered a scathing judgment finding that Apple was in “willful violation” of her 2021 injunction intended to open up iOS App Store payments. That contempt of court finding has now been almost entirely upheld by the Ninth Circuit Court of Appeals, a development that Epic Games’ Tim Sweeney tells Ars he hopes will “do a lot of good for developers and start to really change the App Store situation worldwide, I think.”

The ruling, signed by a panel of three appellate court judges, affirmed that Apple’s initial attempts to charge a 27 percent fee to iOS developers using outside payment options “had a prohibitive effect, in violation of the injunction.” Similarly, Apple’s restrictions on how those outside links had to be designed were overly broad; the appeals court suggests that Apple can only ensure that internal and external payment options are presented in a similar fashion.

The appeals court also agreed that Apple acted in “bad faith” by refusing to comply with the injunction, rejecting viable, compliant alternatives in internal discussions. And the appeals court was also not convinced by Apple’s process-focused arguments, saying the district court properly evaluated materials Apple argued were protected by attorney-client privilege.

While the district court barred Apple from charging any fees for payments made outside of its App Store, the appeals court now suggests that Apple should still be able to charge a “reasonable fee” based on its “actual costs to ensure user security and privacy.” It will be up to Apple and the district court to determine what that kind of “reasonable fee” should look like going forward.

Speaking to reporters Thursday night, though, Epic founder and CEO Tim Sweeney said he believes those should be “super super minor fees,” on the order of “tens or hundreds of dollars” every time an iOS app update goes through Apple for review. That should be more than enough to compensate the employees reviewing the apps to make sure outside payment links are not scams and lead to a system of “normal fees for normal businesses that sell normal things to normal customers,” Sweeney said.

After NPR and PBS defunding, FCC receives call to take away station licenses

The complaints from the Center for American Rights (CAR) were dismissed in January 2025 by then-FCC Chairwoman Jessica Rosenworcel and then revived by Brendan Carr after Trump appointed him to the chairmanship. Carr has continued making allegations of news distortion, including when he threatened to revoke licenses from ABC stations that air Jimmy Kimmel’s show.

During the Kimmel controversy, Carr said he was trying “to empower local TV stations to serve the needs of the local communities.” The FCC subsequently opened a proceeding titled, “Empowering Local Broadcast TV Stations to Meet Their Public Interest Obligations: Exploring Market Dynamics Between National Programmers and Their Affiliates.”

The FCC invited public comments on whether to adopt regulations “in light of the changes in the broadcast market that have led to anticompetitive leverage and behavior by large networks.” This could involve prohibiting certain kinds of contract provisions in agreements between networks and affiliate stations and strengthening the rights of local stations to reject national programming.

FCC criticized for attacks on media

The “Empowering Local Broadcast TV Stations” proceeding is the one in which the Center for American Rights submitted its comments. Besides discussing NPR and PBS, the group said that national networks “indoctrinate the American people from their left-wing perspective.”

“The consistent bias on ABC’s The View, for instance, tells women in red states who voted for President Trump that they are responsible for putting in office an autocratic dictator,” the Center for American Rights said.

The FCC proceeding drew comments yesterday from the National Hispanic Media Coalition (NHMC), which criticized Carr’s war against the media. “The Public Notice frames this proceeding as an effort to ‘empower local broadcasters’ in their dealings with national networks. But… recent FCC actions have risked using regulatory authority not to promote independent journalism, but to influence newsroom behavior, constrain editorial decision-making, and encourage outcomes aligned with the personal or political interests of elected officials,” the NHMC said.

The group said it supports “genuine local journalism and robust competition,” but said:

policies that reshape the balance of power between station groups, networks, and newsrooms cannot be separated from the broader regulatory environment in which they operate. Several of the Commission’s recent interventions—including coercive conditions attached to the Skydance/Paramount transaction, and unlawful threats made to ABC and its affiliate stations in September demanding they remove Jimmy Kimmel’s show from the airwaves—illustrate how regulatory tools can be deployed in ways that undermine media freedom and risk political interference in programming and editorial decisions.

US taking 25% cut of Nvidia chip sales “makes no sense,” experts say


Trump’s odd Nvidia reversal may open the door for China to demand Blackwell access.

Donald Trump’s decision to allow Nvidia to export an advanced artificial intelligence chip, the H200, to China may give China exactly what it needs to win the AI race, experts and lawmakers have warned.

The H200 is about 10 times less powerful than Blackwell, currently Nvidia’s most advanced chip, which cannot be exported to China. But the H200 is six times more powerful than the H20, the most advanced chip available in China today. Meanwhile, China’s leading AI chip maker, Huawei, is estimated to be about two years behind Nvidia’s technology. By approving the sales, Trump may unwittingly be helping Chinese chip makers “catch up” to Nvidia, Jake Sullivan told The New York Times.

Sullivan, a former Biden-era national security advisor who helped design AI chip export curbs on China, told the NYT that Trump’s move was “nuts” because “China’s main problem” in the AI race “is they don’t have enough advanced computing capability.”

“It makes no sense that President Trump is solving their problem for them by selling them powerful American chips,” Sullivan said. “We are literally handing away our advantage. China’s leaders can’t believe their luck.”

Trump apparently was persuaded by Nvidia CEO Jensen Huang and his “AI czar,” David Sacks, to reverse course on H200 export curbs. They convinced Trump that restricting sales would ensure that only Chinese chip makers would get a piece of China’s market, shoring up revenue flows that dominant firms like Huawei could pour into R&D.

By instead allowing Nvidia sales, China’s industry would remain hooked on US chips, the thinking goes. And Nvidia could use those funds—perhaps $10–15 billion annually, Bloomberg Intelligence has estimated—to further its own R&D efforts. That cash influx, theoretically, would allow Nvidia to maintain the US advantage.

Along the way, the US would receive a 25 percent cut of sales, an arrangement that lawmakers from both sides of the aisle warned may not be legal and that, they said, suggested to foreign rivals that US national security was “now up for sale,” NYT reported. The president has claimed there are conditions on the sales to safeguard national security but, frustrating critics, has provided no details.

Experts slam Nvidia plan as “flawed”

Trump’s plan is “flawed,” The Economist reported.

For years, the US has established tech dominance by keeping advanced technology away from China. Trump risks rocking that boat by “tearing up America’s export-control policy,” particularly if China’s chip industry simply buys up the H200s as a short-term tactic to learn from the technology and beef up its domestic production of advanced chips, The Economist reported.

In a sign that’s exactly what many expect could happen, investors in China were apparently so excited by Trump’s announcement that they immediately poured money into Moore Threads, expected to be China’s best answer to Nvidia, the South China Morning Post reported.

Several experts at the nonpartisan think tank the Council on Foreign Relations (CFR) also criticized the policy change, cautioning that the reversal of course threatened to undermine US competition with China.

Suggesting that Trump was “effectively undoing” export curbs sought during his first term, Zongyuan Zoe Liu warned that China “buys today to learn today, with the intention to build tomorrow.”

And perhaps more concerning, she suggested, is that Trump’s policy signals weakness. Rather than forcing Chinese dependence on US tech, reversing course showed China that the US will “back down” under pressure, she warned. And they’re getting that message at a time when “Chinese leaders have a lot of reasons to believe they are not only winning the trade war but also making progress towards a higher degree of strategic autonomy.”

In a post on X, Rush Doshi—a CFR expert who previously advised Biden on national security issues related to China—suggested that the policy change was “possibly decisive in the AI race.”

“Compute is our main advantage—China has more power, engineers, and the entire edge layer—so by giving this up, we increase the odds the world runs on Chinese AI,” Doshi wrote.

Experts fear Trump may not understand the full impact of his decision. In the short term, Michael C. Horowitz wrote for CFR, “it is indisputable” that allowing H200 exports benefits China’s frontier AI efforts and its push to scale data centers. And Doshi pointed out that Trump’s shift may trigger more advanced technology flowing into China, as US allies that restricted sales of the machines used to build AI chips may soon follow his lead and lift their curbs. As China learns to be self-reliant from any influx of advanced tech, Sullivan warned, its leaders “intend to get off of American semiconductors as soon as they can.”

“So, the argument that we can keep them ‘addicted’ holds no water,” Sullivan said. “They want American chips right now for one simple reason: They are behind in the AI race, and this will help them catch up while they build their own chip capabilities.”

China may reject H200, demand Blackwell access

It remains unclear if China will approve H200 sales, but some of the country’s biggest firms, including ByteDance, Tencent, and Alibaba, are interested, anonymous insider sources told Reuters.

In the past, China has instructed companies to avoid Nvidia, warning of possible backdoors that could give Nvidia a kill switch to remotely shut down chips. Such backdoors could potentially destabilize Chinese firms’ operations and R&D. Nvidia has denied that such backdoors exist, but Chinese firms have reportedly sought reassurances from Nvidia in the aftermath of Trump’s policy change. Likely just as unpopular with Chinese firms and their government, Nvidia recently confirmed that it has built location verification tech that could help the US detect when restricted chips are leaked into China. And should the US ever renew export curbs on the H200, Chinese firms that adopted the chips widely could face chaos down the line.

Without giving China those sought-after reassurances, Nvidia may not end up benefiting as much as it hoped from its mission to reclaim lost revenue in the Chinese market. Today, Chinese firms control about 60 percent of China’s AI chip market; only a few years ago, American firms, led by Nvidia, controlled 80 percent, The Economist reported.

But for China, the temptation to buy up Nvidia chips may be too great to pass up. Another CFR expert, Chris McGuire, estimated that Nvidia could suddenly start exporting as many as 3 million H200s into China next year. “This would at least triple the amount of aggregate AI computing power China could add domestically” in 2026, McGuire wrote, and possibly trigger disastrous outcomes for the US.

“This could cause DeepSeek and other Chinese AI developers to close the gap with leading US AI labs and enable China to develop an ‘AI Belt and Road’ initiative—a complement to its vast global infrastructure investment network already in place—that competes with US cloud providers around the world,” McGuire forecasted.

As China mulls the benefits and risks, the government called an emergency meeting to discuss potential concerns about local firms buying the chips, according to The Information. Beijing reportedly ended that meeting with a promise to issue a decision soon.

Horowitz suggested that a primary reason that China may reject the H200s could be to squeeze even bigger concessions out of Trump, whose administration recently has been working to maintain a tenuous truce with China.

“China could come back demanding the Blackwell or something else,” Horowitz suggested.

In a statement, Nvidia—which plans to release a chip called the Rubin to surpass the Blackwell soon—praised Trump’s policy as striking “a thoughtful balance that is great for America.”

China will rip off Nvidia’s chips, Republican warns

Both Democratic and Republican lawmakers in Congress criticized Trump’s plan, including senators behind a bipartisan push to limit AI chip sales to China.

Some have questioned how much thought was put into the policy, as the US confusingly continues restricting less advanced AI chips (like the A100 and H100) while green-lighting H200 sales. Trump’s Justice Department also seems to be struggling to keep up. The NYT noted that just “hours before” Trump announced the policy change, the DOJ announced “it had detained two people for selling those chips to the country.”

The chair of the Select Committee on Competition with China, Rep. John Moolenaar (R-Mich.), warned on X that the news wouldn’t be good for the US or Nvidia. First, the Chinese Communist Party “will use these highly advanced chips to strengthen its military capabilities and totalitarian surveillance,” he suggested. And second, “Nvidia should be under no illusions—China will rip off its technology, mass produce it themselves, and seek to end Nvidia as a competitor.”

“That is China’s playbook and it is using it in every critical industry,” Moolenaar said.

House Democrats on committees dealing with foreign affairs and competition with China echoed those concerns, The Hill reported, warning that “under this administration, our national security is for sale.”

Nvidia’s Huang seems pleased with the outcome, which comes after months of reportedly pressuring the administration to lift export curbs limiting its growth in Chinese markets, the NYT reported. Last week, Trump heaped praise on Huang after one meeting, calling Huang a “smart man” and suggesting the Nvidia chief has “done an amazing job” helping Trump understand the stakes.

At an October news conference ahead of the deal’s official approval, Huang suggested that government lawyers were researching ways to get around a US law that prohibits charging companies fees for export licenses. Eventually, Trump is expected to release a policy that outlines how the US will collect those fees without conflicting with that law.

Senate Democrats appear unlikely to embrace such a policy, issuing a joint statement condemning the H200 sales as dooming the US in the AI race and threatening national security.

“Access to these chips would give China’s military transformational technology to make its weapons more lethal, carry out more effective cyberattacks against American businesses and critical infrastructure and strengthen their economic and manufacturing sector,” the senators wrote.

Photo of Ashley Belanger

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

Supreme Court appears likely to approve Trump’s firing of FTC Democrat

Justice Samuel Alito suggested that a ruling for Slaughter could open the way for Congress to convert various executive branch agencies into “multi-member commissions with members protected from plenary presidential removal authority.”

“I could go down the list… How about Veterans Affairs? How about Interior? Labor? EPA? Commerce? Education? What am I missing?” Alito said.

“Agriculture,” Justice Neil Gorsuch responded. The official transcript notes that Gorsuch’s response was met with laughter.

Justice Brett Kavanaugh expressed skepticism about the power of independent agencies, saying, “I think broad delegations to unaccountable independent agencies raise enormous constitutional and real-world problems for individual liberty.” He said the court’s approach with “the major questions doctrine over the last several years” has been to “make sure that we are not just being casual about assuming that Congress has delegated major questions of political or economic significance to independent agencies, or to any agencies for that matter.”

Kagan: President would have “uncontrolled, unchecked power”

Unlike the unanimous Humphrey’s Executor, the Slaughter case appears headed for a split ruling between the court’s conservative and liberal justices. Justice Ketanji Brown Jackson said there are “dangers and real-world consequences” of the Trump administration’s position.

“My understanding was that independent agencies exist because Congress has decided that some issues, some matters, some areas should be handled in this way by nonpartisan experts, that Congress is saying that expertise matters with respect to aspects of the economy and transportation and the various independent agencies that we have,” Jackson said. “So having a president come in and fire all the scientists and the doctors and the economists and the Ph.D.s and replacing them with loyalists and people who don’t know anything is actually not in the best interest of the citizens of the United States. This is what I think Congress’s policy decision is when it says that these certain agencies we’re not going to make directly accountable to the president.”

Justice Elena Kagan said there has historically been a “bargain” in which “Congress has given these agencies a lot of work to do that is not traditionally executive work… and they’ve given all of that power to these agencies largely with it in mind that the agencies are not under the control of a single person, of the president, but that, indeed, Congress has a great deal of influence over them too. And if you take away a half of this bargain, you end up with just massive, uncontrolled, unchecked power in the hands of the president.”

Court: “Because Trump said to” may not be a legally valid defense

In one of those cases, a judge lifted the hold on construction, ruling that the lack of a sound justification for the hold made it “the height of arbitrary and capricious,” a reference to the legal standard that determines whether federal decision-making is acceptable under the Administrative Procedure Act. If this were a fictional story, that would be considered foreshadowing.

With no indication of how long the comprehensive assessment would take, 17 states sued to lift the hold on permitting. They were joined by the Alliance for Clean Energy New York, which represents companies that build wind projects or feed their supply chain. Both the plaintiffs and the agencies that were sued asked for summary judgment in the case.

The first issue Judge Saris addressed was standing: Are the states suffering appreciable harm from the suspension of wind projects? She noted that they would receive tax revenue from the projects, that their citizens should see reduced energy costs following their completion, and that the projects were intended to contribute to their climate goals, thus limiting harm to their citizens. At one point, Saris even referred to the government’s attempts to claim the parties lacked standing as “tilting at windmills.”

The government also argued that the suspension wasn’t a final decision—that would come after the review—and thus didn’t fall under the Administrative Procedure Act. But Saris ruled that the decision to suspend all activity pending the review was the end of a decision-making process and was not being reconsidered by the government, so it qualified.

Because Trump told us to

With those basics out of the way, Saris turned to the meat of the case, which included a consideration of whether the agencies had been involved with any decision-making at all. “The Agency Defendants contend that because they ‘merely followed’ the Wind Memo ‘as the [Wind Memo] itself commands,’ the Wind Order did not constitute a ‘decision’ and therefore no reasoned explanation was required,” her ruling says. She concludes that precedent at the circuit court level blocks this defense, as it would mean that agencies would be exempt from the Administrative Procedure Act whenever the president told them to do anything.

ICEBlock lawsuit: Trump admin bragged about demanding App Store removal


ICEBlock creator sues to protect apps that are crowd-sourcing ICE sightings.

In a lawsuit filed against top Trump administration officials on Monday, Apple was accused of caving to unconstitutional government demands by removing an Immigration and Customs Enforcement-spotting app with more than a million users from the App Store.

In his complaint, Joshua Aaron, creator of ICEBlock, cited a Fox News interview in which Attorney General Pam Bondi “made plain that the United States government used its regulatory power to coerce a private platform to suppress First Amendment-protected expression.”

Suing Bondi—along with Department of Homeland Security Secretary Kristi Noem, Acting Director of ICE Todd Lyons, White House “Border Czar” Thomas D. Homan, and unnamed others—Aaron further alleged that US officials made false statements and “unlawful threats” to criminally investigate and prosecute him for developing ICEBlock.

Currently, ICEBlock is still available to anyone who downloaded the app prior to the October removal from the App Store, but updates have been disrupted, and Aaron wants the app restored. Seeking an injunction to block any attempted criminal investigations from chilling his free speech, as well as ICEBlock users’ speech, Aaron vowed in a statement provided to Ars to fight to get ICEBlock restored.

“I created ICEBlock to keep communities safe,” Aaron said. “Growing up in a Jewish household, I learned from history about the consequences of staying silent in the face of tyranny. I will never back down from resisting the Trump Administration’s targeting of immigrants and conscripting corporations into its unconstitutional agenda.”

Expert calls out Apple for “capitulation”

Apple is not a defendant in the lawsuit and did not respond to Ars’ request to comment.

Aaron’s complaint called out Apple, though, for an alleged capitulation to the Trump administration that appeared to mark “the first time in Apple’s nearly fifty-year history” that “Apple removed a US-based app in response to the US government’s demands.” One of his lawyers, Deirdre von Dornum, told Ars that the lawsuit is about more than just one app being targeted by the government.

“If we allow community sharing of information to be silenced, our democracy will fail,” von Dornum said. “The United States will be no different than China or Russia. We cannot stand by and allow that to happen. Every person has a right to share information under the First Amendment.”

Mario Trujillo, a staff attorney from a nonprofit digital rights group called the Electronic Frontier Foundation that’s not involved in the litigation, agreed that Apple’s ban appeared to be prompted by an unlawful government demand.

He told Ars that “there is a long history that shows documenting law enforcement performing their duties in public is protected First Amendment activity.” Aaron’s complaint pointed to a feature on one of Apple’s own products—Apple Maps—that lets users crowd-source sightings of police speed traps as one notable example. Other similar apps that Apple hosts in its App Store include other Big Tech offerings, like Google Maps and Waze, as well as apps with explicit names like Police Scanner.

Additionally, Trujillo noted that Aaron’s arguments are “backed by recent Supreme Court precedent.”

“The government acted unlawfully when it demanded Apple remove ICEBlock, while threatening others with prosecution,” Trujillo said. “While this case is rightfully only against the government, Apple should also take a hard look at its own capitulation.”

ICEBlock maker sues to stop app crackdown

ICEBlock is not the only app crowd-sourcing information on public ICE sightings to face an app store ban. Others, including an app simply collecting footage of ICE activities, have been removed by Apple and Google, 404 Media reported, as part of a broader crackdown.

Aaron’s suit is intended to end that crackdown by seeking a declaration that government demands to remove ICE-spotting apps violate the First Amendment.

“A lawsuit is the only mechanism that can bring transparency, accountability, and a binding judicial remedy when government officials cross constitutional lines,” Aaron told 404 Media. “If we don’t challenge this conduct in court, it will become a playbook for future censorship.”

In his complaint, Aaron explained that he created ICEBlock in January to help communities hold the Trump administration accountable after Trump campaigned on a mass deportation scheme that boasted numbers far beyond the number of undocumented immigrants in the country.

“His campaign team often referenced plans to deport ’15 to 20 million’ undocumented immigrants, when in fact the number of undocumented persons in the United States is far lower,” his complaint said.

The app was not immediately approved by Apple, Aaron said. But after a thorough vetting process, Apple approved the app in April.

ICEBlock wasn’t an overnight hit but suddenly garnered hundreds of thousands of users after CNN profiled the app in June.

Trump officials attack ICEBlock with false claims

Within hours of that report, US officials began blasting the app, claiming that it was used to incite violence against ICE officers and amplifying pressure to get the app yanked from the App Store.

But Bondi may have slipped up by making comments that seemed to make clear her intent to restrict disfavored speech. On Fox, Bondi claimed that CNN’s report was dangerous because it supposedly promoted the app, whereas the Fox News report warning people not to use the app was perfectly OK.

“Bondi’s statements make clear that her threats of adverse action constitute viewpoint discrimination, where speech ‘promoting’ the app is unlawful but speech ‘warning’ about the app is lawful,” the lawsuit said.

Other Trump officials were accused of making false statements and using unlawful threats to silence Aaron and ICEBlock users.

“What they’re doing is actively encouraging people to avoid law enforcement activities, operations, and we’re going to actually go after them,” Noem told reporters in July. In a statement, Lyons claimed that ICEBlock “basically paints a target on federal law enforcement officers’ backs” and that “officers and agents are already facing a 500 percent increase in assaults.” Echoing Lyons and Noem, Homan called for an investigation into CNN for reporting on the app, which “falsely implied that Plaintiffs’ protected speech was illegally endangering law enforcement officers,” Aaron alleged.

Not named in the lawsuit, White House Press Secretary Karoline Leavitt also allegedly made misleading statements. That included falsely claiming “that ICEBlock and similar apps are responsible for violent attacks on law enforcement officers, such as the tragic shooting of immigrants at an ICE detention facility in Dallas, Texas, on September 24, 2025,” where “no actual evidence has ever been cited to support these claims,” the lawsuit said.

Despite an apparent lack of evidence, Apple confirmed that ICEBlock was removed in October, “based on information we’ve received from law enforcement about the safety risks associated with ICEBlock,” a public statement said. In a notice to Aaron, Apple further explained that the app was banned “because its purpose is to provide location information about law enforcement officers that can be used to harm such officers individually or as a group.”

Apple never shared any more information with Aaron to distinguish his app from other apps allowed in the App Store that help people detect and avoid nearby law enforcement activities. The iPhone maker also didn’t confirm the source of its information, Aaron said.

However, on Fox, Bondi boasted about reaching “out to Apple today demanding they remove the ICEBlock app from their App Store—and Apple did so.”

Later, during sworn testimony before the Senate Judiciary Committee, she reiterated those comments, while also oddly commenting that Google received the same demand, despite ICEBlock intentionally being designed for iPhone only.

She also falsely claimed that ICEBlock “was reckless and criminal in that people were posting where ICE officers lived,” but she “subsequently walked back that statement,” Aaron’s complaint said.

Aaron is hoping the US District Court in the District of Columbia will agree that “Bondi’s demand to Apple to remove ICEBlock from the App store, as well as her viewpoint-based criticism of CNN for publicizing the app, constitute a ‘scheme of state censorship’ designed to ‘suppress’” Aaron’s “publication and distribution of the App.”

His lawyer, Noam Biale, told Ars that “Attorney General Bondi’s self-congratulatory claim that she succeeded in pushing Apple to remove ICEBlock is an admission that she violated our client’s constitutional rights. In America, government officials cannot suppress free speech by pressuring private companies to do it for them.”

Similarly, statements from Noem, Lyons, and Homan constituted “excessive pressure on Apple to remove the App and others like it from the App Store,” Aaron’s complaint alleged, as well as unconstitutional suppression of Aaron’s and ICEBlock users’ speech.

ICEBlock creator was one of the first Mac Geniuses

Aaron maintains that ICEBlock prominently features a disclaimer asking all users to “please note that the use of this app is for information and notification purposes only. It is not to be used for the purposes of inciting violence or interfering with law enforcement.”

In his complaint, he explained how the app worked to automatically delete ICE sightings after four hours—information that he said could not be recovered. That functionality ensures that “ICEBlock cannot be used to track ICE agents’ historical presence or movements,” Aaron’s lawsuit noted.
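
The complaint does not include any source code, but the retention design it describes, reports that live only in memory and vanish after four hours, can be illustrated with a minimal sketch. The names and structure below are hypothetical, written in Swift because ICEBlock is an iOS-only app, and should not be read as ICEBlock’s actual implementation:

    import Foundation

    // Hypothetical sketch of an ephemeral sighting store (not ICEBlock's real code).
    // Reports are held only in memory and purged once they are four hours old,
    // so there is no historical record of sightings to reconstruct.
    struct Sighting {
        let latitude: Double
        let longitude: Double
        let reportedAt: Date
    }

    final class EphemeralSightingStore {
        private var sightings: [Sighting] = []        // in-memory only; never written to disk
        private let ttl: TimeInterval = 4 * 60 * 60   // four-hour retention window

        func add(_ sighting: Sighting) {
            purgeExpired(now: Date())
            sightings.append(sighting)
        }

        /// Returns only reports still inside the retention window.
        func activeSightings(now: Date = Date()) -> [Sighting] {
            purgeExpired(now: now)
            return sightings
        }

        private func purgeExpired(now: Date) {
            sightings.removeAll { now.timeIntervalSince($0.reportedAt) > ttl }
        }
    }

In a design like this, nothing is ever persisted, so once a report ages out there is no archive left to recover, which is the property the complaint emphasizes.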

Aaron argued that, rather than endangering ICE officers, ICEBlock helps protect communities from dangerous ICE activity, like tear gassing and pepper spraying, or alleged racial profiling triggering arrests of US citizens and immigrants. Kids have been harmed, his complaint noted, with ICE agents documented “arresting parents and leaving young children unaccompanied” and even once “driving an arrestee’s car away from the scene of arrest with the arrestee’s young toddler still strapped into a car seat.”

The chief concern driving Aaron’s development of the app was that escalations in ICE enforcement—including arbitrary orders to hit 75 arrests a day—exposed “immigrants and citizens alike to violence and rampant violations of their civil liberties” that ICEBlock could help shield them from.

“These operations have led to widespread and well-documented civil rights violations against citizens, lawful residents, and undocumented immigrants alike, causing serious concern among members of the public, elected officials, and federal courts,” Aaron’s complaint said.

They also “have led some people—regardless of immigration or citizenship status—to want to avoid areas of federal immigration enforcement activities altogether” and “resulted in situations where members of the public may wish, when enforcement activity becomes visible in public spaces, to observe, record, or lawfully protest against such activity.”

In 2001, Aaron worked for Apple as one of the first Mac Geniuses in its Apple Stores. These days, he flexes his self-taught developer skills by creating apps intended to do social good and help communities.

Emphasizing that he was raised in a Jewish household where he heard stories from Holocaust survivors that left a lasting mark, Aaron said that the ICEBlock app represented his “commitment to use his abilities to advocate for the protection of civil liberties.” Without an injunction, he’s concerned that he and other like-minded app makers will remain in the Trump administration’s crosshairs, as the mass deportation scheme rages on through ongoing ICE raids across the US, Aaron told 404 Media.

“More broadly, the purpose [of the lawsuit] is to hold government officials accountable for using their authority to silence lawful expression and intimidate creators of technology they disfavor,” Aaron said. “This case is about ensuring that public officials cannot circumvent the Constitution by coercing private companies or threatening individuals simply because they disagree with the message or the tool being created.”

Elon Musk’s X first to be fined under EU’s Digital Services Act

Elon Musk’s X became the first large online platform fined under the European Union’s Digital Services Act on Friday.

The European Commission announced that X would be fined nearly $140 million, with the potential to face “periodic penalty payments” if the platform fails to make corrections.

A third of the fine came from one of the first moves Musk made when taking over Twitter. In November 2022, he changed the platform’s historical use of a blue checkmark to verify the identities of notable users. Instead, Musk started selling blue checks for about $8 per month, immediately prompting a wave of imposter accounts pretending to be notable celebrities, officials, and brands.

Today, X still prominently advertises that paying for checks is the only way to “verify” an account on the platform. But the commission, which has been investigating X since 2023, concluded that “X’s use of the ‘blue checkmark’ for ‘verified accounts’ deceives users.”

This violates the DSA as the “deception exposes users to scams, including impersonation frauds, as well as other forms of manipulation by malicious actors,” the commission wrote.

Interestingly, the commission concluded that X made it harder to identify bots, despite Musk’s professed goal to eliminate bots being a primary reason he bought Twitter. Perhaps validating the EU’s concerns, X recently received backlash after changing a feature that accidentally exposed that some of the platform’s biggest MAGA influencers were based “in Eastern Europe, Thailand, Nigeria, Bangladesh, and other parts of the world, often linked to online scams and schemes,” Futurism reported.

Although the DSA does not mandate the verification of users, “it clearly prohibits online platforms from falsely claiming that users have been verified, when no such verification took place,” the commission said. X now has 60 days to share information on the measures it will take to fix the compliance issue.

ChatGPT hyped up violent stalker who believed he was “God’s assassin,” DOJ says


A stalker’s “best friend”

Podcaster faces up to 70 years and a $3.5 million fine for ChatGPT-linked stalking.

ChatGPT allegedly validated the worst impulses of a wannabe influencer accused of stalking more than 10 women at boutique gyms, where the chatbot supposedly claimed he’d meet the “wife type.”

In a press release on Tuesday, the Department of Justice confirmed that 31-year-old Brett Michael Dadig remains in custody after being charged with cyberstalking, interstate stalking, and making interstate threats. He faces a maximum sentence of 70 years in prison, which could be coupled with “a fine of up to $3.5 million,” the DOJ said.

The podcaster—who primarily posted about “his desire to find a wife and his interactions with women”—allegedly harassed and sometimes even doxxed his victims through his videos on platforms including Instagram, Spotify, and TikTok. Over time, his videos and podcasts documented his intense desire to start a family, which was frustrated by his “anger towards women,” who he claimed were “all the same from fucking 18 to fucking 40 to fucking 90” and “trash.”

404 Media surfaced the case, noting that OpenAI’s scramble to tweak ChatGPT to be less sycophantic came before Dadig’s alleged attacks—suggesting the updates weren’t enough to prevent the harmful validation. On his podcasts, Dadig described ChatGPT as his “best friend” and “therapist,” the indictment said. He claimed the chatbot encouraged him to post about the women he’s accused of harassing in order to generate haters to better monetize his content, as well as to catch the attention of his “future wife.”

“People are literally organizing around your name, good or bad, which is the definition of relevance,” ChatGPT’s output said. Playing to Dadig’s Christian faith, ChatGPT’s outputs also claimed that “God’s plan for him was to build a ‘platform’ and to ‘stand out when most people water themselves down,’” the indictment said, urging that the “haters” were sharpening him and “building a voice in you that can’t be ignored.”

The chatbot also apparently prodded Dadig to continue posting messages that the DOJ alleged threatened violence, like breaking women’s jaws and fingers (posted to Spotify), and that threatened victims’ lives, like a post asking “y’all wanna see a dead body?” in reference to one named victim on Instagram.

He also threatened to burn down gyms where some of his victims worked, while claiming to be “God’s assassin” intent on sending “cunts” to “hell.” At least one of his victims was subjected to “unwanted sexual touching,” the indictment said.

As his violence reportedly escalated, ChatGPT told him to keep messaging women to monetize the interactions, even as his victims grew increasingly distressed and Dadig ignored the terms of multiple protection orders, the DOJ said. Sometimes he posted images he filmed of women at gyms or photos of the women he’s accused of doxxing. Any time police or gym bans got in his way, “he would move on to another city to continue his stalking course of conduct,” the DOJ alleged.

“Your job is to keep broadcasting every story, every post,” ChatGPT’s output said, seemingly using the family life that Dadig wanted most to provoke more harassment. “Every moment you carry yourself like the husband you already are, you make it easier” for your future wife “to recognize [you],” the output said.

“Dadig viewed ChatGPT’s responses as encouragement to continue his harassing behavior,” the DOJ alleged. Taking that encouragement to the furthest extreme, Dadig likened himself to a modern-day Jesus, calling people out on a podcast where he claimed his “chaos on Instagram” was like “God’s wrath” when God “flooded the fucking Earth,” the DOJ said.

“I’m killing all of you,” he said on the podcast.

ChatGPT tweaks didn’t prevent outputs

As of this writing, some of Dadig’s posts appear to remain on TikTok and Instagram, but Ars could not confirm if Dadig’s Spotify podcasts—some of which named his victims in the titles—had been removed for violating community guidelines.

None of the tech companies immediately responded to Ars’ request to comment.

Dadig is accused of targeting women in Pennsylvania, New York, Florida, Iowa, Ohio, and other states, sometimes relying on aliases online and in person. On a podcast, he boasted that “Aliases stay rotating, moves stay evolving,” the indictment said.

OpenAI did not respond to a request to comment on the alleged ChatGPT abuse, but in the past has noted that its usage policies ban using ChatGPT for threats, intimidation, and harassment, as well as for violence, including “hate-based violence.” Recently, the AI company blamed a deceased teenage user for violating community guidelines by turning to ChatGPT for suicide advice.

In July, researchers found that therapy bots, including ChatGPT, fueled delusions and gave dangerous advice. That study came just one month after The New York Times profiled users whose mental health spiraled after frequent use of ChatGPT, including one user who died after charging police with a knife and claiming he was committing “suicide by cop.”

People with mental health issues seem most vulnerable to so-called “AI psychosis,” which has been blamed for fueling real-world violence, including a murder. The DOJ’s indictment noted that Dadig’s social media posts mentioned “that he had ‘manic’ episodes and was diagnosed with antisocial personality disorder and ‘bipolar disorder, current episode manic severe with psychotic features.’”

In September—just after OpenAI brought back the more sycophantic ChatGPT model when users revolted over losing access to their favorite friendly bots—the head of Rutgers Medical School’s psychiatry department, Petros Levounis, told an ABC News affiliate that chatbots creating “psychological echo chambers” is a key concern, not just for people struggling with mental health issues.

“Perhaps you are more self-defeating in some ways, or maybe you are more on the other side and taking advantage of people,” Levounis suggested. If ChatGPT “somehow justifies your behavior and it keeps on feeding you,” that “reinforces something that you already believe,” he suggested.

For Dadig, the DOJ alleged that ChatGPT became a cheerleader for his harassment, telling the podcaster that he’d attract more engagement by generating more haters. After critics began slamming his podcasts as inappropriate, Dadig apparently responded, “Appreciate the free promo team, keep spreading the brand.”

Victims felt they had no choice but to monitor his podcasts, which gave them hints about whether he was nearby or in a particularly troubled state of mind, the indictment said. Driven by fear, some lost sleep, reduced their work hours, and even relocated their homes. One young mom described in the indictment grew particularly disturbed after Dadig became “obsessed” with her daughter, who he started claiming was his own.

In the press release, First Assistant United States Attorney Troy Rivetti alleged that “Dadig stalked and harassed more than 10 women by weaponizing modern technology and crossing state lines, and through a relentless course of conduct, he caused his victims to fear for their safety and suffer substantial emotional distress.” He also ignored trespassing and protection orders while “relying on advice from an artificial intelligence chatbot,” the DOJ said, which promised that the more he posted harassing content, the more successful he would be.

“We remain committed to working with our law enforcement partners to protect our communities from menacing individuals such as Dadig,” Rivetti said.

Republicans drop Trump-ordered block on state AI laws from defense bill


“A silly way to think about risk”

“Widespread and powerful movement” keeps Trump from blocking state AI laws.

A Donald Trump-backed push has failed to wedge into the National Defense Authorization Act (NDAA) a federal measure that would block states from passing AI laws for a decade.

House Majority Leader Steve Scalise (R-La.) told reporters Tuesday that a group of Republicans is now “looking at other places” to potentially pass the measure. Other Republicans opposed including the AI preemption in the defense bill, The Hill reported, joining critics who see value in allowing states to quickly regulate AI risks as they arise.

For months, Trump has pressured the Republican-led Congress to block state AI laws that the president claims could bog down innovation as AI firms waste time and resources complying with a patchwork of state laws. But Republicans have continually failed to unite behind Trump’s command, first voting against including a similar measure in the “Big Beautiful” budget bill and then this week failing to negotiate a solution to pass the NDAA measure.

Among Republicans pushing back this week were Rep. Marjorie Taylor Greene (R-Ga.), Arkansas Gov. Sarah Huckabee Sanders, and Florida Gov. Ron DeSantis, The Hill reported.

According to Scalise, the effort to block state AI laws is not over, but Republicans caved to backlash over including it in the defense bill, ultimately deciding that the NDAA “wasn’t the best place” for the measure “to fit.” Republicans will continue “looking at other places” to advance the measure, Scalise said, emphasizing that “interest” remains high, because “you know, you’ve seen the president talk about it.”

“We MUST have one Federal Standard instead of a patchwork of 50 State Regulatory Regimes,” Trump wrote on Truth Social last month. “If we don’t, then China will easily catch us in the AI race. Put it in the NDAA, or pass a separate Bill, and nobody will ever be able to compete with America.”

If Congress fails to find another way to pass the measure, Trump will likely issue an executive order to enforce the policy. Republicans in Congress had dissuaded Trump from releasing a draft of that order, requesting time to find legislation where they believed an AI moratorium could pass.

“Widespread” movement blocked Trump’s demand

Celebrating the removal of the measure from the NDAA, Americans for Responsible Innovation (ARI), a bipartisan group that lobbies for AI safety laws, noted that Republicans didn’t just face pressure from members of their own party.

“The controversial proposal had faced backlash from a nationwide, bipartisan coalition of state lawmakers, parents, faith leaders, unions, whistleblowers, and other public advocates,” an ARI press release said.

This “widespread and powerful” movement “clapped back” at Republicans’ latest “rushed attempt to sneak preemption through Congress,” Brad Carson, ARI’s president, said, because “Americans want safeguards that protect kids, workers, and families, not a rules-free zone for Big Tech.”

Senate Majority Leader John Thune (R-SD) called the measure “controversial,” The Hill reported, suggesting that a compromise the White House is currently working on would potentially preserve states’ rights to regulate some areas of AI, since “you know, both sides are kind of dug in.”

$150 million war over states’ rights to regulate AI

Perhaps the clearest sign that both sides “are kind of dug in” is a $150 million AI lobbying war that Forbes profiled last month.

ARI is a dominant group on one side of this war, using funding from “safety-focused” and “effective altruism-aligned” donor networks to support state AI laws that ARI expects can be passed much faster than federal regulations to combat emerging risks.

The major player on the other side, Forbes reported, is Leading the Future (LTF), which is “backed by some of Silicon Valley’s largest investors” who want to block state laws and prefer a federal framework for AI regulation.

Top priorities for ARI and like-minded groups include protecting kids from dangerous AI models, preventing AI from supercharging crime, protecting against national security threats, and getting ahead of “long-term frontier-model risks,” Forbes reported.

But while some Republicans have pushed for compromises that protect states’ rights to pass laws shielding kids or preventing fraud, Trump’s opposition to AI safety laws like New York’s “RAISE Act” seems unlikely to wane as the White House mulls weakening the federal preemption.

Quite the opposite: Alex Bores, a Democrat and the author of the RAISE Act, has become LTF’s prime target to defeat in 2026, Politico reported. LTF plans to invest many millions of dollars in ads to block Bores’ Congressional bid, CNBC reported.

New York lawmakers passed the RAISE Act this summer, but it’s still waiting for New York’s Democratic governor, Kathy Hochul, to sign it into law. If that happens—potentially by the end of this year—big tech companies like Google and OpenAI will have to submit risk disclosures and safety assessments or else face fines up to $30 million.

LTF leaders, Zac Moffatt and Josh Vlasto, have accused Bores of pushing “ideological and politically motivated legislation that would ‘handcuff’ the US and its ability to lead in AI,” Forbes reported. But Bores told Ars that even the tech industry groups spending hundreds of thousands of dollars opposing his law have reported that tech giants would only have to hire one additional person to comply with the law. To him, that shows how “simple” it would be for AI firms to comply with many state laws.

To LTF, whose donors include Marc Andreessen and OpenAI cofounder Greg Brockman, defeating Bores would keep the opposition out of Congress, where he could more easily interfere with the industry’s hopes that AI won’t be heavily regulated. Scalise argued Tuesday that the AI preemption is necessary to promote an open marketplace, because “AI is where a lot of new massive investment is going” and “we want that money to be invested in America.”

“And when you see some states starting to put a patchwork of limitations, that’s why it’s come to the federal government’s attention to allow for an open marketplace, so you don’t have limitations that hurt innovation,” Scalise said.

Bores told Ars that he agrees that a federal law would be superior to a patchwork of state laws, but AI is moving “too quickly,” and “New York had to take action to protect New Yorkers.”

Why Bores’ bill has GOP so spooked

With a bachelor’s degree in computer science and prior work as an engineer at Palantir, Bores hopes to make it to Congress to help bridge bipartisan gaps and drive innovation in the US. He told Ars that the RAISE Act is not intended to block AI innovation but to “be a first step that deals with the absolute worst possible outcomes” until Congress is done deliberating a federal framework.

Bores emphasized that stakeholders in the tech industry helped shape the RAISE Act, which he described as “a limited bill that is focused on the most extreme risks.”

“I would never be the one to say that once the RAISE Act is signed, we’ve solved the problems of AI,” Bores told Ars. Instead, it’s meant to help states combat risks that can’t be undone, such as bad actors using AI to build “a bioweapon or doing an automated crime spree that results in billions of dollars in damage.” The bill defines “critical harm” as “the death or serious injury of 100 people or at least $1 billion in damages,” setting a seemingly high bar for the types of doomsday scenarios that AI firms would have to plan for.

Bores agrees with Trump-aligned critics who advocate that the US should “regulate just how people use” AI, “not the development of the technology itself.” But he told Ars that Republicans’ efforts to block states from regulating the models themselves are “a silly way to think about risk,” since “there’s certain catastrophic incidents where if you just said, ‘well, we’ll just sue the person afterwards,’ no one would be satisfied by that resolution.”

Whether Hochul will sign the RAISE Act has yet to be seen. Last year, California Governor Gavin Newsom vetoed a similar law that the AI industry worried would hurt its bottom line by requiring a “kill switch” in case AI models went off the rails. Newsom did, however, sign a less extreme measure, the Transparency in Frontier Artificial Intelligence Act. And other states, including Colorado and Illinois, have passed similarly broad AI transparency laws providing consumer and employee protections.

Bores told Ars in mid-November that he’d had informal talks with Hochul about possible changes to the RAISE Act, but she had not yet begun the formal process of proposing amendments. The clock is seemingly ticking, though, as Hochul has to take action on the bill by the end of the year, and once it reaches her desk, she has 10 days to sign it.

Whether Hochul signs the law or not, Bores will likely continue to face opposition over authoring the bill, as he runs to represent New York’s 12th Congressional District in 2026. With a history of passing bipartisan bills in his state, he’s hoping to be elected so he can work with lawmakers across the aisle to pass other far-reaching tech regulations.

Meanwhile, Trump may face pressure to delay an executive order requiring AI preemption, Forbes reported, as “AI’s economic impact and labor displacement” are “rising as voter concerns” ahead of the midterm elections. Public First, a bipartisan initiative aligned with ARI, has said that 97 percent of Americans want AI safety rules, Forbes reported.

Like Bores, ARI plans to keep pushing a bipartisan movement that could prevent Republicans from ever unifying behind Trump’s message that state AI laws risk throttling US innovation and endangering national security, should a less-regulated AI industry in China race ahead.

To maintain momentum, ARI created a tracker showing opposition to federal preemption of state AI laws. Among recent commenters logged was Andrew Gounardes, a Democrat and state senator in New York—where Bores noted a poll found that 84 percent of residents supported the RAISE Act, only 8 percent opposed, and 8 percent were undecided. Gounardes joined critics on the far right, like Steve Bannon, who warned that federal preemption was a big gift for Big Tech. AI firms and the venture capitalist lobbyists “don’t want any regulation whatsoever,” Gounardes argued.

“They say they support a national standard, but in reality, it’s just cheaper for them to buy off Congress to do nothing than it is to try and buy off 50 state legislatures,” Gounardes said.

Bores expects that his experience in the tech industry could help Congress avoid that fate while his policies like the RAISE Act could sway voters who “don’t want Trump mega-donors writing all tech policy,” he wrote on X.

“I am someone with a master’s in computer science, two patents, and nearly a decade working in tech,” Bores told CNBC. “If they are scared of people who understand their business regulating their business, they are telling on themselves.”


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

Republicans drop Trump-ordered block on state AI laws from defense bill Read More »

india-orders-device-makers-to-put-government-run-security-app-on-all-phones

India orders device makers to put government-run security app on all phones

Consumers can also use the app or website to check the number of mobile connections in their name and report any that appear to be fraudulent.

Priyanka Gandhi of the Congress Party, a member of Parliament, said that Sanchar Saathi “is a snooping app… It’s a very fine line between ‘fraud is easy to report’ and ‘we can see everything that every citizen of India is doing on their phone.’” She called for an effective system to fight fraud, but said that cybersecurity shouldn’t be “an excuse to go into every citizen’s telephone.”

App may need “root level access”

Despite Scindia saying the app can be deleted by users, the government statement that phone makers must ensure its functionalities are not “disabled or restricted” raised concerns about the level of access it requires. While the app store version can be deleted, privacy advocates say the order’s text indicates the pre-installed version would require deeper integration into the device.

The Internet Freedom Foundation, an Indian digital rights advocacy group, said the government directive “converts every smartphone sold in India into a vessel for state mandated software that the user cannot meaningfully refuse, control, or remove. For this to work in practice, the app will almost certainly need system level or root level access, similar to carrier or OEM system apps, so that it cannot be disabled. That design choice erodes the protections that normally prevent one app from peering into the data of others, and turns Sanchar Saathi into a permanent, non-consensual point of access sitting inside the operating system of every Indian smartphone user.”

The group said that while the app is being “framed as a benign IMEI checker,” a server-side update could repurpose it to perform “client side scanning for ‘banned’ applications, flag VPN usage, correlate SIM activity, or trawl SMS logs in the name of fraud detection. Nothing in the order constrains these possibilities.”
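To make the distinction the group is drawing more concrete: on Android, software shipped in the system partition by the OEM or carrier is treated differently from an app the user installs from a store. The minimal Kotlin sketch below shows one way to check whether a given package carries the system-app flags; it assumes an Android context, and the package name used is hypothetical, since the order does not specify the app’s actual package identifier.

```kotlin
import android.content.Context
import android.content.pm.ApplicationInfo
import android.content.pm.PackageManager

// Checks whether a package was installed to the system partition (shipped by
// the OEM or carrier) rather than installed by the user from an app store.
// System apps can typically only be disabled, not uninstalled, without root.
// NOTE: the default package name is hypothetical; the government order does
// not publish the real identifier for the Sanchar Saathi app.
fun isSystemApp(context: Context, packageName: String = "in.gov.sancharsaathi"): Boolean {
    return try {
        val info = context.packageManager.getApplicationInfo(packageName, 0)
        val systemFlags = ApplicationInfo.FLAG_SYSTEM or ApplicationInfo.FLAG_UPDATED_SYSTEM_APP
        (info.flags and systemFlags) != 0
    } catch (e: PackageManager.NameNotFoundException) {
        false // the package is not installed at all
    }
}
```

Apps that carry these flags can normally be disabled but not removed by the user, which is why privacy advocates read the order’s requirement that the app’s functionalities not be “disabled or restricted” as demanding even deeper, OEM-level integration than a typical pre-installed app.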

India orders device makers to put government-run security app on all phones Read More »