

Ted Cruz can’t get all Republicans to back his fight against state AI laws


Cruz plan moves ahead but was reportedly watered down amid Republican opposition.

Sen. Ted Cruz (R-Texas) presides over a subcommittee hearing on June 3, 2025 in Washington, DC. Credit: Getty Images | Chip Somodevilla

A Republican proposal to penalize states that regulate artificial intelligence can move forward without requiring approval from 60 senators, the Senate parliamentarian decided on Saturday. But the moratorium on state AI laws did not have unanimous Republican support and has reportedly been watered down in an effort to push it toward passage.

In early June, Sen. Ted Cruz (R-Texas) proposed enforcing a 10-year moratorium on AI regulation by making states ineligible for broadband funding if they try to impose any limits on development of artificial intelligence. While the House previously approved a version of the so-called “One Big Beautiful Bill” with an outright 10-year ban on state AI regulation, Cruz took a different approach because of the Senate rule that limits inclusion of “extraneous matter” in budget reconciliation legislation.

Under the Senate’s Byrd rule, a senator can object to a potentially extraneous budget provision. A motion to waive the Byrd rule requires a vote of 60 percent of the Senate.

As originally drafted, Cruz’s backdoor ban on state AI laws would have made it impossible for states to receive money from the $42 billion Broadband Equity, Access, and Deployment (BEAD) program if they try to regulate AI. He tied the provision into the budget bill by proposing an extra $500 million for the broadband-deployment grant program and expanding its purpose to also subsidize construction and deployment of infrastructure for artificial intelligence systems.

Punchbowl News reported today that Cruz made changes in order to gain more Republican support and comply with Senate procedural rules. Cruz was quoted as saying that under his current version, states that regulate AI would only be shut out of the $500 million AI fund.

This would seem to protect states’ access to the $42 billion broadband deployment fund that will offer subsidies to ISPs that expand access to Internet service. Losing that funding would be a major blow to states that have spent the last couple of years developing plans to connect more of their residents to modern broadband. The latest Senate bill text was not available today. We contacted Cruz’s office and will update this article if we get a response.

A spokesperson for Sen. Maria Cantwell (D-Wash.) told Ars today that Cruz’s latest version could still prevent states from getting broadband funding. The text has “a backdoor to apply new AI requirements to the entire $42.45 billion program, not just the new $500 million,” Cantwell’s representative said.

Plan has opponents from both parties

Senate Parliamentarian Elizabeth MacDonough ruled that several parts of the Republican budget bill are subject to the Byrd rule and its 60-vote requirement, but Cruz’s AI proposal wasn’t one of them. A press release from Senate Budget Committee Ranking Member Jeff Merkley (D-Ore.) noted that “the parliamentarian’s advice is based on whether a provision is appropriate for reconciliation and conforms to the limitations of the Byrd rule; it is not a judgement on the relative merits of a particular policy.”

Surviving the parliamentarian review doesn’t guarantee passage. A Bloomberg article said the parliamentarian’s decision is “a win for tech companies pushing to stall and override dozens of AI safety laws across the country,” but that the “provision will likely still be challenged on the Senate floor, where stripping the provision would need just a simple majority. Some Republicans in both the House and Senate have pushed back on the AI provision.”

Republicans have a 53–47 edge in the Senate. Cantwell and Sen. Marsha Blackburn (R-Tenn.) teamed up for a press conference last week in which they spoke out against the proposed moratorium on state regulation.

Cantwell said that 24 states last year started “regulating AI in some way, and they have adopted these laws that fill a gap while we are waiting for federal action. Now Congress is threatening these laws, which will leave hundreds of millions of Americans vulnerable to AI harm by abolishing those state law protections.”

Blackburn said she agreed with Cantwell that the AI regulation proposal “is not the type of thing that we put into reconciliation bills.” Blackburn added that lawmakers “are working to move forward with legislation at the federal level, but we do not need a moratorium that would prohibit our states from stepping up and protecting citizens in their state.”

Sens. Ron Johnson (R-Wis.) and Josh Hawley (R-Mo.) have also criticized the idea of stopping states from regulating AI.

Cruz accused states of “strangling AI”

Cruz argued that his proposal stops states “from strangling AI deployment with EU-style regulation.” Under his first proposal, no BEAD funds were to be given to any state or territory that enforces “any law or regulation… limiting, restricting, or otherwise regulating artificial intelligence models, artificial intelligence systems, or automated decision systems entered into interstate commerce.”

The Cantwell/Blackburn press conference also included Washington Attorney General Nick Brown, a Democrat; and Tennessee Attorney General Jonathan Skrmetti, a Republican. Brown said that “Washington has a law that prohibits deep fakes being used against political candidates by mimicking their appearance and their speech,” another “that prohibits sharing fabricated sexual images without consent and provides for penalties for those who possess and distribute such images,” and a third “that prohibits the knowing distribution of forged digital likenesses that can be used to harm or defraud people.”

“All of those laws, in my reading, would be invalid if this was to pass through Congress, and each of those laws are prohibiting and protecting people here in our state,” Brown said.

Skrmetti said that if the Senate proposal becomes law “there would be arguments out there for the big tech companies that the moratorium does, in fact, preclude any enforcement of any consumer protection laws if there’s an AI component to the product that we’re looking at.”

Other Republican plans fail Byrd rule test

Senate Democrats said they are pleased that the parliamentarian ruled that several other parts of the bill are subject to the Byrd rule. “We continue to see Republicans’ blatant disregard for the rules of reconciliation when drafting this bill… Democrats plan to challenge every part of this bill that hurts working families and violates this process,” Merkley said.

Merkley’s press release said the provisions that are subject to a 60-vote threshold include one that “limits certain grant funding for ‘sanctuary cities,’ and where the Attorney General disagrees with states’ and localities’ immigration enforcement,” and another that “gives state and local officials the authority to arrest any noncitizen suspected of being in the US unlawfully.”

The Byrd rule also applies to a section that “limits the ability of federal courts to issue preliminary injunctions or temporary restraining orders against the federal government by requiring litigants to post a potentially enormous bond,” and another that “limits when the federal government can enter into or enforce settlement agreements that provide for payments to third parties to fully compensate victims, remedy harm, and punish and deter future violations,” Merkley’s office said.

The office of Senate Democratic Leader Chuck Schumer (D-N.Y.) said yesterday that the provision requiring litigants to post bonds has been struck from the legislation. “This Senate Republican provision, which was even worse than the similar House-passed version, required a plaintiff seeking an emergency court order, preliminary injunction, or a temporary restraining order against the Trump Administration or the federal government to pay a costly bond up front—essentially making the justice system pay-to-play,” Schumer’s office said.

Schumer said that “if enacted, this would have been one of the most brazen power grabs we’ve seen in American history—an attempt to let a future President Trump ignore court orders with impunity, putting him above the law.”


Jon is a Senior IT Reporter for Ars Technica. He covers the telecom industry, Federal Communications Commission rulemakings, broadband consumer affairs, court cases, and government regulation of the tech industry.


To avoid admitting ignorance, Meta AI says man’s number is a company helpline

Although that statement may provide comfort to those who have kept their WhatsApp numbers off the Internet, it doesn’t resolve the issue of WhatsApp’s AI helper potentially randomly generating a real person’s private number that may be a few digits off from the business contact information WhatsApp users are seeking.

Expert pushes for chatbot design tweaks

AI companies have recently been grappling with the problem of chatbots being programmed to tell users what they want to hear, instead of providing accurate information. Not only are users sick of “overly flattering” chatbot responses—potentially reinforcing users’ poor decisions—but the chatbots could be inducing users to share more private information than they would otherwise.

The latter could make it easier for AI companies to monetize the interactions, gathering private data to target advertising, which could deter AI companies from solving the sycophantic chatbot problem. Developers for Meta rival OpenAI, The Guardian noted, last month shared examples of “systemic deception behavior masked as helpfulness” and chatbots’ tendency to tell little white lies to mask incompetence.

“When pushed hard—under pressure, deadlines, expectations—it will often say whatever it needs to to appear competent,” developers noted.

Mike Stanhope, the managing director of strategic data consultants Carruthers and Jackson, told The Guardian that Meta should be more transparent about the design of its AI so that users can know if the chatbot is designed to rely on deception to reduce user friction.

“If the engineers at Meta are designing ‘white lie’ tendencies into their AI, the public need to be informed, even if the intention of the feature is to minimize harm,” Stanhope said. “If this behavior is novel, uncommon, or not explicitly designed, this raises even more questions around what safeguards are in place and just how predictable we can force an AI’s behavior to be.”


Trump suggests he needs China to sign off on TikTok sale, delays deal again

For many Americans, losing TikTok would be disruptive. TikTok has warned that US businesses could lose $1 billion in one month if TikTok shuts down. As these businesses wait in limbo for a resolution to the situation, it’s getting harder to take the alleged national security threat seriously, as clinching the deal appears to lack urgency.

On Wednesday, the White House continued to warn that Americans are not safe using TikTok, even as it leaves them vulnerable for an extended period that could now stretch to eight months.

In a statement, White House press secretary Karoline Leavitt only explained that “President Trump does not want TikTok to go dark” and would sign an executive order “to keep TikTok up and running” through mid-September. Leavitt confirmed that the Trump administration would focus on finishing the deal in this three-month period, “making sure the sale closes so that Americans can keep using TikTok with the assurance that their data is safe and secure,” Reuters reported.

US-China tensions continue, despite truce

Trump’s negotiations with China have been shaky, but a truce was reestablished last week that could potentially pave the way for a TikTok deal.

Initially, Trump had planned to use the TikTok deal as a bargaining chip, but the tit-for-tat retaliations between the US and China all spring reportedly left China hesitant to agree to any deal. In March, perhaps sensing the power shift in negotiations, Trump offered to reduce China’s highest tariffs to complete the deal. But by April, analysts opined that Trump was still “desperate” to close, while China saw no advantage in letting go of TikTok any time soon.

Despite the current truce, tensions between the US and China continue, as China has begun setting its own deadlines to maintain leverage in the trade war. According to The Wall Street Journal, China put a six-month limit “on the sales of rare earths to US carmakers and manufacturers, giving Beijing leverage if the trade conflict flares up again.”


Senate passes GENIUS Act—criticized as gifting Trump ample opportunity to grift

“Why—beyond the obvious benefit of gaining favor, directly or indirectly, with the Trump administration—did you select USD1, a newly launched, untested cryptocurrency with no track record?” the senators asked.

Responding, World Liberty Financial’s lawyers claimed MGX was simply investing in “legitimate financial innovation,” CBS News reported, noting a Trump family-affiliated entity owns a 60 percent stake in the company.

Trump has denied any wrongdoing in the MGX deal, ABC News reported. However, Warren fears the GENIUS Act will provide “even more opportunities to reward buyers of Trump’s coins with favors like tariff exemptions, pardons, and government appointments” if it becomes law.

Although House supporters of the bill have reportedly promised to push it through so that Trump can sign it into law by July, the GENIUS Act is likely to face hurdles. Resistance may come not just from Democrats with ongoing concerns about Trump’s and future presidents’ potential conflicts of interest, but also from Republicans who think passing the bill is pointless without additional market regulations to drive more stablecoin adoption.

Dems: Opportunities for Trump grifts are “mind-boggling”

Although 18 Democrats helped the GENIUS Act pass in the Senate, most Democrats opposed the bill over concerns about Trump’s potential conflicts of interest, PBS News reported.

Merkley remains one of the staunchest opponents to the GENIUS Act. In a statement, he alleged that the Senate passing the bill was essentially “rubberstamping Trump’s crypto corruption.”

According to Merkley, he and other Democrats pushed to remove the exemption from the GENIUS Act before the Senate vote—hoping to add “strong anti-corruption measures.” But Senate Republicans “repeatedly blocked” his efforts to hold votes on anti-corruption measures. Instead, they “rammed through this fatally flawed legislation without considering any amendments on the Senate floor—despite promises of an open amendment process and debate before the American people,” Merkley said.

Ultimately, it passed with the exemption intact, which Merkley considered “profoundly corrupt,” promising, “I will keep fighting to ban Trump-style crypto corruption to prevent the sale of government policy by elected federal officials in Congress and the White House.”


xAI faces legal threat over alleged Colossus data center pollution in Memphis

“For instance, if all the 35 turbines operated by xAI were using” add-on air pollution control technology “to achieve a NOx emission rate of 2 ppm”—as xAI’s consultant agreed it would—”they would emit about 177 tons of NOx per year, as opposed to the 1,200 to 2,100 tons per year they currently emit,” the letter said.
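The arithmetic behind those figures is straightforward. The sketch below is purely illustrative: only the numbers come from the letter as quoted above, while the helper function and variable names are ours.

```python
# NOx figures quoted in the NAACP/SELC letter: 35 turbines running
# add-on controls at a 2 ppm emission rate would emit about 177 tons
# per year in total, versus the 1,200-2,100 tons per year the letter
# says they currently emit. Constants are the letter's; code is ours.

CONTROLLED_TONS_PER_YEAR = 177          # all 35 turbines, with controls
UNCONTROLLED_RANGE_TONS = (1200, 2100)  # current estimated emissions
TURBINE_COUNT = 35

def excess_nox(uncontrolled_tons, controlled_tons=CONTROLLED_TONS_PER_YEAR):
    """Tons of NOx per year emitted above the controlled baseline."""
    return uncontrolled_tons - controlled_tons

low, high = (excess_nox(t) for t in UNCONTROLLED_RANGE_TONS)
print(f"Excess NOx: {low}-{high} tons/yr, or roughly "
      f"{low / TURBINE_COUNT:.0f}-{high / TURBINE_COUNT:.0f} tons per turbine")
```

By the letter’s own numbers, the turbines are emitting roughly seven to twelve times what they would with the controls in place.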

Allegedly, all of xAI’s active turbines “continue to operate without utilizing best available control technology” (BACT) and “there is no dispute” that since xAI has yet to obtain permitting, it’s not meeting BACT requirements today, the letter said.

“xAI’s failure to comply with the BACT requirement is not only a Clean Air Act violation on paper, but also a significant and ongoing violation that is resulting in substantial amounts of harmful excess emissions,” the letter said.

Additionally, xAI’s turbines are considered a major source of a hazardous air pollutant, formaldehyde, the letter said, with “the potential to emit more than 16 tons” since xAI operations began. “xAI was required to conduct initial emissions testing for formaldehyde within 180 days of becoming a major source,” the letter alleged, but it appears that, a year after moving into Memphis, “xAI has not conducted this testing.”

Terms of xAI’s permitting exemption remain vague

The NAACP and SELC suggested that the exemption that xAI is seemingly operating under could be a “nonroad engine exemption.” However, they alleged that xAI’s turbines don’t qualify for that yearlong exemption, and even if they did, any turbines still onsite after a year would surely not be covered and should have permitting by now.

“While some local leaders, including the Memphis Mayor and Shelby County Health Department, have claimed there is a ‘364-exemption’ for xAI’s gas turbines, they have never been able to point to a specific exemption that would apply to turbines as large as the ones at the xAI site,” SELC’s press release alleged.


Cybersecurity takes a big hit in new Trump executive order

Cybersecurity practitioners are voicing concerns over a recent executive order issued by the White House that guts requirements for securing the software the government uses, punishing people who compromise sensitive networks, and preparing new encryption schemes that will withstand attacks from quantum computers, among other existing controls.

The executive order (EO), issued on June 6, reverses several key cybersecurity orders put in place by President Joe Biden, some as recently as a few days before his term ended in January. A statement that accompanied Donald Trump’s EO said the Biden directives “attempted to sneak problematic and distracting issues into cybersecurity policy” and amounted to “political football.”

Pro-business, anti-regulation

Specific orders Trump dropped or relaxed included ones mandating (1) federal agencies and contractors adopt products with quantum-safe encryption as they become available in the marketplace, (2) a stringent Secure Software Development Framework (SSDF) for software and services used by federal agencies and contractors, (3) the adoption of phishing-resistant regimens such as the WebAuthn standard for logging into networks used by contractors and agencies, (4) the implementation of new tools for securing Internet routing through the Border Gateway Protocol, and (5) the encouragement of digital forms of identity.

In many respects, executive orders are at least as much performative displays as they are vehicles for creating sound policy. Biden’s cybersecurity directives were mostly in the latter camp.

The provision regarding the secure software development framework, for instance, was born out of the devastating consequences of the SolarWinds supply chain attack of 2020. In that incident, hackers linked to the Russian government breached the network of SolarWinds, maker of widely used network management software. The hackers went on to push a malicious update that distributed a backdoor to more than 18,000 customers, many of whom were contractors and agencies of the federal government.


X sues to block copycat NY content moderation law after California win

“It is our sincere belief that the current social media landscape makes it far too easy for bad actors to promote false claims, hatred and dangerous conspiracies online, and some large social media companies are not able or willing to regulate this hate speech themselves,” the letter said.

Although the letter acknowledged that X was not the only platform targeted by the law, the lawmakers further noted that hateful and harmful content on the platform spiked after Musk took over Twitter. They said it seemed “clear to us that X needs to provide greater transparency for their moderation policies and we believe that our law, as written, will do that.”

This clearly aggravated X. In its complaint, X alleged that the letter showed New York’s law was “tainted by viewpoint discriminatory motives,” arguing that the lawmakers were biased against X and Musk.

X seeks injunction in New York

Just as it alleged in the California lawsuit, the social media company claims that the New York law forces X “to make politically charged disclosures about content moderation” in order to “generate public controversy about content moderation in a way that will pressure social media companies, such as X Corp., to restrict, limit, disfavor, or censor certain constitutionally protected content on X that the State dislikes.”

“These forced disclosures violate the First Amendment” and the New York constitution, X alleged, and the content categories covered in the disclosures “were taken word-for-word” from California’s enjoined law.

X argues that New York has no compelling interest, or any legitimate interest at all, in applying “pressure” to govern social media platforms’ content moderation choices. Because X faces penalties of up to $15,000 per day per violation, the company has asked the court to grant an injunction blocking enforcement of key provisions of the law.

“Deciding what content should appear on a social media platform is a question that engenders considerable debate among reasonable people about where to draw the correct proverbial line,” X’s complaint said. “This is not a role that the government may play.”


Worst hiding spot ever: /NSFW/Nope/Don’t open/You were Warned/

Last Friday, a Michigan man named David Bartels was sentenced to five years in federal prison for “Possession of Child Pornography by a Person Employed by the Armed Forces Outside of the United States.” The unusual nature of the charge stems from the fact that Bartels bought and viewed the illegal material while working as a military contractor for Maytag Fuels at Naval Station Guantanamo Bay, Cuba.

Bartels had made some cursory efforts to cover his tracks, such as using the Tor browser. (This may sound simple enough, but according to the US government, only 12.3 percent of people charged with similar offenses used “the Dark Web” at all.) Bartels knew enough about tech to use Discord, Telegram, VLC, and Megasync to further his searches. And he had at least eight external USB hard drives or SSDs, plus laptops, an Apple iPad Mini, and a Samsung Galaxy Z Fold 3.

But for all his baseline technical knowledge, Bartels simultaneously showed little security awareness. He bought collections of child sex abuse material (CSAM) using PayPal, for instance. He received CSAM from other people who possessed his actual contact information. And he stored his contraband on a Western Digital 5TB hard drive under the astonishingly guilty-sounding folder hierarchy “/NSFW/Nope/Don’t open/You were Warned/Deeper/.”

Not hard to catch

According to Bartels’ lawyer, authorities found Bartels in January 2023, after “a person he had received child porn from was caught by law enforcement. Apparently they were able to see who this individual had sent material to, one of which was Mr. Bartels.”


Trump fires commissioner of preeminent nuclear safety institution


Commissioner fired as Trump pivots US policy to accept more nuclear risks.

Critics warn that the United States may soon be taking on more nuclear safety risks after Donald Trump fired one of five members of an independent commission that monitors the country’s nuclear reactors.

In a statement Monday, Christopher Hanson confirmed that Trump fired him from the US Nuclear Regulatory Commission (NRC) on Friday. He alleged that the firing was “without cause” and “contrary to existing law and longstanding precedent regarding removal of independent agency appointees.” According to NPR, he received an email that simply said his firing was “effective immediately.”

Hanson had enjoyed bipartisan support for his work for years. Trump initially appointed Hanson to the NRC in 2020, and Joe Biden renominated him in 2024. In his statement, Hanson said it was an “honor” to serve, citing accomplishments over his stint as chair, which ended in January 2025.

It’s unclear why Trump fired Hanson. Among his accomplishments as chair, Hanson highlighted revisions to safety regulations, as well as efforts to ramp up recruitment by re-establishing the Minority Serving Institution Grant Program. Both may have put him in opposition to Trump, who wants to loosen regulations to boost the nuclear industry and eliminate diversity initiatives across government.

In a statement to NPR, White House Deputy Press Secretary Anna Kelly suggested it was a political firing.

“All organizations are more effective when leaders are rowing in the same direction,” Kelly said. “President Trump reserves the right to remove employees within his own Executive Branch who exert his executive authority.”

On social media, some Trump critics suggested that Trump lacked the authority to fire Hanson, arguing that Hanson could have ignored the email and kept on working, like the Smithsonian museum director whom Trump failed to fire (and who eventually quit).

But Hanson accepted the termination. Instead of raising any concerns, he used his statement as an opportunity to praise those left at NRC, who will be tasked with continuing to protect Americans from nuclear safety risks at a time when Trump has said that he wants industry interests to carry equal weight as public health and environmental concerns.

“My focus over the last five years has been to prepare the agency for anticipated change in the energy sector, while preserving the independence, integrity, and bipartisan nature of the world’s gold standard nuclear safety institution,” Hanson said. “It has been an honor to serve alongside the dedicated public servants at the NRC. I continue to have full trust and confidence in their commitment to serve the American people by protecting public health and safety and the environment.”

Trump pushing “unsettled” science on nuclear risks

The firing followed an executive order in May that demanded an overhaul of the NRC, including reductions in force and expedited approvals on nuclear reactors. All final decisions on new reactors must be made within 18 months, and requests to continue operating existing reactors should be rubber-stamped within a year, Trump ordered.

Likely most alarming to critics, the desired reforms emphasized tossing out the standards that the NRC currently uses that “posit there is no safe threshold of radiation exposure, and that harm is directly proportional to the amount of exposure.”

Before Trump started meddling, the NRC had established those guidelines after agreeing with studies examining “cancer cases among 86,600 survivors of the atomic bombs dropped on Hiroshima and Nagasaki in Japan during World War II,” Science reported. Those studies concluded that “the incidence of cancer in the survivors rose linearly—in a straight line—with the radiation dose.” By rejecting that evidence, Trump could slowly ratchet up allowable radiation doses, leading Americans to blindly take on greater risks.

But according to Trump, by adopting those current standards, the NRC is supposedly bogging down the nuclear industry by trying to “insulate Americans from the most remote risks without appropriate regard for the severe domestic and geopolitical costs of such risk aversion.” Instead, the US should prioritize solving the riddle of what might be safe radiation levels, Trump suggests, while restoring US dominance in the nuclear industry, which Trump views as vital to national security and economic growth.

Although Trump claimed the NRC’s current standards were “irrational” and “lack scientific basis,” Science reported that the so-called “linear no-threshold (LNT) model of ionizing radiation” that Trump is criticizing “is widely accepted in the scientific community and informs almost all regulation of the US nuclear industry.”

Further, the NRC rejected past attempts to switch to a model based on the “hormesis theory” that Trump seemingly supports—which posits that some radiation exposure can be beneficial. The NRC found there was “insufficient evidence to justify any changes” that could endanger public health, Science reported.
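The difference between the two dose-response models at the heart of this dispute can be made concrete with a toy sketch. This is illustrative only: the slope and threshold values are arbitrary placeholders, not regulatory figures, and the function names are ours.

```python
# Toy dose-response curves for the two competing models described above.
# The linear no-threshold (LNT) model assumes excess cancer risk scales
# linearly with dose all the way down to zero; a threshold/hormesis-style
# model posits no excess risk below some cutoff. Constants are arbitrary.

def lnt_excess_risk(dose, slope=0.05):
    """LNT: any nonzero dose implies some nonzero excess risk."""
    return slope * dose

def threshold_excess_risk(dose, threshold=1.0, slope=0.05):
    """Threshold-style: no excess risk until dose exceeds the cutoff."""
    return 0.0 if dose <= threshold else slope * (dose - threshold)

# The crux of the dispute: at a small dose, LNT still reports some
# excess risk, while the threshold model reports none at all.
print(lnt_excess_risk(0.5), threshold_excess_risk(0.5))
```

Which curve regulators adopt determines whether low-level exposures are treated as a cumulative hazard or as effectively harmless, which is why the NRC’s choice of model carries so much weight.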

One health researcher at the University of California, Irvine, Stephen Bondy, told Science that his 2023 review of the science on hormesis showed it is “still unsettled.” In Bondy’s characterization, the NRC embracing that model “clearly places health hazards as of secondary importance relative to economic and business interests.”

Trump’s pro-industry push could backfire

If the administration charges ahead with such changes, experts have warned that Trump could end up inadvertently hobbling the nuclear industry. If health hazards become extreme—or a nuclear event occurs—”altering NRC’s safety standards could ultimately reduce public support for nuclear power,” analysts told Science.

Among the staunchest critics of Trump’s order is Edwin Lyman, the director of nuclear power safety at the Union of Concerned Scientists. In a May statement, Lyman warned that “the US nuclear industry will fail if safety is not made a priority.”

He also cautioned that it was critical for the NRC to remain independent, not just to shield Americans from risks but to protect US nuclear technology’s prominence in global markets.

“By fatally compromising the independence and integrity of the NRC, and by encouraging pathways for nuclear deployment that bypass the regulator entirely, the Trump administration is virtually guaranteeing that this country will see a serious accident or other radiological release that will affect the health, safety, and livelihoods of millions,” Lyman said. “Such a disaster will destroy public trust in nuclear power and cause other nations to reject US nuclear technology for decades to come.”

Since Trump wants regulations changed, there will likely be a public commenting period where concerned citizens can weigh in on what they think are acceptable radiation levels in their communities. But Trump’s order also pushed for that public comment period to be streamlined, potentially making it easier to push through his agenda. If that happens, the NRC may face lawsuits under the 1954 Atomic Energy Act, which requires the commission to “minimize danger to life or property,” Science noted.

Following Hanson’s firing, Lyman reiterated to NPR that Trump’s ongoing attacks on the NRC “could have serious implications for nuclear safety.”

“It’s critical that the NRC make its judgments about protecting health and safety without regard for the financial health of the nuclear industry,” Lyman said.

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

Trump fires commissioner of preeminent nuclear safety institution Read More »

biofuels-policy-has-been-a-failure-for-the-climate,-new-report-claims

Biofuels policy has been a failure for the climate, new report claims

The new report concludes that not only will the expansion of ethanol increase greenhouse gas emissions, but it has also failed to provide the social and financial benefits to Midwestern communities that lawmakers and the industry say it has. (The report defines the Midwest as Illinois, Indiana, Iowa, Kansas, Michigan, Minnesota, Missouri, Nebraska, North Dakota, Ohio, South Dakota, and Wisconsin.)

“The benefits from biofuels remain concentrated in the hands of a few,” Leslie-Bole said. “As subsidies flow, so may the trend of farmland consolidation, increasing inaccessibility of farmland in the Midwest, and locking out emerging or low-resource farmers. This means the benefits of biofuels production are flowing to fewer people, while more are left bearing the costs.”

New policies being considered in state legislatures and Congress, including additional tax credits and support for biofuel-based aviation fuel, could expand production. That could drive more land conversion and greenhouse gas emissions, widening the gap between rural communities and rich agribusinesses at a time when food demand is climbing and, critics say, land should be used to grow food instead.

President Donald Trump’s tax cut bill, passed by the House and currently being negotiated in the Senate, would not only extend tax credits for biofuels producers but would also specifically exclude emissions from land conversion from the calculations that determine what qualifies as a low-emission fuel.

The primary biofuels industry trade groups, including Growth Energy and the Renewable Fuels Association, did not respond to Inside Climate News requests for comment or interviews.

An employee with the Clean Fuels Alliance America, which represents biodiesel and sustainable aviation fuel producers rather than ethanol makers, said the report vastly overstates the carbon emissions from crop-based fuels by comparing farmed land to natural landscapes that no longer exist.

They also noted that the economic impact of soy-based fuels in 2024 was more than $42 billion, supporting over 100,000 jobs.

“Ten percent of the value of every bushel of soybeans is linked to biomass-based fuel,” they said.

Biofuels policy has been a failure for the climate, new report claims Read More »

trump’s-ftc-may-impose-merger-condition-that-forbids-advertising-boycotts

Trump’s FTC may impose merger condition that forbids advertising boycotts

FTC chair alleged “serious risk” from ad boycotts

After Musk’s purchase of Twitter, the social network lost advertisers for various reasons, including changes to content moderation and an incident in which Musk posted a favorable response to an antisemitic tweet and then told concerned advertisers to “go fuck yourself.”

FTC Chairman Andrew Ferguson said at a conference in April that “the risk of an advertiser boycott is a pretty serious risk to the free exchange of ideas.”

“If advertisers get into a back room and agree, ‘We aren’t going to put our stuff next to this guy or woman or his or her ideas,’ that is a form of concerted refusal to deal,” Ferguson said. “The antitrust laws condemn concerted refusals to deal. Now, of course, because of the First Amendment, we don’t have a categorical antitrust prohibition on boycotts. When a boycott ceases to be economic for purposes of the antitrust laws and becomes purely First Amendment activity, the courts have not been super clear—[it’s] sort of a ‘we know it when we see it’ type of thing.”

The FTC website says that any individual company acting on its own may “refuse to do business with another firm, but an agreement among competitors not to do business with targeted individuals or businesses may be an illegal boycott, especially if the group of competitors working together has market power.” The examples given on the FTC webpage are mostly about price competition and do not address the widespread practice of companies choosing where to place advertising based on concerns about their brands.

We contacted the FTC about the merger review today and will update this article if it provides any comment.

X’s ad lawsuit

X’s lawsuit targets a World Federation of Advertisers initiative called the Global Alliance for Responsible Media (GARM), a now-defunct program that Omnicom and Interpublic participated in. X itself was part of the GARM initiative, which shut down after X filed the lawsuit. X alleged that the defendants conspired “to collectively withhold billions of dollars in advertising revenue.”

The World Federation of Advertisers said in a court filing last month that GARM was founded “to bring clarity and transparency to disparate definitions and understandings in advertising and brand safety in the context of social media. For example, certain advertisers did not want platforms to advertise their brands alongside content that could negatively impact their brands.”

Trump’s FTC may impose merger condition that forbids advertising boycotts Read More »

how-to-draft-a-will-to-avoid-becoming-an-ai-ghost—it’s-not-easy

How to draft a will to avoid becoming an AI ghost—it’s not easy


Why requests for “no AI resurrections” will probably go ignored.

All right! This AI is TOAST! Credit: Aurich Lawson

As artificial intelligence has advanced, AI tools have emerged that make it easy to create digital replicas of lost loved ones, even without the knowledge or consent of the person who died.

Trained on the data of the dead, these tools, sometimes called grief bots or AI ghosts, may be text-, audio-, or even video-based. Chatting provides what some mourners feel is a close approximation to ongoing interactions with the people they love most. But the tech remains controversial, perhaps complicating the grieving process while threatening to infringe upon the privacy of the deceased, whose data could still be vulnerable to manipulation or identity theft.

Because of suspected harms, and perhaps a general revulsion at the idea, not everybody wants to become an AI ghost.

After a realistic video simulation was recently used to provide a murder victim’s impact statement in court, Futurism summed up social media backlash, noting that the use of AI was “just as unsettling as you think.” And it’s not the first time people have expressed discomfort with the growing trend. Last May, The Wall Street Journal conducted a reader survey seeking opinions on the ethics of so-called AI resurrections. Responding, a California woman, Dorothy McGarrah, suggested there should be a way to prevent AI resurrections in your will.

“Having photos or videos of lost loved ones is a comfort. But the idea of an algorithm, which is as prone to generate nonsense as anything lucid, representing a deceased person’s thoughts or behaviors seems terrifying. It would be like generating digital dementia after your loved ones’ passing,” McGarrah said. “I would very much hope people have the right to preclude their images being used in this fashion after death. Perhaps something else we need to consider in estate planning?”

For experts in estate planning, the question may start to arise as more AI ghosts pop up. But for now, writing “no AI resurrections” into a will remains a complicated process, experts suggest, and such requests may not be honored by all unless laws are changed to reinforce a culture of respecting the wishes of people who feel uncomfortable with the idea of haunting their favorite people through AI simulations.

Can you draft a will to prevent AI resurrection?

Ars contacted several law associations to find out if estate planners are seriously talking about AI ghosts. Only the National Association of Estate Planners and Councils responded; it connected Ars to Katie Sheehan, an expert in the estate planning field who serves as a managing director and wealth strategist for Crestwood Advisors.

Sheehan told Ars that very few estate planners are prepared to answer questions about AI ghosts. She said not only does the question never come up in her daily work, but it’s also “essentially uncharted territory for estate planners since AI is relatively new to the scene.”

“I have not seen any documents drafted to date taking this into consideration, and I review estate plans for clients every day, so that should be telling,” Sheehan told Ars.

Although Sheehan has yet to see a will attempting to prevent AI resurrection, she told Ars that there could be a path to make it harder for someone to create a digital replica without consent.

“You certainly could draft into a power of attorney (for use during lifetime) and a will (for use post death) preventing the fiduciary (attorney in fact or executor) from lending any of your texts, voice, image, writings, etc. to any AI tools and prevent their use for any purpose during life or after you pass away, and/or lay the ground rules for when they can and cannot be used after you pass away,” Sheehan told Ars.

“This could also invoke issues with contract, property and intellectual property rights, and right of publicity as well if AI replicas (image, voice, text, etc.) are being used without authorization,” Sheehan said.

And there are likely more protections for celebrities than for everyday people, Sheehan suggested.

“As far as I know, there is no law” preventing unauthorized non-commercial digital replicas, Sheehan said.

Widely adopted by states, the Revised Uniform Fiduciary Access to Digital Assets Act—which governs who gets access to online accounts of the deceased, like social media or email accounts—could be helpful but isn’t a perfect remedy.

That law doesn’t directly “cover someone’s AI ghost bot, though it may cover some of the digital material some may seek to use to create a ghost bot,” Sheehan said.

“Absent any law” blocking non-commercial digital replicas, Sheehan expects that people’s requests for “no AI resurrections” will likely “be dealt with in the courts and governed by the terms of one’s estate plan, if it is addressed within the estate plan.”

Those potential fights seemingly could get hairy, as “it may be some time before we get any kind of clarity or uniform law surrounding this,” Sheehan suggested.

In the future, Sheehan said, requests prohibiting digital replicas may eventually become “boilerplate language in almost every will, trust, and power of attorney,” just as instructions on digital assets are now.

As “all things AI become more and more a part of our lives,” Sheehan said, “some aspects of AI and its components may also be woven throughout the estate plan regularly.”

“But we definitely aren’t there yet,” she said. “I have had zero clients ask about this.”

Requests for “no AI resurrections” will likely be ignored

Whether loved ones would—or even should—respect requests blocking digital replicas appears to be debatable. But at least one person who built a grief bot wished he’d done more to get his dad’s permission before moving forward with his own creation.

A computer science professor at the University of Washington Bothell, Muhammad Aurangzeb Ahmad, was one of the earliest AI researchers to create a grief bot more than a decade ago after his father died. He built the bot to ensure that his future kids would be able to interact with his father after seeing how incredible his dad was as a grandfather.

When Ahmad started his project, there was no ChatGPT or other advanced AI model to serve as the foundation, so he had to train his own model based on his dad’s data. Putting immense thought into the effort, Ahmad decided to close off the system from the rest of the Internet so that only his dad’s memories would inform the model. To prevent unauthorized chats, he kept the bot on a laptop that only his family could access.

Ahmad was so intent on building a digital replica that felt just like his dad that it didn’t occur to him until after his family started using the bot that he never asked his dad if this was what he wanted. Over time, he realized that the bot was biased to his view of his dad, perhaps even feeling off to his siblings who had a slightly different relationship with their father. It’s unclear if his dad would similarly view the bot as preserving just one side of him.

Ultimately, Ahmad didn’t regret building the bot, and he told Ars he thinks his father “would have been fine with it.”

But he did regret not getting his father’s consent.

For people creating bots today, seeking consent may be appropriate if there’s any chance the bot may be publicly accessed, Ahmad suggested. He told Ars that he would never have been comfortable with the idea of his dad’s digital replica being publicly available because the question of an “accurate representation” would come even more into play, as malicious actors could potentially access it and sully his dad’s memory.

Today, anybody can use ChatGPT’s model to freely create a similar bot with their own loved one’s data. And a wide range of grief tech services have popped up online, including HereAfter AI, SeanceAI, and StoryFile, Axios noted in an October report detailing the latest ways “AI could be used to ‘resurrect’ loved ones.” As this trend continues “evolving very fast,” Ahmad told Ars that estate planning is probably the best way to communicate one’s AI ghost preferences.

But in a recently published article on “The Law of Digital Resurrection,” law professor Victoria Haneman warned that “there is no legal or regulatory landscape against which to estate plan to protect those who would avoid digital resurrection, and few privacy rights for the deceased. This is an intersection of death, technology, and privacy law that has remained relatively ignored until recently.”

Haneman agreed with Sheehan that “existing protections are likely sufficient to protect against unauthorized commercial resurrections”—like when actors or musicians are resurrected for posthumous performances. However, she thinks that for personal uses, digital resurrections may best be blocked not through estate planning but by passing a “right to deletion” that would focus on granting the living or next of kin the rights to delete the data that could be used to create the AI ghost rather than regulating the output.

A “right to deletion” could help people fight inappropriate uses of their loved ones’ data, whether AI is involved or not. After her article was published, a lawyer reached out to Haneman about a client’s deceased grandmother whose likeness was used to create a meme of her dancing in a church. The grandmother wasn’t a public figure, and the client had no idea “why or how somebody decided to resurrect her deceased grandmother,” Haneman told Ars.

Although Haneman sympathized with the client, “if it’s not being used for a commercial purpose, she really has no control over this use,” Haneman said. “And she’s deeply troubled by this.”

Haneman’s article offers a rare deep dive into the legal topic. It sensitively maps out the vague territory of digital rights of the dead and explains how those laws—or the lack thereof—interact with various laws dealing with death, from human remains to property rights.

In it, Haneman also points out that, on balance, the rights of the living typically outweigh the rights of the dead, and even specific instructions on how to handle human remains aren’t generally considered binding. Some requests, like organ donation that can benefit the living, are considered critical, Haneman noted. But there are mixed results on how courts enforce other interests of the dead—like a famous writer’s request to destroy all unpublished work or a pet lover’s insistence to destroy their cat or dog at death.

She told Ars that right now, “a lot of people are like, ‘Why do I care if somebody resurrects me after I’m dead?’ You know, ‘They can do what they want.’ And they think that, until they find a family member who’s been resurrected by a creepy ex-boyfriend or their dead grandmother’s resurrected, and then it becomes a different story.”

Existing law may protect “the privacy interests of the loved ones of the deceased from outrageous or harmful digital resurrections of the deceased,” Haneman noted, but in the case of the dancing grandma, her meme may not be deemed harmful, no matter how much it troubles the grandchild to see her grandma’s memory warped.

Limited legal protections may not matter so much if, culturally, communities end up developing a distaste for digital replicas, particularly if it becomes widely viewed as disrespectful to the dead, Haneman suggested. Right now, however, society is more fixated on solving other problems with deepfakes rather than clarifying the digital rights of the dead. That could be because few people have been impacted so far, or it could also reflect a broader cultural tendency to ignore death, Haneman told Ars.

“We don’t want to think about our own death, so we really kind of brush aside whether or not we care about somebody else being digitally resurrected until it’s in our face,” Haneman said.

Over time, attitudes may change, especially if the so-called “digital afterlife industry” takes off. And there is some precedent that the law could be changed to reinforce any culture shift.

“The throughline revealed by the law of the dead is that a sacred trust exists between the living and the deceased, with an emphasis upon protecting common humanity, such that data afforded no legal status (or personal data of the deceased) may nonetheless be treated with dignity and receive some basic protections,” Haneman wrote.

An alternative path to prevent AI resurrection

Preventing yourself from becoming an AI ghost seemingly now falls in a legal gray zone that policymakers may need to address.

Haneman calls for a solution that doesn’t depend on estate planning, which she warned “is a structurally inequitable and anachronistic approach that maximizes social welfare only for those who do estate planning.” More than 60 percent of Americans die without a will, often including “those without wealth,” as well as women and racial minorities who “are less likely to die with a valid estate plan in effect,” Haneman reported.

“We can do better in a technology-based world,” Haneman wrote. “Any modern framework should recognize a lack of accessibility as an obstacle to fairness and protect the rights of the most vulnerable through approaches that do not depend upon hiring an attorney and executing an estate plan.”

Rather than twist the law to “recognize postmortem privacy rights,” Haneman advocates for a path for people resistant to digital replicas that focuses on a right to delete the data that would be used to create the AI ghost.

“Put simply, the deceased may exert control over digital legacy through the right to deletion of data but may not exert broader rights over non-commercial digital resurrection through estate planning,” Haneman recommended.

Sheehan told Ars that a right to deletion would likely involve estate planners, too.

“If this is not addressed in an estate planning document and not specifically addressed in the statute (or deemed under the authority of the executor via statute), then the only way to address this would be to go to court,” Sheehan said. “Even with a right of deletion, the deceased would need to delete said data before death or authorize his executor to do so post death, which would require an estate planning document, statutory authority, or court authority.”

Haneman agreed that for many people, estate planners would still be involved, recommending that “the right to deletion would ideally, from the perspective of estate administration, provide for a term of deletion within 12 months.” That “allows the living to manage grief and open administration of the estate before having to address data management issues,” Haneman wrote, and perhaps adequately balances “the interests of society against the rights of the deceased.”

To Haneman, it’s also the better solution for the people left behind because “creating a right beyond data deletion to curtail unauthorized non-commercial digital resurrection creates unnecessary complexity that overreaches, as well as placing the interests of the deceased over those of the living.”

Future generations may be raised with AI ghosts

If a dystopia that experts paint comes true, Big Tech companies may one day profit by targeting grieving individuals to seize the data of the dead, which could be more easily abused since it’s granted fewer rights than data of the living.

Perhaps in that future, critics suggest, people will be tempted by free trials in moments when they’re missing their loved ones most, then forced either to pay a subscription to keep accessing the bot or to accept ad-based models in which their chats with AI ghosts may even feature ads in the voices of the deceased.

Today, even in a world where AI ghosts aren’t yet compelling ad clicks, AI ethicists have warned that interacting with them could harm mental health, especially if the digital afterlife industry isn’t carefully designed, New Scientist reported. Some people may end up stuck maintaining an AI ghost left behind as a gift, and ethicists suggested that the emotional weight of that could eventually take a negative toll. While saying goodbye is hard, letting go is considered a critical part of healing during the mourning process, and AI ghosts may make that harder.

But the bots can be a helpful tool to manage grief, some experts suggest, provided that their use is limited to allow for a typical mourning process or combined with therapy from a trained professional, Al Jazeera reported. Ahmad told Ars that working on his bot has not only kept his father close to him but also helped him think more deeply about relationships and memory.

Haneman noted that people have many ways of honoring the dead. Some erect statues, and others listen to saved voicemails or watch old home movies. For some, just “smelling an old sweater” is a comfort. And creating digital replicas, as creepy as some people might find them, is not that far off from these traditions, Haneman said.

“Feeding text messages and emails into existing AI platforms such as ChatGPT and asking the AI to respond in the voice of the deceased is simply a change in degree, not in kind,” Haneman said.

For Ahmad, the decision to create a digital replica of his dad was a learning experience, and perhaps his experience shows why any family or loved one weighing the option should carefully consider it before starting the process.

In particular, he warns families to be careful introducing young kids to grief bots, as they may not be able to grasp that the bot is not a real person. When he initially saw his young kids growing confused with whether their grandfather was alive or not—the introduction of the bot was complicated by the early stages of the pandemic, a time when they met many relatives virtually—he decided to restrict access to the bot until they were older. For a time, the bot only came out for special events like birthdays.

He also realized that introducing the bot forced him to have conversations about life and death with his kids at ages younger than he remembered fully understanding those concepts in his own childhood.

Now, Ahmad’s kids are among the first to be raised among AI ghosts. To enhance the family’s experience, Ahmad continually updates his father’s digital replica. He is currently most excited about recent audio advancements that make it easier to add a voice element. He hopes that within the next year, he might be able to use AI to finally nail down his South Asian father’s accent, which up to now has always sounded “just off.” For others working in this space, the next frontier is realistic video or even augmented reality tools, Ahmad told Ars.

To this day, the bot retains sentimental value for Ahmad, but, as Haneman suggested, the bot was not the only way he memorialized his dad. He also created a mosaic, and while his father never saw it, either, Ahmad thinks his dad would have approved.

“He would have been very happy,” Ahmad said.

There’s no way to predict how future generations may view grief tech. But while Ahmad said he’s not sure he’d be interested in an augmented reality interaction with his dad’s digital replica, kids raised seeing AI ghosts as a natural part of their lives may not be as hesitant to embrace or even build new features. Talking to Ars, Ahmad fondly remembered his young daughter once saw that he was feeling sad and came up with her own AI idea to help her dad feel better.

“It would be really nice if you can just take this program and we build a robot that looks like your dad, and then add it to the robot, and then you can go and hug the robot,” she said, according to her father’s memory.

How to draft a will to avoid becoming an AI ghost—it’s not easy Read More »