
Estate of woman who died in 2021 heat dome sues Big Oil for wrongful death


At least 100 heat-related deaths in Washington state came during the unprecedented heat wave.

Everett Clayton looks at a digital thermometer on a nearby building that reads 116 degrees while walking to his apartment on June 27, 2021 in Vancouver, Washington. Credit: Nathan Howard/Getty Images

This article originally appeared on Inside Climate News, a nonprofit, non-partisan news organization that covers climate, energy, and the environment. Sign up for their newsletter here.

The daughter of a woman who was killed by extreme heat during the 2021 Pacific Northwest heat dome has filed a first-of-its-kind lawsuit against major oil companies claiming they should be held responsible for her death.

The civil lawsuit, filed on May 29 in King County Superior Court in Seattle, is the first wrongful death case brought against Big Oil in the US in the context of climate change. It attempts to hold some of the world’s biggest fossil fuel companies liable for the death of Juliana Leon, who perished from overheating during the heat dome event, which scientists have determined would have been virtually impossible absent human-caused climate change.

“The extreme heat that killed Julie was directly linked to fossil fuel-driven alteration of the climate,” the lawsuit asserts. It argues that fossil fuel defendants concealed and misrepresented the climate change risks of their products and worked to delay a transition to cleaner energy alternatives. Furthermore, oil companies knew decades ago that their conduct would have dangerous and deadly consequences, the case alleges.

“Defendants have known for all of Julie’s life that their affirmative misrepresentations and omissions would claim lives,” the complaint claims. Leon’s daughter, Misti, filed the suit on behalf of her mother’s estate.

At 65, Juliana Leon was driving home from a medical appointment in Seattle on June 28, 2021, a day when the temperature peaked at 108° Fahrenheit (42.2° Celsius). She had the windows rolled down since the air conditioner in her car wasn’t working, but with the oven-like outdoor temperatures she quickly succumbed to the stifling heat. A passerby found her unresponsive in her car, which was pulled over on a residential street. Emergency responders were unable to revive her. The official cause of death was determined to be hyperthermia, or overheating.

There were at least 100 heat-related deaths in the state from June 26 to July 2, 2021, according to the Washington State Department of Health. That unprecedented stretch of scorching high temperatures was the deadliest weather-related event in Washington’s history. Climate change linked to the burning of fossil fuels intensified this extreme heat event, scientists say.

Misti Leon’s complaint argues that big oil companies “are responsible” for her mother’s climate change-related death. “Through their failure to warn, marketing, distribution, extraction, refinement, transport, and sale of fossil fuels, defendants each bear responsibility for the spike in atmospheric CO2 levels that have resulted in climate change, and thus the occurrence of a virtually impossible weather event and the extreme temperatures of the Heat Dome,” the suit alleges.

Defendants include ExxonMobil, BP, Chevron, Shell, ConocoPhillips, and Phillips 66. Phillips 66 declined to comment; the rest of the companies did not respond to requests for comment.

The plaintiff is represented by the Bechtold Law Firm, based in Missoula, Montana. The lawsuit brings state tort law claims of wrongful death, failure to warn, and public nuisance, and seeks relief in the form of damages as well as a public education campaign to “rectify defendants’ decades of misinformation.”

Major oil and gas companies are currently facing more than two dozen climate damages and deception cases brought by municipal, state, and tribal governments, including a case filed in 2023 by Multnomah County, Oregon, centered around the 2021 Pacific Northwest heat dome. The Leon case, however, is the first climate liability lawsuit filed by an individual against the fossil fuel industry.

“This is the first case that is directly making the connection between the misconduct and lies of big oil companies and a specific, personalized tragedy, the death of Julie Leon,” said Aaron Regunberg, accountability director for Public Citizen’s climate program.

“It puts a human face on it,” Pat Parenteau, emeritus professor of law at Vermont Law and Graduate School, told Inside Climate News.

Climate accountability advocates say the lawsuit could open up a new front for individuals suffering from climate change-related harms to pursue justice against corporate polluters who allegedly lied about the risks of their products.

“Big Oil companies have known for decades that their products would cause catastrophic climate disasters that would become more deadly and destructive if they didn’t change their business model. But instead of warning the public and taking steps to save lives, Big Oil lied and deliberately accelerated the problem,” Richard Wiles, president of the Center for Climate Integrity, said in a statement. “This latest case—the first filed on behalf of an individual climate victim—is another step toward accountability.”

“It’s a model for victims of climate disasters all across the country,” said Regunberg. “Anywhere there’s an extreme weather event with strong attribution science connecting it to climate change, families experiencing a tragedy can file a very similar case.”

Regunberg and several other legal experts have argued that Big Oil could face criminal prosecution for crimes such as homicide and reckless endangerment in the context of climate change, particularly given evidence of internal industry documents suggesting companies like Exxon knew that unabated fossil fuel use could result in “catastrophic” consequences and deaths. A 1996 presentation from an Exxon scientist, for example, outlines projected human health impacts stemming from climate change, including “suffering and death due to thermal extremes.”

The Leon case could “help lay the groundwork” for potential climate homicide cases, Regunberg said. “Wrongful death suits are important. They provide a private remedy to victims of wrongful conduct that causes a death. But we also think there’s a need for public justice, and that’s the role that criminal prosecution is supposed to have,” he told Inside Climate News.

The lawsuit is likely to face a long uphill battle in the courts. Other climate liability cases against these companies brought by government entities have been tied up in procedural skirmishes, some for years, and no case has yet made it to trial.

“In this case we have a grieving woman going up against some of the most powerful corporations in the world, and we’ve seen all the legal firepower they are bringing to bear on these cases,” Regunberg said.

But if the case does eventually make it to trial, it could be a game-changer. “That’s going to be a jury in King County, Washington, of people who probably experienced and remember the Pacific heat dome event, and maybe they know folks who were impacted. I think that’s going to be a compelling case that has a good chance of getting an outcome that provides some justice to this family,” Regunberg said.

Even if it doesn’t get that far, the lawsuit still “marks a significant development in climate liability,” according to Donald Braman, an associate professor of criminal law at Georgetown University and co-author of a paper explaining the case for prosecuting Big Oil for climate homicide.

“As climate attribution science advances, linking specific extreme weather events to anthropogenic climate change with greater confidence, the legal arguments for liability are strengthening. This lawsuit, being the first of its kind for wrongful death in this context, will be closely watched and could set important precedents, regardless of its ultimate outcome,” he said. “It reflects a growing societal demand for accountability for climate-related harms.”


Ted Cruz bill: States that regulate AI will be cut out of $42B broadband fund

BEAD changes: No fiber preference, no low-cost mandate

The BEAD program is separately undergoing an overhaul because Republicans don’t like how it was administered by Democrats. The Biden administration spent about three years developing rules and procedures for BEAD and then evaluating plans submitted by each US state and territory, but the Trump administration has delayed grants while it rewrites the rules.

While Biden’s Commerce Department decided to prioritize the building of fiber networks, Republicans have pushed for a “tech-neutral approach” that would benefit cable companies, fixed wireless providers, and Elon Musk’s Starlink satellite service.

Secretary of Commerce Howard Lutnick previewed changes in March, and today he announced more details of the overhaul that will eliminate the fiber preference and various requirements imposed on states. One notable but unsurprising change is that the Trump administration won’t let states require grant recipients to offer low-cost Internet plans at specific rates to people with low incomes.

The National Telecommunications and Information Administration (NTIA) “will refuse to accept any low-cost service option proposed in a [state or territory’s] Final Proposal that attempts to impose a specific rate level (i.e., dollar amount),” the Trump administration said. Instead, ISPs receiving subsidies will be able to continue offering “their existing, market driven low-cost plans to meet the statutory low-cost requirement.”

The Benton Institute for Broadband & Society criticized the overhaul, saying that the Trump administration is investing in the cheapest broadband infrastructure instead of the best. “Fiber-based broadband networks will last longer, provide better, more reliable service, and scale to meet communities’ ever-growing connectivity needs,” the advocacy group said. “NTIA’s new guidance is shortsighted and will undermine economic development in rural America for decades to come.”

The Trump administration’s overhaul drew praise from cable lobby group NCTA-The Internet & Television Association, whose members will find it easier to obtain subsidies. “We welcome changes to the BEAD program that will make the program more efficient and eliminate onerous requirements, which add unnecessary costs that impede broadband deployment efforts,” NCTA said. “These updates are welcome improvements that will make it easier for providers to build faster, especially in hard-to-reach communities, without being bogged down by red tape.”


OpenAI is retaining all ChatGPT logs “indefinitely.” Here’s who’s affected.

In the copyright fight, Magistrate Judge Ona Wang granted the order within one day of the NYT’s request. She agreed with news plaintiffs that it seemed likely that ChatGPT users may be spooked by the lawsuit and possibly set their chats to delete when using the chatbot to skirt NYT paywalls. Because OpenAI wasn’t sharing deleted chat logs, the news plaintiffs had no way of proving that, she suggested.

Now, OpenAI is not only asking Wang to reconsider but has “also appealed this order with the District Court Judge,” the Thursday statement said.

“We strongly believe this is an overreach by the New York Times,” Lightcap said. “We’re continuing to appeal this order so we can keep putting your trust and privacy first.”

Who can access deleted chats?

To protect users, OpenAI provides an FAQ that clearly explains why their data is being retained and how it could be exposed.

For example, the statement noted that the order doesn’t impact OpenAI API business customers under Zero Data Retention agreements because their data is never stored.

And for users whose data is affected, OpenAI noted that their deleted chats could be accessed, but they won’t “automatically” be shared with The New York Times. Instead, the retained data will be “stored separately in a secure system” and “protected under legal hold, meaning it can’t be accessed or used for purposes other than meeting legal obligations,” OpenAI explained.

Of course, with the court battle ongoing, the FAQ did not have all the answers.

Nobody knows how long OpenAI may be required to retain the deleted chats. Likely seeking to reassure users—some of whom appeared to be considering switching to a rival service until the order is lifted—OpenAI noted that “only a small, audited OpenAI legal and security team would be able to access this data as necessary to comply with our legal obligations.”


DOGE used flawed AI tool to “munch” Veterans Affairs contracts


Staffer had no medical experience, and the results were predictably, spectacularly bad.

As the Trump administration prepared to cancel contracts at the Department of Veterans Affairs this year, officials turned to a software engineer with no health care or government experience to guide them.

The engineer, working for the Department of Government Efficiency, quickly built an artificial intelligence tool to identify which services from private companies were not essential. He labeled those contracts “MUNCHABLE.”

The code, using outdated and inexpensive AI models, produced results with glaring mistakes. For instance, it hallucinated the size of contracts, frequently misreading them and inflating their value. It concluded more than a thousand were each worth $34 million, when in fact some were for as little as $35,000.

The DOGE AI tool flagged more than 2,000 contracts for “munching.” It’s unclear how many have been or are on track to be canceled—the Trump administration’s decisions on VA contracts have largely been a black box. The VA uses contractors for many reasons, including to support hospitals, research, and other services aimed at caring for ailing veterans.

VA officials have said they’ve killed nearly 600 contracts overall. Congressional Democrats have been pressing VA leaders for specific details of what’s been canceled without success.

We identified at least two dozen on the DOGE list that have been canceled so far. Among the canceled contracts was one to maintain a gene sequencing device used to develop better cancer treatments. Another was for blood sample analysis in support of a VA research project. Another was to provide additional tools to measure and improve the care nurses provide.

ProPublica obtained the code and the contracts it flagged from a source and shared them with a half-dozen AI and procurement experts. All said the script was flawed. Many criticized the concept of using AI to guide budgetary cuts at the VA, with one calling it “deeply problematic.”

Cary Coglianese, professor of law and of political science at the University of Pennsylvania who studies the governmental use and regulation of artificial intelligence, said he was troubled by the use of these general-purpose large language models, or LLMs. “I don’t think off-the-shelf LLMs have a great deal of reliability for something as complex and involved as this,” he said.

Sahil Lavingia, the programmer enlisted by DOGE, which was then run by Elon Musk, acknowledged flaws in the code.

“I think that mistakes were made,” said Lavingia, who worked at DOGE for nearly two months. “I’m sure mistakes were made. Mistakes are always made. I would never recommend someone run my code and do what it says. It’s like that ‘Office’ episode where Steve Carell drives into the lake because Google Maps says drive into the lake. Do not drive into the lake.”

Though Lavingia has talked about his time at DOGE previously, this is the first time his work has been examined in detail and the first time he’s publicly explained his process, down to specific lines of code.

Lavingia has nearly 15 years of experience as a software engineer and entrepreneur but no formal training in AI. He briefly worked at Pinterest before starting Gumroad, a small e-commerce company that nearly collapsed in 2015. “I laid off 75 percent of my company—including many of my best friends. It really sucked,” he said. Lavingia kept the company afloat by “replacing every manual process with an automated one,” according to a post on his personal blog.

Lavingia, who started on March 17 and wrote the tool the following day, had little time to immerse himself in how the VA handles veterans’ care. Yet his experience with his own company aligned with the direction of the Trump administration, which has embraced the use of AI across government to streamline operations and save money.

Lavingia said the quick timeline of Trump’s February executive order, which gave agencies 30 days to complete a review of contracts and grants, was too short to do the job manually. “That’s not possible—you have 90,000 contracts,” he said. “Unless you write some code. But even then it’s not really possible.”

Under a time crunch, Lavingia said he finished the first version of his contract-munching tool on his second day on the job—using AI to help write the code for him. He told ProPublica he then spent his first week downloading VA contracts to his laptop and analyzing them.

VA Press Secretary Pete Kasperowicz lauded DOGE’s work on vetting contracts in a statement to ProPublica. “As far as we know, this sort of review has never been done before, but we are happy to set this commonsense precedent,” he said.

The VA is reviewing all of its 76,000 contracts to ensure each of them benefits veterans and is a good use of taxpayer money, he said. Decisions to cancel or reduce the size of contracts are made after multiple reviews by VA employees, including agency contracting experts and senior staff, he wrote.

Kasperowicz said that the VA will not cancel contracts for work that provides services to veterans or that the agency cannot do itself without a contingency plan in place. He added that contracts that are “wasteful, duplicative, or involve services VA has the ability to perform itself” will typically be terminated.

Trump officials have said they are working toward a “goal” of cutting around 80,000 people from the VA’s workforce of nearly 500,000. Most employees work in one of the VA’s 170 hospitals and nearly 1,200 clinics.

The VA has said it would avoid cutting contracts that directly impact care out of fear that it would cause harm to veterans. ProPublica recently reported that relatively small cuts at the agency have already been jeopardizing veterans’ care.

The VA has not explained how it plans to simultaneously move services in-house, as Lavingia’s code suggested was the plan, while also slashing staff.

Many inside the VA told ProPublica the process for reviewing contracts was so opaque they couldn’t even see who made the ultimate decisions to kill specific contracts. Once the “munching” script had selected a list of contracts, Lavingia said he would pass it off to others who would decide what to cancel and what to keep. No contracts, he said, were terminated “without human review.”

“I just delivered the [list of contracts] to the VA employees,” he said. “I basically put munchable at the top and then the others below.”

VA staffers told ProPublica that when DOGE identified contracts to be canceled early this year—before Lavingia was brought on—employees sometimes were given little time to justify retaining the service. One recalled being given just a few hours. The staffers asked not to be named because they feared losing their jobs for talking to reporters.

According to one internal email that predated Lavingia’s AI analysis, staff members had to respond in 255 characters or fewer—just shy of the 280-character limit on Musk’s X social media platform.

Once he started on DOGE’s contract analysis, Lavingia said he was confronted with technological limitations. At least some of the errors produced by his code can be traced to using older versions of OpenAI models available through the VA—models not capable of solving complex tasks, according to the experts consulted by ProPublica.

Moreover, the tool’s underlying instructions were deeply flawed. Records show Lavingia programmed the AI system to make intricate judgments based on the first few pages of each contract—about the first 2,500 words—which contain only sparse summary information.
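The consequence of that cutoff can be illustrated with a minimal sketch (the function name and exact mechanics here are assumptions based on the description above, not Lavingia’s actual code): judging a long contract only by its opening pages necessarily discards anything that appears after the cutoff, including clauses that would make a contract obviously essential.

```python
def contract_excerpt(full_text: str, max_words: int = 2500) -> str:
    """Keep roughly the first 2,500 words of a contract -- a stand-in
    for the reported approach of showing the model only the opening pages."""
    return " ".join(full_text.split()[:max_words])


# A clause buried deep in a long contract never reaches the model.
body = "boilerplate " * 3000
contract = body + "This contract is mandated by federal law."
excerpt = contract_excerpt(contract)
print("mandated by federal law" in excerpt)  # False
```

Anything the model is asked to judge—cancellation risk, legal necessity, mission relevance—is decided without ever seeing the truncated remainder.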

“AI is absolutely the wrong tool for this,” said Waldo Jaquith, a former Obama appointee who oversaw IT contracting at the Treasury Department. “AI gives convincing looking answers that are frequently wrong. There needs to be humans whose job it is to do this work.”

Lavingia’s prompts did not include context about how the VA operates, what contracts are essential, or which ones are required by federal law. This led AI to determine a core piece of the agency’s own contract procurement system was “munchable.”

At the core of Lavingia’s prompt is the direction to spare contracts involved in “direct patient care.”

Such an approach, experts said, doesn’t grapple with the reality that the work done by doctors and nurses to care for veterans in hospitals is only possible with significant support around them.

Lavingia’s system also used AI to extract details like the contract number and “total contract value.” This led to avoidable errors, where AI returned the wrong dollar value when multiple were found in a contract. Experts said the correct information was readily available from public databases.

Lavingia acknowledged that errors resulted from this approach but said those errors were later corrected by VA staff.

In late March, Lavingia published a version of the “munchable” script on his GitHub account to invite others to use and improve it, he told ProPublica. “It would have been cool if the entire federal government used this script and anyone in the public could see that this is how the VA is thinking about cutting contracts.”

According to a post on his blog, this was done with the approval of Musk before he left DOGE. “When he asked the room about improving DOGE’s public perception, I asked if I could open-source the code I’d been writing,” Lavingia said. “He said yes—it aligned with DOGE’s goal of maximum transparency.”

That openness may have eventually led to Lavingia’s dismissal. Lavingia confirmed he was terminated from DOGE after giving an interview to Fast Company magazine about his work with the department. A VA spokesperson declined to comment on Lavingia’s dismissal.

VA officials have declined to say whether they will continue to use the “munchable” tool moving forward. But the administration may deploy AI to help the agency replace employees. Documents previously obtained by ProPublica show DOGE officials proposed in March consolidating the benefits claims department by relying more on AI.

And the government’s contractors are paying attention. After Lavingia posted his code, he said he heard from people trying to understand how to keep the money flowing.

“I got a couple DMs from VA contractors who had questions when they saw this code,” he said. “They were trying to make sure that their contracts don’t get cut. Or learn why they got cut.

“At the end of the day, humans are the ones terminating the contracts, but it is helpful for them to see how DOGE or Trump or the agency heads are thinking about what contracts they are going to munch. Transparency is a good thing.”

If you have any information about the misuse or abuse of AI within government agencies, Brandon Roberts is an investigative journalist on the news applications team and has a wealth of experience using and dissecting artificial intelligence. He can be reached on Signal @brandonrobertz.01 or by email [email protected].

If you have information about the VA that we should know about, contact reporter Vernal Coleman on Signal, vcoleman91.99, or via email, [email protected], and Eric Umansky on Signal, Ericumansky.04, or via email, [email protected].

This story originally appeared on ProPublica.org.

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for The Big Story newsletter to receive stories like this one in your inbox.


“In 10 years, all bets are off”—Anthropic CEO opposes decadelong freeze on state AI laws

On Thursday, Anthropic CEO Dario Amodei argued against a proposed 10-year moratorium on state AI regulation in a New York Times opinion piece, calling the measure shortsighted and overbroad as Congress considers including it in President Trump’s tax policy bill. Anthropic makes Claude, an AI assistant similar to ChatGPT.

Amodei warned that AI is advancing too fast for such a long freeze, predicting these systems “could change the world, fundamentally, within two years; in 10 years, all bets are off.”

As we covered in May, the moratorium would prevent states from regulating AI for a decade. A bipartisan group of state attorneys general has opposed the measure, which would preempt AI laws and regulations recently passed in dozens of states.

In his op-ed piece, Amodei said the proposed moratorium aims to prevent inconsistent state laws that could burden companies or compromise America’s competitive position against China. “I am sympathetic to these concerns,” Amodei wrote. “But a 10-year moratorium is far too blunt an instrument. A.I. is advancing too head-spinningly fast.”

Instead of a blanket moratorium, Amodei proposed that the White House and Congress create a federal transparency standard requiring frontier AI developers to publicly disclose their testing policies and safety measures. Under this framework, companies working on the most capable AI models would need to publish on their websites how they test for various risks and what steps they take before release.

“Without a clear plan for a federal response, a moratorium would give us the worst of both worlds—no ability for states to act and no national policy as a backstop,” Amodei wrote.

Transparency as the middle ground

Amodei emphasized AI’s transformative potential throughout his op-ed, citing examples of pharmaceutical companies drafting clinical study reports in minutes instead of weeks and AI helping to diagnose medical conditions that might otherwise be missed. He wrote that AI “could accelerate economic growth to an extent not seen for a century, improving everyone’s quality of life,” a claim that some skeptics believe may be overhyped.


FCC Republican resigns, leaving agency with just two commissioners

Two commissioners of the Federal Communications Commission are resigning at the end of this week. For at least a little while, the FCC will have just two members: Chairman Brendan Carr, a Republican chosen by Trump to lead the agency, and Anna Gomez, a Democratic commissioner.

Democrat Geoffrey Starks announced in March that he would leave in the near future, and today he said that Friday will be his final day. Starks’ departure could have given Carr a 2-1 Republican majority, but it turns out Republican Commissioner Nathan Simington will leave at the same time as Starks.

“I will be concluding my tenure at the Federal Communications Commission at the end of this week,” Simington announced today. “It has been the greatest honor of my professional life to serve the American people as a Commissioner. I am deeply honored to have been entrusted with this responsibility by President Donald J. Trump during his first term.”

Bloomberg reported in March that Simington “has also wanted to depart to take on different work,” but he didn’t announce his resignation until today. While the Carr FCC is going from a 2-2 partisan split to a 1-1 split, Carr isn’t likely to have to wait as long for a majority as his predecessor did.


OpenAI slams court order to save all ChatGPT logs, including deleted chats


OpenAI defends privacy of hundreds of millions of ChatGPT users.

OpenAI is now fighting a court order to preserve all ChatGPT user logs—including deleted chats and sensitive chats logged through its API business offering—after news organizations suing over copyright claims accused the AI company of destroying evidence.

“Before OpenAI had an opportunity to respond to those unfounded accusations, the court ordered OpenAI to ‘preserve and segregate all output log data that would otherwise be deleted on a going forward basis until further order of the Court (in essence, the output log data that OpenAI has been destroying),’” OpenAI explained in a court filing demanding oral arguments in a bid to block the controversial order.

In the filing, OpenAI alleged that the court rushed the order based only on a hunch raised by The New York Times and other news plaintiffs. And now, without “any just cause,” OpenAI argued, the order “continues to prevent OpenAI from respecting its users’ privacy decisions.” That risk extended to users of ChatGPT Free, Plus, and Pro, as well as users of OpenAI’s application programming interface (API), OpenAI said.

The court order came after news organizations expressed concern that people using ChatGPT to skirt paywalls “might be more likely to ‘delete all [their] searches’ to cover their tracks,” OpenAI explained. Evidence to support that claim, news plaintiffs argued, was missing from the record because so far, OpenAI had only shared samples of chat logs that users had agreed that the company could retain. Sharing the news plaintiffs’ concerns, the judge, Ona Wang, ultimately agreed that OpenAI likely would never stop deleting that alleged evidence absent a court order, granting news plaintiffs’ request to preserve all chats.

OpenAI argued the May 13 order was premature and should be vacated, until, “at a minimum,” news organizations can establish a substantial need for OpenAI to preserve all chat logs. They warned that the privacy of hundreds of millions of ChatGPT users globally is at risk every day that the “sweeping, unprecedented” order continues to be enforced.

“As a result, OpenAI is forced to jettison its commitment to allow users to control when and how their ChatGPT conversation data is used, and whether it is retained,” OpenAI argued.

Meanwhile, there is no evidence beyond speculation yet supporting claims that “OpenAI had intentionally deleted data,” OpenAI alleged. And supposedly there is not “a single piece of evidence supporting” claims that copyright-infringing ChatGPT users are more likely to delete their chats.

“OpenAI did not ‘destroy’ any data, and certainly did not delete any data in response to litigation events,” OpenAI argued. “The Order appears to have incorrectly assumed the contrary.”

At a conference in January, Wang raised a hypothetical in line with her thinking on the subsequent order. She asked OpenAI’s legal team to consider a ChatGPT user who “found some way to get around the pay wall” and “was getting The New York Times content somehow as the output.” If that user “then hears about this case and says, ‘Oh, whoa, you know I’m going to ask them to delete all of my searches and not retain any of my searches going forward,'” the judge asked, wouldn’t that be “directly the problem” that the order would address?

OpenAI does not plan to give up this fight, alleging that news plaintiffs have “fallen silent” on claims of intentional evidence destruction, and the order should be deemed unlawful.

For OpenAI, risks of breaching its own privacy agreements could not only “damage” relationships with users but could also risk putting the company in breach of contracts and global privacy regulations. Further, the order imposes “significant” burdens on OpenAI, supposedly forcing the ChatGPT maker to dedicate months of engineering hours at substantial costs to comply, OpenAI claimed. It follows then that OpenAI’s potential for harm “far outweighs News Plaintiffs’ speculative need for such data,” OpenAI argued.

“While OpenAI appreciates the court’s efforts to manage discovery in this complex set of cases, it has no choice but to protect the interests of its users by objecting to the Preservation Order and requesting its immediate vacatur,” OpenAI said.

Users panicked over sweeping order

Millions of people use ChatGPT daily, OpenAI noted, for purposes "ranging from the mundane to profoundly personal."

People may choose to delete chat logs that contain their private thoughts, OpenAI said, as well as sensitive information, like financial data from balancing the house budget or intimate details from workshopping wedding vows. And for business users connecting to OpenAI’s API, the stakes may be even higher, as their logs may contain their companies’ most confidential data, including trade secrets and privileged business information.

“Given that array of highly confidential and personal use cases, OpenAI goes to great lengths to protect its users’ data and privacy,” OpenAI argued.

It does this partly by “honoring its privacy policies and contractual commitments to users”—which the preservation order allegedly “jettisoned” in “one fell swoop.”

Before the order took effect in mid-May, OpenAI only retained "chat history" for users of ChatGPT Free, Plus, and Pro who did not opt out of data retention. But now, OpenAI has been forced to preserve chat history even when users "elect to not retain particular conversations by manually deleting specific conversations or by starting a ‘Temporary Chat,’ which disappears once closed," OpenAI said. Previously, users could also request to "delete their OpenAI accounts entirely, including all prior conversation history," which was then purged within 30 days.

While OpenAI rejects claims that ordinary users use ChatGPT to access news articles, the company noted that including OpenAI’s business customers in the order made "even less sense," since API conversation data "is subject to standard retention policies." Because that data already follows fixed retention schedules, API customers could not selectively delete chats to cover their tracks, which was the supposed basis for requiring OpenAI to retain sensitive data.

“The court nevertheless required OpenAI to continue preserving API Conversation Data as well,” OpenAI argued, in support of lifting the order on the API chat logs.

Users who found out about the preservation order panicked, OpenAI noted. In court filings, OpenAI cited social media posts sounding alarms on LinkedIn and X (formerly Twitter). The company further argued that the court should have weighed those user concerns before issuing a preservation order, but "that did not happen here."

One tech worker on LinkedIn suggested the order created “a serious breach of contract for every company that uses OpenAI,” while privacy advocates on X warned, “every single AI service ‘powered by’ OpenAI should be concerned.”

Also on LinkedIn, a consultant rushed to warn clients to be “extra careful” sharing sensitive data “with ChatGPT or through OpenAI’s API for now,” warning, “your outputs could eventually be read by others, even if you opted out of training data sharing or used ‘temporary chat’!”

People on both platforms recommended using alternative tools to avoid privacy concerns, like Mistral AI or Google Gemini, with one cybersecurity professional on LinkedIn describing the ordered chat log retention as “an unacceptable security risk.”

On X, an account with tens of thousands of followers summed up the controversy by suggesting that “Wang apparently thinks the NY Times’ boomer copyright concerns trump the privacy of EVERY @OpenAI USER—insane!!!”

The reason for the alarm is “simple,” OpenAI said. “Users feel more free to use ChatGPT when they know that they are in control of their personal information, including which conversations are retained and which are not.”

It’s unclear if OpenAI will be able to get the judge to waver if oral arguments are scheduled.

Wang previously justified the broad order partly due to the news organizations’ claim that “the volume of deleted conversations is significant.” She suggested that OpenAI could have taken steps to anonymize the chat logs but chose not to, only making an argument for why it “would not” be able to segregate data, rather than explaining why it “can’t.”

Spokespersons for OpenAI and The New York Times’ legal team declined Ars’ request to comment on the ongoing multi-district litigation.


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

OpenAI slams court order to save all ChatGPT logs, including deleted chats Read More »

florida-ban-on-kids-using-social-media-likely-unconstitutional,-judge-rules

Florida ban on kids using social media likely unconstitutional, judge rules

A federal judge ruled today that Florida cannot enforce a law that requires social media platforms to block kids from using their platforms. The state law “is likely unconstitutional,” US Judge Mark Walker of the Northern District of Florida ruled while granting the tech industry’s request for a preliminary injunction.

The Florida law “prohibits some social media platforms from allowing youth in the state who are under the age of 14 to create or hold an account on their platforms, and similarly prohibits allowing youth who are 14 or 15 to create or hold an account unless a parent or guardian provides affirmative consent for them to do so,” Walker wrote.

The law is subject to intermediate scrutiny under the First Amendment, meaning it must be “narrowly tailored to serve a significant governmental interest,” must “leave open ample alternative channels for communication,” and must not “burden substantially more speech than is necessary to further the government’s legitimate interests,” the ruling said.

Florida claimed its law is designed to prevent harm to youth and is narrowly tailored because it targets sites that use specific features that have been deemed to be addictive. But the law applies too broadly, Walker found:

Even assuming the significance of the State’s interest in limiting the exposure of youth to websites with “addictive features,” the law’s restrictions are an extraordinarily blunt instrument for furthering it. As applied to Plaintiffs’ members alone, the law likely bans all youth under 14 from holding accounts on, at a minimum, four websites that provide forums for all manner of protected speech: Facebook, Instagram, YouTube, and Snapchat. It also bans 14- and 15-year-olds from holding accounts on those four websites absent a parent’s affirmative consent, a requirement that the Supreme Court has clearly explained the First Amendment does not countenance.

Walker said the Florida “law applies to any social media site that employs any one of the five addictive features under any circumstances, even if, for example, the site only sends push notifications if users opt in to receiving them, or the site does not auto-play video for account holders who are known to be youth. Accordingly, even if a social media platform created youth accounts for which none of the purportedly ‘addictive’ features are available, it would still be barred from allowing youth to hold those accounts if the features were available to adult account holders.”

Florida ban on kids using social media likely unconstitutional, judge rules Read More »

trump-is-forcing-states-to-funnel-grant-money-to-starlink,-senate-democrats-say

Trump is forcing states to funnel grant money to Starlink, Senate Democrats say

Lutnick’s announcement of the BEAD overhaul also criticized what he called the program’s “woke mandates” and “burdensome regulations.” Republicans like Sen. Ted Cruz (R-Texas) have criticized a requirement for ISPs that accept subsidies to offer low-cost Internet plans to people with low incomes, though the low-cost rule was originally imposed by Congress in the law that created the BEAD program.

Letter: Projects could be delayed two years

Although Musk last week announced his departure from the government and criticized a Trump spending bill for allegedly “undermining” DOGE’s cost-cutting work, Trump still seems favorably inclined toward Starlink. Trump said in a press conference on Friday that with Starlink, Musk “saved a lot of lives, probably hundreds of lives in North Carolina,” referring to Starlink offering emergency connectivity after Hurricane Helene.

Democrats’ letter to Trump and Lutnick said that fiber and other terrestrial broadband technologies will be better than satellite both for residential connectivity and business networks that support US-based manufacturing.

“Data centers, smart warehouses, robotic assembly lines, and chip fabrication plants all depend on fast, stable, and scalable bandwidth. If we want these job-creating facilities built throughout the United States, including rural areas… we must act now—and we must build the high-speed, high-capacity networks those technologies demand,” the letter said.

Democrats also said the Trump administration’s rewrite of program rules could delay projects by two years.

“For six months, states have been waiting to break ground on scores of projects, held back only by the Commerce Department’s bureaucratic delays,” the letter said. “If states are forced to redo or rework their plans, they will not only miss this year’s construction season but next year’s as well, delaying broadband deployment by years. That’s why we urge the Administration to move swiftly to approve state plans, and release the $42 billion allocated to the states by the BEAD Program.”

Separately from BEAD, Trump said last month that he is killing a $2.75 billion broadband grant program authorized by Congress. The Digital Equity Act of 2021 allows for several types of grants benefiting low-income households, people who are at least 60 years old, people incarcerated in state or local prisons and jails, veterans, people with disabilities, people with language barriers, people who live in rural areas, and people who are members of a racial or ethnic minority group. Trump called the program "racist and illegal," saying his administration would stop distributing Digital Equity Act grants.

Trump is forcing states to funnel grant money to Starlink, Senate Democrats say Read More »

after-supreme-court-loss,-isps-ask-trump-admin-to-block-state-affordability-laws

After Supreme Court loss, ISPs ask Trump admin to block state affordability laws

A California bill would require $15 plans with download speeds of 100Mbps and upload speeds of 20Mbps. The broadband lobby groups’ filing said ISPs are also worried about “unnecessary anticompetitive regulations” proposed in Connecticut, Hawaii, Maine, Maryland, Massachusetts, Minnesota, Pennsylvania, Rhode Island, Vermont, and West Virginia.

Not all the pending state bills are specifically about prices charged to low-income users. Some would impose net neutrality requirements or classify ISPs as utilities, the filing said.

Preempting state laws while simultaneously avoiding federal regulation has been a long-held dream for the broadband industry. During the first Trump administration, then-FCC Chairman Ajit Pai led a vote to eliminate the FCC’s net neutrality rules and preempt all 50 states from passing their own net neutrality laws. But the FCC’s broad preemption attempt failed in court.

When challenging the New York affordability mandate, ISPs claimed the state law was preempted by the Pai FCC’s decision to deregulate broadband. But this argument failed for the same reason that Pai’s earlier preemption attempt failed—the FCC decision to deregulate removed the FCC’s strongest regulatory authority over broadband, and courts have ruled that the FCC cannot preempt state laws in an area that it is not regulating.

The Pai FCC’s “order stripped the agency of its authority to regulate the rates charged for broadband Internet, and a federal agency cannot exclude states from regulating in an area where the agency itself lacks regulatory authority,” the US Court of Appeals for the 2nd Circuit said in the ruling that upheld New York’s law last year.

ISPs keep making same argument

ISPs still aren’t giving up on the argument, as they hope a court might someday rule differently. The lobby groups’ filing reminded the Department of Justice that during the first Trump administration, the government brought a preemption suit against California’s net neutrality law.

“The Department unfortunately dropped out of the litigation after the change in Administration,” the filing said. “But as then-Attorney General [Jeff] Sessions explained in initiating the action, California had ‘enacted an extreme and illegal state law attempting to frustrate [the] federal policy’ of an unfettered market for broadband, and the Justice Department had a ‘duty to defend the prerogatives of the federal government and protect our Constitutional order.'”

After Supreme Court loss, ISPs ask Trump admin to block state affordability laws Read More »

google-maps-can’t-explain-why-it-falsely-labeled-german-autobahns-as-closed

Google Maps can’t explain why it falsely labeled German autobahns as closed

Ars contacted Google to see if the glitch’s cause has been uncovered. A spokesperson remained vague, only reiterating prior statements that Google “investigated a technical issue that temporarily showed inaccurate road closures on the map” and has “since removed them.”

Apparently, Google only learned of the glitch after users who braved the supposedly closed roads started reporting the errors, prompting Google to remove incorrect stop signs one by one. Engadget reported that the glitch only lasted a couple of hours.

Google’s spokesperson told German media that the company wouldn’t comment on the specific case but noted that Google Maps draws information from three key sources: individual users, public sources (like transportation authorities), and third-party providers.

It wasn’t the first time German drivers had encountered phantom roadblocks on Google Maps. Earlier this month, Google Maps "incorrectly displayed motorway tunnels" as closed in another part of Germany, MSN reported. Now, drivers in the area are being advised to check multiple traffic news sources before making travel plans.

While Google has yet to confirm what actually happened, one regional publication noted that the German Automobile Club, Europe’s largest automobile association, had warned that there may be heavy traffic due to the holiday. Google’s glitch may have been tied to traffic forecasts rather than current traffic reports. Google also recently added artificial intelligence features to Google Maps, which could have hallucinated the false traffic jams.

This story was updated on May 30 to include comment from Google.

Google Maps can’t explain why it falsely labeled German autobahns as closed Read More »

texas-ag-loses-appeal-to-seize-evidence-for-elon-musk’s-ad-boycott-fight

Texas AG loses appeal to seize evidence for Elon Musk’s ad boycott fight

If MMFA is made to endure Paxton’s probe, the media company could face civil penalties of up to $10,000 per violation of Texas’ unfair trade law, a fine or confinement if requested evidence was deleted, or other penalties for resisting sharing information. However, Edwards agreed that even the threat of the probe apparently had “adverse effects” on MMFA. Reviewing evidence, including reporters’ sworn affidavits, Edwards found that MMFA’s reporting on X was seemingly chilled by Paxton’s threat. MMFA also provided evidence that research partners had ended collaborations due to the looming probe.

Importantly, Paxton never contested claims that he retaliated against MMFA, instead seemingly hoping to dodge the lawsuit on technicalities by disputing jurisdiction and venue selection. But Edwards said that MMFA “clearly” has standing, as “they are the targeted victims of a campaign of retaliation” that is “ongoing.”

The problem with Paxton’s argument is that it "ignores the body of law that prohibits government officials from subjecting individuals to retaliatory actions for exercising their rights of free speech," Edwards wrote, suggesting that Paxton arguably launched a "bad-faith" probe.

Further, Edwards called out the “irony” of Paxton “readily” acknowledging in other litigation “that a state’s attempt to silence a company through the issuance and threat of compelling a response” to a civil investigative demand “harms everyone.”

With the preliminary injunction won, MMFA can move forward with its lawsuit after defeating Paxton’s motion to dismiss. In her concurring opinion, Circuit Judge Karen L. Henderson noted that MMFA may need to show more evidence that partners have ended collaborations over the probe (and not for other reasons) to ultimately clinch the win against Paxton.

Watchdog celebrates court win

In a statement provided to Ars, MMFA President and CEO Angelo Carusone celebrated the decision as a “victory for free speech.”

“Elon Musk encouraged Republican state attorneys general to use their power to harass their critics and stifle reporting about X,” Carusone said. “Ken Paxton was one of those AGs who took up the call, and his attempt to use his office as an instrument for Musk’s censorship crusade has been defeated.”

MMFA continues to fight against X over the same claims—as well as a recently launched Federal Trade Commission probe—but Carusone said the media company is "buoyed that yet another court has seen through the fog of Musk’s ‘thermonuclear’ legal onslaught and recognized it for the meritless attack to silence a critic that it is."

Paxton’s office did not immediately respond to Ars’ request to comment.

Texas AG loses appeal to seize evidence for Elon Musk’s ad boycott fight Read More »