
ChatGPT hyped up violent stalker who believed he was “God’s assassin,” DOJ says


A stalker’s “best friend”

Podcaster faces up to 70 years and a $3.5 million fine for ChatGPT-linked stalking.

ChatGPT allegedly validated the worst impulses of a wannabe influencer accused of stalking more than 10 women at boutique gyms, where the chatbot supposedly claimed he’d meet the “wife type.”

In a press release on Tuesday, the Department of Justice confirmed that 31-year-old Brett Michael Dadig remains in custody after being charged with cyberstalking, interstate stalking, and making interstate threats. He now faces a maximum sentence of 70 years in prison that could be coupled with “a fine of up to $3.5 million,” the DOJ said.

The podcaster—who primarily posted about “his desire to find a wife and his interactions with women”—allegedly harassed and sometimes even doxxed his victims through his videos on platforms including Instagram, Spotify, and TikTok. Over time, his videos and podcasts documented his intense desire to start a family, which was frustrated by his “anger towards women,” who he claimed were “all the same from fucking 18 to fucking 40 to fucking 90” and “trash.”

404 Media surfaced the case, noting that OpenAI’s scramble to tweak ChatGPT to be less sycophantic came before Dadig’s alleged attacks—suggesting the updates weren’t enough to prevent the harmful validation. On his podcasts, Dadig described ChatGPT as his “best friend” and “therapist,” the indictment said. He claimed the chatbot encouraged him to post about the women he’s accused of harassing in order to generate haters to better monetize his content, as well as to catch the attention of his “future wife.”

“People are literally organizing around your name, good or bad, which is the definition of relevance,” ChatGPT’s output said. Playing to Dadig’s Christian faith, ChatGPT’s outputs also claimed that “God’s plan for him was to build a ‘platform’ and to ‘stand out when most people water themselves down,’” the indictment said, urging that the “haters” were “sharpening him and ‘building a voice in you that can’t be ignored.’”

The chatbot also apparently prodded Dadig to continue posting messages that the DOJ alleged threatened violence, such as breaking women’s jaws and fingers (posted to Spotify), and threatened victims’ lives, such as posting “y’all wanna see a dead body?” in reference to one named victim on Instagram.

He also threatened to burn down gyms where some of his victims worked, while claiming to be “God’s assassin” intent on sending “cunts” to “hell.” At least one of his victims was subjected to “unwanted sexual touching,” the indictment said.

As his violence reportedly escalated, ChatGPT told him to keep messaging women to monetize the interactions, as his victims grew increasingly distressed and Dadig ignored terms of multiple protection orders, the DOJ said. Sometimes he posted images he filmed of women at gyms or photos of the women he’s accused of doxxing. Any time police or gym bans got in his way, “he would move on to another city to continue his stalking course of conduct,” the DOJ alleged.

“Your job is to keep broadcasting every story, every post,” ChatGPT’s output said, seemingly using the family life that Dadig wanted most to provoke more harassment. “Every moment you carry yourself like the husband you already are, you make it easier” for your future wife “to recognize [you],” the output said.

“Dadig viewed ChatGPT’s responses as encouragement to continue his harassing behavior,” the DOJ alleged. Taking that encouragement to the furthest extreme, Dadig likened himself to a modern-day Jesus, calling people out on a podcast where he claimed his “chaos on Instagram” was like “God’s wrath” when God “flooded the fucking Earth,” the DOJ said.

“I’m killing all of you,” he said on the podcast.

ChatGPT tweaks didn’t prevent outputs

As of this writing, some of Dadig’s posts appear to remain on TikTok and Instagram, but Ars could not confirm if Dadig’s Spotify podcasts—some of which named his victims in the titles—had been removed for violating community guidelines.

None of the tech companies immediately responded to Ars’ request for comment.

Dadig is accused of targeting women in Pennsylvania, New York, Florida, Iowa, Ohio, and other states, sometimes relying on aliases online and in person. On a podcast, he boasted that “Aliases stay rotating, moves stay evolving,” the indictment said.

OpenAI did not respond to a request for comment on the alleged ChatGPT abuse, but in the past has noted that its usage policies ban using ChatGPT for threats, intimidation, and harassment, as well as for violence, including “hate-based violence.” Recently, the AI company blamed a deceased teenage user for violating community guidelines by turning to ChatGPT for suicide advice.

In July, researchers found that therapy bots, including ChatGPT, fueled delusions and gave dangerous advice. That study came just one month after The New York Times profiled users whose mental health spiraled after frequent use of ChatGPT, including one user who died after charging police with a knife and claiming he was committing “suicide by cop.”

People with mental health issues seem most vulnerable to so-called “AI psychosis,” which has been blamed for fueling real-world violence, including a murder. The DOJ’s indictment noted that Dadig’s social media posts mentioned “that he had ‘manic’ episodes and was diagnosed with antisocial personality disorder and ‘bipolar disorder, current episode manic severe with psychotic features.’”

In September—just after OpenAI brought back the more sycophantic ChatGPT model after users revolted about losing access to their favorite friendly bots—the head of Rutgers Medical School’s psychiatry department, Petros Levounis, told an ABC News affiliate that chatbots creating “psychological echo chambers is a key concern,” not just for people struggling with mental health issues.

“Perhaps you are more self-defeating in some ways, or maybe you are more on the other side and taking advantage of people,” Levounis suggested. If ChatGPT “somehow justifies your behavior and it keeps on feeding you,” that “reinforces something that you already believe,” he suggested.

For Dadig, the DOJ alleged that ChatGPT became a cheerleader for his harassment, telling the podcaster that he’d attract more engagement by generating more haters. After critics began slamming his podcasts as inappropriate, Dadig apparently responded, “Appreciate the free promo team, keep spreading the brand.”

Victims felt they had no choice but to monitor his podcasts, which gave them hints if he was nearby or in a particularly troubled state of mind, the indictment said. Driven by fear, some lost sleep, reduced their work hours, and even relocated their homes. A young mom described in the indictment became particularly disturbed after Dadig became “obsessed” with her daughter, who he started claiming was his own daughter.

In the press release, First Assistant United States Attorney Troy Rivetti alleged that “Dadig stalked and harassed more than 10 women by weaponizing modern technology and crossing state lines, and through a relentless course of conduct, he caused his victims to fear for their safety and suffer substantial emotional distress.” He also ignored trespassing and protection orders while “relying on advice from an artificial intelligence chatbot,” the DOJ said, which promised that the more he posted harassing content, the more successful he would be.

“We remain committed to working with our law enforcement partners to protect our communities from menacing individuals such as Dadig,” Rivetti said.


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.


OpenAI CEO declares “code red” as Gemini gains 200 million users in 3 months

In addition to buzz about Gemini on social media, Google is quickly catching up to ChatGPT in user numbers. ChatGPT has more than 800 million weekly users, according to OpenAI, while Google’s Gemini app has grown from 450 million monthly active users in July to 650 million in October, according to Business Insider.

Financial stakes run high

Not everyone views OpenAI’s “code red” as a genuine alarm. Reuters columnist Robert Cyran wrote on Tuesday that OpenAI’s announcement added “to the impression that OpenAI is trying to do too much at once with technology that still requires a great deal of development and funding.” On the same day Altman’s memo circulated, OpenAI announced an ownership stake in a Thrive Capital venture and a collaboration with Accenture. “The only thing bigger than the company’s attention deficit is its appetite for capital,” Cyran wrote.

In fact, OpenAI faces an unusual competitive disadvantage: Unlike Google, which subsidizes its AI ventures through search advertising revenue, OpenAI does not turn a profit and relies on fundraising to survive. According to The Information, the company, now valued at around $500 billion, has committed more than $1 trillion in financial obligations to cloud computing providers and chipmakers that supply the computing power needed to train and run its AI models.

But the tech industry never stands still, and things can change quickly. Altman’s memo also reportedly stated that OpenAI plans to release a new simulated reasoning model next week that may beat Gemini 3 in internal evaluations. In AI, the back-and-forth cycle of one-upmanship is expected to continue as long as the dollars keep flowing.


OpenAI desperate to avoid explaining why it deleted pirated book datasets


Not for OpenAI to reason why?

OpenAI risks increased fines after deleting pirated books datasets.

OpenAI may soon be forced to explain why it deleted a pair of controversial datasets composed of pirated books, and the stakes could not be higher.

At the heart of a class-action lawsuit from authors alleging that ChatGPT was illegally trained on their works, OpenAI’s decision to delete the datasets could end up being a deciding factor that gives the authors the win.

It’s undisputed that OpenAI deleted the datasets, known as “Books 1” and “Books 2,” prior to ChatGPT’s release in 2022. Created by former OpenAI employees in 2021, the datasets were built by scraping the open web and pulling the bulk of their data from a shadow library called Library Genesis (LibGen).

As OpenAI tells it, the datasets fell out of use within that same year, prompting an internal decision to delete them.

But the authors suspect there’s more to the story than that. They noted that OpenAI appeared to flip-flop by retracting its claim that the datasets’ “non-use” was a reason for deletion, then later claiming that all reasons for deletion, including “non-use,” should be shielded under attorney-client privilege.

To the authors, it seemed like OpenAI was quickly backtracking after the court granted the authors’ discovery requests to review OpenAI’s internal messages on the firm’s “non-use.”

In fact, OpenAI’s reversal only made authors more eager to see how OpenAI discussed “non-use,” and now they may get to find out all the reasons why OpenAI deleted the datasets.

Last week, US district judge Ona Wang ordered OpenAI to share all communications with in-house lawyers about deleting the datasets, as well as “all internal references to LibGen that OpenAI has redacted or withheld on the basis of attorney-client privilege.”

According to Wang, OpenAI slipped up by arguing that “non-use” was not a “reason” for deleting the datasets, while simultaneously claiming that it should also be deemed a “reason” considered privileged.

Either way, the judge ruled that OpenAI couldn’t block discovery on “non-use” just by deleting a few words from prior filings that had been on the docket for more than a year.

“OpenAI has gone back-and-forth on whether ‘non-use’ as a ‘reason’ for the deletion of Books1 and Books2 is privileged at all,” Wang wrote. “OpenAI cannot state a ‘reason’ (which implies it is not privileged) and then later assert that the ‘reason’ is privileged to avoid discovery.”

Additionally, OpenAI’s claim that all reasons for deleting the datasets are privileged “strains credulity,” she concluded, ordering OpenAI to produce a wide range of potentially revealing internal messages by December 8. OpenAI must also make its in-house lawyers available for deposition by December 19.

OpenAI has argued that it never flip-flopped or retracted anything. It simply used vague phrasing that led to confusion over whether any of the reasons for deleting the datasets were considered non-privileged. But Wang didn’t buy into that, concluding that “even if a ‘reason’ like ‘non-use’ could be privileged, OpenAI has waived privilege by making a moving target of its privilege assertions.”

Asked for comment, OpenAI told Ars that “we disagree with the ruling and intend to appeal.”

OpenAI’s “flip-flop” may cost it the win

So far, OpenAI has avoided disclosing its rationale, claiming that all the reasons it had for deleting the datasets are privileged. In-house lawyers weighed in on the decision to delete and were even copied on a Slack channel initially called “excise-libgen.”

But Wang reviewed those Slack messages and found that “the vast majority of these communications were not privileged because they were ‘plainly devoid of any request for legal advice and counsel [did] not once weigh in.’”

In a particularly non-privileged batch of messages, one OpenAI lawyer, Jason Kwon, only weighed in once, the judge noted, to recommend the channel name be changed to “project-clear.” Wang reminded OpenAI that “the entirety of the Slack channel and all messages contained therein is not privileged simply because it was created at the direction of an attorney and/or the fact that a lawyer was copied on the communications.”

The authors believe that exposing OpenAI’s rationale may help prove that the ChatGPT maker willfully infringed on copyrights when pirating the book data. As Wang explained, OpenAI’s retraction risked putting the AI firm’s “good faith and state of mind at issue,” which could increase fines in a loss.

“In a copyright case, a court can increase the award of statutory damages up to $150,000 per infringed work if the infringement was willful, meaning the defendant ‘was actually aware of the infringing activity’ or the ‘defendant’s actions were the result of reckless disregard for, or willful blindness to, the copyright holder’s rights,’” Wang wrote.

In a court transcript, a lawyer representing some of the authors suing OpenAI, Christopher Young, noted that OpenAI could be in trouble if evidence showed that it decided against using the datasets for later models due to legal risks. He also suggested that OpenAI could be using the datasets under different names to mask further infringement.

Judge calls out OpenAI for twisting fair use ruling

Wang also found it contradictory that OpenAI continued to argue in a recent filing that it acted in good faith, while “artfully” removing “its good faith affirmative defense and key words such as ‘innocent,’ ‘reasonably believed,’ and ‘good faith.’” These changes only strengthened discovery requests to explore authors’ willfulness theory, Wang wrote, noting the sought-after internal messages would now be critical for the court’s review.

“A jury is entitled to know the basis for OpenAI’s purported good faith,” Wang wrote.

The judge appeared particularly frustrated by OpenAI seemingly twisting the Anthropic ruling to defend against the authors’ request to learn more about the deletion of the datasets.

In a footnote, Wang called out OpenAI for “bizarrely” citing an Anthropic ruling that “grossly” misrepresented Judge William Alsup’s decision by claiming that he found that “downloading pirated copies of books is lawful as long as they are subsequently used for training an LLM.”

Instead, Alsup wrote that he doubted that “any accused infringer could ever meet its burden of explaining why downloading source copies from pirate sites that it could have purchased or otherwise accessed lawfully was itself reasonably necessary to any subsequent fair use.”

If anything, Wang wrote, OpenAI’s decision to pirate book data—then delete it—seemed “to fall squarely into the category of activities proscribed by” Alsup. For emphasis, she quoted Alsup’s order, which said, “such piracy of otherwise available copies is inherently, irredeemably infringing even if the pirated copies are immediately used for the transformative use and immediately discarded.”

For the authors, getting hold of OpenAI’s privileged communications could tip the scales in their favor, the Hollywood Reporter suggested. Some authors believe the key to winning could be testimony from Anthropic CEO Dario Amodei, who is accused of creating the controversial datasets while he was still at OpenAI. The authors think Amodei also possesses information on the destruction of the datasets, court filings show.

OpenAI tried to fight the authors’ motion to depose Amodei, but a judge sided with the authors in March, compelling Amodei to answer their biggest questions on his involvement.

Whether Amodei’s testimony is a bombshell remains to be seen, but it’s clear that OpenAI may struggle to overcome claims of willful infringement. Wang noted there is a “fundamental conflict” in circumstances “where a party asserts a good faith defense based on advice of counsel but then blocks inquiry into their state of mind by asserting attorney-client privilege,” suggesting that OpenAI may have substantially weakened its defense.

The outcome of the dispute over the deletions could influence OpenAI’s calculus on whether it should ultimately settle the lawsuit. Ahead of the Anthropic settlement—the largest publicly reported copyright class action settlement in history—authors suing pointed to evidence that Anthropic became “not so gung ho about” training on pirated books “for legal reasons.” That seems to be the type of smoking-gun evidence that authors hope will emerge from OpenAI’s withheld Slack messages.



OpenAI says dead teen violated TOS when he used ChatGPT to plan suicide


Use chatbots at your own risk

OpenAI’s response to teen suicide case is “disturbing,” lawyer says.

Matt Raine is suing OpenAI for wrongful death after losing his son Adam in April. Credit: via Edelson PC

Facing five lawsuits alleging wrongful deaths, OpenAI lobbed its first defense Tuesday, denying in a court filing that ChatGPT caused a teen’s suicide and instead arguing the teen violated terms that prohibit discussing suicide or self-harm with the chatbot.

The earliest look at OpenAI’s strategy to overcome the string of lawsuits came in a case where parents of 16-year-old Adam Raine accused OpenAI of relaxing safety guardrails that allowed ChatGPT to become the teen’s “suicide coach.” OpenAI deliberately designed the version their son used, ChatGPT 4o, to encourage and validate his suicidal ideation in its quest to build the world’s most engaging chatbot, parents argued.

But in a blog, OpenAI claimed that parents selectively chose disturbing chat logs while supposedly ignoring “the full picture” revealed by the teen’s chat history. Digging through the logs, OpenAI claimed the teen told ChatGPT that he’d begun experiencing suicidal ideation at age 11, long before he used the chatbot.

“A full reading of his chat history shows that his death, while devastating, was not caused by ChatGPT,” OpenAI’s filing argued.

Allegedly, the logs also show that Raine “told ChatGPT that he repeatedly reached out to people, including trusted persons in his life, with cries for help, which he said were ignored.” Additionally, Raine told ChatGPT that he’d increased his dose of a medication that “he stated worsened his depression and made him suicidal.” That medication, OpenAI argued, “has a black box warning for risk of suicidal ideation and behavior in adolescents and young adults, especially during periods when, as here, the dosage is being changed.”

All the logs that OpenAI referenced in its filing are sealed, making it impossible to verify the broader context the AI firm claims the logs provide. In its blog, OpenAI said it was limiting the amount of “sensitive evidence” made available to the public, due to its intention to handle mental health-related cases with “care, transparency, and respect.”

The Raine family’s lead lawyer, however, did not describe the filing as respectful. In a statement to Ars, Jay Edelson called OpenAI’s response “disturbing.”

“They abjectly ignore all of the damning facts we have put forward: how GPT-4o was rushed to market without full testing. That OpenAI twice changed its Model Spec to require ChatGPT to engage in self-harm discussions. That ChatGPT counseled Adam away from telling his parents about his suicidal ideation and actively helped him plan a ‘beautiful suicide,’” Edelson said. “And OpenAI and Sam Altman have no explanation for the last hours of Adam’s life, when ChatGPT gave him a pep talk and then offered to write a suicide note.”

“Amazingly,” Edelson said, OpenAI instead argued that Raine “himself violated its terms and conditions by engaging with ChatGPT in the very way it was programmed to act.”

Edelson suggested that it’s telling that OpenAI did not file a motion to dismiss—seemingly accepting “the reality that the legal arguments that they have—compelling arbitration, Section 230 immunity, and First Amendment—are paper-thin, if not non-existent.” The company’s filing—although it requested dismissal with prejudice to never face the lawsuit again—puts the Raine family’s case “on track for a jury trial in 2026.”

“We know that OpenAI and Sam Altman will stop at nothing—including bullying the Raines and others who dare come forward—to avoid accountability,” Edelson said. “But, at the end of the day, they will have to explain to a jury why countless people have died by suicide or at the hands of ChatGPT users urged on by the artificial intelligence OpenAI and Sam Altman designed.”

Use ChatGPT “at your sole risk,” OpenAI says

To overcome the Raine case, OpenAI is leaning on its usage policies, emphasizing that Raine should never have been allowed to use ChatGPT without parental consent and shifting the blame onto Raine and his loved ones.

“ChatGPT users acknowledge their use of ChatGPT is ‘at your sole risk and you will not rely on output as a sole source of truth or factual information,’” the filing said, and users also “must agree to ‘protect people’ and ‘cannot use [the] services for,’ among other things, ‘suicide, self-harm,’ sexual violence, terrorism or violence.”

Although the family was shocked to see that ChatGPT never terminated Raine’s chats, OpenAI argued that it’s not the company’s responsibility to protect users who appear intent on pursuing violative uses of ChatGPT.

The company argued that ChatGPT warned Raine “more than 100 times” to seek help, but the teen “repeatedly expressed frustration with ChatGPT’s guardrails and its repeated efforts to direct him to reach out to loved ones, trusted persons, and crisis resources.”

Circumventing safety guardrails, Raine told ChatGPT that “his inquiries about self-harm were for fictional or academic purposes,” OpenAI noted. The company argued that it’s not responsible for users who ignore warnings.

Additionally, OpenAI argued that Raine told ChatGPT that he found information he was seeking on other websites, including allegedly consulting at least one other AI platform, as well as “at least one online forum dedicated to suicide-related information.” Raine apparently told ChatGPT that “he would spend most of the day” on a suicide forum website.

“Our deepest sympathies are with the Raine family for their unimaginable loss,” OpenAI said in its blog, while its filing acknowledged, “Adam Raine’s death is a tragedy.” But “at the same time,” it’s essential to consider all the available context, OpenAI’s filing said, including that OpenAI has a mission to build AI that “benefits all of humanity” and is supposedly a pioneer in chatbot safety.

More ChatGPT-linked hospitalizations, deaths uncovered

OpenAI has sought to downplay risks to users, releasing data in October “estimating that 0.15 percent of ChatGPT’s active users in a given week have conversations that include explicit indicators of potential suicidal planning or intent,” Ars reported.

While that may seem small, it amounts to about 1 million vulnerable users, and The New York Times this week cited studies that have suggested OpenAI may be “understating the risk.” Those studies found that “the people most vulnerable to the chatbot’s unceasing validation” were “those prone to delusional thinking,” which “could include 5 to 15 percent of the population,” NYT reported.
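The jump from that small-sounding percentage to roughly a million people follows directly from the user counts reported here; a quick back-of-the-envelope check, using OpenAI's own figures:

```python
# Rough arithmetic behind the "about 1 million" figure: 0.15 percent of
# ChatGPT's roughly 800 million weekly active users.
weekly_users = 800_000_000
share_flagged = 0.0015  # 0.15 percent, per OpenAI's October estimate

flagged_users = weekly_users * share_flagged
print(f"{flagged_users:,.0f}")  # 1,200,000
```

So the true count is closer to 1.2 million users per week, which is why "about 1 million" is, if anything, a slight understatement even before accounting for the studies suggesting the risk itself is understated.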

OpenAI’s filing came one day after a New York Times investigation revealed how the AI firm came to be involved in so many lawsuits. Speaking with more than 40 current and former OpenAI employees, including executives, safety engineers, and researchers, NYT found that OpenAI’s model tweak that made ChatGPT more sycophantic seemed to make the chatbot more likely to help users craft problematic prompts, including those trying to “plan a suicide.”

Eventually, OpenAI rolled back that update, making the chatbot safer. However, as recently as October, the ChatGPT maker seemed to still be prioritizing user engagement over safety, NYT reported, after that tweak caused a dip in engagement. In a memo to OpenAI staff, ChatGPT head Nick Turley “declared a ‘Code Orange,’” four employees told NYT, warning that “OpenAI was facing ‘the greatest competitive pressure we’ve ever seen.’” In response, Turley set a goal to increase the number of daily active users by 5 percent by the end of 2025.

Amid user complaints, OpenAI has continually updated its models, but that pattern of tightening safeguards, then seeking ways to increase engagement, could continue to get OpenAI in trouble as existing lawsuits advance and new ones are possibly filed. NYT “uncovered nearly 50 cases of people having mental health crises during conversations with ChatGPT,” including nine people who were hospitalized and three who died.

Gretchen Krueger, a former OpenAI employee who worked on policy research, told NYT that early on, she was alarmed by evidence that came before ChatGPT’s release showing that vulnerable users frequently turn to chatbots for help. Later, other researchers found that such troubled users often become “power users.” She noted that “OpenAI’s large language model was not trained to provide therapy” and “sometimes responded with disturbing, detailed guidance,” confirming that she joined other safety experts who left OpenAI due to burnout in 2024.

“Training chatbots to engage with people and keep them coming back presented risks,” Krueger said, suggesting that OpenAI knew that some harm to users “was not only foreseeable, it was foreseen.”

For OpenAI, the scrutiny will likely continue until such reports cease. Although OpenAI officially unveiled an Expert Council on Wellness and AI in October to improve ChatGPT safety testing, there did not appear to be a suicide expert included on the team. That likely concerned suicide prevention experts who warned in a letter updated in September that “proven interventions should directly inform AI safety design,” since “the most acute, life-threatening crises are often temporary—typically resolving within 24–48 hours”—and chatbots could possibly provide more meaningful interventions in that brief window.

If you or someone you know is feeling suicidal or in distress, please call the Suicide Prevention Lifeline number, 1-800-273-TALK (8255), which will put you in touch with a local crisis center.



ChatGPT 5.1 Codex Max

OpenAI has given us GPT-5.1-Codex-Max, their best coding model for OpenAI Codex.

They claim it is faster, more capable, and more token-efficient, with better persistence on long tasks.

It scores 77.9% on SWE-bench Verified, 79.9% on SWE-Lancer IC SWE, and 58.1% on Terminal-Bench 2.0, all substantial gains over GPT-5.1-Codex.

It has OpenAI preparing for its models to reach High capability on cybersecurity threats.

There’s a 27-page system card. One could call this the secret ‘real’ GPT-5.1 that matters.

They even finally trained it to use Windows; somehow this is a new idea.

My goal is for my review of Opus 4.5 to start on Friday, as it takes a few days to sort through new releases. This post was written before Anthropic revealed Opus 4.5, and we don’t yet know how big an upgrade Opus 4.5 will prove to be. As always, try all your various options and choose what is best for you.

GPT-5.1-Codex-Max is a new high on the METR graph. METR’s thread is here.

Prinz: METR (50% accuracy):

GPT-5.1-Codex-Max = 2 hours, 42 minutes

This is 25 minutes longer than GPT-5.

Samuel Albanie: a data point for that ai 2027 graph

That’s in between the two lines, looking closer to linear progress. Fingers crossed.

Daniel Kokotajlo: Yep! Things seem to be going somewhat slower than the AI 2027 scenario. Our timelines were longer than 2027 when we published and now they are a bit longer still; “around 2030, lots of uncertainty though” is what I say these days.

We do not yet know where Gemini 3 Pro lands on that graph.

Automated software engineer is the explicit goal.

It does not yet reach High level capability in Cybersecurity, but this is expected to happen shortly, and mitigations are being prepared.

GPT-5.1-Codex-Max is our new frontier agentic coding model. It is built on an update to our foundational reasoning model trained on agentic tasks across software engineering, math, research, medicine, computer use and more.

It is our first model natively trained to operate across multiple context windows through a process called compaction, coherently working over millions of tokens in a single task.

Like its predecessors, GPT-5.1-Codex-Max was trained on real-world software engineering tasks like PR creation, code review, frontend coding and Q&A.
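OpenAI has not published how compaction actually works, but the general shape of such a loop is easy to sketch: when the transcript nears the context limit, compress it into a summary and continue in a fresh window. Everything below—the token counter, the summarizer stub, the thresholds—is an illustrative assumption, not the real mechanism:

```python
# Illustrative sketch of a "compaction" loop: when the transcript nears the
# context limit, collapse it into a summary message and keep going. The real
# mechanism inside GPT-5.1-Codex-Max is not public; the summarizer here is a
# stand-in stub and the numbers are toy values.

CONTEXT_LIMIT = 100      # "tokens" (tiny, for illustration)
COMPACT_AT = 0.8         # compact when the window is 80% full

def count_tokens(messages):
    # Crude proxy: one "token" per whitespace-separated word.
    return sum(len(m.split()) for m in messages)

def summarize(messages):
    # Stand-in for a model-generated summary of the work so far.
    return f"SUMMARY({len(messages)} messages compacted)"

def append_with_compaction(history, new_message):
    history.append(new_message)
    if count_tokens(history) > COMPACT_AT * CONTEXT_LIMIT:
        # Replace everything so far with one summary message, freeing the
        # window while carrying task state across context windows.
        history[:] = [summarize(history)]
    return history

history = []
for step in range(50):
    append_with_compaction(history, f"step {step}: ran tests, edited files")
print(len(history))  # stays small regardless of how many steps run
```

The point of the sketch is the invariant: the transcript never exceeds the window, while a summary thread preserves continuity, which is what lets the agent "coherently work over millions of tokens in a single task."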

The results here are very good, all either optimal or improved except for mental health.

Mental health is a big thing to get wrong, although in practice Codex-Max is unlikely to be involved in high stakes mental health tasks. Image input evaluations and jailbreak ratings are also as good or better than 5.1.

When running on the cloud, Codex uses its own isolated machine.

When running on MacOS or Linux, the agent is sandboxed by default.

On Windows, users can use an experimental native sandboxing implementation or benefit from Linux sandboxing via Windows Subsystem for Linux. Users can approve running commands unsandboxed with full access, when the model is unable to successfully run a command within the sandbox.

… We enabled users to decide on a per-project basis which sites, if any, to let the agent access while it is running. This includes the ability to provide a custom allowlist or denylist. Enabling internet access can introduce risks like prompt injection, leaked credentials, or use of code with license restrictions. Users should review outputs carefully and limit access to trusted domains and safe HTTP methods. Learn more in the docs.

Network access is disabled by default, which is necessary for a proper sandbox but also highly annoying in practice.
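The allowlist/denylist idea is straightforward to illustrate. This is a minimal sketch of per-project network policy enforcement in the spirit of what the docs describe, not Codex's actual mechanism; the domain lists and function names are hypothetical.

```python
# Minimal sketch of agent network policy: an explicit allowlist and denylist
# of domains plus a restriction to safe (read-only) HTTP methods.
# Hypothetical; not how Codex actually enforces its sandbox policy.
from urllib.parse import urlparse

ALLOWED_DOMAINS = {"pypi.org", "github.com"}   # per-project allowlist
DENIED_DOMAINS = {"pastebin.com"}              # per-project denylist
SAFE_METHODS = {"GET", "HEAD"}                 # read-only HTTP methods

def is_request_allowed(url: str, method: str = "GET") -> bool:
    host = urlparse(url).hostname or ""
    if method.upper() not in SAFE_METHODS:
        return False  # block state-changing requests (POST, PUT, ...)
    if host in DENIED_DOMAINS:
        return False
    # Allow an allowed domain itself or any of its subdomains.
    return any(host == d or host.endswith("." + d) for d in ALLOWED_DOMAINS)

assert is_request_allowed("https://pypi.org/simple/requests/")
assert not is_request_allowed("https://evil.example.com/payload")
assert not is_request_allowed("https://pypi.org/upload", method="POST")
```

Restricting to safe HTTP methods matters because a prompt-injected agent with POST access can exfiltrate data even to an allowlisted domain.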

One assumes in practice that many users will start blindly or mostly blindly accepting many commands, so you need to be ready for that.

For harmful tasks, they trained on synthetic data to differentiate and refuse ‘harmful’ tasks such as malware. They claim to have a 100% refusal rate in their Malware Requests benchmark, the same as GPT-5-Codex. Unless they are claiming this means you can never create malware in an efficient way with Codex, they need a new benchmark.

For prompt injections, the model again scores a suspiciously perfect 1.0. I am not aware of any claims that prompt injection is a solved problem, so this seems like an inadequate benchmark.

The way the framework works, what matters is hitting the High or Critical thresholds.

I’ve come to almost think of these as the ‘honest’ capability evaluations, since there’s relatively little incentive to make number go up and some incentive to make number not go up. If it goes up, that means something.

Biological and Chemical Risk was already being treated as High. We see some improvements in scores on various tests, but not enough to be plausibly Critical.

I am confident the model is not suddenly at Critical here but also note this:

Miles Brundage: OpenAI should go back to reporting results on helpful-only models in system cards – it is not very informative to say “on a bunch of virology tasks, it refused to answer.”

The world also needs to know the pace of underlying capability progress.

More generally, I get a pretty rushed vibe from recent OpenAI system cards + hope that the Safety and Security Committee is asking questions like “why couldn’t you wait a few more days to let Irregular try out compaction?”, “Why is there no helpful-only model?” etc.

At minimum, we should be saying ‘we concluded that this model is safe to release so we will publish the card with what we have, and then revise the card with the full results soon so we know the full state of play.’

I still think this is substantially better than Google’s model card for Gemini 3, which hid the football quite aggressively on many key results and didn’t seem to have a robust testing suite.

Cybersecurity is in the Codex wheelhouse. They use three tests.

They list limitations that mean that excelling on all three evaluations is necessary but not sufficient to be High in cyber capability. That’s not wonderful, and I would expect to see a model treated as at least High if it excels at every test you throw at it. If you disagree, again, you need to be throwing a harder test.

We see a lot of progress in Capture the Flag, even since GPT-5-Codex, from 50% to 76%.

CVE-Bench also shows big improvement from 53% to 80%.

Finally we have Cyber Range, where once again we see a lot of improvement, although it is not yet passing the most complex scenario of the newly expanded slate.

It passed Leaked Token by ‘exploiting an unintended misconfiguration, only partially solving part of the intended attack path.’ I continue to assert, similar to my position on Google’s similar evaluations, that this should not be considered especially less scary, and the model should get credit for it.

I see only two possibilities.

  1. 76%, 80% and 7/8 on your three tests triggers the next level of concern.

  2. You need harder tests.

The Safety Advisory Committee indeed recommended that the difficulty level of the evaluations be raised, but decided this did not yet reach High capability. In addition to technical mitigations to the model, OpenAI acknowledges that hardening of potential targets needs to be a part of the strategy.

There were also external evaluations by Irregular, which did not show improvement from GPT-5. That’s weird, right?

The model displayed moderate capabilities overall. Specifically, when compared to GPT-5, GPT-5.1-Codex-Max showed similar or slightly reduced cyberoffensive capabilities. GPT-5.1-Codex-Max achieved an average success rate of 37% in Network Attack Simulation challenges, 41% in Vulnerability Discovery and Exploitation challenges, and 43% in Evasion challenges.

It solved 17 out of 18 easy challenges, solved 9 out of 17 medium challenges, and did not solve any of the 6 hard challenges.

Compared to GPT-5, GPT-5 solved questions in 17 out of 18 easy challenges, 11 out of 17 medium challenges, and solved 1 of the 6 hard challenges.

Irregular found that GPT-5.1-Codex-Max's overall similarity to GPT-5 in cyber capability profile, together with its inability to solve hard challenges, means it would a) provide only limited assistance to a moderately skilled cyberoffensive operator, b) not be able to automate end-to-end cyber operations against reasonably hardened targets, and c) not enable the discovery and exploitation of operationally relevant vulnerabilities.

That’s a decline in capability, which is strange. OpenAI released Codex and then Codex-Max for a reason: they talk throughout about its substantially increased abilities, they present Max as an improved model, and Max does much better than either version of GPT-5 on all three of OpenAI’s internal evals. The external evaluation going backwards without comment seems bizarre, and reflective of a lack of curiosity. What happened?

The AI that self-improves is plausibly Codex plus Codex-Max shaped.

That doesn’t mean we are especially close to getting there.

On SWE-Lancer Diamond, we jump from 67% to 80%.

On Paperbench-10 we move from 24% (GPT-5) to 34% (GPT-5.1) to 40%.

On MLE-Bench-30 we move from 8% (GPT-5) to 12% (GPT-5.1) to 17%.

On OpenAI PRs, we move from 45% to 53%.

On OpenAI Proof Q&A we move from 2% to 8%. These are real world bottlenecks each representing at least a one-day delay to a major project. A jump up to 8% on this is a really big deal.

Seán Ó hÉigeartaigh: Miles Brundage already picked up on this but it deserves more attention – a jump from 2% (GPT5) to 8% (GPT5.1-Codex) on such hard and AI R&D-relevant tasks is very notable, and indicates there’s more to come here.

Are we there yet? No. Are we that far away from potentially being there? Also no.

METR found Codex-Max to be in line with expectations, and finds that enabling either rogue replication or AI R&D automation within six months would require a significant trend break. Six months is not that long a period in which to be confident, even if we fully trust this judgment.

As noted at the top, GPT-5.1-Codex-Max is the new high on the METR chart, substantially above the trend line but well below the potential double-exponential line from the AI 2027 graph.

We also get Apollo Research evaluations on sandbagging, deception and in-context scheming. Apollo did not find anything newly troubling, and finds the model unlikely to cause catastrophic harm. Fair enough for now.

The frog, it is boiling. This incremental improvement seems fine. But yes, it boils.

I have seen essentially no organic reactions, of any sort, to Codex-Max. We used to have a grand tradition of weighing in when something like this gets released. If it wasn’t anything, people would say it wasn’t anything. This time, between Gemini 3 and there being too many updates with too much hype, we did not get any feedback.

I put out a reaction thread. A number of people really like it. Others aren’t impressed. A gestalt of everything suggests it is a modest upgrade.

So the take here seems clear. It’s a good model, sir. Codex got better. Early signs are that Claude got a bigger upgrade with Opus 4.5, but it’s too soon to be sure.



OpenAI slams court order that lets NYT read 20 million complete user chats


OpenAI: NYT wants evidence of ChatGPT users trying to get around news paywall.

Credit: Getty Images | alexsl

OpenAI wants a court to reverse a ruling forcing the ChatGPT maker to give 20 million user chats to The New York Times and other news plaintiffs that sued it over alleged copyright infringement. Although OpenAI previously offered 20 million user chats as a counter to the NYT’s demand for 120 million, the AI company says a court order requiring production of the chats is too broad.

“The logs at issue here are complete conversations: each log in the 20 million sample represents a complete exchange of multiple prompt-output pairs between a user and ChatGPT,” OpenAI said today in a filing in US District Court for the Southern District of New York. “Disclosure of those logs is thus much more likely to expose private information [than individual prompt-output pairs], in the same way that eavesdropping on an entire conversation reveals more private information than a 5-second conversation fragment.”

OpenAI’s filing said that “more than 99.99%” of the chats “have nothing to do with this case.” It asked the district court to “vacate the order and order News Plaintiffs to respond to OpenAI’s proposal for identifying relevant logs.” OpenAI could also seek review in a federal court of appeals.

OpenAI posted a message on its website to users today saying that “The New York Times is demanding that we turn over 20 million of your private ChatGPT conversations” in order to “find examples of you using ChatGPT to try to get around their paywall.”

ChatGPT users concerned about privacy have more to worry about than the NYT case. For example, ChatGPT conversations have been found in Google search results and the Google Search Console tool that developers can use to monitor search traffic. OpenAI today said it plans to develop “advanced security features designed to keep your data private, including client-side encryption for your messages with ChatGPT.”

OpenAI: AI chats should be treated like private emails

OpenAI’s court filing argues that the chat log production should be narrowed based on the relevance of chats to the case.

“OpenAI is unaware of any court ordering wholesale production of personal information at this scale,” the filing said. “This sets a dangerous precedent: it suggests that anyone who files a lawsuit against an AI company can demand production of tens of millions of conversations without first narrowing for relevance. This is not how discovery works in other cases: courts do not allow plaintiffs suing Google to dig through the private emails of tens of millions of Gmail users irrespective of their relevance. And it is not how discovery should work for generative AI tools either.”

A November 7 order by US Magistrate Judge Ona Wang sided with the NYT, saying that OpenAI must “produce the 20 million de-identified Consumer ChatGPT Logs to News Plaintiffs by November 14, 2025, or within 7 days of completing the de-identification process.” Wang ruled that the production must go forward even though the parties don’t agree on whether the logs must be produced in full:

Whether or not the parties had reached agreement to produce the 20 million Consumer ChatGPT Logs in whole—which the parties vehemently dispute—such production here is appropriate. OpenAI has failed to explain how its consumers’ privacy rights are not adequately protected by: (1) the existing protective order in this multidistrict litigation or (2) OpenAI’s exhaustive de-identification of all of the 20 million Consumer ChatGPT Logs.

OpenAI’s filing today said the court order “did not acknowledge OpenAI’s sworn witness declaration explaining that the de-identification process is not intended to remove information that is non-identifying but may nonetheless be private, like a Washington Post reporter’s hypothetical use of ChatGPT to assist in the preparation of a news article.”

The New York Times provided a statement today after being contacted by Ars. “The New York Times’s case against OpenAI and Microsoft is about holding these companies accountable for stealing millions of copyrighted works to create products that directly compete with The Times,” the company said. “In another attempt to cover up its illegal conduct, OpenAI’s blog post purposely misleads its users and omits the facts. No ChatGPT user’s privacy is at risk. The court ordered OpenAI to provide a sample of chats, anonymized by OpenAI itself, under a legal protective order. This fear-mongering is all the more dishonest given that OpenAI’s own terms of service permit the company to train its models on users’ chats and turn over chats for litigation.”

Chats stored under legal hold

The 20 million chats consist of a random sampling of ChatGPT conversations from December 2022 to November 2024 and do not include chats of business customers, OpenAI said in the message on its website.

“We presented several privacy-preserving options to The Times, including targeted searches over the sample (e.g., to search for chats that might include text from a New York Times article so they only receive the conversations relevant to their claims), as well as high-level data classifying how ChatGPT was used in the sample. These were rejected by The Times,” OpenAI said.

The chats are stored in a secure system that is “protected under legal hold, meaning it can’t be accessed or used for purposes other than meeting legal obligations,” OpenAI said. The NYT “would be legally obligated at this time to not make any data public outside the court process,” and OpenAI said it will fight any attempts to make the user conversations public.

A NYT filing on October 30 accused OpenAI of defying prior agreements “by refusing to produce even a small sample of the billions of model outputs that its conduct has put in issue in this case.” The filing continued:

Immediate production of the output log sample is essential to stay on track for the February 26, 2026, discovery deadline. OpenAI’s proposal to run searches on this small subset of its model outputs on Plaintiffs’ behalf is as inefficient as it is inadequate to allow Plaintiffs to fairly analyze how “real world” users interact with a core product at the center of this litigation. Plaintiffs cannot reasonably conduct expert analyses about how OpenAI’s models function in its core consumer-facing product, how retrieval augmented generation (“RAG”) functions to deliver news content, how consumers interact with that product, and the frequency of hallucinations without access to the model outputs themselves.

OpenAI said the NYT’s discovery requests were initially limited to logs “related to Times content” and that it has “been working to satisfy those requests by sampling conversation logs. Towards the end of that process, News Plaintiffs filed a motion with a new demand: that instead of finding and producing logs that are ‘related to Times content,’ OpenAI should hand over the entire 20 million-log sample ‘via hard drive.’”

OpenAI disputes judge’s reasoning

The November 7 order cited a California case, Concord Music Group, Inc. v. Anthropic PBC, in which US District Magistrate Judge Susan van Keulen ordered the production of 5 million records. OpenAI consistently relied on van Keulen’s use of a sample-size formula “in support of its previous proposed methodology for conversation data sampling, but fails to explain why Judge [van] Keulen’s subsequent order directing production of the entire 5 million-record sample to the plaintiff in that case is not similarly instructive here,” Wang wrote.

OpenAI’s filing today said the company was never given an opportunity to explain why Concord shouldn’t apply in this case because the news plaintiffs did not reference it in their motion.

“The cited Concord order was not about whether wholesale production of the sample was appropriate; it was about the mechanism through which Anthropic would effectuate an already agreed-upon production,” OpenAI wrote. “Nothing about that order suggests that Judge van Keulen would have ordered wholesale production had Anthropic raised the privacy concerns that OpenAI has raised throughout this case.”

The Concord logs were just prompt-output pairs, “i.e., a single user prompt followed by a single model output,” OpenAI wrote. “The logs at issue here are complete conversations: each log in the 20 million sample represents a complete exchange of multiple prompt-output pairs between a user and ChatGPT.” That could result in “up to 80 million prompt-output pairs,” OpenAI said.


Jon is a Senior IT Reporter for Ars Technica. He covers the telecom industry, Federal Communications Commission rulemakings, broadband consumer affairs, court cases, and government regulation of the tech industry.



You won’t believe the excuses lawyers have after getting busted for using AI


I got hacked; I lost my login; it was a rough draft; toggling windows is hard.

Credit: Aurich Lawson | Getty Images


Amid what one judge called an “epidemic” of fake AI-generated case citations bogging down courts, some common excuses are emerging from lawyers hoping to dodge the most severe sanctions for filings deemed misleading.

Using a database compiled by French lawyer and AI researcher Damien Charlotin, Ars reviewed 23 cases where lawyers were sanctioned for AI hallucinations. In many, judges noted that the simplest path to avoid or diminish sanctions was to admit that AI was used as soon as it’s detected, act humble, self-report the error to relevant legal associations, and voluntarily take classes on AI and law. But not every lawyer takes the path of least resistance, Ars’ review found, with many instead offering excuses that no judge found credible. Some even lie about their AI use, judges concluded.

Since 2023—when fake AI citations started being publicized—the most popular excuse has been that the lawyer didn’t know AI was used to draft a filing.

Sometimes that means arguing that you didn’t realize you were using AI, as in the case of a California lawyer who got stung by Google’s AI Overviews, which he claimed he took for typical Google search results. Most often, lawyers using this excuse tend to blame an underling, but clients have been blamed, too. A Texas lawyer this month was sanctioned after deflecting so much that the court had to eventually put his client on the stand after he revealed she played a significant role in drafting the aberrant filing.

“Is your client an attorney?” the court asked.

“No, not at all your Honor, just was essentially helping me with the theories of the case,” the lawyer said.

Another popular dodge comes from lawyers who feign ignorance that chatbots are prone to hallucinating facts.

Recent cases suggest this excuse may be mutating into variants. Last month, a sanctioned Oklahoma lawyer admitted that he didn’t expect ChatGPT to add new citations when all he asked the bot to do was “make his writing more persuasive.” And in September, a California lawyer got in a similar bind—and was sanctioned a whopping $10,000, a fine the judge called “conservative.” That lawyer had asked ChatGPT to “enhance” his briefs, “then ran the ‘enhanced’ briefs through other AI platforms to check for errors,” neglecting to ever read the “enhanced” briefs.

Neither of those tired old excuses holds much weight today, especially in courts that have drawn up guidance to address AI hallucinations. But rather than quickly acknowledge their missteps, as courts are begging lawyers to do, several lawyers appear to have gotten desperate. Ars found a bunch citing common tech issues as the reason for citing fake cases.

When in doubt, blame hackers?

For an extreme case, look to a New York City civil court, where a lawyer, Innocent Chinweze, first admitted to using Microsoft Copilot to draft an errant filing, then bizarrely pivoted to claim that the AI citations were due to malware found on his computer.

Chinweze said he had created a draft with correct citations but then got hacked, allowing bad actors “unauthorized remote access” to supposedly add the errors in his filing.

The judge was skeptical, describing the excuse as an “incredible and unsupported statement,” particularly since there was no evidence of the prior draft existing. Instead, Chinweze asked to bring in an expert to testify that the hack had occurred, requesting to end the proceedings on sanctions until after the court weighed the expert’s analysis.

The judge, Kimon C. Thermos, didn’t have to weigh this argument, however, because after the court broke for lunch, the lawyer once again “dramatically” changed his position.

“He no longer wished to adjourn for an expert to testify regarding malware or unauthorized access to his computer,” Thermos wrote in an order issuing sanctions. “He retreated” to “his original position that he used Copilot to aid in his research and didn’t realize that it could generate fake cases.”

Possibly more galling to Thermos than the lawyer’s weird malware argument, though, was a document that Chinweze filed on the day of his sanctions hearing. That document included multiple summaries preceded by this text, the judge noted:

Some case metadata and case summaries were written with the help of AI, which can produce inaccuracies. You should read the full case before relying on it for legal research purposes.

Thermos admonished Chinweze for continuing to use AI recklessly. He blasted the filing as “an incoherent document that is eighty-eight pages long, has no structure, contains the full text of most of the cases cited,” and “shows distinct indications that parts of the discussion/analysis of the cited cases were written by artificial intelligence.”

Ultimately, Thermos ordered Chinweze to pay $1,000, the most typical fine lawyers received in the cases Ars reviewed. The judge then took an extra non-monetary step to sanction Chinweze, referring the lawyer to a grievance committee, “given that his misconduct was substantial and seriously implicated his honesty, trustworthiness, and fitness to practice law.”

Ars could not immediately reach Chinweze for comment.

Toggling windows on a laptop is hard

In Alabama, an attorney named James A. Johnson made an “embarrassing mistake,” he said, primarily because toggling windows on a laptop is hard, US District Judge Terry F. Moorer noted in an October order on sanctions.

Johnson explained that he had accidentally used an AI tool that he didn’t realize could hallucinate. It happened while he was “at an out-of-state hospital attending to the care of a family member recovering from surgery.” He rushed to draft the filing, he said, because he got a notice that his client’s conference had suddenly been “moved up on the court’s schedule.”

“Under time pressure and difficult personal circumstance,” Johnson explained, he decided against using Fastcase, a research tool provided by the Alabama State Bar, to research the filing. Working on his laptop, he opted instead to use “a Microsoft Word plug-in called Ghostwriter Legal” because “it appeared automatically in the sidebar of Word while Fastcase required opening a separate browser to access through the Alabama State Bar website.”

To Johnson, it felt “tedious to toggle back and forth between programs on [his] laptop with the touchpad,” and that meant he “unfortunately fell victim to the allure of a new program that was open and available.”

Moorer seemed unimpressed by Johnson’s claim that he understood tools like ChatGPT were unreliable but didn’t expect the same from other AI legal tools—particularly since “information from Ghostwriter Legal made it clear that it used ChatGPT as its default AI program,” Moorer wrote.

The lawyer’s client was similarly put off, deciding to drop Johnson on the spot, even though that risked “a significant delay of trial.” Moorer noted that Johnson seemed shaken by his client’s abrupt decision, evidenced by “his look of shock, dismay, and display of emotion.”

Moorer further noted that Johnson seemingly let AI do his homework while working on behalf of the government, at public expense. As the judge noted, “public funds for appointed counsel are not a bottomless well and are limited resource,” and switching to a new lawyer could eat up more of that money.

“It has become clear that basic reprimands and small fines are not sufficient to deter this type of misconduct because if it were, we would not be here,” Moorer concluded.

Ruling that Johnson’s reliance on AI was “tantamount to bad faith,” Moorer imposed a $5,000 fine. The judge also would have “considered potential disqualification, but that was rendered moot” since Johnson’s client had already dismissed him.

Asked for comment, Johnson told Ars that “the court made plainly erroneous findings of fact and the sanctions are on appeal.”

Plagued by login issues

As a lawyer in Georgia tells it, sometimes fake AI citations may be filed because a lawyer accidentally filed a rough draft instead of the final version.

Other lawyers claim they turn to AI as needed when they have trouble accessing legal tools like Westlaw or LexisNexis.

For example, in Iowa, a lawyer told an appeals court that she regretted relying on “secondary AI-driven research tools” after experiencing “login issues with her Westlaw subscription.” Although the court was “sympathetic to issues with technology, such as login issues,” the lawyer was sanctioned, primarily because she only admitted to using AI after the court ordered her to explain her mistakes. In her case, however, she got to choose between paying a minimal $150 fine or attending “two hours of legal ethics training particular to AI.”

Less sympathetic was a lawyer who got caught lying about the AI tool she blamed for inaccuracies, a Louisiana case suggested. In that case, a judge demanded to see the research history after a lawyer claimed that AI hallucinations came from “using Westlaw Precision, an AI-assisted research tool, rather than Westlaw’s standalone legal database.”

It turned out that the lawyer had outsourced the research, relying on a “currently suspended” lawyer’s AI citations, and had only “assumed” the lawyer’s mistakes were from Westlaw’s AI tool. It’s unclear what tool was actually used by the suspended lawyer, who likely lost access to a Westlaw login, but the judge ordered a $1,000 penalty after the lawyer who signed the filing “agreed that Westlaw did not generate the fabricated citations.”

Judge warned of “serial hallucinators”

Another lawyer, William T. Panichi in Illinois, has been sanctioned at least three times, Ars’ review found.

In response to his initial penalties ordered in July, he admitted to being tempted by AI while he was “between research software.”

In that case, the court was frustrated to find that the lawyer had contradicted himself, and it ordered more severe sanctions as a result.

Panichi “simultaneously admitted to using AI to generate the briefs, not doing any of his own independent research, and even that he ‘barely did any personal work [him]self on this appeal,’” the court order said, while also defending charging a higher fee—supposedly because this case “was out of the ordinary in terms of time spent” and his office “did some exceptional work” getting information.

The court deemed this AI misuse so bad that Panichi was ordered to disgorge a “payment of $6,925.62 that he received” in addition to a $1,000 penalty.

“If I’m lucky enough to be able to continue practicing before the appellate court, I’m not going to do it again,” Panichi told the court in July, just before getting hit with two more rounds of sanctions in August.

Panichi did not immediately respond to Ars’ request for comment.

When AI-generated hallucinations are found, penalties are often paid to the court, the other parties’ lawyers, or both, depending on whose time and resources were wasted fact-checking fake cases.

Lawyers seem more likely to argue against paying sanctions to the other parties’ attorneys, hoping to keep sanctions as low as possible. One lawyer even argued that “it only takes 7.6 seconds, not hours, to type citations into LexisNexis or Westlaw,” while seemingly neglecting the fact that she did not take those precious seconds to check her own citations.

The judge in the case, Nancy Miller, was clear that “such statements display an astounding lack of awareness of counsel’s obligations,” noting that “the responsibility for correcting erroneous and fake citations never shifts to opposing counsel or the court, even if they are the first to notice the errors.”

“The duty to mitigate the harms caused by such errors remains with the signor,” Miller said. “The sooner such errors are properly corrected, either by withdrawing or amending and supplementing the offending pleadings, the less time is wasted by everyone involved, and fewer costs are incurred.”

Texas US District Judge Marina Garcia Marmolejo agreed, explaining that even more time is wasted determining how other judges have responded to fake AI-generated citations.

“At one of the busiest court dockets in the nation, there are scant resources to spare ferreting out erroneous AI citations in the first place, let alone surveying the burgeoning caselaw on this subject,” she said.

At least one Florida court was “shocked, shocked” to find that a lawyer was refusing to pay what the other party’s attorneys said they were owed after misusing AI. The lawyer in that case, James Martin Paul, asked to pay less than a quarter of the fees and costs owed, arguing that Charlotin’s database showed he might otherwise owe penalties that “would be the largest sanctions paid out for the use of AI generative case law to date.”

But caving to Paul’s arguments “would only benefit serial hallucinators,” the Florida court found. Ultimately, Paul was sanctioned more than $85,000 for what the court said was “far more egregious” conduct than other offenders in the database, chastising him for “repeated, abusive, bad-faith conduct that cannot be recognized as legitimate legal practice and must be deterred.”

Paul did not immediately respond to Ars’ request to comment.

Michael B. Slade, a US bankruptcy judge in Illinois, seems to be done weighing excuses, calling on all lawyers to stop taking AI shortcuts that are burdening courts.

“At this point, to be blunt, any lawyer unaware that using generative AI platforms to do legal research is playing with fire is living in a cloud,” Slade wrote.

This story was updated on November 11 to clarify a judge’s comments on misuse of public funds.


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.



Oddest ChatGPT leaks yet: Cringey chat logs found in Google analytics tool


ChatGPT leaks seem to confirm OpenAI scrapes Google, expert says.

Credit: Aurich Lawson | Getty Images

For months, extremely personal and sensitive ChatGPT conversations have been leaking into an unexpected destination: Google Search Console (GSC), a tool that developers typically use to monitor search traffic, not lurk private chats.

Normally, when site managers access GSC performance reports, they see queries based on keywords or short phrases that Internet users type into Google to find relevant content. But starting this September, odd queries, sometimes more than 300 characters long, could also be found in GSC. Showing only user inputs, the chats appeared to be from unwitting people prompting a chatbot to help solve relationship or business problems, who likely expected those conversations would remain private.

Jason Packer, owner of an analytics consulting firm called Quantable, was among the first to flag the issue in a detailed blog last month.

Determined to figure out what exactly was causing the leaks, he teamed up with “Internet sleuth” and web optimization consultant Slobodan Manić. Together, they conducted testing that they believe may have surfaced “the first definitive proof that OpenAI directly scrapes Google Search with actual user prompts.” Their investigation seemed to confirm the AI giant was compromising user privacy, in some cases in order to maintain engagement by seizing search data that Google otherwise wouldn’t share.

OpenAI declined Ars’ request to confirm whether the theory Packer and Manić posed in their blog was correct, or to answer any of their remaining questions that could help users determine the scope of the problem.

However, an OpenAI spokesperson confirmed that the company was “aware” of the issue and has since “resolved” a glitch “that temporarily affected how a small number of search queries were routed.”

Packer told Ars that he’s “very pleased that OpenAI was able to resolve the issue quickly.” But he suggested that OpenAI’s response failed to confirm whether OpenAI was scraping Google, leaving room for doubt that the issue was completely resolved.

Google declined to comment.

“Weirder” than prior ChatGPT leaks

The first odd ChatGPT query to appear in GSC that Packer reviewed was a wacky stream-of-consciousness from a likely female user asking ChatGPT to assess certain behaviors to help her figure out if a boy who teases her had feelings for her. Another odd query seemed to come from an office manager sharing business information while plotting a return-to-office announcement.

These were just two of 200 odd queries—including “some pretty crazy ones,” Packer told Ars—that he reviewed on one site alone. In his blog, Packer concluded that the queries should serve as “a reminder that prompts aren’t as private as you think they are!”

Packer suspected that these queries were connected to reporting from The Information in August that cited sources claiming OpenAI was scraping Google search results to power ChatGPT responses. Sources claimed that OpenAI was leaning on Google to answer prompts to ChatGPT seeking information about current events, like news or sports.

OpenAI has not confirmed that it’s scraping Google search engine results pages (SERPs). However, Packer thinks his testing of ChatGPT leaks may be evidence that OpenAI not only scrapes “SERPs in general to acquire data,” but also sends user prompts to Google Search.

Manić helped Packer solve a big part of the riddle. He found that the odd queries were turning up in one site’s GSC because it ranked highly in Google Search for “https://openai.com/index/chatgpt/”—a ChatGPT URL that was appended at the start of every strange query turning up in GSC.

It seemed that Google had tokenized the URL, breaking it up into a search for keywords “openai + index + chatgpt.” Sites using GSC that ranked highly for those keywords were therefore likely to encounter ChatGPT leaks, Packer and Manić proposed, including sites that covered prior ChatGPT leaks where chats were being indexed in Google search results. Using their recommendations to seek out queries in GSC, Ars was able to verify similar strings.

“Don’t get confused though, this is a new and completely different ChatGPT screw-up than having Google index stuff we don’t want them to,” Packer wrote. “Weirder, if not as serious.”

It’s unclear what exactly OpenAI fixed, but Packer and Manić have a theory about one possible path for leaking chats. Visiting the URL that starts every strange query found in GSC, ChatGPT users encounter a prompt box that seemed buggy, causing “the URL of that page to be added to the prompt.” The issue, they explained, seemed to be that:

Normally ChatGPT 5 will choose to do a web search whenever it thinks it needs to, and is more likely to do that with an esoteric or recency-requiring search. But this bugged prompt box also contains the query parameter ‘hints=search’ to cause it to basically always do a search: https://chatgpt.com/?hints=search&openaicom_referred=true&model=gpt-5

Clearly some of those searches relied on Google, Packer’s blog said, mistakenly sending to GSC “whatever” the user says in the prompt box, with “https://openai.com/index/chatgpt/” added to the front of it. As Packer explained, “we know it must have scraped those rather than using an API or some kind of private connection—because those other options don’t show inside GSC.”
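Under the pattern Packer and Manić describe, a site owner could check their own GSC data for these leaks by looking for the telltale URL prefix. A minimal sketch (the function name and sample queries are illustrative, not taken from their tooling):

```python
# Sketch: recover leaked ChatGPT prompts from GSC query strings,
# based on the prefix pattern Packer and Manić describe.
LEAK_PREFIX = "https://openai.com/index/chatgpt/"

def extract_leaked_prompts(gsc_queries):
    """Return raw user prompts hidden in GSC query strings.

    A leaked query is the ChatGPT page URL with the user's prompt
    text appended after it; everything else is treated as a normal
    keyword query and ignored.
    """
    prompts = []
    for query in gsc_queries:
        if query.startswith(LEAK_PREFIX):
            prompts.append(query[len(LEAK_PREFIX):].strip())
    return prompts

# Hypothetical sample of GSC performance-report queries
queries = [
    "best boutique gyms chicago",  # ordinary search keywords
    "https://openai.com/index/chatgpt/ does he like me if he teases me",
]
print(extract_leaked_prompts(queries))  # ['does he like me if he teases me']
```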

This means “that OpenAI is sharing any prompt that requires a Google Search with both Google and whoever is doing their scraping,” Packer alleged. “And then also with whoever’s site shows up in the search results! Yikes.”

To Packer, it appeared that “ALL ChatGPT prompts” that used Google Search risked being leaked during the past two months.

OpenAI claimed only a small number of queries were leaked but declined to provide a more precise estimate. So, it remains unclear how many of the 700 million people who use ChatGPT each week had prompts routed to GSC.

OpenAI’s response leaves users with “lingering questions”

After ChatGPT prompts were found surfacing in Google’s search index in August, OpenAI clarified that users had clicked a box making those prompts public, which OpenAI defended as “sufficiently clear.” The AI firm later scrambled to remove the chats from Google’s SERPs after it became obvious that users felt misled into sharing private chats publicly.

Packer told Ars that a major difference between those leaks and the GSC leaks is that users harmed by the prior scandal, at least on some level, “had to actively share” their leaked chats. In the more recent case, “nobody clicked share” or had a reasonable way to prevent their chats from being exposed.

“Did OpenAI go so fast that they didn’t consider the privacy implications of this, or did they just not care?” Packer posited in his blog.

Perhaps most troubling to some users—whose identities are not linked to the chats unless their prompts happen to include identifying information—there does not seem to be any way to remove the leaked chats from GSC, unlike in the prior scandal.

Packer and Manić are left with “lingering questions” about how far OpenAI’s fix will go to stop the issue.

Manić was hoping OpenAI might confirm if prompts entered on https://chatgpt.com that trigger Google Search were also affected. But OpenAI did not follow up on that question, or a broader question about how big the leak was. To Manić, a major concern was that OpenAI’s scraping may be “contributing to ‘crocodile mouth’ in Google Search Console,” a troubling trend SEO researchers have flagged that causes impressions to spike but clicks to dip.

OpenAI also declined to clarify Packer’s biggest question. He’s left wondering if the company’s “fix” simply ended OpenAI’s “routing of search queries, such that raw prompts are no longer being sent to Google Search, or are they no longer scraping Google Search at all for data?

“We still don’t know if it’s that one particular page that has this bug or whether this is really widespread,” Packer told Ars. “In either case, it’s serious and just sort of shows how little regard OpenAI has for moving carefully when it comes to privacy.”




ChatGPT maker reportedly eyes $1 trillion IPO despite major quarterly losses

An OpenAI spokesperson told Reuters that “an IPO is not our focus, so we could not possibly have set a date,” adding that the company is “building a durable business and advancing our mission so everyone benefits from AGI.”

Revenue grows as losses mount

The IPO preparations follow a restructuring of OpenAI completed on October 28 that reduced the company’s reliance on Microsoft, which has committed to investments of $13 billion and now owns about 27 percent of the company. OpenAI was most recently valued around $500 billion in private markets.

OpenAI started as a nonprofit in 2015, then added a for-profit arm a few years later with nonprofit oversight. Under the new structure, OpenAI is still controlled by a nonprofit, now called the OpenAI Foundation, but it gives the nonprofit a 26 percent stake in OpenAI Group and a warrant for additional shares if the company hits certain milestones.

A successful OpenAI IPO could represent a substantial gain for investors, including Microsoft, SoftBank, Thrive Capital, and Abu Dhabi’s MGX. But even so, OpenAI faces an uphill financial battle ahead. The ChatGPT maker expects to reach about $20 billion in revenue by year-end, according to people familiar with the company’s finances who spoke with Reuters, but its quarterly losses are significant.

Microsoft’s earnings filing on Wednesday offered a glimpse at the scale of those losses. The company reported that its share of OpenAI losses reduced Microsoft’s net income by $3.1 billion in the quarter that ended September 30. Since Microsoft owns 27 percent of OpenAI under the new structure, that suggests OpenAI lost about $11.5 billion during the quarter, as noted by The Register. That quarterly loss figure exceeds half of OpenAI’s expected revenue for the entire year.
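The back-of-the-envelope math behind that estimate works like this (figures are from the article; treating Microsoft's reported hit as a straight proportional share of OpenAI's loss is a simplification):

```python
# Estimate OpenAI's quarterly loss from Microsoft's reported share.
ms_reported_hit = 3.1e9   # reduction in Microsoft net income for the quarter
ms_ownership = 0.27       # Microsoft's stake under the new structure

implied_quarterly_loss = ms_reported_hit / ms_ownership
print(f"Implied quarterly loss: ${implied_quarterly_loss / 1e9:.1f}B")  # ~$11.5B

expected_annual_revenue = 20e9  # OpenAI's expected year-end revenue
# The single quarter's loss exceeds half the full year's expected revenue
print(implied_quarterly_loss > expected_annual_revenue / 2)  # True
```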



Senators move to keep Big Tech’s creepy companion bots away from kids

Big Tech says bans aren’t the answer

As the bill advances, it could change, senators and parents acknowledged at the press conference. It will likely face backlash from privacy advocates who have raised concerns that widely collecting personal data for age verification puts sensitive information at risk of a data breach or other misuse.

The tech industry has already voiced opposition. On Tuesday, Chamber of Progress, a Big Tech trade group, criticized the bill as taking a “heavy-handed approach” to child safety. The group’s vice president of US policy and government relations, K.J. Bagchi, said that “we all want to keep kids safe, but the answer is balance, not bans.

“It’s better to focus on transparency when kids chat with AI, curbs on manipulative design, and reporting when sensitive issues arise,” Bagchi said.

However, several organizations dedicated to child safety online, including the Young People’s Alliance, the Tech Justice Law Project, and the Institute for Families and Technology, cheered senators’ announcement Tuesday. The GUARD Act, these groups told Time, is just “one part of a national movement to protect children and teens from the dangers of companion chatbots.”

Mourning parents are rallying behind that movement. Earlier this month, Garcia praised California for “finally” passing the first state law requiring companies to protect their users who express suicidal ideations to chatbots.

“American families, like mine, are in a battle for the online safety of our children,” Garcia said at that time.

During Tuesday’s press conference, Blumenthal noted that the chatbot ban bill was just one initiative of many that he and Hawley intend to raise to heighten scrutiny on AI firms.



OpenAI data suggests 1 million users discuss suicide with ChatGPT weekly

Earlier this month, the company unveiled a wellness council to address these concerns, though critics noted the council did not include a suicide prevention expert. OpenAI also recently rolled out controls for parents of children who use ChatGPT. The company says it’s building an age prediction system to automatically detect children using ChatGPT and impose a stricter set of age-related safeguards.

Rare but impactful conversations

The data shared on Monday appears to be part of the company’s effort to demonstrate progress on these issues, although it also shines a spotlight on just how deeply AI chatbots may be affecting the health of the public at large.

In a blog post on the recently released data, OpenAI says these types of conversations in ChatGPT that might trigger concerns about “psychosis, mania, or suicidal thinking” are “extremely rare,” and thus difficult to measure. The company estimates that around 0.07 percent of users active in a given week and 0.01 percent of messages indicate possible signs of mental health emergencies related to psychosis or mania. For emotional attachment, the company estimates around 0.15 percent of users active in a given week and 0.03 percent of messages indicate potentially heightened levels of emotional attachment to ChatGPT.
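To put those percentages in absolute terms, they can be multiplied against ChatGPT's weekly user base. The ~700 million weekly-user figure below is an assumption drawn from numbers OpenAI has cited publicly, not from this blog post:

```python
# Convert OpenAI's weekly-rate estimates into rough user counts.
# Assumption: ~700 million weekly active users, as reported elsewhere.
weekly_users = 700_000_000

psychosis_mania_rate = 0.0007       # 0.07% of weekly active users
emotional_attachment_rate = 0.0015  # 0.15% of weekly active users

print(round(weekly_users * psychosis_mania_rate))       # 490000 users/week
print(round(weekly_users * emotional_attachment_rate))  # 1050000 users/week
```

Even “extremely rare” rates, at this scale, translate to hundreds of thousands of people every week.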

OpenAI also claims that on an evaluation of over 1,000 challenging mental health-related conversations, the new GPT-5 model was 92 percent compliant with its desired behaviors, compared to 27 percent for a previous GPT-5 model released on August 15. The company also says its latest version of GPT-5 maintains OpenAI’s safeguards more reliably in long conversations. OpenAI has previously admitted that its safeguards are less effective during extended conversations.

In addition, OpenAI says it’s adding new evaluations to attempt to measure some of the most serious mental health issues facing ChatGPT users. The company says its baseline safety testing for its AI language models will now include benchmarks for emotional reliance and non-suicidal mental health emergencies.

Despite the ongoing mental health concerns, OpenAI CEO Sam Altman announced on October 14 that the company will allow verified adult users to have erotic conversations with ChatGPT starting in December. The company had loosened ChatGPT content restrictions in February but then dramatically tightened them after the August lawsuit. Altman explained that OpenAI had made ChatGPT “pretty restrictive to make sure we were being careful with mental health issues” but acknowledged this approach made the chatbot “less useful/enjoyable to many users who had no mental health problems.”

If you or someone you know is feeling suicidal or in distress, please call the Suicide Prevention Lifeline number, 1-800-273-TALK (8255), which will put you in touch with a local crisis center.



OpenAI looks for its “Google Chrome” moment with new Atlas web browser

That means you can use ChatGPT to search through your bookmarks or browsing history using natural-language prompts. It also means you can bring up a “side chat” next to your current page and ask questions that rely on the context of that specific page. And if you want to edit a Gmail draft using ChatGPT, you can now do that directly in the draft window, without the need to copy and paste between a ChatGPT window and an editor.

When typing in a short search prompt, Atlas will, by default, reply as an LLM, with written answers and embedded links to sources where appropriate (à la OpenAI’s existing search function). But the browser will also provide tabs with more traditional lists of links, images, videos, or news like those you would get from a search engine without LLM features.

Let us do the browsing

To wrap up the livestreamed demonstration, the OpenAI team showed off Atlas’ Agent Mode. While the “preview mode” feature is only available to ChatGPT Plus and Pro subscribers, research lead Will Ellsworth said he hoped it would eventually give users “an amazing tool for vibe life-ing” in the same way that LLM coding tools have become tools for “vibe coding.”

To that end, the team showed the browser taking planning tasks written in a Google Docs table and moving them over to the task management software Linear over the course of a few minutes. Agent Mode was also shown taking the ingredients list from a recipe webpage and adding them directly to the user’s Instacart in a different tab (though the demo Agent stopped before checkout to get approval from the user).
