chatgpt

the-resume-is-dying,-and-ai-is-holding-the-smoking-gun

The résumé is dying, and AI is holding the smoking gun

Beyond volume, fraud poses an increasing threat. In January, the Justice Department announced indictments in a scheme to place North Korean nationals in remote IT roles at US companies. Research firm Gartner says that fake identity cases are growing rapidly, with the company estimating that by 2028, about 1 in 4 job applicants could be fraudulent. And as we have previously reported, security researchers have also discovered that AI systems can hide invisible text in applications, potentially allowing candidates to game screening systems using prompt injections in ways human reviewers can’t detect.

Illustration of a robot generating endless text, controlled by a scientist.

And that’s not all. Even when AI screening tools work as intended, they exhibit similar biases to human recruiters, preferring white male names on résumés—raising legal concerns about discrimination. The European Union’s AI Act already classifies hiring under its high-risk category with stringent restrictions. Although no US federal law specifically addresses AI use in hiring, general anti-discrimination laws still apply.

So perhaps résumés as a meaningful signal of candidate interest and qualification are becoming obsolete. And maybe that’s OK. When anyone can generate hundreds of tailored applications with a few prompts, the document that once demonstrated effort and genuine interest in a position has devolved into noise.

Instead, the future of hiring may require abandoning the résumé altogether in favor of methods that AI can’t easily replicate—live problem-solving sessions, portfolio reviews, or trial work periods, just to name a few ideas people sometimes consider (whether they are good ideas or not is beyond the scope of this piece). For now, employers and job seekers remain locked in an escalating technological arms race where machines screen the output of other machines, while the humans they’re meant to serve struggle to make authentic connections in an increasingly inauthentic world.

Perhaps the endgame is robots interviewing other robots for jobs performed by robots, while humans sit on the beach drinking daiquiris and playing vintage video games. Well, one can dream.

The résumé is dying, and AI is holding the smoking gun Read More »

how-to-draft-a-will-to-avoid-becoming-an-ai-ghost—it’s-not-easy

How to draft a will to avoid becoming an AI ghost—it’s not easy


Why requests for “no AI resurrections” will probably go ignored.

Proton beams capturing the ghost of OpenAI to suck it into a trap where it belongs

All right! This AI is TOAST! Credit: Aurich Lawson

All right! This AI is TOAST! Credit: Aurich Lawson

As artificial intelligence has advanced, AI tools have emerged to make it possible to easily create digital replicas of lost loved ones, which can be generated without the knowledge or consent of the person who died.

Trained on the data of the dead, these tools, sometimes called grief bots or AI ghosts, may be text-, audio-, or even video-based. Chatting provides what some mourners feel is a close approximation to ongoing interactions with the people they love most. But the tech remains controversial, perhaps complicating the grieving process while threatening to infringe upon the privacy of the deceased, whose data could still be vulnerable to manipulation or identity theft.

Because of suspected harms and perhaps a general repulsion to the idea of it, not everybody wants to become an AI ghost.

After a realistic video simulation was recently used to provide a murder victim’s impact statement in court, Futurism summed up social media backlash, noting that the use of AI was “just as unsettling as you think.” And it’s not the first time people have expressed discomfort with the growing trend. Last May, The Wall Street Journal conducted a reader survey seeking opinions on the ethics of so-called AI resurrections. Responding, a California woman, Dorothy McGarrah, suggested there should be a way to prevent AI resurrections in your will.

“Having photos or videos of lost loved ones is a comfort. But the idea of an algorithm, which is as prone to generate nonsense as anything lucid, representing a deceased person’s thoughts or behaviors seems terrifying. It would be like generating digital dementia after your loved ones’ passing,” McGarrah said. “I would very much hope people have the right to preclude their images being used in this fashion after death. Perhaps something else we need to consider in estate planning?”

For experts in estate planning, the question may start to arise as more AI ghosts pop up. But for now, writing “no AI resurrections” into a will remains a complicated process, experts suggest, and such requests may not be honored by all unless laws are changed to reinforce a culture of respecting the wishes of people who feel uncomfortable with the idea of haunting their favorite people through AI simulations.

Can you draft a will to prevent AI resurrection?

Ars contacted several law associations to find out if estate planners are seriously talking about AI ghosts. Only the National Association of Estate Planners and Councils responded; it connected Ars to Katie Sheehan, an expert in the estate planning field who serves as a managing director and wealth strategist for Crestwood Advisors.

Sheehan told Ars that very few estate planners are prepared to answer questions about AI ghosts. She said not only does the question never come up in her daily work, but it’s also “essentially uncharted territory for estate planners since AI is relatively new to the scene.”

“I have not seen any documents drafted to date taking this into consideration, and I review estate plans for clients every day, so that should be telling,” Sheehan told Ars.

Although Sheehan has yet to see a will attempting to prevent AI resurrection, she told Ars that there could be a path to make it harder for someone to create a digital replica without consent.

“You certainly could draft into a power of attorney (for use during lifetime) and a will (for use post death) preventing the fiduciary (attorney in fact or executor) from lending any of your texts, voice, image, writings, etc. to any AI tools and prevent their use for any purpose during life or after you pass away, and/or lay the ground rules for when they can and cannot be used after you pass away,” Sheehan told Ars.

“This could also invoke issues with contract, property and intellectual property rights, and right of publicity as well if AI replicas (image, voice, text, etc.) are being used without authorization,” Sheehan said.

And there are likely more protections for celebrities than for everyday people, Sheehan suggested.

“As far as I know, there is no law” preventing unauthorized non-commercial digital replicas, Sheehan said.

Widely adopted by states, the Revised Uniform Fiduciary Access to Digital Assets Act—which governs who gets access to online accounts of the deceased, like social media or email accounts—could be helpful but isn’t a perfect remedy.

That law doesn’t directly “cover someone’s AI ghost bot, though it may cover some of the digital material some may seek to use to create a ghost bot,” Sheehan said.

“Absent any law” blocking non-commercial digital replicas, Sheehan expects that people’s requests for “no AI resurrections” will likely “be dealt with in the courts and governed by the terms of one’s estate plan, if it is addressed within the estate plan.”

Those potential fights seemingly could get hairy, as “it may be some time before we get any kind of clarity or uniform law surrounding this,” Sheehan suggested.

In the future, Sheehan said, requests prohibiting digital replicas may eventually become “boilerplate language in almost every will, trust, and power of attorney,” just as instructions on digital assets are now.

As “all things AI become more and more a part of our lives,” Sheehan said, “some aspects of AI and its components may also be woven throughout the estate plan regularly.”

“But we definitely aren’t there yet,” she said. “I have had zero clients ask about this.”

Requests for “no AI resurrections” will likely be ignored

Whether loved ones would—or even should—respect requests blocking digital replicas appears to be debatable. But at least one person who built a grief bot wished he’d done more to get his dad’s permission before moving forward with his own creation.

A computer science professor at the University of Washington Bothell, Muhammad Aurangzeb Ahmad, was one of the earliest AI researchers to create a grief bot more than a decade ago after his father died. He built the bot to ensure that his future kids would be able to interact with his father after seeing how incredible his dad was as a grandfather.

When Ahmad started his project, there was no ChatGPT or other advanced AI model to serve as the foundation, so he had to train his own model based on his dad’s data. Putting immense thought into the effort, Ahmad decided to close off the system from the rest of the Internet so that only his dad’s memories would inform the model. To prevent unauthorized chats, he kept the bot on a laptop that only his family could access.

Ahmad was so intent on building a digital replica that felt just like his dad that it didn’t occur to him until after his family started using the bot that he never asked his dad if this was what he wanted. Over time, he realized that the bot was biased to his view of his dad, perhaps even feeling off to his siblings who had a slightly different relationship with their father. It’s unclear if his dad would similarly view the bot as preserving just one side of him.

Ultimately, Ahmad didn’t regret building the bot, and he told Ars he thinks his father “would have been fine with it.”

But he did regret not getting his father’s consent.

For people creating bots today, seeking consent may be appropriate if there’s any chance the bot may be publicly accessed, Ahmad suggested. He told Ars that he would never have been comfortable with the idea of his dad’s digital replica being publicly available because the question of an “accurate representation” would come even more into play, as malicious actors could potentially access it and sully his dad’s memory.

Today, anybody can use ChatGPT’s model to freely create a similar bot with their own loved one’s data. And a wide range of grief tech services have popped up online, including HereAfter AI, SeanceAI, and StoryFile, Axios noted in an October report detailing the latest ways “AI could be used to ‘resurrect’ loved ones.” As this trend continues “evolving very fast,” Ahmad told Ars that estate planning is probably the best way to communicate one’s AI ghost preferences.

But in a recently published article on “The Law of Digital Resurrection,” law professor Victoria Haneman warned that “there is no legal or regulatory landscape against which to estate plan to protect those who would avoid digital resurrection, and few privacy rights for the deceased. This is an intersection of death, technology, and privacy law that has remained relatively ignored until recently.”

Haneman agreed with Sheehan that “existing protections are likely sufficient to protect against unauthorized commercial resurrections”—like when actors or musicians are resurrected for posthumous performances. However, she thinks that for personal uses, digital resurrections may best be blocked not through estate planning but by passing a “right to deletion” that would focus on granting the living or next of kin the rights to delete the data that could be used to create the AI ghost rather than regulating the output.

A “right to deletion” could help people fight inappropriate uses of their loved ones’ data, whether AI is involved or not. After her article was published, a lawyer reached out to Haneman about a client’s deceased grandmother whose likeness was used to create a meme of her dancing in a church. The grandmother wasn’t a public figure, and the client had no idea “why or how somebody decided to resurrect her deceased grandmother,” Haneman told Ars.

Although Haneman sympathized with the client, “if it’s not being used for a commercial purpose, she really has no control over this use,” Haneman said. “And she’s deeply troubled by this.”

Haneman’s article offers a rare deep dive into the legal topic. It sensitively maps out the vague territory of digital rights of the dead and explains how those laws—or the lack thereof—interact with various laws dealing with death, from human remains to property rights.

In it, Haneman also points out that, on balance, the rights of the living typically outweigh the rights of the dead, and even specific instructions on how to handle human remains aren’t generally considered binding. Some requests, like organ donation that can benefit the living, are considered critical, Haneman noted. But there are mixed results on how courts enforce other interests of the dead—like a famous writer’s request to destroy all unpublished work or a pet lover’s insistence to destroy their cat or dog at death.

She told Ars that right now, “a lot of people are like, ‘Why do I care if somebody resurrects me after I’m dead?’ You know, ‘They can do what they want.’ And they think that, until they find a family member who’s been resurrected by a creepy ex-boyfriend or their dead grandmother’s resurrected, and then it becomes a different story.”

Existing law may protect “the privacy interests of the loved ones of the deceased from outrageous or harmful digital resurrections of the deceased,” Haneman noted, but in the case of the dancing grandma, her meme may not be deemed harmful, no matter how much it troubles the grandchild to see her grandma’s memory warped.

Limited legal protections may not matter so much if, culturally, communities end up developing a distaste for digital replicas, particularly if it becomes widely viewed as disrespectful to the dead, Haneman suggested. Right now, however, society is more fixated on solving other problems with deepfakes rather than clarifying the digital rights of the dead. That could be because few people have been impacted so far, or it could also reflect a broader cultural tendency to ignore death, Haneman told Ars.

“We don’t want to think about our own death, so we really kind of brush aside whether or not we care about somebody else being digitally resurrected until it’s in our face,” Haneman said.

Over time, attitudes may change, especially if the so-called “digital afterlife industry” takes off. And there is some precedent that the law could be changed to reinforce any culture shift.

“The throughline revealed by the law of the dead is that a sacred trust exists between the living and the deceased, with an emphasis upon protecting common humanity, such that data afforded no legal status (or personal data of the deceased) may nonetheless be treated with dignity and receive some basic protections,” Haneman wrote.

An alternative path to prevent AI resurrection

Preventing yourself from becoming an AI ghost seemingly now falls in a legal gray zone that policymakers may need to address.

Haneman calls for a solution that doesn’t depend on estate planning, which she warned “is a structurally inequitable and anachronistic approach that maximizes social welfare only for those who do estate planning.” More than 60 percent of Americans die without a will, often including “those without wealth,” as well as women and racial minorities who “are less likely to die with a valid estate plan in effect,” Haneman reported.”We can do better in a technology-based world,” Haneman wrote. “Any modern framework should recognize a lack of accessibility as an obstacle to fairness and protect the rights of the most vulnerable through approaches that do not depend upon hiring an attorney and executing an estate plan.”

Rather than twist the law to “recognize postmortem privacy rights,” Haneman advocates for a path for people resistant to digital replicas that focuses on a right to delete the data that would be used to create the AI ghost.

“Put simply, the deceased may exert control over digital legacy through the right to deletion of data but may not exert broader rights over non-commercial digital resurrection through estate planning,” Haneman recommended.

Sheehan told Ars that a right to deletion would likely involve estate planners, too.

“If this is not addressed in an estate planning document and not specifically addressed in the statute (or deemed under the authority of the executor via statute), then the only way to address this would be to go to court,” Sheehan said. “Even with a right of deletion, the deceased would need to delete said data before death or authorize his executor to do so post death, which would require an estate planning document, statutory authority, or court authority.”

Haneman agreed that for many people, estate planners would still be involved, recommending that “the right to deletion would ideally, from the perspective of estate administration, provide for a term of deletion within 12 months.” That “allows the living to manage grief and open administration of the estate before having to address data management issues,” Haneman wrote, and perhaps adequately balances “the interests of society against the rights of the deceased.”

To Haneman, it’s also the better solution for the people left behind because “creating a right beyond data deletion to curtail unauthorized non-commercial digital resurrection creates unnecessary complexity that overreaches, as well as placing the interests of the deceased over those of the living.”

Future generations may be raised with AI ghosts

If a dystopia that experts paint comes true, Big Tech companies may one day profit by targeting grieving individuals to seize the data of the dead, which could be more easily abused since it’s granted fewer rights than data of the living.

Perhaps in that future, critics suggest, people will be tempted into free trials in moments when they’re missing their loved ones most, then forced to either pay a subscription to continue accessing the bot or else perhaps be subjected to ad-based models where their chats with AI ghosts may even feature ads in the voices of the deceased.

Today, even in a world where AI ghosts aren’t yet compelling ad clicks, some experts have warned that interacting with AI ghosts could cause mental health harms, New Scientist reported, especially if the digital afterlife industry isn’t carefully designed, AI ethicists warned. Some people may end up getting stuck maintaining an AI ghost if it’s left behind as a gift, and ethicists suggested that the emotional weight of that could also eventually take a negative toll. While saying goodbye is hard, letting go is considered a critical part of healing during the mourning process, and AI ghosts may make that harder.

But the bots can be a helpful tool to manage grief, some experts suggest, provided that their use is limited to allow for a typical mourning process or combined with therapy from a trained professional, Al Jazeera reported. Ahmad told Ars that working on his bot has not only kept his father close to him but also helped him think more deeply about relationships and memory.

Haneman noted that people have many ways of honoring the dead. Some erect statues, and others listen to saved voicemails or watch old home movies. For some, just “smelling an old sweater” is a comfort. And creating digital replicas, as creepy as some people might find them, is not that far off from these traditions, Haneman said.

“Feeding text messages and emails into existing AI platforms such as ChatGPT and asking the AI to respond in the voice of the deceased is simply a change in degree, not in kind,” Haneman said.

For Ahmad, the decision to create a digital replica of his dad was a learning experience, and perhaps his experience shows why any family or loved one weighing the option should carefully consider it before starting the process.

In particular, he warns families to be careful introducing young kids to grief bots, as they may not be able to grasp that the bot is not a real person. When he initially saw his young kids growing confused with whether their grandfather was alive or not—the introduction of the bot was complicated by the early stages of the pandemic, a time when they met many relatives virtually—he decided to restrict access to the bot until they were older. For a time, the bot only came out for special events like birthdays.

He also realized that introducing the bot also forced him to have conversations about life and death with his kids at ages younger than he remembered fully understanding those concepts in his own childhood.

Now, Ahmad’s kids are among the first to be raised among AI ghosts. To continually enhance the family’s experience, their father continuously updates his father’s digital replica. Ahmad is currently most excited about recent audio advancements that make it easier to add a voice element. He hopes that within the next year, he might be able to use AI to finally nail down his South Asian father’s accent, which up to now has always sounded “just off.” For others working in this space, the next frontier is realistic video or even augmented reality tools, Ahmad told Ars.

To this day, the bot retains sentimental value for Ahmad, but, as Haneman suggested, the bot was not the only way he memorialized his dad. He also created a mosaic, and while his father never saw it, either, Ahmad thinks his dad would have approved.

“He would have been very happy,” Ahmad said.

There’s no way to predict how future generations may view grief tech. But while Ahmad said he’s not sure he’d be interested in an augmented reality interaction with his dad’s digital replica, kids raised seeing AI ghosts as a natural part of their lives may not be as hesitant to embrace or even build new features. Talking to Ars, Ahmad fondly remembered his young daughter once saw that he was feeling sad and came up with her own AI idea to help her dad feel better.

“It would be really nice if you can just take this program and we build a robot that looks like your dad, and then add it to the robot, and then you can go and hug the robot,” she said, according to her father’s memory.

Photo of Ashley Belanger

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

How to draft a will to avoid becoming an AI ghost—it’s not easy Read More »

ai-chatbots-tell-users-what-they-want-to-hear,-and-that’s-problematic

AI chatbots tell users what they want to hear, and that’s problematic

After the model has been trained, companies can set system prompts, or guidelines, for how the model should behave to minimize sycophantic behavior.

However, working out the best response means delving into the subtleties of how people communicate with one another, such as determining when a direct response is better than a more hedged one.

“[I]s it for the model to not give egregious, unsolicited compliments to the user?” Joanne Jang, head of model behavior at OpenAI, said in a Reddit post. “Or, if the user starts with a really bad writing draft, can the model still tell them it’s a good start and then follow up with constructive feedback?”

Evidence is growing that some users are becoming hooked on using AI.

A study by MIT Media Lab and OpenAI found that a small proportion were becoming addicted. Those who perceived the chatbot as a “friend” also reported lower socialization with other people and higher levels of emotional dependence on a chatbot, as well as other problematic behavior associated with addiction.

“These things set up this perfect storm, where you have a person desperately seeking reassurance and validation paired with a model which inherently has a tendency towards agreeing with the participant,” said Nour from Oxford University.

AI start-ups such as Character.AI that offer chatbots as “companions” have faced criticism for allegedly not doing enough to protect users. Last year, a teenager killed himself after interacting with Character.AI’s chatbot. The teen’s family is suing the company for allegedly causing wrongful death, as well as for negligence and deceptive trade practices.

Character.AI said it does not comment on pending litigation, but added it has “prominent disclaimers in every chat to remind users that a character is not a real person and that everything a character says should be treated as fiction.” The company added it has safeguards to protect under-18s and against discussions of self-harm.

Another concern for Anthropic’s Askell is that AI tools can play with perceptions of reality in subtle ways, such as when offering factually incorrect or biased information as the truth.

“If someone’s being super sycophantic, it’s just very obvious,” Askell said. “It’s more concerning if this is happening in a way that is less noticeable to us [as individual users] and it takes us too long to figure out that the advice that we were given was actually bad.”

© 2025 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.

AI chatbots tell users what they want to hear, and that’s problematic Read More »

anthropic-releases-custom-ai-chatbot-for-classified-spy-work

Anthropic releases custom AI chatbot for classified spy work

On Thursday, Anthropic unveiled specialized AI models designed for US national security customers. The company released “Claude Gov” models that were built in response to direct feedback from government clients to handle operations such as strategic planning, intelligence analysis, and operational support. The custom models reportedly already serve US national security agencies, with access restricted to those working in classified environments.

The Claude Gov models differ from Anthropic’s consumer and enterprise offerings, also called Claude, in several ways. They reportedly handle classified material, “refuse less” when engaging with classified information, and are customized to handle intelligence and defense documents. The models also feature what Anthropic calls “enhanced proficiency” in languages and dialects critical to national security operations.

Anthropic says the new models underwent the same “safety testing” as all Claude models. The company has been pursuing government contracts as it seeks reliable revenue sources, partnering with Palantir and Amazon Web Services in November to sell AI tools to defense customers.

Anthropic is not the first company to offer specialized chatbot services for intelligence agencies. In 2024, Microsoft launched an isolated version of OpenAI’s GPT-4 for the US intelligence community after 18 months of work. That system, which operated on a special government-only network without Internet access, became available to about 10,000 individuals in the intelligence community for testing and answering questions.

Anthropic releases custom AI chatbot for classified spy work Read More »

openai-is-retaining-all-chatgpt-logs-“indefinitely”-here’s-who’s-affected.

OpenAI is retaining all ChatGPT logs “indefinitely.” Here’s who’s affected.

In the copyright fight, Magistrate Judge Ona Wang granted the order within one day of the NYT’s request. She agreed with news plaintiffs that it seemed likely that ChatGPT users may be spooked by the lawsuit and possibly set their chats to delete when using the chatbot to skirt NYT paywalls. Because OpenAI wasn’t sharing deleted chat logs, the news plaintiffs had no way of proving that, she suggested.

Now, OpenAI is not only asking Wang to reconsider but has “also appealed this order with the District Court Judge,” the Thursday statement said.

“We strongly believe this is an overreach by the New York Times,” Lightcap said. “We’re continuing to appeal this order so we can keep putting your trust and privacy first.”

Who can access deleted chats?

To protect users, OpenAI provides an FAQ that clearly explains why their data is being retained and how it could be exposed.

For example, the statement noted that the order doesn’t impact OpenAI API business customers under Zero Data Retention agreements because their data is never stored.

And for users whose data is affected, OpenAI noted that their deleted chats could be accessed, but they won’t “automatically” be shared with The New York Times. Instead, the retained data will be “stored separately in a secure system” and “protected under legal hold, meaning it can’t be accessed or used for purposes other than meeting legal obligations,” OpenAI explained.

Of course, with the court battle ongoing, the FAQ did not have all the answers.

Nobody knows how long OpenAI may be required to retain the deleted chats. Likely seeking to reassure users—some of which appeared to be considering switching to a rival service until the order lifts—OpenAI noted that “only a small, audited OpenAI legal and security team would be able to access this data as necessary to comply with our legal obligations.”

OpenAI is retaining all ChatGPT logs “indefinitely.” Here’s who’s affected. Read More »

“in-10-years,-all-bets-are-off”—anthropic-ceo-opposes-decadelong-freeze-on-state-ai-laws

“In 10 years, all bets are off”—Anthropic CEO opposes decadelong freeze on state AI laws

On Thursday, Anthropic CEO Dario Amodei argued against a proposed 10-year moratorium on state AI regulation in a New York Times opinion piece, calling the measure shortsighted and overbroad as Congress considers including it in President Trump’s tax policy bill. Anthropic makes Claude, an AI assistant similar to ChatGPT.

Amodei warned that AI is advancing too fast for such a long freeze, predicting these systems “could change the world, fundamentally, within two years; in 10 years, all bets are off.”

As we covered in May, the moratorium would prevent states from regulating AI for a decade. A bipartisan group of state attorneys general has opposed the measure, which would preempt AI laws and regulations recently passed in dozens of states.

In his op-ed piece, Amodei said the proposed moratorium aims to prevent inconsistent state laws that could burden companies or compromise America’s competitive position against China. “I am sympathetic to these concerns,” Amodei wrote. “But a 10-year moratorium is far too blunt an instrument. A.I. is advancing too head-spinningly fast.”

Instead of a blanket moratorium, Amodei proposed that the White House and Congress create a federal transparency standard requiring frontier AI developers to publicly disclose their testing policies and safety measures. Under this framework, companies working on the most capable AI models would need to publish on their websites how they test for various risks and what steps they take before release.

“Without a clear plan for a federal response, a moratorium would give us the worst of both worlds—no ability for states to act and no national policy as a backstop,” Amodei wrote.

Transparency as the middle ground

Amodei emphasized his claims for AI’s transformative potential throughout his op-ed, citing examples of pharmaceutical companies drafting clinical study reports in minutes instead of weeks and AI helping to diagnose medical conditions that might otherwise be missed. He wrote that AI “could accelerate economic growth to an extent not seen for a century, improving everyone’s quality of life,” a claim that some skeptics believe may be overhyped.

“In 10 years, all bets are off”—Anthropic CEO opposes decadelong freeze on state AI laws Read More »

openai-slams-court-order-to-save-all-chatgpt-logs,-including-deleted-chats

OpenAI slams court order to save all ChatGPT logs, including deleted chats


OpenAI defends privacy of hundreds of millions of ChatGPT users.

OpenAI is now fighting a court order to preserve all ChatGPT user logs—including deleted chats and sensitive chats logged through its API business offering—after news organizations suing over copyright claims accused the AI company of destroying evidence.

“Before OpenAI had an opportunity to respond to those unfounded accusations, the court ordered OpenAI to ‘preserve and segregate all output log data that would otherwise be deleted on a going forward basis until further order of the Court (in essence, the output log data that OpenAI has been destroying),” OpenAI explained in a court filing demanding oral arguments in a bid to block the controversial order.

In the filing, OpenAI alleged that the court rushed the order based only on a hunch raised by The New York Times and other news plaintiffs. And now, without “any just cause,” OpenAI argued, the order “continues to prevent OpenAI from respecting its users’ privacy decisions.” That risk extended to users of ChatGPT Free, Plus, and Pro, as well as users of OpenAI’s application programming interface (API), OpenAI said.

The court order came after news organizations expressed concern that people using ChatGPT to skirt paywalls “might be more likely to ‘delete all [their] searches’ to cover their tracks,” OpenAI explained. Evidence to support that claim, news plaintiffs argued, was missing from the record because so far, OpenAI had only shared samples of chat logs that users had agreed that the company could retain. Sharing the news plaintiffs’ concerns, the judge, Ona Wang, ultimately agreed that OpenAI likely would never stop deleting that alleged evidence absent a court order, granting news plaintiffs’ request to preserve all chats.

OpenAI argued the May 13 order was premature and should be vacated, until, “at a minimum,” news organizations can establish a substantial need for OpenAI to preserve all chat logs. They warned that the privacy of hundreds of millions of ChatGPT users globally is at risk every day that the “sweeping, unprecedented” order continues to be enforced.

“As a result, OpenAI is forced to jettison its commitment to allow users to control when and how their ChatGPT conversation data is used, and whether it is retained,” OpenAI argued.

Meanwhile, there is no evidence beyond speculation yet supporting claims that “OpenAI had intentionally deleted data,” OpenAI alleged. And supposedly there is not “a single piece of evidence supporting” claims that copyright-infringing ChatGPT users are more likely to delete their chats.

“OpenAI did not ‘destroy’ any data, and certainly did not delete any data in response to litigation events,” OpenAI argued. “The Order appears to have incorrectly assumed the contrary.”

At a conference in January, Wang raised a hypothetical in line with her thinking on the subsequent order. She asked OpenAI’s legal team to consider a ChatGPT user who “found some way to get around the pay wall” and “was getting The New York Times content somehow as the output.” If that user “then hears about this case and says, ‘Oh, whoa, you know I’m going to ask them to delete all of my searches and not retain any of my searches going forward,'” the judge asked, wouldn’t that be “directly the problem” that the order would address?

OpenAI does not plan to give up this fight, alleging that news plaintiffs have “fallen silent” on claims of intentional evidence destruction, and the order should be deemed unlawful.

For OpenAI, risks of breaching its own privacy agreements could not only “damage” relationships with users but could also risk putting the company in breach of contracts and global privacy regulations. Further, the order imposes “significant” burdens on OpenAI, supposedly forcing the ChatGPT maker to dedicate months of engineering hours at substantial costs to comply, OpenAI claimed. It follows then that OpenAI’s potential for harm “far outweighs News Plaintiffs’ speculative need for such data,” OpenAI argued.

“While OpenAI appreciates the court’s efforts to manage discovery in this complex set of cases, it has no choice but to protect the interests of its users by objecting to the Preservation Order and requesting its immediate vacatur,” OpenAI said.

Users panicked over sweeping order

Millions of people use ChatGPT daily for a range of purposes, OpenAI noted, “ranging from the mundane to profoundly personal.”

People may choose to delete chat logs that contain their private thoughts, OpenAI said, as well as sensitive information, like financial data from balancing the house budget or intimate details from workshopping wedding vows. And for business users connecting to OpenAI’s API, the stakes may be even higher, as their logs may contain their companies’ most confidential data, including trade secrets and privileged business information.

“Given that array of highly confidential and personal use cases, OpenAI goes to great lengths to protect its users’ data and privacy,” OpenAI argued.

It does this partly by “honoring its privacy policies and contractual commitments to users”—which the preservation order allegedly “jettisoned” in “one fell swoop.”

Before the order was in place mid-May, OpenAI only retained “chat history” for users of ChatGPT Free, Plus, and Pro who did not opt out of data retention. But now, OpenAI has been forced to preserve chat history even when users “elect to not retain particular conversations by manually deleting specific conversations or by starting a ‘Temporary Chat,’ which disappears once closed,” OpenAI said. Previously, users could also request to “delete their OpenAI accounts entirely, including all prior conversation history,” which was then purged within 30 days.

While OpenAI rejects claims that ordinary users use ChatGPT to access news articles, the company noted that including OpenAI’s business customers in the order made “even less sense,” since API conversation data “is subject to standard retention policies.” That means API customers couldn’t delete all their searches based on their customers’ activity, which is the supposed basis for requiring OpenAI to retain sensitive data.

“The court nevertheless required OpenAI to continue preserving API Conversation Data as well,” OpenAI argued, in support of lifting the order on the API chat logs.

Users who found out about the preservation order panicked, OpenAI noted. In court filings, they cited social media posts sounding alarms on LinkedIn and X (formerly Twitter). They further argued that the court should have weighed those user concerns before issuing a preservation order, but “that did not happen here.”

One tech worker on LinkedIn suggested the order created “a serious breach of contract for every company that uses OpenAI,” while privacy advocates on X warned, “every single AI service ‘powered by’ OpenAI should be concerned.”

Also on LinkedIn, a consultant rushed to warn clients to be “extra careful” sharing sensitive data “with ChatGPT or through OpenAI’s API for now,” warning, “your outputs could eventually be read by others, even if you opted out of training data sharing or used ‘temporary chat’!”

People on both platforms recommended using alternative tools to avoid privacy concerns, like Mistral AI or Google Gemini, with one cybersecurity professional on LinkedIn describing the ordered chat log retention as “an unacceptable security risk.”

On X, an account with tens of thousands of followers summed up the controversy by suggesting that “Wang apparently thinks the NY Times’ boomer copyright concerns trump the privacy of EVERY @OpenAI USER—insane!!!”

The reason for the alarm is “simple,” OpenAI said. “Users feel more free to use ChatGPT when they know that they are in control of their personal information, including which conversations are retained and which are not.”

It’s unclear if OpenAI will be able to get the judge to waver if oral arguments are scheduled.

Wang previously justified the broad order partly due to the news organizations’ claim that “the volume of deleted conversations is significant.” She suggested that OpenAI could have taken steps to anonymize the chat logs but chose not to, only making an argument for why it “would not” be able to segregate data, rather than explaining why it “can’t.”

Spokespersons for OpenAI and The New York Times’ legal team declined Ars’ request to comment on the ongoing multi-district litigation.

Photo of Ashley Belanger

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

OpenAI slams court order to save all ChatGPT logs, including deleted chats Read More »

“godfather”-of-ai-calls-out-latest-models-for-lying-to-users

“Godfather” of AI calls out latest models for lying to users

One of the “godfathers” of artificial intelligence has attacked a multibillion-dollar race to develop the cutting-edge technology, saying the latest models are displaying dangerous characteristics such as lying to users.

Yoshua Bengio, a Canadian academic whose work has informed techniques used by top AI groups such as OpenAI and Google, said: “There’s unfortunately a very competitive race between the leading labs, which pushes them towards focusing on capability to make the AI more and more intelligent, but not necessarily put enough emphasis and investment on research on safety.”

The Turing Award winner issued his warning in an interview with the Financial Times, while launching a new non-profit called LawZero. He said the group would focus on building safer systems, vowing to “insulate our research from those commercial pressures.”

LawZero has so far raised nearly $30 million in philanthropic contributions from donors including Skype founding engineer Jaan Tallinn, former Google chief Eric Schmidt’s philanthropic initiative, as well as Open Philanthropy and the Future of Life Institute.

Many of Bengio’s funders subscribe to the “effective altruism” movement, whose supporters tend to focus on catastrophic risks surrounding AI models. Critics argue the movement highlights hypothetical scenarios while ignoring current harms, such as bias and inaccuracies.

Bengio said his not-for-profit group was founded in response to growing evidence over the past six months that today’s leading models were developing dangerous capabilities. This includes showing “evidence of deception, cheating, lying and self-preservation,” he said.

Anthropic’s Claude Opus model blackmailed engineers in a fictitious scenario where it was at risk of being replaced by another system. Research from AI testers Palisade last month showed that OpenAI’s o3 model refused explicit instructions to shut down.

Bengio said such incidents were “very scary, because we don’t want to create a competitor to human beings on this planet, especially if they’re smarter than us.”

The AI pioneer added: “Right now, these are controlled experiments [but] my concern is that any time in the future, the next version might be strategically intelligent enough to see us coming from far away and defeat us with deceptions that we don’t anticipate. So I think we’re playing with fire right now.”

“Godfather” of AI calls out latest models for lying to users Read More »

apple-legend-jony-ive-takes-control-of-openai’s-design-future

Apple legend Jony Ive takes control of OpenAI’s design future

On Wednesday, OpenAI announced that former Apple design chief Jony Ive and his design firm LoveFrom will take over creative and design control at OpenAI. The deal makes Ive responsible for shaping the future look and feel of AI products at the chatbot creator, extending across all of the company’s ventures, including ChatGPT.

Ive was Apple’s chief design officer for nearly three decades, where he led the design of iconic products including the iPhone, iPad, MacBook, and Apple Watch, earning numerous industry awards and helping transform Apple into the world’s most valuable company through his minimalist design philosophy.

“Thrilled to be partnering with jony, imo the greatest designer in the world,” tweeted OpenAI CEO Sam Altman while sharing a 9-minute promotional video touting the personal and professional relationship between Ive and Altman.

A screenshot of the Jony Ive / Sam Altman collaboration website captured on May 21, 2025.

A screenshot of the Jony Ive/Sam Altman collaboration website captured on May 21, 2025. Credit: OpenAI

Ive left Apple in 2019 to found LoveFrom, a design firm that has worked with companies including Ferrari, Airbnb, and luxury Italian fashion firm Moncler.

The mechanics of the Ive-OpenAI deal are slightly convoluted. At its core, OpenAI will acquire Ive’s company “io” in an all-equity deal valued at $6.5 billion—Ive founded io last year to design and develop AI-powered products. Meanwhile, io’s staff of approximately 55 engineers, scientists, researchers, physicists, and product development specialists will become part of OpenAI.

Meanwhile, Ive’s design firm LoveFrom will continue to operate independently, with OpenAI becoming a customer of LoveFrom, while LoveFrom will receive a stake in OpenAI. The companies expect the transaction to close this summer pending regulatory approval.

Apple legend Jony Ive takes control of OpenAI’s design future Read More »

openai-introduces-codex,-its-first-full-fledged-ai-agent-for-coding

OpenAI introduces Codex, its first full-fledged AI agent for coding

We’ve been expecting it for a while, and now it’s here: OpenAI has introduced an agentic coding tool called Codex in research preview. The tool is meant to allow experienced developers to delegate rote and relatively simple programming tasks to an AI agent that will generate production-ready code and show its work along the way.

Codex is a unique interface (not to be confused with the Codex CLI tool introduced by OpenAI last month) that can be reached from the side bar in the ChatGPT web app. Users enter a prompt and then click either “code” to have it begin producing code, or “ask” to have it answer questions and advise.

Whenever it’s given a task, that task is performed in a distinct container that is preloaded with the user’s codebase and is meant to accurately reflect their development environment.

To make Codex more effective, developers can include an “AGENTS.md” file in the repo with custom instructions, for example to contextualize and explain the code base or to communicate standardizations and style practices for the project—kind of a README.md but for AI agents rather than humans.

Codex is built on codex-1, a fine-tuned variation of OpenAI’s o3 reasoning model that was trained using reinforcement learning on a wide range of coding tasks to analyze and generate code, and to iterate through tests along the way.

OpenAI introduces Codex, its first full-fledged AI agent for coding Read More »

openai-adds-gpt-4.1-to-chatgpt-amid-complaints-over-confusing-model-lineup

OpenAI adds GPT-4.1 to ChatGPT amid complaints over confusing model lineup

The release comes just two weeks after OpenAI made GPT-4 unavailable in ChatGPT on April 30. That earlier model, which launched in March 2023, once sparked widespread hype about AI capabilities. Compared to that hyperbolic launch, GPT-4.1’s rollout has been a fairly understated affair—probably because it’s tricky to convey the subtle differences between all of the available OpenAI models.

As if 4.1’s launch wasn’t confusing enough, the release also roughly coincides with OpenAI’s July 2025 deadline for retiring the GPT-4.5 Preview from the API, a model one AI expert called a “lemon.” Developers must migrate to other options, OpenAI says, although GPT-4.5 will remain available in ChatGPT for now.

A confusing addition to OpenAI’s model lineup

In February, OpenAI CEO Sam Altman acknowledged his company’s confusing AI model naming practices on X, writing, “We realize how complicated our model and product offerings have gotten.” He promised that a forthcoming “GPT-5” model would consolidate the o-series and GPT-series models into a unified branding structure. But the addition of GPT-4.1 to ChatGPT appears to contradict that simplification goal.

So, if you use ChatGPT, which model should you use? If you’re a developer using the models through the API, the consideration is more of a trade-off between capability, speed, and cost. But in ChatGPT, your choice might be limited more by personal taste in behavioral style and what you’d like to accomplish. Some of the “more capable” models have lower usage limits as well because they cost more for OpenAI to run.

For now, OpenAI is keeping GPT-4o as the default ChatGPT model, likely due to its general versatility, balance between speed and capability, and personable style (conditioned using reinforcement learning and a specialized system prompt). The simulated reasoning models like 03 and 04-mini-high are slower to execute but can consider analytical-style problems more systematically and perform comprehensive web research that sometimes feels genuinely useful when it surfaces relevant (non-confabulated) web links. Compared to those, OpenAI is largely positioning GPT-4.1 as a speedier AI model for coding assistance.

Just remember that all of the AI models are prone to confabulations, meaning that they tend to make up authoritative-sounding information when they encounter gaps in their trained “knowledge.” So you’ll need to double-check all of the outputs with other sources of information if you’re hoping to use these AI models to assist with an important task.

OpenAI adds GPT-4.1 to ChatGPT amid complaints over confusing model lineup Read More »

the-tinkerers-who-opened-up-a-fancy-coffee-maker-to-ai-brewing

The tinkerers who opened up a fancy coffee maker to AI brewing

(Ars contacted Fellow Products for comment on AI brewing and profile sharing and will update this post if we get a response.)

Opening up brew profiles

Fellow’s brew profiles are typically shared with buyers of its “Drops” coffees or between individual users through a phone app.

Credit: Fellow Products

Fellow’s brew profiles are typically shared with buyers of its “Drops” coffees or between individual users through a phone app. Credit: Fellow Products

Aiden profiles are shared and added to Aiden units through Fellow’s brew.link service. But the profiles are not offered in an easy-to-sort database, nor are they easy to scan for details. So Aiden enthusiast and hobbyist coder Kevin Anderson created brewshare.coffee, which gathers both general and bean-based profiles, makes them easy to search and load, and adds optional but quite helpful suggested grind sizes.

As a non-professional developer jumping into a public offering, he had to work hard on data validation, backend security, and mobile-friendly design. “I just had a bit of an idea and a hobby, so I thought I’d try and make it happen,” Anderson writes. With his tool, brew links can be stored and shared more widely, which helped both Dixon and another AI/coffee tinkerer.

Gabriel Levine, director of engineering at retail analytics firm Leap Inc., lost his OXO coffee maker (aka the “Barista Brain”) to malfunction just before the Aiden debuted. The Aiden appealed to Levine as a way to move beyond his coffee rut—a “nice chocolate-y medium roast, about as far as I went,” he told Ars. “This thing that can be hyper-customized to different coffees to bring out their characteristics; [it] really kind of appealed to that nerd side of me,” Levine said.

Levine had also been doing AI stuff for about 10 years, or “since before everyone called it AI—predictive analytics, machine learning.” He described his career as “both kind of chief AI advocate and chief AI skeptic,” alternately driving real findings and talking down “everyone who… just wants to type, ‘how much money should my business make next year’ and call that work.” Like Dixon, Levine’s work and fascination with Aiden ended up intersecting.

The coffee maker with 3,588 ideas

The author’s conversation with the Aiden Profile Creator, which pulled in both brewing knowledge and product info for a widely available coffee.

Levine’s Aiden Profile Creator is a ChatGPT prompt set up with a custom prompt and told to weight certain knowledge more heavily. What kind of prompt and knowledge? Levine didn’t want to give away his exact work. But he cited resources like the Specialty Coffee Association of America and James Hoffman’s coffee guides as examples of what he fed it.

What it does with that knowledge is something of a mystery to Levine himself. “There’s this kind of blind leap, where it’s grabbing the relevant pieces of information from the knowledge base, biasing toward all the expert advice and extraction science, doing something with it, and then I take that something and coerce it back into a structured output I can put on your Aiden,” Levine said.

It’s a blind leap, but it has landed just right for me so far. I’ve made four profiles with Levine’s prompt based on beans I’ve bought: Stumptown’s Hundred Mile, a light-roasted batch from Jimma, Ethiopia from Small Planes, Lost Sock’s Western House filter blend, and some dark-roast beans given as a gift. With the Western House, Levine’s profile creator said it aimed to “balance nutty sweetness, chocolate richness, and bright cherry acidity, using a slightly stepped temperature profile and moderate pulse structure.” The resulting profile has worked great, even if the chatbot named it “Cherry Timber.”

Levine’s chatbot relies on two important things: Dixon’s work in revealing Fellow’s Aiden API and his own workhorse Aiden. Every Aiden profile link is created on a machine, so every profile created by Levine’s chat is launched, temporarily, from the Aiden in his kitchen, then deleted. “I’ve hit an undocumented limit on the number of profiles you can have on one machine, so I’ve had to do some triage there,” he said. As of April 22, nearly 3,600 profiles had passed through Levine’s Aiden.

“My hope with this is that it lowers the bar to entry,” Levine said, “so more people get into these specialty roasts and it drives people to support local roasters, explore their world a little more. I feel like that certainly happened to me.”

Something new is brewing

Credit: Fellow Products

Having admitted to myself that I find something generated by ChatGPT prompts genuinely useful, I’ve softened my stance slightly on LLM technology, if not the hype. Used within very specific parameters, with everything second-guessed, I’m getting more comfortable asking chat prompts for formatted summaries on topics with lots of expertise available. I do my own writing, and I don’t waste server energy on things I can, and should, research myself. I even generally resist calling language model prompts “AI,” given the term’s baggage. But I’ve found one way to appreciate its possibilities.

This revelation may not be new to someone already steeped in the models. But having tested—and tasted—my first big experiment with willfully engaging with a brewing bot, I’m a bit more awake.

This post was updated at 8: 40 a.m. with a different capture of a GPT-created recipe.

The tinkerers who opened up a fancy coffee maker to AI brewing Read More »