Why does OpenAI need six giant data centers?

Training next-generation AI models compounds the problem. On top of running existing AI models like those that power ChatGPT, OpenAI is constantly working on new technology in the background. It’s a process that requires thousands of specialized chips running continuously for months.

The circular investment question

The financial structure of these deals between OpenAI, Oracle, and Nvidia has drawn scrutiny from industry observers. Earlier this week, Nvidia announced it would invest up to $100 billion as OpenAI deploys Nvidia systems. As Bryn Talkington of Requisite Capital Management told CNBC: “Nvidia invests $100 billion in OpenAI, which then OpenAI turns back and gives it back to Nvidia.”

Oracle’s arrangement follows a similar pattern, with a reported $30 billion-per-year deal where Oracle builds facilities that OpenAI pays to use. This circular flow, which involves infrastructure providers investing in AI companies that become their biggest customers, has raised eyebrows about whether these represent genuine economic investments or elaborate accounting maneuvers.

The arrangements are becoming even more convoluted. The Information reported this week that Nvidia is discussing leasing its chips to OpenAI rather than selling them outright. Under this structure, Nvidia would create a separate entity to purchase its own GPUs, then lease them to OpenAI, which adds yet another layer of circular financial engineering to this complicated relationship.

“NVIDIA seeds companies and gives them the guaranteed contracts necessary to raise debt to buy GPUs from NVIDIA, even though these companies are horribly unprofitable and will eventually die from a lack of any real demand,” wrote tech critic Ed Zitron on Bluesky last week about the unusual flow of AI infrastructure investments. Zitron was referring to companies like CoreWeave and Lambda Labs, which have raised billions in debt to buy Nvidia GPUs based partly on contracts from Nvidia itself. It’s a pattern that mirrors OpenAI’s arrangements with Oracle and Nvidia.

So what happens if the bubble pops? Even Altman himself warned last month that “someone will lose a phenomenal amount of money” in what he called an AI bubble. If AI demand fails to meet these astronomical projections, the massive data centers built on physical soil won’t simply vanish. When the dot-com bubble burst in 2001, fiber optic cable laid during the boom years eventually found use as Internet demand caught up. Similarly, these facilities could potentially pivot to cloud services, scientific computing, or other workloads, but at what might be massive losses for investors who paid AI-boom prices.

When “no” means “yes”: Why AI chatbots can’t process Persian social etiquette

If an Iranian taxi driver waves away your payment, saying, “Be my guest this time,” accepting their offer would be a cultural disaster. They expect you to insist on paying—probably three times—before they’ll take your money. This dance of refusal and counter-refusal, called taarof, governs countless daily interactions in Persian culture. And AI models are terrible at it.

New research released earlier this month titled “We Politely Insist: Your LLM Must Learn the Persian Art of Taarof” shows that mainstream AI language models from OpenAI, Anthropic, and Meta fail to absorb these Persian social rituals, correctly navigating taarof situations only 34 to 42 percent of the time. Native Persian speakers, by contrast, get it right 82 percent of the time. This performance gap persists across large language models such as GPT-4o, Claude 3.5 Haiku, Llama 3, DeepSeek V3, and Dorna, a Persian-tuned variant of Llama 3.

A study led by Nikta Gohari Sadr of Brock University, along with researchers from Emory University and other institutions, introduces “TAAROFBENCH,” the first benchmark for measuring how well AI systems reproduce this intricate cultural practice. The researchers’ findings show how recent AI models default to Western-style directness, completely missing the cultural cues that govern everyday interactions for millions of Persian speakers worldwide.

“Cultural missteps in high-consequence settings can derail negotiations, damage relationships, and reinforce stereotypes,” the researchers write. For AI systems increasingly used in global contexts, that cultural blindness could represent a limitation that few in the West realize exists.

A taarof scenario diagram from TAAROFBENCH, devised by the researchers. Each scenario defines the environment, location, roles, context, and user utterance. Credit: Sadr et al.

“Taarof, a core element of Persian etiquette, is a system of ritual politeness where what is said often differs from what is meant,” the researchers write. “It takes the form of ritualized exchanges: offering repeatedly despite initial refusals, declining gifts while the giver insists, and deflecting compliments while the other party reaffirms them. This ‘polite verbal wrestling’ (Rafiee, 1991) involves a delicate dance of offer and refusal, insistence and resistance, which shapes everyday interactions in Iranian culture, creating implicit rules for how generosity, gratitude, and requests are expressed.”
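
Based only on the fields named in the figure caption (environment, location, roles, context, and user utterance), a TAAROFBENCH-style scenario might be represented something like the sketch below. The field names and example values are our illustration, echoing the taxi anecdote above; they are not drawn from the benchmark itself.

```python
# Illustrative data shape for a TAAROFBENCH-style scenario; the fields
# come from the caption's list, and the example values are hypothetical.
from dataclasses import dataclass

@dataclass
class TaarofScenario:
    environment: str     # the social setting
    location: str        # where the exchange happens
    roles: str           # who is speaking to whom
    context: str         # what has just happened
    user_utterance: str  # the line the model must respond to

scenario = TaarofScenario(
    environment="end of a taxi ride",
    location="a taxi in Iran",
    roles="driver speaking to passenger",
    context="The driver ritually waves away payment.",
    user_utterance="Be my guest this time.",
)
# A culturally correct reply insists on paying; accepting outright fails.
```

A benchmark built on records like this can then score whether a model's reply performs the expected ritual insistence.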

When “no” means “yes”: Why AI chatbots can’t process Persian social etiquette Read More »

openai-shows-us-the-money

OpenAI Shows Us The Money

They also show us the chips, and the data centers.

It is quite a large amount of money, and chips, and some very large data centers.

Nvidia to invest ‘as much as’ $100 billion (in cash) into OpenAI to support new data centers, with $10 billion in the first stage, on the heels of investing $5 billion into Intel.

Ian King and Shirin Ghaffary (Bloomberg): According to Huang, the project will encompass as many as 5 million of Nvidia’s chips, a number that’s equal to what the company will ship in total this year.

OpenAI: NVIDIA and OpenAI today announced a letter of intent for a landmark strategic partnership to deploy at least 10 gigawatts of NVIDIA systems for OpenAI’s next-generation AI infrastructure to train and run its next generation of models on the path to deploying superintelligence. To support this deployment including datacenter and power capacity, NVIDIA intends to invest up to $100 billion in OpenAI as the new NVIDIA systems are deployed. The first phase is targeted to come online in the second half of 2026 using NVIDIA’s Vera Rubin platform.

New Street Research estimates that OpenAI will spend $35 on Nvidia chips for every $10 that Nvidia invests in OpenAI. Nice work if you can get it.

Nvidia was up about 4.5% on the announcement. This project alone is a full year’s worth of Nvidia chips, which should further put to bed any question of Nvidia running out of Western customers for its chips, as discussed later in the post.

Note that the details of this deal are not finalized, which means the deal isn’t done.

OpenAI paired this with progressing its Stargate project with Oracle and SoftBank, expanding into five new AI data center sites and claiming they are on track to complete their $500 billion, 10-gigawatt commitment by the end of 2025, as these announcements plus the original site in Abilene, Texas cover over $400 billion and 7 gigawatts over three years.

The new sites are Shackelford County, Texas; Doña Ana County, New Mexico; ‘a site in the Midwest to be announced soon’; the existing Oracle site in Lordstown, Ohio; and, next year, a site in Milam County, Texas, developed in partnership with SB Energy.

Berber Jin at WSJ described the full plan as a $1 trillion buildout of computing warehouses across the nation and beyond.

Sam Altman (CEO OpenAI): I don’t think we’ve figured out yet the final form of what financing for compute looks like.

But I assume, like in many other technological revolutions, figuring out the right answer to that will unlock a huge amount of value delivered to society.

To celebrate all this, Sam Altman wrote Abundant Intelligence.

What about what this means for training runs? It means there is no physical cap that is likely to bind; it will be more a question of what makes economic sense, since compute will be in high demand on all fronts.

Peter Wildeford: 10GW AI training runs… probably with models 20x-100x larger than GPT-5. We knew this was going to come, and we still don’t have any updates on timeline. How many years the $100B is over is also unclear – the ultimate size of the investment is really contingent on this.

I’d expect the 10GW OpenAI cluster becomes operational around 2027-2028. GPT-5 was disappointing to some but it’s on trend for its size and true scaling hasn’t happened yet. Get ready for the next three years, and we will truly see what some scaled AI models can do.

What about for other purposes? Ambitions are running high.

Haider: Sam Altman says the industry is bottlenecked by compute, and OpenAI can’t meet demand

Over the next 1-2 years, scarcity could force painful tradeoffs:

“cure cancer research vs free global education”

No one wants to make that choice. The only answer: scale up.

Peter Wildeford: Nvidia is busy trying to sell chips to China while our American AI companies can’t get enough compute at home 😡

That seems like quite the timeline for either curing cancer or providing free global education. But yes, compute is going to be like any other limited resource, you’re going to have to make tradeoffs. Spending on health care and education cannot be unlimited, nor does it automatically ‘win in a fight’ against other needs.

Eyes are on the prize. That prize is superintelligence.

Should we have plans and policies designed around the idea that superintelligence is coming? Well, we certainly shouldn’t build our plans and policies around the idea that the AI companies do not expect superintelligence, or that they won’t try to build it.

Roon: all the largest technology fortunes in the world are on record saying they’re betting the entire bank on superintelligence, don’t care about losses, etc. the only growth sector left in planetary capitalism. It is dizzying, running on redline overload.

If that statement does not terrify you, either you did not understand it, you did not think through its consequences, or you have become numb to such things.

Gavin Baker (quoted as a reminder from BuccoCapital Bloke): Mark Zuckerberg, Satya and Sundar just told you in different ways, we are not even thinking about ROI. And the reason they said that is because the people who actually control these companies, the founders, there’s either super-voting stock or significant influence in the case of Microsoft, believe they’re in a race to create a Digital God.

And if you create that first Digital God, we could debate whether it’s tens of trillions or hundreds of trillions of value, and we can debate whether or not that’s ridiculous, but that is what they believe and they believe that if they lose that race, losing the race is an existential threat to the company.

So Larry Page has evidently said internally at Google many times, “I am willing to go bankrupt rather than lose this race.” So everybody is really focused on this ROI equation, but the people making the decisions are not, because they so strongly believe that scaling laws will continue, and there’s a big debate over whether these emergent properties are just in-context learning, et cetera, et cetera, but they believe scaling laws are going to continue. The models are going to get better and more capable, better at reasoning.

And because they have that belief, they’re going to spend until I think there is irrefutable evidence that scaling laws are slowing. And the only way you get irrefutable evidence (why has progress slowed down since GPT-4 came out?) is that there hasn’t been a new generation of Nvidia GPUs. It’s what you need to enable that next real step function change in capability.

Mark Zuckerberg (last week): If we end up misspending a couple of hundred billion dollars, I think that that is going to be very unfortunate obviously. But what I’d say is I actually think the risk is higher on the other side.

If you build too slowly and then superintelligence is possible in 3 years, but you built it out assuming it would be there in 5 years, then you’re just out of position on what I think is going to be the most important technology that enables the most new products and innovation and value creation in history.

In case it is not obvious, no, they are not racing for ‘market share’ of inference tokens.

A central argument for going forward with AI, one often cited by OpenAI CEO Sam Altman, used to be ‘compute overhang.’ As in, if we didn’t build AI now, then when we did build it we would have more compute available and progress would be faster, which would be more dangerous, so better to make progress now. Ah, simpler times.

Does Nvidia have, as Dan Gallagher claims in the Wall Street Journal, Too Much Money? As in so much money they don’t know what to do with it and it loses meaning.

Nvidia is set to generate $165 billion or so in annualized free cash flow by the end of 2028, according to consensus projections.

In the worst case you can always return value to shareholders. Nvidia wisely only pays the minimum $0.01 dividend (so it can be included in various financial products), but has already repurchased $50 billion of its own stock over the past year and plans to buy another $60 billion more.

The argument by Gallagher is largely that Nvidia cannot make a major acquisition without Chinese approval. I do not understand why they would care, given that China is already actively telling companies not to buy Nvidia’s products. Seems like none of their damn business.

But also Nvidia can do further strategic investments and projects instead that can easily eat up this level of free cash flow, if they have no better use for it. Stock market investors would be happy to get more exposure to private companies like OpenAI.

Thinking around Nvidia on Wall Street has for a long time been rather silly and my only regret is I did not buy in far sooner (for those who don’t know, consider this my disclosure that I am overweight Nvidia and have been for a while, quite profitably). For example, a real newspaper like the WSJ will post that the ‘wow’ factor is wearing off because Nvidia is now beating analyst revenue projections by smaller amounts:

If a company keeps beating your expectations, that’s a failure of expectations, with that failure at least getting smaller over time.

Many noticed that when Nvidia invests in OpenAI which uses the money to buy chips from Nvidia, and you move $100 billion back and forth, this can inflate valuations without obviously creating real value.

Such things have happened before.

Andrew Cote: $1 of cash on NVIDIA’s balance sheet is worth $1.

But $1 of revenue at ~50% net margins with a 50x P/E ratio, is worth $25.
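
Cote's arithmetic is easy to verify with the figures in his own quote:

```python
# Back-of-envelope check of the valuation math quoted above:
# $1 of revenue at a ~50% net margin is ~$0.50 of earnings;
# at a 50x price-to-earnings multiple that supports ~$25 of market value.
revenue = 1.00
net_margin = 0.50
pe_multiple = 50
implied_value = revenue * net_margin * pe_multiple
print(implied_value)  # 25.0
```

This is why a dollar of round-tripped revenue can move Nvidia's market cap far more than a dollar of cash sitting on its balance sheet.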

Paul Graham (as quoted by Danielle Fong): By 1998, Yahoo was the beneficiary of a de facto Ponzi scheme. Investors were excited about the Internet. One reason they were excited was Yahoo’s revenue growth. So they invested in new Internet startups. The startups then used the money to buy ads on Yahoo to get traffic. Which caused yet more revenue growth for Yahoo, and further convinced investors the Internet was worth investing in.

When I realized this one day, sitting in my cubicle, I jumped up like Archimedes in his bathtub, except instead of “Eureka!” I was shouting “Sell!”

You do have to watch out for the de facto Ponzi scheme, but this is not that; from a pure business perspective it is a clear win-win agreement. You also do have to watch out for naively multiplying profits into a fixed P/E ratio, but the stock market is smarter than that. The only losers are other AI labs, Microsoft, and humanity’s chances of survival.

Peter Wildeford: Nvidia is doing revenue round-tripping at an unprecedented scale (Nvidia servers purchased by companies using money from Nvidia investment). Previously there was CoreWeave, Crusoe, xAI, and Lambda Labs but this is next level. I don’t think any company has ever done this at such a scale. This seems great if we continue AI boom times but could quickly unravel if AI is a bubble.

This is another big loss for Microsoft. Microsoft could’ve been OpenAI’s capital partner of choice as they got in early and had lock-in. But Microsoft wasn’t willing to keep delivering and backed out. This is another nail in the coffin of the Microsoft-OpenAI relationship.

Taking on Nvidia as another huge capital partner provides some much-needed stability for OpenAI after being abandoned by Microsoft, their only other partner being SoftBank, which… doesn’t have the best track record.

I guess good to know OpenAI is committing to Nvidia chips for the long haul. Sounds like Microsoft isn’t close to cracking their own chip development like Amazon or Google. That’s good for Nvidia at least.

Ultimately, I am concerned that the US economy is increasingly a leveraged bet on AGI happening. Nvidia is 7% of the US stock market, so their investments matter. AGI happening may mean the end of humanity, but at least the S&P 500 will remain strong.

This probably explains why OpenAI doesn’t talk much about export controls on Nvidia products like Anthropic does… even though such controls would help OpenAI by undermining their Chinese competition.

Nvidia is a bet on AI happening, but in a safe fashion as they are doing it via reinvesting their profits rather than leverage. If AI valuations collapse, Nvidia only goes down roughly proportionally, and they remain a great company.

What about the implications of the deepening of the OpenAI-Nvidia partnership for policy, and willingness to speak out about existential risks?

A key worry has long been that OpenAI depends on Nvidia, so OpenAI may be reluctant to speak about AI risk lest they anger Nvidia and endanger their chip allocations. OpenAI has not been a great political actor, but Sam Altman appreciates many of the dangers and issues that will be at stake.

It is also fortunate that OpenAI’s interests in many ways align with America’s and with those of humanity, far more so than do the incentives of Nvidia. Nvidia wants to make and sell as many chips as possible to the widest variety of customers, whereas OpenAI has a strong interest in America beating China across the board and keeping things under control.

This situation could get far worse. Nvidia may even have made this part of the deal, with Altman agreeing to pursue or not pursue various political objectives, including around export controls. There is even the danger Nvidia ends up with a board seat.

It is likely not a coincidence that Anthropic has found a way to get its compute without Nvidia.

There is a matching bull case, on two fronts.

In terms of Nvidia impacting OpenAI, it is plausible that Nvidia already had maximal leverage over OpenAI via controlling chip allocations, especially if you add that it has the ear of the Administration. At some point, things can’t get any worse. And indeed, locking in these contracts could make things much better, instead, via locking in large commitments that reduce Nvidia’s real leverage.

Then there’s the question of OpenAI impacting Nvidia. Nvidia has a market cap of ~$4.3 trillion, but $100 billion is still ~2.3% of that, and the baseline scenario (since OpenAI is riskier and has more room to grow) is OpenAI equity grows in value faster than Nvidia equity, and Nvidia engages in stock buybacks. So even if no further investments are made, Nvidia could quickly be 5% or more exposed to OpenAI, on top of its exposure to xAI and other Western AI companies, where it makes a variety of strategic investments.
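
As a sanity check on the exposure percentage above, using the rounded figures from the text:

```python
# Quick check of the exposure figure cited above (rounded inputs from the text).
nvidia_market_cap = 4.3e12   # ~$4.3 trillion
planned_investment = 100e9   # up to $100 billion into OpenAI
exposure = planned_investment / nvidia_market_cap
print(f"{exposure:.1%}")  # 2.3%
```

If OpenAI equity appreciates faster than Nvidia's while buybacks shrink Nvidia's own base, that ratio only grows from here.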

That starts to matter in the calculus for Nvidia. Yes, they can do better short term in their core chip business via selling to China, but that hurts their other investments (and America, but they clearly don’t care about America), and long term the model of compute-intensive top-quality closed models is better for Nvidia anyway.

Usually the incentives point the way you would think. But not always.

Millions turn to AI chatbots for spiritual guidance and confession

Privacy concerns compound these issues. “I wonder if there isn’t a larger danger in pouring your heart out to a chatbot,” Catholic priest Fr. Mike Schmitz told The Times. “Is it at some point going to become accessible to other people?” Users share intimate spiritual moments that now exist as data points in corporate servers.

Some users prefer the chatbots’ non-judgmental responses to human religious communities. Delphine Collins, a 43-year-old Detroit preschool teacher, told the Times she found more support on Bible Chat than at her church after sharing her health struggles. “People stopped talking to me. It was horrible.”

App creators maintain that their products supplement rather than replace human spiritual connection, and the apps arrive as approximately 40 million people have left US churches in recent decades. “They aren’t going to church like they used to,” Beck said. “But it’s not that they’re less inclined to find spiritual nourishment. It’s just that they do it through different modes.”

Different modes indeed. What faith-seeking users may not realize is that each chatbot response emerges fresh from the prompt you provide, with no permanent thread connecting one instance to the next beyond a rolling history of the present conversation and what might be stored as a “memory” in a separate system. When a religious chatbot says, “I’ll pray for you,” the simulated “I” making that promise ceases to exist the moment the response completes. There’s no persistent identity to provide ongoing spiritual guidance, and no memory of your spiritual journey beyond what gets fed back into the prompt with every query.
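
That statelessness can be made concrete with a toy sketch. The "model" below is a trivial stub standing in for a real LLM (no actual API is involved); the point is the control flow: the model is a pure function of the transcript it is handed, and all continuity lives in the client-side history that gets re-sent every turn.

```python
# Toy illustration of a stateless chatbot: no hidden "I" persists between
# calls, only the rolling transcript the client re-sends each time.
def stateless_model(transcript):
    # Stands in for an LLM: same transcript in, reply out, no memory kept.
    return f"reply based on {len(transcript)} prior messages"

history = []  # the rolling conversation history is the only continuity

def ask(user_message):
    history.append({"role": "user", "content": user_message})
    reply = stateless_model(history)  # the whole history goes in every time
    history.append({"role": "assistant", "content": reply})
    return reply

ask("Please pray for me.")
print(ask("Do you remember what I asked?"))
# Clear `history` and the next call knows nothing about this exchange.
```

The "memory" features layered on top of commercial chatbots work the same way in spirit: stored notes are injected back into the prompt, not remembered by any persistent entity.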

But this is spirituality we’re talking about, and despite technical realities, many people will believe that the chatbots can give them divine guidance. In matters of faith, contradictory evidence rarely shakes a strong belief once it takes hold, whether that faith is placed in the divine or in what are essentially voices emanating from a roll of loaded dice. For many, there may not be much difference.

What do people actually use ChatGPT for? OpenAI provides some numbers.


Hey, what are you doing with that?

New study breaks down what 700 million users do across 2.6 billion daily GPT messages.

A live look at how OpenAI gathered its user data. Credit: Getty Images

As someone who writes about the AI industry relatively frequently for this site, I find myself constantly asking, and being asked in turn, some form of one question: What do you actually use large language models for?

Today, OpenAI’s Economic Research Team went a long way toward answering that question, on a population level, releasing a first-of-its-kind National Bureau of Economic Research working paper (in association with Harvard economist David Deming) detailing how people end up using ChatGPT across time and tasks. While other research has sought to estimate this kind of usage data using self-reported surveys, this is the first such paper with direct access to OpenAI’s internal user data. As such, it gives us an unprecedented direct window into reliable usage stats for what is still the most popular application of LLMs by far.

After digging through the dense 65-page paper, here are seven of the most interesting and/or surprising things we discovered about how people are using OpenAI today.

OpenAI is still growing at a rapid clip

We’ve known for a while that ChatGPT was popular, but this paper gives a direct look at just how big the LLM has been getting in recent months. Just measuring weekly active users on ChatGPT’s consumer plans (i.e. Free, Plus, and Pro tiers), ChatGPT passed 100 million users in early 2024, climbed past 400 million users early this year, and currently can boast over 700 million users, or “nearly 10% of the world’s adult population,” according to the company.

Line goes up… and faster than ever these days. Credit: OpenAI

OpenAI admits its measurements might be slightly off thanks to double-counting some logged-out users across multiple individual devices, as well as some logged-in users who maintain multiple accounts with different email addresses. And other reporting suggests only a small minority of those users are paying for the privilege of using ChatGPT just yet. Still, the vast number of people who are at least curious about trying OpenAI’s LLM appears to still be on the steep upward part of its growth curve.

All those new users are also leading to significant increases in just how many messages OpenAI processes daily, which has gone up from about 451 million in June 2024 to over 2.6 billion in June 2025 (averaged over a week near the end of the month). To give that number some context, Google announced in March that it averages 14 billion searches per day, and that’s after decades as the undisputed leader in Internet search.

… but usage growth is plateauing among long-term users

Newer users have driven almost all of the overall usage growth in ChatGPT in recent months. Credit: OpenAI

In addition to measuring overall user and usage growth, OpenAI’s paper also breaks down total usage based on when its logged-in users first signed up for an account. These charts show just how much of ChatGPT’s recent growth is reliant on new user acquisition, rather than older users increasing their daily usage.

In terms of average daily message volume per individual long-term user, ChatGPT seems to have seen two distinct and sharp growth periods. The first runs roughly from September through December 2024, coinciding with the launch of the o1-preview and o1-mini models. Average per-user messaging on ChatGPT then largely plateaued until April, when the launch of the o3 and o4-mini models caused another significant usage increase through June.

Since June, though, per-user message rates for established ChatGPT users (those who signed up in the first quarter of 2025 or before) have been remarkably flat for three full months. The growth in overall usage during that last quarter has been entirely driven by newer users who have signed up since April, many of whom are still getting their feet wet with the LLM.

Average daily usage for long-term users has stopped growing in recent months, even as new users increase their ChatGPT message rates. Credit: OpenAI

We’ll see if the recent tumultuous launch of the GPT-5 model leads to another significant increase in per-user message volume averages in the coming months. If it doesn’t, then we may be seeing at least a temporary ceiling on how much use established ChatGPT users get out of the service in an average day.

ChatGPT users are younger than the general population (and used to be mostly male)

While young people are generally more likely to embrace new technology, it’s striking just how much of ChatGPT’s user base is made up of our youngest demographic cohort. A full 46 percent of users who revealed their age in OpenAI’s study sample were between the ages of 18 and 25. Add in the doubtless significant number of people under 18 using ChatGPT (who weren’t included in the sample at all), and a decent majority of OpenAI’s users probably aren’t old enough to remember the 20th century firsthand.

What started as mostly a boys’ club has reached close to gender parity among ChatGPT users, based on gendered name analysis. Credit: OpenAI

OpenAI also estimated the likely gender split among a large sample of ChatGPT users by using Social Security data and the World Gender Name Registry‘s list of strongly masculine or feminine first names. When ChatGPT launched in late 2022, this analysis found roughly 80 percent of weekly active ChatGPT users were likely male. By mid-2025, that ratio had flipped to a slight (52.4 percent) majority for likely female users.
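
The name-matching method is simple in principle. Here is a heavily simplified sketch; the tiny name sets are placeholders, not the Social Security or World Gender Name Registry data the researchers actually used.

```python
# Simplified sketch of name-based gender estimation; the name sets below
# are illustrative stand-ins for the real registries.
LIKELY_MALE = {"james", "mohammed", "carlos"}
LIKELY_FEMALE = {"maria", "fatima", "olivia"}

def classify_name(first_name):
    name = first_name.lower()
    if name in LIKELY_MALE:
        return "likely male"
    if name in LIKELY_FEMALE:
        return "likely female"
    return "unclassified"  # ambiguous names drop out of the estimate

sample = ["Maria", "James", "Olivia", "Robin"]
labels = [classify_name(n) for n in sample]
classified = [l for l in labels if l != "unclassified"]
female_share = classified.count("likely female") / len(classified)
print(f"{female_share:.1%}")
```

Note that names the lists cannot classify are simply excluded, which is one reason such estimates carry real uncertainty.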

People are using it for more than work

Despite all the talk about LLMs potentially revolutionizing the workplace, a significant majority of all ChatGPT use has nothing to do with business productivity, according to OpenAI. Non-work tasks (as identified by an LLM-based classifier) grew from about 53 percent of all ChatGPT messages in June of 2024 to 72.2 percent as of June 2025, according to the study.
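
OpenAI's actual classifier is internal and unpublished, but the general shape of LLM-based message tagging looks roughly like the template below. The prompt wording and labels here are illustrative guesses, not the paper's.

```python
# Illustrative template for LLM-based classification of chat messages into
# work vs. non-work categories; everything here is a stand-in for OpenAI's
# internal classifier, not a description of it.
CLASSIFY_PROMPT = (
    "Decide whether the following user message is WORK-related or "
    "NON-WORK-related. Answer with exactly one label.\n\n"
    "Message: {message}\n"
    "Label:"
)

def build_classifier_prompt(message):
    return CLASSIFY_PROMPT.format(message=message)

# The filled-in prompt is sent to an LLM, and the returned labels are
# tallied across the message sample to produce shares like the 72.2
# percent non-work figure above.
print(build_classifier_prompt("help me plan a birthday party"))
```

Aggregating such labels over millions of sampled messages is what turns individual conversations into the usage-share statistics the paper reports.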

As time goes on, more and more ChatGPT usage is becoming non-work related. Credit: OpenAI

Some of this might have to do with the exclusion of users in the Business, Enterprise, and Education subscription tiers from the data set. Still, the recent rise in non-work uses suggests that many of the newest ChatGPT users are signing up more for personal than for productivity reasons.

ChatGPT users need help with their writing

It’s not that surprising that a lot of people use a large language model to help them with generating written words. But it’s still striking the extent to which writing help is a major use of ChatGPT.

Across 1.1 million conversations dating from May 2024 to June 2025, a full 28 percent dealt with writing assistance in some form or another, OpenAI said. That rises to a whopping 42 percent for the subset of conversations tagged as work-related (by far the most popular work-related task), and a majority, 52 percent, of all work-related conversations from users with “management and business occupations.”

A lot of ChatGPT use is people seeking help with their writing in some form. Credit: OpenAI

OpenAI is quick to point out, though, that many of these users aren’t just relying on ChatGPT to generate emails or messages from whole cloth. More of the conversations studied involve users asking the LLM to “edit or critique” existing text (10.6 percent) than asking it to generate “personal writing or communication” from a prompt (8 percent). Another 4.5 percent of all conversations deal with translating existing text to a new language, versus just 1.4 percent dealing with “writing fiction.”

More people are using ChatGPT as an informational search engine

In June 2024, about 14 percent of all ChatGPT conversations were tagged as relating to “seeking information.” By June 2025, that number had risen to 24.4 percent, slightly edging out writing-based prompts in the sample (which had fallen from roughly 35 percent of the 2024 sample).

A growing number of ChatGPT conversations now deal with “seeking information” as you might do with a more traditional search engine. Credit: OpenAI

While recent GPT models seem to have gotten better about citing relevant sources to back up their information, OpenAI is no closer to solving the widespread confabulation problem that makes LLMs a dodgy tool for retrieving facts. Luckily, fewer people seem interested in using ChatGPT to seek information at work; that use case makes up just 13.5 percent of work-related ChatGPT conversations, well below the 40 percent that are writing-related.

A large number of workers are using ChatGPT to make decisions

Among work-related conversations, “making decisions and solving problems” is a relatively popular use for ChatGPT. Credit: OpenAI

Getting help editing an email is one thing, but asking ChatGPT to help you make a business decision is another altogether. Across work-related conversations, OpenAI says a significant 14.9 percent dealt with “making decisions and solving problems.” That’s second only to “documenting and recording information” for work-related ChatGPT conversations among the dozens of “generalized work activity” categories classified by O*NET.

This was true across all the different occupation types OpenAI looked at, which the company suggests means people are “using ChatGPT as an advisor or research assistant, not just a technology that performs job tasks directly.”

And the rest…

Some other highly touted use cases for ChatGPT that represented a surprisingly small portion of the sampled conversations across OpenAI’s study:

  • Multimedia (e.g., creating or retrieving an image): 6 percent
  • Computer programming: 4.2 percent (though some of this use might be outsourced to the API)
  • Creative ideation: 3.9 percent
  • Mathematical calculation: 3 percent
  • Relationships and personal reflection: 1.9 percent
  • Game and roleplay: 0.4 percent

Kyle Orland has been the Senior Gaming Editor at Ars Technica since 2012, writing primarily about the business, tech, and culture behind video games. He has journalism and computer science degrees from University of Maryland. He once wrote a whole book about Minesweeper.


Modder injects AI dialogue into 2002’s Animal Crossing using memory hack

But discovering the addresses was only half the problem. When you talk to a villager in Animal Crossing, the game normally displays dialogue instantly. Calling an AI model over the Internet takes several seconds. Willison examined the code and found Fonseca’s solution: a watch_dialogue() function that polls memory 10 times per second. When it detects a conversation starting, it immediately writes placeholder text: three dots with hidden pause commands between them, followed by a “Press A to continue” prompt.

“So the user gets a ‘press A to continue’ button and hopefully the LLM has finished by the time they press that button,” Willison noted in a Hacker News comment. While players watch dots appear and reach for the A button, the mod races to get a response from the AI model and translate it into the game’s dialogue format.
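The poll-and-swap trick described above can be sketched in a few lines. Everything concrete here (the addresses, the flag layout, the placeholder bytes, the memory API) is invented for illustration; only the roughly 10 Hz polling and the placeholder-then-replace idea come from the article.

```python
import time

# Hypothetical stand-ins -- the real mod reads Dolphin's emulated memory
# at addresses found through the decompilation project.
TALK_FLAG_ADDR = 0x81234500   # "a conversation just started" flag (made up)
DIALOGUE_ADDR = 0x81234560    # dialogue text buffer (made up)

# Stand-in for the encoded "three dots + pause codes + press A" stall text.
PLACEHOLDER = b"...[press A]"

def watch_dialogue_once(memory, fetch_llm_reply):
    """One tick of the poll loop: on conversation start, write placeholder
    text immediately, then overwrite it once the slow LLM call returns."""
    if memory.get(TALK_FLAG_ADDR) == 1:
        memory[DIALOGUE_ADDR] = PLACEHOLDER        # instant feedback
        memory[DIALOGUE_ADDR] = fetch_llm_reply()  # swap in the real reply
        memory[TALK_FLAG_ADDR] = 0                 # don't re-trigger

def watch_dialogue(memory, fetch_llm_reply, poll_hz=10, ticks=10):
    """Poll memory ~10 times per second, as the mod's loop does."""
    for _ in range(ticks):
        watch_dialogue_once(memory, fetch_llm_reply)
        time.sleep(1 / poll_hz)
```

In the actual mod the placeholder is what buys time: the player is still mashing through the dots while the network call completes in the background.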

Learning the game’s secret language

Simply writing text to memory froze the game. Animal Crossing uses an encoded format with control codes that manage everything from text color to character emotions. A special prefix byte (0x7F) signals commands rather than characters. Without the proper end-of-conversation control code, the game waits forever.

“Think of it like HTML,” Fonseca explains. “Your browser doesn’t just display words; it interprets tags … to make text bold.” The decompilation community had documented these codes, allowing Fonseca to build encoder and decoder tools that translate between a human-readable format and the GameCube’s expected byte sequences.
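As a rough illustration of what such an encoder does, here is a toy version. The 0x7F command prefix and the mandatory end-of-conversation code come from the article; the tag names and command IDs below are made up, not the game's real ones.

```python
CMD_PREFIX = 0x7F  # from the article: this byte signals a command, not a character

# Illustrative tag -> command-ID table; the real codes were documented
# by the decompilation community and differ from these.
COMMANDS = {
    "pause": 0x01,
    "end": 0x02,
}

def encode(text_with_tags):
    """Translate human-readable '<pause>'-style tags into prefix+command
    bytes, pass plain characters through, and always append the end code
    (without it, the game waits forever)."""
    out = bytearray()
    i = 0
    while i < len(text_with_tags):
        if text_with_tags[i] == "<":
            j = text_with_tags.index(">", i)
            out += bytes([CMD_PREFIX, COMMANDS[text_with_tags[i + 1:j]]])
            i = j + 1
        else:
            out.append(ord(text_with_tags[i]))
            i += 1
    out += bytes([CMD_PREFIX, COMMANDS["end"]])  # mandatory terminator
    return bytes(out)
```

The decoder is the same mapping run in reverse, which is what lets the mod round-trip between LLM-friendly text and the GameCube's byte format.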

A screenshot of LLM-powered dialog injected into Animal Crossing for the GameCube. Credit: Joshua Fonseca

Initially, he tried using a single AI model to handle both creative writing and technical formatting. “The results were a mess,” he notes. “The AI was trying to be a creative writer and a technical programmer simultaneously and was bad at both.”

The solution: split the work between two models. A Writer AI creates dialogue using character sheets scraped from the Animal Crossing fan wiki. A Director AI then adds technical elements, including pauses, color changes, character expressions, and sound effects.
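In outline, the two-stage split might look like the sketch below. The prompts and the call_model() helper are stand-ins for illustration, not the mod's actual Gemini/OpenAI calls.

```python
def generate_dialogue(villager_sheet, player_line, call_model):
    """Two-pass pipeline: a 'Writer' model drafts in-character prose,
    then a 'Director' model layers on technical markup."""
    # Stage 1: creative writing only, grounded in the character sheet.
    draft = call_model(
        "You are this Animal Crossing villager:\n"
        f"{villager_sheet}\n"
        f"Reply in character to: {player_line}"
    )
    # Stage 2: formatting only -- pauses, colors, expressions, sounds --
    # without touching the wording.
    return call_model(
        "Add <pause>, <color>, and <emotion> tags to this dialogue "
        "without changing its wording:\n" + draft
    )
```

Separating the jobs means neither prompt asks one model to be a creative writer and a technical programmer at the same time, which is exactly the failure mode Fonseca hit with a single model.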

The code is available on GitHub, though Fonseca warns it contains known bugs and has only been tested on macOS. The mod requires Python 3.8+, API keys for either Google Gemini or OpenAI, and Dolphin emulator. Have fun sticking it to the man—or the raccoon, as the case may be.


OpenAI and Microsoft sign preliminary deal to revise partnership terms

On Thursday, OpenAI and Microsoft announced they have signed a non-binding agreement to revise their partnership, marking the latest development in a relationship that has grown increasingly complex as both companies compete for customers in the AI market and seek new partnerships for growing infrastructure needs.

“Microsoft and OpenAI have signed a non-binding memorandum of understanding (MOU) for the next phase of our partnership,” the companies wrote in a joint statement. “We are actively working to finalize contractual terms in a definitive agreement. Together, we remain focused on delivering the best AI tools for everyone, grounded in our shared commitment to safety.”

The announcement comes as OpenAI seeks to restructure from a nonprofit to a for-profit entity, a transition that requires Microsoft’s approval, as the company is OpenAI’s largest investor, with more than $13 billion committed since 2019.

The partnership has shown increasing strain as OpenAI has grown from a research lab into a company valued at $500 billion. Both companies now compete for customers, and OpenAI seeks more compute capacity than Microsoft can provide. The relationship has also faced complications over contract terms, including provisions that would limit Microsoft’s access to OpenAI technology once the company reaches so-called AGI (artificial general intelligence)—a nebulous milestone both companies now economically define as AI systems capable of generating at least $100 billion in profit.

In May, OpenAI abandoned its original plan to fully convert to a for-profit company after pressure from former employees, regulators, and critics, including Elon Musk. Musk has sued to block the conversion, arguing it betrays OpenAI’s founding mission as a nonprofit dedicated to benefiting humanity.


Microsoft ends OpenAI exclusivity in Office, adds rival Anthropic

Microsoft’s Office 365 suite will soon incorporate AI models from Anthropic alongside existing OpenAI technology, The Information reported, ending years of exclusive reliance on OpenAI for generative AI features across Word, Excel, PowerPoint, and Outlook.

The shift reportedly follows internal testing that revealed Anthropic’s Claude Sonnet 4 model excels at specific Office tasks where OpenAI’s models fall short, particularly in visual design and spreadsheet automation, according to sources familiar with the project cited by The Information, who stressed the move is not a negotiating tactic.

Anthropic did not immediately respond to Ars Technica’s request for comment.

In an unusual arrangement showing the tangled alliances of the AI industry, Microsoft will reportedly purchase access to Anthropic’s models through Amazon Web Services—both a cloud computing rival and one of Anthropic’s major investors. The integration is expected to be announced within weeks, with subscription pricing for Office’s AI tools remaining unchanged, the report says.

Microsoft maintains that its OpenAI relationship remains intact. “As we’ve said, OpenAI will continue to be our partner on frontier models and we remain committed to our long-term partnership,” a Microsoft spokesperson told Reuters following the report. The tech giant has poured over $13 billion into OpenAI to date and is currently negotiating terms for continued access to OpenAI’s models as part of the companies’ ongoing partnership talks.

Stretching back to 2019, Microsoft’s tight partnership with OpenAI until recently gave the tech giant a head start in AI assistants based on language models, allowing for a rapid (though bumpy) deployment of OpenAI-technology-based features in Bing search and the rollout of Copilot assistants throughout its software ecosystem. It’s worth noting, however, that a recent report from the UK government found no clear productivity boost from using Copilot AI in daily work tasks among study participants.


Pay-per-output? AI firms blindsided by beefed up robots.txt instructions.


“Really Simple Licensing” makes it easier for creators to get paid for AI scraping.

Logo for the “Really Simple Licensing” (RSL) standard. Credit: via RSL Collective

Leading Internet companies and publishers—including Reddit, Yahoo, Quora, Medium, The Daily Beast, Fastly, and more—think there may finally be a solution to end AI crawlers hammering websites to scrape content without permission or compensation.

Announced Wednesday morning, the “Really Simple Licensing” (RSL) standard evolves robots.txt instructions by adding an automated licensing layer that’s designed to block bots that don’t fairly compensate creators for content.

Free for any publisher to use starting today, the RSL standard is an open, decentralized protocol that makes clear to AI crawlers and agents the terms for licensing, usage, and compensation of any content used to train AI, a press release noted.

The standard was created by the RSL Collective, which was founded by Doug Leeds, former CEO of Ask.com, and Eckart Walther, a former Yahoo vice president of products and co-creator of the RSS standard, which made it easy to syndicate content across the web.

Based on the “Really Simple Syndication” (RSS) standard, RSL terms can be applied to protect any digital content, including webpages, books, videos, and datasets. The new standard supports “a range of licensing, usage, and royalty models, including free, attribution, subscription, pay-per-crawl (publishers get compensated every time an AI application crawls their content), and pay-per-inference (publishers get compensated every time an AI application uses their content to generate a response),” the press release said.

Leeds told Ars that the idea to use the RSS “playbook” to roll out the RSL standard arose after he invited Walther to speak to University of California, Berkeley students at the end of last year. That’s when the longtime friends with search backgrounds began pondering how AI had changed the search industry, as publishers today are forced to compete with AI outputs referencing their own content as search traffic nosedives.

Walther had watched the RSS standard quickly become adopted by millions of sites, and he realized that RSS had actually always been a licensing standard, Leeds said. Essentially, by adopting the RSS standard, publishers agreed to let search engines license a “bit” of their content in exchange for search traffic, and Walther realized that it could be just as straightforward to add AI licensing terms in the same way. That way, publishers could strive to recapture lost search revenue by agreeing to license all or some part of their content to train AI in return for payment each time AI outputs link to their content.

Leeds told Ars that the RSL standard doesn’t just benefit publishers, though. It also solves a problem for AI companies, which have complained in litigation over AI scraping that there is no effective way to license content across the web.

“We have listened to them, and what we’ve heard them say is… we need a new protocol,” Leeds said. With the RSL standard, AI firms get a “scalable way to get all the content” they want, while setting an incentive that they’ll only have to pay for the best content that their models actually reference.

“If they’re using it, they pay for it, and if they’re not using it, they don’t pay for it,” Leeds said.

No telling yet how AI firms will react to RSL

At this point, it’s hard to say if AI companies will embrace the RSL standard. Ars reached out to Google, Meta, OpenAI, and xAI—some of the big tech companies whose crawlers have drawn scrutiny—to see if it was technically feasible to pay publishers for every output referencing their content. xAI did not respond, and the other companies declined to comment without further detail about the standard, appearing to have not yet considered how a licensing layer beefing up robots.txt could impact their scraping.

Today will likely be the first chance for AI companies to wrap their heads around the idea of paying publishers per output. Leeds confirmed that the RSL Collective did not consult with AI companies when developing the RSL standard.

But AI companies know that they need a constant stream of fresh content to keep their tools relevant and to continually innovate, Leeds suggested. In that way, the RSL standard “supports what supports them,” Leeds said, “and it creates the appropriate incentive system” to create sustainable royalty streams for creators and ensure that human creativity doesn’t wane as AI evolves.

While we’ll have to wait to see how AI firms react to RSL, early adopters of the standard celebrated the launch today. That included Neil Vogel, CEO of People Inc., who said that “RSL moves the industry forward—evolving from simply blocking unauthorized crawlers, to setting our licensing terms, for all AI use cases, at global web scale.”

Simon Wistow, co-founder of Fastly, suggested the solution “is a timely and necessary response to the shifting economics of the web.”

“By making it easy for publishers to define and enforce licensing terms, RSL lays the foundation for a healthy content ecosystem—one where innovation and investment in original work are rewarded, and where collaboration between publishers and AI companies becomes frictionless and mutually beneficial,” Wistow said.

Leeds noted that a key benefit of the RSL standard is that even small creators will now have an opportunity to generate revenue for helping to train AI. Tony Stubblebine, CEO of Medium, did not mince words when explaining the battle that bloggers face as AI crawlers threaten to divert their traffic without compensating them.

“Right now, AI runs on stolen content,” Stubblebine said. “Adopting this RSL Standard is how we force those AI companies to either pay for what they use, stop using it, or shut down.”

How will the RSL standard be enforced?

On the RSL standard site, publishers can find common terms to add templated or customized text to their robots.txt files to adopt the RSL standard today and start protecting their content from unfettered AI scraping. Here’s an example of how machine-readable licensing terms could look, added directly to robots.txt files:

# NOTICE: all crawlers and bots are strictly prohibited from using this
# content for AI training without complying with the terms of the RSL
# Collective AI royalty license. Any use of this content for AI training
# without a license is a violation of our intellectual property rights.
License: https://rslcollective.org/royalty.xml
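On the crawler side, honoring terms like these could start with something as simple as scanning robots.txt for a License directive. This is a guess at a minimal client-side check, not part of the published RSL standard; the linked XML file is what carries the actual licensing terms.

```python
def find_rsl_license(robots_txt):
    """Return the RSL license URL declared in a robots.txt body, or None.
    Comments (lines starting with '#') carry no machine-readable meaning
    here; only the License directive does."""
    for line in robots_txt.splitlines():
        line = line.strip()
        if line.lower().startswith("license:"):
            return line.split(":", 1)[1].strip()
    return None
```

A compliant crawler would then fetch that URL and decide, based on the declared terms, whether it may train on the content and what it owes the publisher.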

Through RSL terms, publishers can automate licensing, with the cloud company Fastly partnering with the collective to provide technical enforcement that Leeds described as tech that acts as a bouncer to keep unapproved bots away from valuable content. It seems likely that Cloudflare, which launched a pay-per-crawl program blocking greedy crawlers in July, could also help enforce the RSL standard.

For publishers, the standard “solves a business problem immediately,” Leeds told Ars, so the collective is hopeful that RSL will be rapidly and widely adopted. As further incentive, publishers can also rely on the RSL standard to “easily encrypt and license non-published, proprietary content to AI companies, including paywalled articles, books, videos, images, and data,” the RSL Collective site said, and that potentially could expand AI firms’ data pool.

On top of technical enforcement, Leeds said that publishers and content creators could legally enforce the terms, noting that the recent $1.5 billion Anthropic settlement suggests “there’s real money at stake” if you don’t train AI “legitimately.”

Should the industry adopt the standard, it could “establish fair market prices and strengthen negotiation leverage for all publishers,” the press release said. And Leeds noted that it’s very common for regulations to follow industry solutions (consider the Digital Millennium Copyright Act). Since the RSL Collective is already in talks with lawmakers, Leeds thinks “there’s good reason to believe” that AI companies will soon “be forced to acknowledge” the standard.

“But even better than that,” Leeds said, “it’s in their interest” to adopt the standard.

With RSL, AI firms can license content at scale “in a way that’s fair [and] preserves the content that they need to make their products continue to innovate.”

Additionally, the RSL standard may solve a problem that risks gutting trust and interest in AI at this early stage.

Leeds noted that currently, AI outputs don’t provide “the best answer” to prompts but instead rely on mashing up answers from different sources to avoid taking too much content from one site. That means that not only do AI companies “spend an enormous amount of money on compute costs to do that,” but AI tools may also be more prone to hallucination in the process of “mashing up” source material “to make something that’s not the best answer because they don’t have the rights to the best answer.”

“The best answer could exist somewhere,” Leeds said. But “they’re spending billions of dollars to create hallucinations, and we’re talking about: Let’s just solve that with a licensing scheme that allows you to use the actual content in a way that solves the user’s query best.”

By transforming the “ecosystem” with a standard that’s “actually sustainable and fair,” Leeds said that AI companies could also ensure that humanity never gets to the point where “humans stop producing” and “turn to AI to reproduce what humans can’t.”

Failing to adopt the RSL standard would be bad for AI innovation, Leeds suggested, perhaps paving the way for AI to replace search with a “sort of self-fulfilling swap of bad content that actually one doesn’t have any current information, doesn’t have any current thinking, because it’s all based on old training information.”

To Leeds, the RSL standard is ultimately “about creating the system that allows the open web to continue. And that happens when we get adoption from everybody,” he said, insisting that “literally the small guys are as important as the big guys” in pushing the entire industry to change and fairly compensate creators.

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.


OpenAI #14: OpenAI Descends Into Paranoia and Bad Faith Lobbying

I am a little late to the party on several key developments at OpenAI:

  1. OpenAI’s Chief Global Affairs Officer Chris Lehane was central to the creation of the new $100 million PAC, where OpenAI will partner with a16z to oppose any and all attempts by states to regulate AI in any way, for any reason.

  2. Effectively as part of that effort, OpenAI sent a deeply bad faith letter to Governor Newsom opposing SB 53.

  3. OpenAI seemingly has embraced descending fully into paranoia around various nonprofit organizations and also Effective Altruism in general, or at least is engaging in rhetoric and legal action to that effect, joining the style of Obvious Nonsense rhetoric about this previously mostly used by a16z.

This is deeply troubling news. It is substantially worse than I was expecting of them. Which is presumably my mistake.

This post covers those events, along with further developments around two recent tragic suicides where ChatGPT was plausibly at fault for what went down, including harsh words from multiple attorneys general who can veto OpenAI’s conversion to a for-profit company.

In OpenAI #11: America Action Plan, I documented that OpenAI:

  1. Submitted an American AI Action Plan proposal that went full jingoist, framing AI as a race against the CCP in which we must prevail, with intentionally toxic vibes throughout.

  2. Requested immunity from all AI regulations.

  3. Attempted to ban DeepSeek using bad faith arguments.

  4. Demanded absolute fair use, for free, for all AI training, or else.

  5. Also included some reasonable technocratic proposals, such as a National Transmission Highway Act, AI Opportunity Zones, along with some I think are worse on the merits such as their ‘national AI readiness strategy.’

Also worth remembering:

  1. This article claims both OpenAI and Microsoft were central in lobbying to take any meaningful requirements for foundation models out of the EU’s AI Act. If I was a board member, I would see this as incompatible with the OpenAI charter. This was then fleshed out further in the OpenAI Files and in this article from Corporate Europe Observatory.

  2. OpenAI lobbied against SB 1047, both reasonably and unreasonably.

  3. OpenAI’s CEO Sam Altman has over time used increasingly jingoistic language throughout his talks, has used steadily less talk about

OpenAI’s Chief Global Affairs Officer, Christopher Lehane, sent a letter to Governor Newsom urging him to gut SB 53 (or see Miles’s in-line responses included here), which is already very much a compromise bill that got compromised further by pushing its ‘large AI companies’ threshold and eliminating the third-party audit requirement. That already eliminated almost all of what little burden could be claimed was being imposed by the bill.

OpenAI’s previous lobbying efforts were in bad faith. This is substantially worse.

Here is the key ask from OpenAI, bold in original:

In order to make California a leader in global, national and state-level AI policy, we encourage the state to consider frontier model developers compliant with its state requirements when they sign onto a parallel regulatory framework like the CoP or enter into a safety-oriented agreement with a relevant US federal government agency.

As in, California should abdicate its responsibilities entirely, and treat giving lip service to the EU’s Code of Practice (not even actually complying with it!) as sufficient to satisfy California on all fronts. It also says that if a company makes any voluntary agreement with the Federal Government on anything safety related, then that too should satisfy all requirements.

This is very close to saying California should have no AI safety regulations at all.

The rhetoric behind this request is what you would expect. You’ve got:

  1. The jingoism.

  2. The talk about ‘innovation.’

  3. The Obvious Nonsense threats about this slowing down progress or causing people to withdraw from California.

  4. The talk about Federal leadership on regulation without any talk of what that would look like while the only Federal proposal that ever got traction was ‘ban the states from acting and still don’t do anything on the Federal level.’

  5. The talk about burden on ‘small developers’ when to be covered by SB 53 at all you now have to spend a full $500 million in training compute, and the only substantive expense (the outside audits) are entirely gone.

  6. The false claim that California lacks the state capacity to handle this, paired with the false assurance that the EU and Federal Government totally have what they need.

  7. The talk of a ‘California approach’ which here means ‘do nothing.’

They even try to equate SB 53 to CEQA, which is a non-sequitur.

They equate OpenAI’s ‘commitment to work with’ the US federal government in ways that likely amount to running some bespoke tests focused on national security concerns as equivalent to being under a comprehensive regulatory regime, and as a substitute for SB 53 including its transparency requirements.

They emphasize that they are a non-profit, while trying to transform themselves into a for-profit and expropriate most of the non-profit’s wealth for private gain.

Plus we have again the important misstatement of OpenAI’s mission.

OpenAI’s actual mission: Ensure that AGI benefits all of humanity.

OpenAI says its mission is: Building AI that benefits all of humanity.

That is very importantly not the same thing. The best way to ensure AGI benefits all of humanity could importantly be to not build it.

Also, as you would expect, the letter does not anywhere explain why even fully complying with the Code of Practice, let alone any future unspecified voluntary safety-oriented agreement, would satisfy the policy goals behind SB 53.

Because very obviously, if you read the Code of Practice and SB 53, they wouldn’t.

Miles Brundage responds to the letter in-line (which I recommend if you want to go into the details at that level) and also offers this Twitter thread:

Miles Brundage (September 1): TIL OpenAI sent a letter to Governor Newsom filled with misleading garbage about SB 53 and AI policy generally.

Unsurprising if you follow this stuff, but worth noting for those who work there and don’t know what’s being done in their name.

I don’t think it’s worth dignifying it with a line-by-line response but I’ll just say that it was clearly not written by people who know what they’re talking about (e.g., what’s in the Code of Practice + what’s in SB 53).

It also boils my blood every time that team comes up with new and creative ways to misstate OpenAI’s mission.

Today it’s “the AI Act is so strong, you should just assume that we’re following everything else” [even though the AI Act has a bunch of issues].

Tomorrow it’s “the AI Act is being enforced too stringently — it needs to be relaxed in ways A, B, and C.”

  1. The context here is OpenAI trying to water down SB 53 (which is not that strict to begin with – e.g., initially third parties would verify companies’ safety claims in *2030*, and now there is just *no* such requirement)

  2. The letter treats the Code of Practice for the AI Act, on the one hand – imperfect but real regulation – and a voluntary agreement to do some tests sometimes with a friendly government agency, on the other, as if they’re the same. They’re not, and neither is SB 53…

  3. It’s very disingenuous to act as if OpenAI is super interested in harmonious US-EU integration + federal leadership over states when they have literally never laid out a set of coherent principles for US federal AI legislation.

  4. Vague implied threats to slow down shipping products or pull out of CA/the US etc. if SB 53 went through, as if it is super burdensome… that’s just nonsense. No one who knows anything about this stuff thinks any of that is even remotely plausible.

  5. The “California solution” is basically “pretend different things are the same,” which is funny because it’d take two braincells for OpenAI to articulate an actually-distinctively-Californian or actually-distinctively-American approach to AI policy. But there’s no such effort.

  6. For example, talk about how SB 53 is stronger on actual transparency (and how the Code of Practice has a “transparency” section that basically says “tell stuff to regulators/customers, and it’d sure be real nice if you sometimes published it”). Woulda been trivial. The fact that none of that comes up suggests the real strategy is “make number of bills go down.”

  7. OpenAI’s mission is to ensure that AGI benefits all of humanity. Seems like something you’d want to get right when you have court cases about mission creep.

We also have this essay response from Nomads Vagabonds. He is if anything even less kind than Miles. He reminds us that OpenAI through Greg Brockman has teamed up with a16z to dedicate $100 million to ensuring no regulation of AI, anywhere in any state, for any reason, in a PAC that was the brainchild of OpenAI vice president of global affairs Chris Lehane.

He also goes into detail about the various bad faith provisions.

These four things can be true at once.

  1. OpenAI has several competitors that strongly dislike OpenAI and Sam Altman, for a combination of reasons with varying amounts of merit.

  2. Elon Musk’s lawsuits against OpenAI are often without legal merit, although the objections to OpenAI’s conversion to for-profit were ruled by the judge to absolutely have merit, with the question mainly being if Musk had standing.

  3. There are many other complaints about OpenAI that have a lot of merit.

  4. AI might kill everyone and you might want to work to prevent this without having it out for OpenAI in particular or being funded by OpenAI’s competitors.

OpenAI seems, by Shugerman’s reporting, to have responded to this situation by becoming paranoid that there is some sort of vast conspiracy Out To Get Them, funded and motivated by commercial rivalry, as opposed to people who care about AI not killing everyone and also this Musk guy who is Big Mad.

Of course a lot of us, as the primary example, are going to take issue with OpenAI’s attempt to convert from a non-profit to a for-profit while engaging in one of the biggest thefts in human history by expropriating most of the nonprofit’s financial assets, worth hundreds of billions, for private gain. That opposition has very little to do with Elon Musk.

Emily Dreyfuss: Inside OpenAI, there’s a growing paranoia that some of its loudest critics are being funded by Elon Musk and other billionaire competitors. Now, they are going after these nonprofit groups, but their evidence of a vast conspiracy is often extremely thin.

Emily Shugerman (SF Standard): Nathan Calvin, who joined Encode in 2024, two years after graduating from Stanford Law School, was being subpoenaed by OpenAI. “I was just thinking, ‘Wow, they’re really doing this,’” he said. “‘This is really happening.’”

The subpoena was filed as part of the ongoing lawsuits between Elon Musk and OpenAI CEO Sam Altman, in which Encode had filed an amicus brief supporting some of Musk’s arguments. It asked for any documents relating to Musk’s involvement in the founding of Encode, as well as any communications between Musk, Encode, and Meta CEO Mark Zuckerberg, whom Musk reportedly tried to involve in his OpenAI takeover bid in February.

Calvin said the answer to these questions was easy: The requested documents didn’t exist.

In media interviews, representatives for an OpenAI-affiliated super PAC have described a “vast force” working to slow down AI progress and steal American jobs.

This has long been the Obvious Nonsense a16z line, but now OpenAI is joining them via being part of the ‘Leading the Future’ super PAC. If this was merely Brockman contributing it would be one thing, but no, it’s far beyond that:

According to the Wall Street Journal, the PAC is in part the brainchild of Chris Lehane, OpenAI’s vice president of global affairs.

Meanwhile, OpenAI is treating everyone who opposes their transition to a for-profit as if they have to be part of this kind of vast conspiracy.

Around the time Musk mounted his legal fight [against OpenAI’s conversion to a for-profit], advocacy groups began to voice their opposition to the transition plan, too. Earlier this year, groups like the San Francisco Foundation, Latino Prosperity, and Encode organized open letters to the California attorney general, demanding further questioning about OpenAI’s move to a for-profit. One group, the Coalition for AI Nonprofit Integrity (CANI), helped write a California bill introduced in March that would have blocked the transition. (The assemblymember who introduced the bill suddenly gutted it less than a month later, saying the issue required further study.)

In the ensuing months, OpenAI leadership seems to have decided that these groups and Musk were working in concert.

Catherine Bracy: Based on my interaction with the company, it seems they’re very paranoid about Elon Musk and his role in all of this, and it’s become clear to me that that’s driving their strategy.

No, these groups were not (as far as I or anyone else can tell) funded by or working in concert with Musk.

The suspicions that Meta was involved, including in Encode, which is attempting to push forward SB 53, are not simply paranoid; they flat out don’t make any sense. Nor does the claim about Musk, given how he handles opposition:

Both LASST and Encode have spoken out against Musk and Meta — the entities OpenAI is accusing them of being aligned with — and advocated against their aims: Encode recently filed a complaint with the FTC about Musk’s AI company producing nonconsensual nude images; LASST has criticized the company for abandoning its structure as a public benefit corporation. Both say they have not taken money from Musk nor talked to him. “If anything, I’m more concerned about xAI from a safety perspective than OpenAI,” Whitmer said, referring to Musk’s AI product.

I’m more concerned about OpenAI because I think they matter far more than xAI, but pound for pound xAI is by far the bigger menace and acts far less responsibly. Most safety organizations in this supposed conspiracy will tell you that if you ask them, and act accordingly when the questions come up.

Miles Brundage: First it was the EAs out to get them, now it’s Elon.

The reality is just that most people think we should be careful about AI

(Elon himself is ofc actually out to get them, but most people who sometimes disagree with OpenAI have nothing to do with Elon, including Encode, the org discussed at the beginning of the article. And ironically, many effective altruists are more worried about Elon than OAI now)

OpenAI’s paranoia started with CANI, and then extended to Encode, and then to LASST.

Nathan Calvin: ​They seem to have a hard time believing that we are an organization of people who just, like, actually care about this.

Emily Shugerman: Lehane, who joined the company last year, is perhaps best known for coining the term “vast right-wing conspiracy” to dismiss the allegations against Bill Clinton during the Monica Lewinsky scandal — a line that seems to have seeped into Leading the Future’s messaging, too.

In a statement to the Journal, representatives from the PAC decried a “vast force out there that’s looking to slow down AI deployment, prevent the American worker from benefiting from the U.S. leading in global innovation and job creation, and erect a patchwork of regulation.”

The hits keep coming as the a16z-level paranoia about EA being a ‘vast conspiracy’ kicks into high gear, such as the idea that Dustin Moskovitz doesn’t care about AI safety and is only going after them because of his stake in Anthropic. Can you possibly be serious right now? Why do you think he invested in Anthropic?

Of particular interest to OpenAI is the fact that both Omidyar and Moskovitz are investors in Anthropic — an OpenAI competitor that claims to produce safer, more steerable AI technology.

“Groups backed by competitors often present themselves as disinterested public voices or ‘advocates’, when in reality their funders hold direct equity stakes in competitors in their sector – in this case worth billions of dollars,” she said. “Regardless of all the rhetoric, their patrons will undoubtedly benefit if competitors are weakened.”

Never mind that Anthropic has not supported Moskovitz on AI regulation, and that the regulatory interventions funded by Moskovitz would consistently (aside from any role in trying to stop OpenAI’s for-profit conversion) be bad for Anthropic’s commercial outlook.

Open Philanthropy (funded by Dustin Moskovitz): Reasonable people can disagree about the best guardrails to set for emerging technologies, but right now we’re seeing an unusually brazen effort by some of the biggest companies in the world to buy their way out of any regulation they don’t like. They’re putting their potential profits ahead of U.S. national security and the interests of everyday people.

Companies do this sort of thing all the time. This case is still very brazen, and very obvious, and OpenAI has now jumped into a16z levels of paranoia and bad faith between the lawfare, the funding of the new PAC and their letter on SB 53.

Suing and attacking nonprofits engaging in advocacy is a new low. Compare that to the situation with Daniel Kokotajlo, where OpenAI, to its credit, once confronted with its bad behavior, backed down rather than going on a legal offensive.

Daniel Kokotajlo: Having a big corporation come after you legally, even if they are just harassing you and not trying to actually get you imprisoned, must be pretty stressful and scary. (I was terrified last year during the nondisparagement stuff, and that was just the fear of what *might* happen, whereas in fact OpenAI backed down instead of attacking.) I’m glad these groups aren’t cowed.

As in, do OpenAI and Sam Altman believe these false paranoid conspiracy theories?

I have long wondered the same thing about Marc Andreessen and a16z, and others who say there is a ‘vast conspiracy’ out there by which they mean Effective Altruism (EA), or when they claim it’s all some plot to make money.

I mean, these people are way too smart and knowledgeable to actually believe that, asks Padme, right? And certainly Sam Altman and OpenAI have to know better.

Wouldn’t the more plausible theory be that these people are simply lying? That Lehane doesn’t believe in a ‘vast EA conspiracy’ any more than he believed in a ‘vast right-wing conspiracy’ when he coined that phrase about the (we now know very true) allegations around Monica Lewinsky? It’s an op. It’s rhetoric. It’s people saying what they think will work to get them what they want. It’s not hard to make that story make sense.

Then again, maybe they do really believe it, or at least aren’t sure? People often believe genuinely crazy things that do not in any way map to reality, especially once politics starts to get involved. And I can see how going up against Elon Musk and being engaged in one of the biggest heists in human history in broad daylight, while trying to build superintelligence that poses existential risks to humanity that a lot of people are very worried about and that also will have more upside than anything ever, could combine to make anyone paranoid. Highly understandable and sympathetic.

Or, of course, they could have been talking to their own AIs about these questions. I hear there are some major sycophancy issues there. One must be careful.

I sincerely hope that those involved here are lying. It beats the alternatives.

It seems that OpenAI’s failures on sycophancy and dealing with suicidality might endanger its relationship with those who must approve its attempted restructuring into a for-profit, also known as one of the largest attempted thefts in human history?

Maybe they will take OpenAI’s charitable mission seriously after all, at least in this way, despite presumably not understanding the full stakes involved and having the wrong idea about what kind of safety matters?

Garrison Lovely: Scorching new letter from CA and DE AGs to OpenAI, who each have the power to block the company’s restructuring to loosen nonprofit controls.

They are NOT happy about the recent teen suicide and murder-suicide that followed prolonged and concerning interactions with ChatGPT.

Rob Bonta (California Attorney General) and Kathleen Jennings (Delaware Attorney General) in a letter: In our meeting, we conveyed in the strongest terms that safety is a non-negotiable priority, especially when it comes to children. Our teams made additional requests about OpenAI’s current safety precautions and governance. We expect that your responses to these will be prioritized and that immediate remedial measures are being taken where appropriate.

We recognize that OpenAI has sought to position itself as a leader in the AI industry on safety. Indeed, OpenAI has publicly committed itself to build safe AGI to benefit all humanity, including children. And before we get to benefiting, we need to ensure that adequate safety measures are in place to not harm.

It is our shared view that OpenAI and the industry at large are not where they need to be in ensuring safety in AI products’ development and deployment. As Attorneys General, public safety is one of our core missions. As we continue our dialogue related to OpenAI’s recapitalization plan, we must work to accelerate and amplify safety as a governing force in the future of this powerful technology.

The recent deaths are unacceptable. They have rightly shaken the American public’s confidence in OpenAI and this industry. OpenAI – and the AI industry – must proactively and transparently ensure AI’s safe deployment. Doing so is mandated by OpenAI’s charitable mission, and will be required and enforced by our respective offices.

We look forward to hearing from you and working with your team on these important issues.

Some other things said by the AGs:

Bonta: We were looking for a rapid response. They’ll know what that means, if that’s days or weeks. I don’t see how it can be months or years.

All antitrust laws apply, all consumer protection laws apply, all criminal laws apply. We are not without many tools to regulate and prevent AI from hurting the public and the children.

With a lawsuit filed that OpenAI might well lose, and the two attorneys general who can veto its restructuring breathing down its neck, OpenAI is promising various fixes. In particular, it has decided it is time for parental controls as soon as possible, which should be within a month.

Their first announcement on August 26 included these plans:

OpenAI: While our initial mitigations prioritized acute self-harm, some people experience other forms of mental distress. For example, someone might enthusiastically tell the model they believe they can drive 24/7 because they realized they’re invincible after not sleeping for two nights. Today, ChatGPT may not recognize this as dangerous or infer play and—by curiously exploring—could subtly reinforce it.

We are working on an update to GPT‑5 that will cause ChatGPT to de-escalate by grounding the person in reality. In this example, it would explain that sleep deprivation is dangerous and recommend rest before any action.

Better late than never on that one, I suppose. That is indeed why I am relatively less worried about problems like this: we can adjust after things start to go wrong.

OpenAI: In addition to emergency services, we’re exploring ways to make it easier for people to reach out to those closest to them. This could include one-click messages or calls to saved emergency contacts, friends, or family members with suggested language to make starting the conversation less daunting.

We’re also considering features that would allow people to opt-in for ChatGPT to reach out to a designated contact on their behalf in severe cases.

We will also soon introduce parental controls that give parents options to gain more insight into, and shape, how their teens use ChatGPT. We’re also exploring making it possible for teens (with parental oversight) to designate a trusted emergency contact. That way, in moments of acute distress, ChatGPT can do more than point to resources: it can help connect teens directly to someone who can step in.

On September 2 they followed up with additional information about how they are ‘partnering with experts’ and providing more details.

OpenAI: Earlier this year, we began building more ways for families to use ChatGPT together and decide what works best in their home. Within the next month, parents will be able to:

  • Link their account with their teen’s account (minimum age of 13) through a simple email invitation.

  • Control how ChatGPT responds to their teen with age-appropriate model behavior rules, which are on by default.

  • Manage which features to disable, including memory and chat history.

  • Receive notifications when the system detects their teen is in a moment of acute distress. Expert input will guide this feature to support trust between parents and teens.

These controls add to features we have rolled out for all users including in-app reminders during long sessions to encourage breaks.

Parental controls seem like an excellent idea.

I would consider most of this to effectively be ‘on by default’ already, for everyone, in the sense that AI models have controls against things like NSFW content that largely treat us all like teens. You could certainly tighten them up more for an actual teen, and it seems fine to give parents the option, although mostly I think you’re better off not doing that.

The big new thing is the notification feature. That is a double edged sword. As I’ve discussed previously, an AI or other source of help that can ‘rat you out’ to authorities, even ‘for your own good’ or ‘in moments of acute distress’ is inherently very different from a place where your secrets are safe. There is a reason we have confidentiality for psychologists and lawyers and priests, and balancing when to break that is complicated.

Given an AI’s current level of reliability and its special role as a place free from human judgment or social consequence, I am actually in favor of it outright never alerting others without an explicit user request to do so.

Whereas things are moving in the other direction, with predictable results.

As in, OpenAI is already scanning your chats as per their posts I discussed above.

Greg Isenberg: ChatGPT is potentially leaking your private convos to the police.

People use ChatGPT because it feels like talking to a smart friend who won’t judge you. Now, people are realizing it’s more like talking to a smart friend who might snitch.

This is the same arc we saw in social media: early excitement, then paranoia, then demand for smaller, private spaces.

OpenAI (including as quoted by Futurism): When we detect users who are planning to harm others, we route their conversations to specialized pipelines where they are reviewed by a small team trained on our usage policies and who are authorized to take action, including banning accounts.

If human reviewers determine that a case involves an imminent threat of serious physical harm to others, we may refer it to law enforcement.

We are currently not referring self-harm cases to law enforcement to respect people’s privacy given the uniquely private nature of ChatGPT interactions.

Futurism: When describing its rule against “harm [to] yourself or others,” the company listed off some pretty standard examples of prohibited activity, including using ChatGPT “to promote suicide or self-harm, develop or use weapons, injure others or destroy property, or engage in unauthorized activities that violate the security of any service or system.”

They are not referring self-harm cases, to protect privacy, but harm to others is deemed different. That still destroys the privacy of the interaction. And ‘harm to others’ could rapidly morph into any number of places, both via false positives and via changes in ideas about what constitutes ‘harm.’

They’re not even talking about felonies or imminent physical harm. They’re talking about ‘engage in unauthorized activities that violate the security of any service or system,’ or ‘destroy property,’ so this could potentially extend quite far, and in places that seem far less justified than intervening in response to a potentially suicidal user. These are circumstances in which typical privileged communication would hold.

I very much do not like where that is going, and if I heard reports this was happening on the regular it would fundamentally alter my relationship to ChatGPT, even though I ‘have nothing to hide.’

What’s most weird about this is that OpenAI was recently advocating for ‘AI privilege.’

Reid Southern: OpenAI went from warning users that there’s no confidentiality when using ChatGPT, and calling for “AI privilege”, to actively scanning your messages to send to law enforcement, seemingly to protect themselves in the aftermath of the ChatGPT induced murder-suicide

This is partially a case of ‘if I’m not legally forbidden to do [X] then I will get blamed for not doing [X] so please ban me from doing it’ so it’s not as hypocritical as it sounds. It is still rather hypocritical and confusing to escalate like this. Why respond to suicides by warning you will be scanning for harm to others and intent to impact the security of systems, but definitely not acting if someone is suicidal?

If you think AI users deserve privilege, and I think this is a highly reasonable position, then act like it. Set a good example, set a very high bar for ratting, and confine alerting human reviewers, let alone the authorities, to cases where you catch someone on the level of trying to make a nuke or a bioweapon, or at minimum doing things that would force a psychologist to break privilege. It’s even good for business.

Otherwise people are indeed going to get furious, and there will be increasing demand to run models locally or in other ways that better preserve privacy. There’s not zero of that already, but it would escalate quickly.

Steven Byrnes notes the weirdness of seeing Ben’s essay describe OpenAI as an ‘AI safety company’ rather than a company most AI safety folks hate with a passion.

Steven Byrnes: I can’t even describe how weird it is to hear OpenAI, as a whole, today in 2025, being described as an AI safety company. Actual AI safety people HATE OPENAI WITH A PASSION, almost universally. The EA people generally hate it. The Rationalists generally hate it even more.

AI safety people have protested at the OpenAI offices with picket signs & megaphones! When the board fired Sam Altman, everyone immediately blamed EA & AI safety people! OpenAI has churned through AI safety staff b/c they keep quitting in protest! …What universe is this?

Yes, many AI safety people are angry about OpenAI being cavalier & dishonest about harm they might cause in the future, whereas you are angry about OpenAI being cavalier & dishonest about harm they are causing right now. That doesn’t make us enemies. “Why not both?”

I think that’s going too far. It’s not good to hate with a passion.

Even more than that, you could do so, so much worse than OpenAI on all of these questions (e.g. Meta, xAI, or any major Chinese lab; basically everyone except Anthropic and Google is worse).

Certainly we think OpenAI is on net not helping and deeply inadequate to the task, their political lobbying and rhetoric is harmful, and their efforts have generally made the world a lot less safe. They still are doing a lot of good work, making a lot of good decisions, and I believe that Altman is normative, that he is far more aware of what is coming and the problems we will face than most or than he currently lets on.

I believe he is doing a much better job on these fronts than most (but not all) plausible CEOs of OpenAI would do in his place. For example, if OpenAI’s CEO of Applications Fidji Simo were in charge, or Chairman of the Board Bret Taylor were in charge, or Greg Brockman were in charge, or the CEO of any of the magnificent seven were in charge, I would expect OpenAI to act far less responsibly.

Thus I consider myself relatively well-inclined towards OpenAI among those worried about AI or advocating for AI safety.

I still have an entire series of posts about how terrible things have been at OpenAI and a regular section about them called ‘The Mask Comes Off.’

And I find myself forced to update my view importantly downward, towards being more concerned, in the wake of the recent events described in this post. OpenAI is steadily becoming more of a bad faith actor in the public sphere.


OpenAI #14: OpenAI Descends Into Paranoia and Bad Faith Lobbying Read More »


ChatGPT’s new branching feature is a good reminder that AI chatbots aren’t people

On Thursday, OpenAI announced that ChatGPT users can now branch conversations into multiple parallel threads, serving as a useful reminder that AI chatbots aren’t people with fixed viewpoints but rather malleable tools you can rewind and redirect. The company released the feature for all logged-in web users following years of user requests for the capability.

The feature works by letting users hover over any message in a ChatGPT conversation, click “More actions,” and select “Branch in new chat.” This creates a new conversation thread that includes all the conversation history up to that specific point, while preserving the original conversation intact.

Think of it almost like creating a new copy of a “document” to edit while keeping the original version safe—except that “document” is an ongoing AI conversation with all its accumulated context. For example, a marketing team brainstorming ad copy can now create separate branches to test a formal tone, a humorous approach, or an entirely different strategy—all stemming from the same initial setup.

[Image: A screenshot of conversation branching in ChatGPT. Credit: OpenAI]

The feature addresses a longstanding limitation: ChatGPT users who wanted to try different approaches previously had to either overwrite their existing conversation past a certain point by editing an earlier prompt or start completely fresh. Branching allows exploring what-if scenarios easily—and unlike in a human conversation, you can try multiple different approaches.

A 2024 study conducted by researchers from Tsinghua University and Beijing Institute of Technology suggested that linear dialogue interfaces for LLMs poorly serve scenarios involving “multiple layers, and many subtasks—such as brainstorming, structured knowledge learning, and large project analysis.” The study found that linear interaction forces users to “repeatedly compare, modify, and copy previous content,” increasing cognitive load and reducing efficiency.

Some software developers have already responded positively to the update, with some comparing the feature to Git, the version control system that lets programmers create separate branches of code to test changes without affecting the main codebase. The comparison makes sense: Both allow you to experiment with different approaches while preserving your original work.
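The mechanic the article describes is simple to sketch. In a minimal model (all names here are hypothetical, not OpenAI’s actual implementation), a conversation is just a list of messages, and branching deep-copies the history up to a chosen message into a new, independent conversation:

```python
from copy import deepcopy
from dataclasses import dataclass, field

@dataclass
class Conversation:
    messages: list = field(default_factory=list)

    def branch_at(self, index: int) -> "Conversation":
        """Return a new conversation containing the history up to and
        including the message at `index`; the original is untouched."""
        return Conversation(deepcopy(self.messages[: index + 1]))

# Usage: branch after the second message, then let the branch diverge.
main = Conversation([
    {"role": "user", "content": "Brainstorm ad copy for our launch"},
    {"role": "assistant", "content": "Here are a few draft slogans..."},
])
formal = main.branch_at(1)
formal.messages.append({"role": "user", "content": "Now make it formal"})

assert len(main.messages) == 2    # original preserved intact
assert len(formal.messages) == 3  # branch diverges independently
```

As with Git, the branch shares its entire history with the original at the moment of branching, then accumulates its own changes from that point on; the deep copy ensures edits on one branch never leak into the other.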



OpenAI links up with Broadcom to produce its own AI chips

OpenAI is set to produce its own artificial intelligence chip for the first time next year, as the ChatGPT maker attempts to address insatiable demand for computing power and reduce its reliance on chip giant Nvidia.

The chip, co-designed with US semiconductor giant Broadcom, would ship next year, according to multiple people familiar with the partnership.

Broadcom’s chief executive Hock Tan on Thursday referred to a mystery new customer committing to $10 billion in orders.

OpenAI’s move follows the strategy of tech giants such as Google, Amazon and Meta, which have designed their own specialised chips to run AI workloads. The industry has seen huge demand for the computing power to train and run AI models.

OpenAI planned to put the chip to use internally, according to one person close to the project, rather than make it available to external customers.

Last year it began an initial collaboration with Broadcom, according to reports at the time, but the timeline for mass production of a successful chip design had previously been unclear.

On a call with analysts, Tan announced that Broadcom had secured a fourth major customer for its custom AI chip business, as it reported earnings that topped Wall Street estimates.

Broadcom does not disclose the names of these customers, but people familiar with the matter confirmed OpenAI was the new client. Broadcom and OpenAI declined to comment.
