Author name: Paul Patrick

OpenAI #14: OpenAI Descends Into Paranoia and Bad Faith Lobbying

I am a little late to the party on several key developments at OpenAI:

  1. OpenAI’s Chief Global Affairs Officer Chris Lehane was central to the creation of the new $100 million PAC, in which OpenAI will partner with a16z to oppose any and all attempts by states to regulate AI in any way, for any reason.

  2. Effectively as part of that effort, OpenAI sent a deeply bad faith letter to Governor Newsom opposing SB 53.

  3. OpenAI has seemingly descended fully into paranoia around various nonprofit organizations and Effective Altruism in general, or at least is engaging in rhetoric and legal action to that effect, adopting the style of Obvious Nonsense rhetoric about this previously used mostly by a16z.

This is deeply troubling news. It is substantially worse than I was expecting of them. Which is presumably my mistake.

This post covers those events, along with further developments around two recent tragic suicides where ChatGPT was plausibly at fault for what went down, including harsh words from multiple attorneys general who can veto OpenAI’s conversion to a for-profit company.

In OpenAI #11: America Action Plan, I documented that OpenAI:

  1. Submitted an American AI Action Plan proposal that went full jingoist, framing AI as a race against the CCP in which we must prevail, with intentionally toxic vibes throughout.

  2. Requested immunity from all AI regulations.

  3. Attempted to ban DeepSeek using bad faith arguments.

  4. Demanded absolute fair use, for free, for all AI training, or else.

  5. Also included some reasonable technocratic proposals, such as a National Transmission Highway Act and AI Opportunity Zones, along with some that I think are worse on the merits, such as their ‘national AI readiness strategy.’

Also worth remembering:

  1. This article claims both OpenAI and Microsoft were central in lobbying to take any meaningful requirements for foundation models out of the EU’s AI Act. If I were a board member, I would see this as incompatible with the OpenAI charter. This was then fleshed out further in the OpenAI Files and in this article from Corporate Europe Observatory.

  2. OpenAI lobbied against SB 1047, both reasonably and unreasonably.

  3. OpenAI’s CEO Sam Altman has over time used increasingly jingoistic language throughout his talks, has used steadily less talk about

OpenAI’s Chief Global Affairs Officer, Christopher Lehane, sent a letter to Governor Newsom urging him to gut SB 53 (or see Miles’s in-line responses included here). SB 53 is already very much a compromise bill, and it was compromised further by pushing up its ‘large AI companies’ threshold and eliminating the third-party audit requirement. That already eliminated almost all of what little burden could even be claimed to be imposed by the bill.

OpenAI’s previous lobbying efforts were in bad faith. This is substantially worse.

Here is the key ask from OpenAI, bold in original:

In order to make California a leader in global, national and state-level AI policy, we encourage the state to consider frontier model developers compliant with its state requirements when they sign onto a parallel regulatory framework like the CoP or enter into a safety-oriented agreement with a relevant US federal government agency.

As in, California should abdicate its responsibilities entirely, and treat giving lip service to the EU’s Code of Practice (not even actually complying with it!) as sufficient to satisfy California on all fronts. It also says that if a company makes any voluntary agreement with the Federal Government on anything safety related, then that too should satisfy all requirements.

This is very close to saying California should have no AI safety regulations at all.

The rhetoric behind this request is what you would expect. You’ve got:

  1. The jingoism.

  2. The talk about ‘innovation.’

  3. The Obvious Nonsense threats about this slowing down progress or causing people to withdraw from California.

  4. The talk about Federal leadership on regulation, without any talk of what that would look like, while the only Federal proposal that ever got traction was ‘ban the states from acting and still don’t do anything on the Federal level.’

  5. The talk about burden on ‘small developers,’ when to be covered by SB 53 at all you now have to spend a full $500 million in training compute, and the only substantive expense (the outside audits) is entirely gone.

  6. The false claim that California lacks the state capacity to handle this, and the false assurance that the EU and Federal Government totally have what they need.

  7. The talk of a ‘California approach’ which here means ‘do nothing.’

They even try to equate SB 53 to CEQA, which is a non-sequitur.

They treat OpenAI’s ‘commitment to work with’ the US federal government, in ways that likely amount to running some bespoke tests focused on national security concerns, as equivalent to being under a comprehensive regulatory regime, and as a substitute for SB 53, including its transparency requirements.

They emphasize that they are a non-profit, while trying to transform themselves into a for-profit and expropriate most of the non-profit’s wealth for private gain.

Plus we have again the important misstatement of OpenAI’s mission.

OpenAI’s actual mission: Ensure that AGI benefits all of humanity.

OpenAI says its mission is: Building AI that benefits all of humanity.

That is very importantly not the same thing. The best way to ensure AGI benefits all of humanity could importantly be to not build it.

Also as you would expect, the letter does not, anywhere, explain why even fully complying with the Code of Practice, let alone any future unspecified voluntary safety-oriented agreement, would satisfy the policy goals behind SB 53.

Because very obviously, if you read the Code of Practice and SB 53, they wouldn’t.

Miles Brundage responds to the letter in-line (which I recommend if you want to go into the details at that level) and also offers this Twitter thread:

Miles Brundage (September 1): TIL OpenAI sent a letter to Governor Newsom filled with misleading garbage about SB 53 and AI policy generally.

Unsurprising if you follow this stuff, but worth noting for those who work there and don’t know what’s being done in their name.

I don’t think it’s worth dignifying it with a line-by-line response but I’ll just say that it was clearly not written by people who know what they’re talking about (e.g., what’s in the Code of Practice + what’s in SB 53).

It also boils my blood every time that team comes up with new and creative ways to misstate OpenAI’s mission.

Today it’s “the AI Act is so strong, you should just assume that we’re following everything else” [even though the AI Act has a bunch of issues].

Tomorrow it’s “the AI Act is being enforced too stringently — it needs to be relaxed in ways A, B, and C.”

  1. The context here is OpenAI trying to water down SB 53 (which is not that strict to begin with – e.g. initially third parties would verify companies’ safety claims in *2030*, and now there is just *no* such requirement)

  2. The letter treats the Code of Practice for the AI Act, on the one hand – imperfect but real regulation – and a voluntary agreement to do some tests sometimes with a friendly government agency, on the other – as if they’re the same. They’re not, and neither is SB 53…

  3. It’s very disingenuous to act as if OpenAI is super interested in harmonious US-EU integration + federal leadership over states when they have literally never laid out a set of coherent principles for US federal AI legislation.

  4. Vague implied threats to slow down shipping products or pull out of CA/the US etc. if SB 53 went through, as if it is super burdensome… that’s just nonsense. No one who knows anything about this stuff thinks any of that is even remotely plausible.

  5. The “California solution” is basically “pretend different things are the same,” which is funny because it’d take two braincells for OpenAI to articulate an actually-distinctively-Californian or actually-distinctively-American approach to AI policy. But there’s no such effort.

  6. For example, talk about how SB 53 is stronger on actual transparency (and how the Code of Practice has a “transparency” section that basically says “tell stuff to regulators/customers, and it’d sure be real nice if you sometimes published it”). Woulda been trivial. The fact that none of that comes up suggests the real strategy is “make number of bills go down.”

  7. OpenAI’s mission is to ensure that AGI benefits all of humanity. Seems like something you’d want to get right when you have court cases about mission creep.

We also have this essay response from Nomads Vagabonds. He is if anything even less kind than Miles. He reminds us that OpenAI through Greg Brockman has teamed up with a16z to dedicate $100 million to ensuring no regulation of AI, anywhere in any state, for any reason, in a PAC that was the brainchild of OpenAI vice president of global affairs Chris Lehane.

He also goes into detail about the various bad faith provisions.

These four things can be true at once.

  1. OpenAI has several competitors that strongly dislike OpenAI and Sam Altman, for a combination of reasons with varying amounts of merit.

  2. Elon Musk’s lawsuits against OpenAI are often without legal merit, although the objections to OpenAI’s conversion to for-profit were ruled by the judge to absolutely have merit, with the question mainly being whether Musk had standing.

  3. There are many other complaints about OpenAI that have a lot of merit.

  4. AI might kill everyone and you might want to work to prevent this without having it out for OpenAI in particular or being funded by OpenAI’s competitors.

OpenAI seems, by Shugerman’s reporting, to have responded to this situation by becoming paranoid that there is some sort of vast conspiracy Out To Get Them, funded and motivated by commercial rivalry, as opposed to people who care about AI not killing everyone and also this Musk guy who is Big Mad.

Of course a lot of us, as the primary example, are going to take issue with OpenAI’s attempt to convert from a non-profit to a for-profit while engaging in one of the biggest thefts in human history by expropriating most of the nonprofit’s financial assets, worth hundreds of billions, for private gain. That opposition has very little to do with Elon Musk.

Emily Dreyfuss: Inside OpenAI, there’s a growing paranoia that some of its loudest critics are being funded by Elon Musk and other billionaire competitors. Now, they are going after these nonprofit groups, but their evidence of a vast conspiracy is often extremely thin.

Emily Shugerman (SF Standard): Nathan Calvin, who joined Encode in 2024, two years after graduating from Stanford Law School, was being subpoenaed by OpenAI. “I was just thinking, ‘Wow, they’re really doing this,’” he said. “‘This is really happening.’”

The subpoena was filed as part of the ongoing lawsuits between Elon Musk and OpenAI CEO Sam Altman, in which Encode had filed an amicus brief supporting some of Musk’s arguments. It asked for any documents relating to Musk’s involvement in the founding of Encode, as well as any communications between Musk, Encode, and Meta CEO Mark Zuckerberg, whom Musk reportedly tried to involve in his OpenAI takeover bid in February.

Calvin said the answer to these questions was easy: The requested documents didn’t exist.

In media interviews, representatives for an OpenAI-affiliated super PAC have described a “vast force” working to slow down AI progress and steal American jobs.

This has long been the Obvious Nonsense a16z line, but now OpenAI is joining them via being part of the ‘Leading the Future’ super PAC. If this was merely Brockman contributing it would be one thing, but no, it’s far beyond that:

According to the Wall Street Journal, the PAC is in part the brainchild of Chris Lehane, OpenAI’s vice president of global affairs.

Meanwhile, OpenAI is treating everyone who opposes their transition to a for-profit as if they have to be part of this kind of vast conspiracy.

Around the time Musk mounted his legal fight [against OpenAI’s conversion to a for-profit], advocacy groups began to voice their opposition to the transition plan, too. Earlier this year, groups like the San Francisco Foundation, Latino Prosperity, and Encode organized open letters to the California attorney general, demanding further questioning about OpenAI’s move to a for-profit. One group, the Coalition for AI Nonprofit Integrity (CANI), helped write a California bill introduced in March that would have blocked the transition. (The assemblymember who introduced the bill suddenly gutted it less than a month later, saying the issue required further study.)

In the ensuing months, OpenAI leadership seems to have decided that these groups and Musk were working in concert.

Catherine Bracy: Based on my interaction with the company, it seems they’re very paranoid about Elon Musk and his role in all of this, and it’s become clear to me that that’s driving their strategy.

No, these groups were not (as far as I or anyone else can tell) funded by or working in concert with Musk.

The suspicions that Meta was involved, including in Encode, which is attempting to push forward SB 53, are not simply paranoid; they flat out don’t make any sense. Nor does the claim about Musk, given how he handles opposition:

Both LASST and Encode have spoken out against Musk and Meta — the entities OpenAI is accusing them of being aligned with — and advocated against their aims: Encode recently filed a complaint with the FTC about Musk’s AI company producing nonconsensual nude images; LASST has criticized the company for abandoning its structure as a public benefit corporation. Both say they have not taken money from Musk nor talked to him. “If anything, I’m more concerned about xAi from a safety perspective than OpenAI,” Whitmer said, referring to Musk’s AI product.

I’m more concerned about OpenAI because I think they matter far more than xAI, but pound for pound xAI is by far the bigger menace acting far less responsibly, and most safety organizations in this supposed conspiracy will tell you that if you ask them, and act accordingly when the questions come up.

Miles Brundage: First it was the EAs out to get them, now it’s Elon.

The reality is just that most people think we should be careful about AI

(Elon himself is ofc actually out to get them, but most people who sometimes disagree with OpenAI have nothing to do with Elon, including Encode, the org discussed at the beginning of the article. And ironically, many effective altruists are more worried about Elon than OAI now)

OpenAI’s paranoia started with CANI, and then extended to Encode, and then to LASST.

Nathan Calvin: ​They seem to have a hard time believing that we are an organization of people who just, like, actually care about this.

Emily Shugerman: Lehane, who joined the company last year, is perhaps best known for coining the term “vast right-wing conspiracy” to dismiss the allegations against Bill Clinton during the Monica Lewinsky scandal — a line that seems to have seeped into Leading the Future’s messaging, too.

In a statement to the Journal, representatives from the PAC decried a “vast force out there that’s looking to slow down AI deployment, prevent the American worker from benefiting from the U.S. leading in global innovation and job creation, and erect a patchwork of regulation.”

The hits keep coming as the a16z-level paranoia about EA being a ‘vast conspiracy’ kicks into high gear, such as the idea that Dustin Moskovitz doesn’t care about AI safety, and is instead going after them because of his stake in Anthropic. Can you possibly be serious right now? Why do you think he invested in Anthropic?

Of particular interest to OpenAI is the fact that both Omidyar and Moskovitz are investors in Anthropic — an OpenAI competitor that claims to produce safer, more steerable AI technology.

“Groups backed by competitors often present themselves as disinterested public voices or ‘advocates’, when in reality their funders hold direct equity stakes in competitors in their sector – in this case worth billions of dollars,” she said. “Regardless of all the rhetoric, their patrons will undoubtedly benefit if competitors are weakened.”

Never mind that Anthropic has not supported Moskovitz on AI regulation, and that the regulatory interventions funded by Moskovitz would consistently (aside from any role in trying to stop OpenAI’s for-profit conversion) be bad for Anthropic’s commercial outlook.

Open Philanthropy (funded by Dustin Moskovitz): Reasonable people can disagree about the best guardrails to set for emerging technologies, but right now we’re seeing an unusually brazen effort by some of the biggest companies in the world to buy their way out of any regulation they don’t like. They’re putting their potential profits ahead of U.S. national security and the interests of everyday people.

Companies do this sort of thing all the time. This case is still very brazen, and very obvious, and OpenAI has now jumped into a16z levels of paranoia and bad faith between the lawfare, the funding of the new PAC and their letter on SB 53.

Suing and attacking nonprofits engaging in advocacy is a new low. Compare that to the situation with Daniel Kokotajlo, where OpenAI, to its credit, backed down once confronted with its bad behavior rather than going on a legal offensive.

Daniel Kokotajlo: Having a big corporation come after you legally, even if they are just harassing you and not trying to actually get you imprisoned, must be pretty stressful and scary. (I was terrified last year during the nondisparagement stuff, and that was just the fear of what *might* happen, whereas in fact OpenAI backed down instead of attacking) I’m glad these groups aren’t cowed.

As in, do OpenAI and Sam Altman believe these false paranoid conspiracy theories?

I have long wondered the same thing about Marc Andreessen and a16z, and others who say there is a ‘vast conspiracy’ out there by which they mean Effective Altruism (EA), or when they claim it’s all some plot to make money.

I mean, these people are way too smart and knowledgeable to actually believe that, asks Padme, right? And certainly Sam Altman and OpenAI have to know better.

Wouldn’t the more plausible theory be that these people are simply lying? That Lehane doesn’t believe in a ‘vast EA conspiracy’ any more than he believed in a ‘vast right-wing conspiracy’ when he coined the term ‘vast right-wing conspiracy’ about the (we now know very true) allegations around Monica Lewinsky. It’s an op. It’s rhetoric. It’s people saying what they think will work to get them what they want. It’s not hard to make that story make sense.

Then again, maybe they do really believe it, or at least aren’t sure? People often believe genuinely crazy things that do not in any way map to reality, especially once politics starts to get involved. And I can see how going up against Elon Musk and being engaged in one of the biggest heists in human history in broad daylight, while trying to build superintelligence that poses existential risks to humanity that a lot of people are very worried about and that also will have more upside than anything ever, could combine to make anyone paranoid. Highly understandable and sympathetic.

Or, of course, they could have been talking to their own AIs about these questions. I hear there are some major sycophancy issues there. One must be careful.

I sincerely hope that those involved here are lying. It beats the alternatives.

It seems that OpenAI’s failures on sycophancy and dealing with suicidality might endanger its relationship with those who must approve its attempted restructuring into a for-profit, also known as one of the largest attempted thefts in human history?

Maybe they will take OpenAI’s charitable mission seriously after all, at least in this way, despite presumably not understanding the full stakes involved and having the wrong idea about what kind of safety matters?

Garrison Lovely: Scorching new letter from CA and DE AGs to OpenAI, who each have the power to block the company’s restructuring to loosen nonprofit controls.

They are NOT happy about the recent teen suicide and murder-suicide that followed prolonged and concerning interactions with ChatGPT.

Rob Bonta (California Attorney General) and Kathleen Jennings (Delaware Attorney General) in a letter: In our meeting, we conveyed in the strongest terms that safety is a non-negotiable priority, especially when it comes to children. Our teams made additional requests about OpenAI’s current safety precautions and governance. We expect that your responses to these will be prioritized and that immediate remedial measures are being taken where appropriate.

We recognize that OpenAI has sought to position itself as a leader in the AI industry on safety. Indeed, OpenAI has publicly committed itself to build safe AGI to benefit all humanity, including children. And before we get to benefiting, we need to ensure that adequate safety measures are in place to not harm.

It is our shared view that OpenAI and the industry at large are not where they need to be in ensuring safety in AI products’ development and deployment. As Attorneys General, public safety is one of our core missions. As we continue our dialogue related to OpenAI’s recapitalization plan, we must work to accelerate and amplify safety as a governing force in the future of this powerful technology.

The recent deaths are unacceptable. They have rightly shaken the American public’s confidence in OpenAI and this industry. OpenAI – and the AI industry – must proactively and transparently ensure AI’s safe deployment. Doing so is mandated by OpenAI’s charitable mission, and will be required and enforced by our respective offices.

We look forward to hearing from you and working with your team on these important issues.

Some other things said by the AGs:

Bonta: We were looking for a rapid response. They’ll know what that means, if that’s days or weeks. I don’t see how it can be months or years.

All antitrust laws apply, all consumer protection laws apply, all criminal laws apply. We are not without many tools to regulate and prevent AI from hurting the public and the children.

With a lawsuit filed that OpenAI might well lose, and the two attorneys general who can veto its restructuring breathing down its neck, OpenAI is promising various fixes. In particular, OpenAI has decided it is time for parental controls as soon as possible, which should be within a month.

Their first announcement on August 26 included these plans:

OpenAI: While our initial mitigations prioritized acute self-harm, some people experience other forms of mental distress. For example, someone might enthusiastically tell the model they believe they can drive 24/7 because they realized they’re invincible after not sleeping for two nights. Today, ChatGPT may not recognize this as dangerous or infer play and—by curiously exploring—could subtly reinforce it.

We are working on an update to GPT‑5 that will cause ChatGPT to de-escalate by grounding the person in reality. In this example, it would explain that sleep deprivation is dangerous and recommend rest before any action.

Better late than never on that one, I suppose. That is indeed why I am relatively not so worried about problems like this: we can adjust after things start to go wrong.

OpenAI: In addition to emergency services, we’re exploring ways to make it easier for people to reach out to those closest to them. This could include one-click messages or calls to saved emergency contacts, friends, or family members with suggested language to make starting the conversation less daunting.

We’re also considering features that would allow people to opt-in for ChatGPT to reach out to a designated contact on their behalf in severe cases.

We will also soon introduce parental controls that give parents options to gain more insight into, and shape, how their teens use ChatGPT. We’re also exploring making it possible for teens (with parental oversight) to designate a trusted emergency contact. That way, in moments of acute distress, ChatGPT can do more than point to resources: it can help connect teens directly to someone who can step in.

On September 2 they followed up with additional information about how they are ‘partnering with experts’ and providing more details.

OpenAI: Earlier this year, we began building more ways for families to use ChatGPT together and decide what works best in their home. Within the next month, parents will be able to:

  • Link their account with their teen’s account (minimum age of 13) through a simple email invitation.

  • Control how ChatGPT responds to their teen with age-appropriate model behavior rules, which are on by default.

  • Manage which features to disable, including memory and chat history.

  • Receive notifications when the system detects their teen is in a moment of acute distress. Expert input will guide this feature to support trust between parents and teens.

These controls add to features we have rolled out for all users including in-app reminders during long sessions to encourage breaks.

Parental controls seem like an excellent idea.

I would consider most of this to effectively be ‘on by default’ already, for everyone, in the sense that AI models have controls against things like NSFW content that largely treat us all like teens. You could certainly tighten them up more for an actual teen, and it seems fine to give parents the option, although mostly I think you’re better off not doing that.

The big new thing is the notification feature. That is a double edged sword. As I’ve discussed previously, an AI or other source of help that can ‘rat you out’ to authorities, even ‘for your own good’ or ‘in moments of acute distress’ is inherently very different from a place where your secrets are safe. There is a reason we have confidentiality for psychologists and lawyers and priests, and balancing when to break that is complicated.

Given an AI’s current level of reliability and its special role as a place free from human judgment or social consequence, I am actually in favor of it outright never alerting others without an explicit user request to do so.

Whereas things are moving in the other direction, with predictable results.

As in, OpenAI is already scanning your chats as per their posts I discussed above.

Greg Isenberg: ChatGPT is potentially leaking your private convos to the police.

People use ChatGPT because it feels like talking to a smart friend who won’t judge you. Now, people are realizing it’s more like talking to a smart friend who might snitch.

This is the same arc we saw in social media: early excitement, then paranoia, then demand for smaller, private spaces.

OpenAI (including as quoted by Futurism): When we detect users who are planning to harm others, we route their conversations to specialized pipelines where they are reviewed by a small team trained on our usage policies and who are authorized to take action, including banning accounts.

If human reviewers determine that a case involves an imminent threat of serious physical harm to others, we may refer it to law enforcement.

We are currently not referring self-harm cases to law enforcement to respect people’s privacy given the uniquely private nature of ChatGPT interactions.

Futurism: When describing its rule against “harm [to] yourself or others,” the company listed off some pretty standard examples of prohibited activity, including using ChatGPT “to promote suicide or self-harm, develop or use weapons, injure others or destroy property, or engage in unauthorized activities that violate the security of any service or system.”

They are not referring self-harm cases to law enforcement, to protect privacy, but harm to others is deemed different. That still destroys the privacy of the interaction. And ‘harm to others’ could rapidly morph into any number of places, both with false positives and also with changes in ideas about what constitutes ‘harm.’

They’re not even talking about felonies or imminent physical harm. They’re talking about ‘engage in unauthorized activities that violate the security of any service or system,’ or ‘destroy property,’ so this could potentially extend quite far, and into places that seem far less justified than intervening in response to a potentially suicidal user. These are circumstances in which typical privileged communication would hold.

I very much do not like where that is going, and if I heard reports this was happening on the regular it would fundamentally alter my relationship to ChatGPT, even though I ‘have nothing to hide.’

What’s most weird about this is that OpenAI was recently advocating for ‘AI privilege.’

Reid Southern: OpenAI went from warning users that there’s no confidentiality when using ChatGPT, and calling for “AI privilege”, to actively scanning your messages to send to law enforcement, seemingly to protect themselves in the aftermath of the ChatGPT induced murder-suicide

This is partially a case of ‘if I’m not legally forbidden to do [X] then I will get blamed for not doing [X] so please ban me from doing it’ so it’s not as hypocritical as it sounds. It is still rather hypocritical and confusing to escalate like this. Why respond to suicides by warning you will be scanning for harm to others and intent to impact the security of systems, but definitely not acting if someone is suicidal?

If you think AI users deserve privilege, and I think this is a highly reasonable position, then act like it. Set a good example, set a very high bar for ratting, and confine alerting human reviewers, let alone the authorities, to when you catch someone on the level of trying to make a nuke or a bioweapon, or at minimum doing things that would force a psychologist to break privilege. It’s even good for business.

Otherwise people are indeed going to get furious, and there will be increasing demand to run models locally or in other ways that better preserve privacy. There’s not zero of that already, but it would escalate quickly.

Steven Byrnes notes the weirdness of seeing Ben’s essay describe OpenAI as an ‘AI safety company’ rather than a company most AI safety folks hate with a passion.

Steven Byrnes: I can’t even describe how weird it is to hear OpenAI, as a whole, today in 2025, being described as an AI safety company. Actual AI safety people HATE OPENAI WITH A PASSION, almost universally. The EA people generally hate it. The Rationalists generally hate it even more.

AI safety people have protested at the OpenAI offices with picket signs & megaphones! When the board fired Sam Altman, everyone immediately blamed EA & AI safety people! OpenAI has churned through AI safety staff b/c they keep quitting in protest! …What universe is this?

Yes, many AI safety people are angry about OpenAI being cavalier & dishonest about harm they might cause in the future, whereas you are angry about OpenAI being cavalier & dishonest about harm they are causing right now. That doesn’t make us enemies. “Why not both?”

I think that’s going too far. It’s not good to hate with a passion.

Even more than that, you could do so, so much worse than OpenAI on all of these questions (e.g. Meta, or xAI, or every major Chinese lab, basically everyone except Anthropic or Google is worse).

Certainly we think OpenAI is on net not helping and deeply inadequate to the task, their political lobbying and rhetoric is harmful, and their efforts have generally made the world a lot less safe. They still are doing a lot of good work, making a lot of good decisions, and I believe that Altman is normative, that he is far more aware of what is coming and the problems we will face than most or than he currently lets on.

I believe he is doing a much better job on these fronts than most (but not all) plausible CEOs of OpenAI would do in his place. For example, if OpenAI’s CEO of Applications Fidji Simo were in charge, or Chairman of the Board Bret Taylor were in charge, or Greg Brockman were in charge, or the CEO of any of the magnificent seven were in charge, I would expect OpenAI to act far less responsibly.

Thus I consider myself relatively well-inclined towards OpenAI among those worried about AI or advocating for AI safety.

I still have an entire series of posts about how terrible things have been at OpenAI and a regular section about them called ‘The Mask Comes Off.’

And I find myself forced to update my view importantly downward, towards being more concerned, in the wake of the recent events described in this post. OpenAI is steadily becoming more of a bad faith actor in the public sphere.

All 54 lost clickwheel iPod games have now been preserved for posterity

Last year, we reported on the efforts of classic iPod fans to preserve playable copies of the downloadable clickwheel games that Apple sold for a brief period in the late ’00s. The community was working to get around Apple’s onerous FairPlay DRM by having people who still owned original copies of those (now unavailable) games sync their accounts to a single iTunes installation via a coordinated Virtual Machine. That “master library” would then be able to provide playable copies of those games to any number of iPods in perpetuity.

At the time, the community was still searching for iPod owners with syncable copies of the last few titles needed for their library. With today’s addition of Real Soccer 2009 to the project, though, all 54 official iPod clickwheel games are now available together in an easily accessible format for what is likely the first time.

All at once, then slowly

GitHub user Olsro, the originator of the iPod Clickwheel Games Preservation Project, tells Ars that he lucked into contact with three people who had large iPod game libraries in the first month or so after the project’s launch last October. That includes one YouTuber who had purchased and maintained copies of 39 distinct games, even repurchasing some of the upgraded versions Apple sold separately for later iPod models.

Ars’ story on the project shook out a few more iPod owners with syncable iPod game libraries, and subsequent updates in the following days left just a handful of titles unpreserved. But that’s when the project stalled, Olsro said, with months wasted on false leads and technical issues that hampered the effort to get a complete library.

“I’ve put a lot of time into coaching people that [had problems] transferring the files and authorizing the account once with me on the [Virtual Machine],” Olsro told Ars. “But I kept motivation to continue coaching anyone else coming to me (by mail/Discord) and making regular posts to increase awareness until I could find finally someone that could, this time, go with me through all the steps of the preservation process,” he added on Reddit.

Getting this working copy of Real Soccer 2009 was an “especially cursed” process, Olsro said. Credit: Olsro / Reddit

Getting working access to the final unpreserved game, Real Soccer 2009, was “especially cursed,” Olsro tells Ars. “Multiple [people] came to me during this summer and all attempts failed until a new one from yesterday,” he said. “I even had a situation when someone had an iPod Nano 5G with a playable copy of Real Soccer, but the drive was appearing empty in the Windows Explorer. He tried recovery tools & the iPod NAND just corrupted itself, asking for recovery…”

NASA’s acting chief “angry” about talk that China will beat US back to the Moon

NASA’s interim administrator, Sean Duffy, said Thursday he has heard the recent talk about how some people are starting to believe that China will land humans on the Moon before NASA can return there with the Artemis Program.

“We had testimony that said NASA will not beat China to the Moon,” Duffy remarked during an all-hands meeting with NASA employees. “That was shade thrown on all of NASA. I heard it, and I gotta tell you what, maybe I am competitive, I was angry about it. I can tell you what, I’ll be damned if that is the story that we write. We are going to beat the Chinese to the Moon.”

Duffy’s remarks followed a Congressional hearing on Wednesday during which former Congressman Jim Bridenstine, who served as NASA administrator during President Trump’s first term, said China had pulled ahead of NASA and the United States in the second space race.

“Unless something changes, it is highly unlikely the United States will beat China’s projected timeline to the Moon’s surface,” said Bridenstine, who led the creation of the Artemis Program in 2019. China has said multiple times that it intends to land taikonauts on the Moon before the year 2030.

A lot of TV appearances

Duffy’s remarks were characteristic of his tenure since his appointment two months ago by Trump to serve as interim administrator of the space agency. He has made frequent appearances on Fox News and offered generally upbeat views of NASA’s position in its competition with China for supremacy in space. And on Friday, in a slickly produced video, he said, “I’m committed to getting us back to the Moon before President Trump leaves office.”

Sources have said Duffy, already a cabinet member as the secretary of transportation, is also angling to remove the “interim” from his NASA administrator title. Like Bridenstine, he has a capable political background and politics that align with the Trump administration. He is an excellent public speaker and knows the value of speaking to the president through Fox News. To date, however, he has shown limited recognition of the reality of the current competition with China.

BMW debuts 6th-generation EV powertrain in the all-electric iX3


Class-leading efficiency and computer-controlled driving dynamics combine.

The new iX3 marks the start of a new design language for BMW SUVs. Credit: BMW

BMW has an all-electric replacement for its X3 crossover on the way. When it arrives in mid-2026, it will have the lowest carbon footprint of any BMW yet, thanks to an obsessive approach to sustainability during the design process. But we knew that already; what we couldn’t tell you then, but can now, is everything else we know about the first of the so-called Neue Klasse electric vehicles.

“The Neue Klasse is our biggest future-focused project and marks a huge leap forward in terms of technologies, driving experience, and design,” said BMW chairman Oliver Zipse. “Practically everything about it is new, yet it is also more BMW than ever. Our whole product range will benefit from the innovations brought by the Neue Klasse—whichever drive system technology is employed. What started as a bold vision has now become reality: the BMW iX3 is the first Neue Klasse model to go into series production, kicking off a new era for BMW.”

A new face

The iX3 also debuts a new corporate face for BMW’s SUVs: From now on, these will have tall, narrow kidney grilles like the one you see here, as opposed to the short, wide grille seen on the front of the Neue Klasse sedan (which is almost certainly the next 3 Series). LEDs replace chrome, and there’s a new take on BMW’s usual four headlights, although the illuminated kidney grille is an option, not mandatory. Despite the two-box shape, the iX3 manages a drag coefficient of just 0.24.

Sixth-generation

As befits a company called the Bavarian Motor Works, BMW has been in the business of designing and building its own electric powertrains for quite some time, in contrast to some rivals that have been buying EV tech from suppliers. In addition to the class-leading manufacturing efficiency, the sixth-generation electric powertrain should be extremely efficient—around 4 miles/kWh (15.5 kWh/100 km) from an SUV shape, which is 20–25 percent better than current SUV EVs.

The first variant will be the iX3 50 xDrive, which boasts a combined 463 hp (345 kW) and 475 lb-ft (645 Nm) from its pair of motors. The front axle uses an asynchronous motor, with an electrically excited synchronous motor at the rear axle providing most of the power.

There’s 40 percent more regenerative braking than in BMW’s current powertrains. We weren’t given an exact threshold where the friction brakes take over, but it should be around 0.5–0.6 G. That means that the iX3 will regeneratively brake for the overwhelming majority of the time—just 5–10 percent of braking events should require the friction brakes, we’re told. And the regen should be smoother and more precise, as well as quieter than before. There’s even some regenerative braking should the ABS trigger.

NACS port comes standard, with a CCS1 adapter. Credit: BMW

For the Neue Klasse, BMW has moved to a new 800 V battery, using cylindrical cells rather than the prismatic cells you’d find in an iX or i4. Energy density is 20 percent greater than the current cells, and the pack has a usable capacity of 108 kWh. That means you can expect a range of up to 400 miles, although the official EPA rating will only arrive next year.

The new pack charges a lot faster, too. It can accept up to 400 kW, should you find a charger capable of such power. If so, the 10–80 percent charge should take 21 minutes. (BMW also says it will add 230 miles in 10 minutes.) The iX3 is capable of acting as a mobile power bank (V2L) as well as powering a home or even the grid (this requires a V2H-capable wall box), and North American iX3s will sport NACS charging ports.
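For a rough sense of how those charging numbers fit together, here is a back-of-envelope consistency check (our own sketch, not anything published by BMW), using only the figures quoted above: the 108 kWh usable pack, the 21-minute 10–80 percent claim, the 230-miles-in-10-minutes claim, and the roughly 4 miles/kWh efficiency estimate.

```python
# Back-of-envelope check using only the figures quoted in the article.
# None of these calculations come from BMW; they simply test whether the
# quoted charging claims are mutually consistent.

usable_kwh = 108            # usable pack capacity quoted above
efficiency_mi_per_kwh = 4   # approximate efficiency quoted above

# Charging from 10% to 80% adds 70% of the usable pack in 21 minutes.
energy_added_kwh = 0.70 * usable_kwh             # ~75.6 kWh
avg_power_kw = energy_added_kwh / (21 / 60)      # ~216 kW average

# "230 miles in 10 minutes" implies a higher average rate early in the session.
energy_for_230_mi_kwh = 230 / efficiency_mi_per_kwh     # ~57.5 kWh
early_avg_power_kw = energy_for_230_mi_kwh / (10 / 60)  # ~345 kW average

print(f"10-80% in 21 min implies ~{avg_power_kw:.0f} kW average")
print(f"230 miles in 10 min implies ~{early_avg_power_kw:.0f} kW average")
```

Both averages sit below the quoted 400 kW peak, and the higher early-session figure is consistent with charging curves that start near peak power and taper off as the pack fills.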

Software-defined

The Neue Klasse is a clean-sheet design, and BMW has used this as an opportunity to clear out the legacy cruft that has accumulated over the years. Instead of hundreds of discrete black boxes, each with a single electronic control unit (ECU) performing a single job, the iX3 uses four high-performance computers, each in charge of a different domain. Among the benefits of this approach? Almost 2,000 feet (600 m) less wiring and a weight saving of about 30 percent compared to a conventional wiring loom with all its ECUs. Taking single-function controllers out of the loop and replacing them with a domain computer also cuts latency.

The Heart of Joy is the domain controller responsible for driving dynamics and the powertrain, and can cope with up to four electric motors, something we should see in electric M-badged Neue Klasse models in the future. But good driving dynamics require more than just a fancy computer brain. The iX3 is extremely stiff, with the front and rear axles mounted to the battery pack. Weight distribution is a near-perfect 49:51.

The interior is quite faithful to the concept we saw last year. The black strip at the base of the windshield is the panoramic display. Credit: BMW

A different domain controller is in charge of the advanced driver assistance systems (ADAS). This water-cooled computer is 20 times faster than the processors that control the ADAS in a current BMW, and was developed in-house by BMW. The automaker says the focus is always on the driver’s needs in a way that is smart, symbiotic, and safe. There are AI/ML algorithms for perception and planning, but safety-proven rule-based algorithms always have the final say in the decision-making process.

There’s a hands-free, partially automated driving system that works at up to 85 mph (136 km/h) on premapped divided-lane highways, and an interesting new feature is the cooperative braking and steering during active assistance. Unlike just about every car on the road currently, using the brake will not immediately kill the cruise control, and if you intentionally cross the median or a lane marker and are looking where you’re going, the eye-tracking driver monitoring system sees you and won’t try to correct. (But if you veer across the lane and aren’t looking, the car will steer you back.)

Shy tech

The iX3’s interior is near-identical to the concept we saw last March. BMW calls this approach shy tech, where controls or displays are invisible when inactive. There’s a new multifunction steering wheel—that will surely be divisive—which puts the ADAS controls on its left side, and media controls on the right. The iDrive rotary controller is no more, but there are plenty of physical buttons (not capacitive ones) for things like windows and mirrors.

BMW says the rotary controller wouldn’t have worked well with the new iDrive UI for the trapezoidal touchscreen. (Additionally, it told us that in some regions, drivers never used the rotary controller at all.) The screen is closer to the driver than in current BMWs, and the trapezoidal shape rather effectively means the left side of the screen—which has persistent, common functions—is always close to your right hand. After playing with the system for a while, I think the UI is a lot easier to navigate than the current BMW infotainment system, good though that is.

The multifunction steering wheel looks unconventional. Credit: BMW

I’ve often been complimentary about voice recognition in BMWs, and the iX3 has an upgrade here. The natural language processing is now based on Alexa, not Cerence’s tech, and there’s a cartoon visualization for the personal assistant that looks a bit like a ninja, or perhaps an alien. This will make eye contact with the person giving voice commands, so it can discern between driver and passenger.

At the base of the windshield is the new panoramic display. This presents information in zones—you can personalize what shows up in the center and right zones, but the one in front of the driver will always be the critical stuff like your speed and any warnings or alerts. There’s also an optional heads-up display.

BMW says we can expect the iX3 50 xDrive to arrive in the US next summer, starting at around $60,000.

Jonathan is the Automotive Editor at Ars Technica. He has a BSc and PhD in Pharmacology. In 2014 he decided to indulge his lifelong passion for the car by leaving the National Human Genome Research Institute and launching Ars Technica’s automotive coverage. He lives in Washington, DC.

RFK Jr. says COVID shots still available to all as cancer patients denied access

Here are some key moments from today’s hearing:

Untrustworthy

With the fallout ongoing from the abrupt ouster of CDC Director Susan Monarez last week, many senators focused on what led to her downfall. In a Wall Street Journal op-ed published two hours before the hearing, Monarez confirmed media reports that she had been fired by Kennedy for refusing to rubber-stamp changes to CDC vaccine guidance based on recommendations from Kennedy’s hand-selected advisors.

“I was told to preapprove the recommendations of a vaccine advisory panel newly filled with people who have publicly expressed antivaccine rhetoric,” Monarez wrote in the op-ed. She said she refused, insisting that the panel’s recommendations be “rigorously and scientifically reviewed before being accepted or rejected.”

In today’s hearing, Senators directly confronted Kennedy with that statement from the op-ed. Kennedy repeatedly said that she is lying and that he never directed her to preapprove vaccine recommendations. Instead, he claims, he told her to resign after he asked her directly if she was a trustworthy person, and she replied, “No.”

After several exchanges about this with other senators, Bernie Sanders (I-Vt.) picked it apart further, saying:

“Are you telling us that the former head of CDC went to you, you asked her, ‘Are you a trustworthy person?’ And she said, ‘No, I am not a trustworthy person,'” Sanders asked.

“She didn’t say ‘No, I’m not a trustworthy person,'” Kennedy replied. “She said, ‘No.’ I’m giving a quote.”

After that, Sen. Thom Tillis (R-NC), who seemed skeptical of Kennedy’s arguments generally, pointed out the absurdity of the claim, quoting Kennedy’s previous praise of Monarez. “I don’t see how you go—over four weeks—from a public health expert with ‘unimpeachable scientific credentials,’ a longtime champion of MAHA values, caring and compassionate and brilliant microbiologists, and four weeks later fire her,” Tillis said.  “As somebody who advised executives on hiring strategies, number one, I would suggest in the interview you ask ’em if they’re truthful rather than four weeks after we took the time of the US Senate to confirm the person.”

COVID vaccine locations vanish from Google Maps due to supposed “technical issue”

Results for the flu vaccine appear in Maps, but not COVID. The only working COVID results are hundreds of miles away. Credit: Ryan Whitwam

Ars reached out to Google for an explanation, receiving a cryptic and somewhat unsatisfying reply. “Showing accurate information on Maps is a top priority,” says a Google spokesperson. “We’re working to fix this technical issue.”

So far, we are not aware of other Maps searches that have been similarly affected. Google has yet to respond to further questions on the nature of the apparent glitch, which has wiped out COVID vaccine information in Maps while continuing to return results for other medical services and immunizations.

The sudden eroding of federal support for routine vaccinations lurks in the background of this bizarre issue. When the Trump administration decided to rename the Gulf of Mexico, Google was widely hectored for its decision to quickly show “Gulf of America” on its maps, aligning with the administration’s preferred nomenclature. With the ramping up of anti-vaccine actions at the federal level, it is tempting to see a similar, nefarious purpose behind these disappearing results.

At present, we have no evidence that the change in Google’s search results was intentional or targeted specifically at COVID immunization—indeed, making that change in such a ham-fisted way would be inadvisable. It does seem like an ill-timed and unusually specific “technical issue,” though. If Google provides further details on the missing search results, we’ll post an update.

Hollow Knight: Silksong is breaking Steam, Nintendo’s eShop

Players excited for this morning’s launch of Hollow Knight: Silksong are encountering widespread errors purchasing and downloading the game from Steam. Ars Technica writers have encountered errors getting store pages to load, adding the game to an online shopping cart, and checking out once the game is part of the cart.

That aligns with widespread social media complaints and data from DownDetector, which saw a sudden spike of over 11,000 reports of problems with Steam in the minutes following Silksong‘s 10 am Eastern time release on Steam. The server problems don’t seem to be completely stopping everyone, though, as SteamDB currently reports over 100,000 concurrent players for Silksong as of this writing.

Ars also encountered some significant delays and/or outright errors when downloading other games and updates and syncing cloud saves on Steam during this morning’s server problems. The Humble Store page for Silksong currently warns North American purchasers that “We have run out of Steam keys for Hollow Knight: Silksong in your region, but more are on their way! As soon as we receive more Steam keys, we will add them to your download page. Sorry about the delay!”

The PC version of Silksong currently seems to be available for purchase and download without issue. Ars was also able to purchase and download the Switch 2 version of Silksong from the Nintendo eShop without encountering any errors, though others have reported problems with that online storefront. [Update: As of 11:18 am, Nintendo is reporting, “The [Nintendo eShop] network service is unavailable at this time. We apologize for any inconvenience this may cause.”] The game is still listed as merely “Announced” and not available for purchase on its PlayStation Store page as of this writing.

New AI model turns photos into explorable 3D worlds, with caveats

Training with automated data pipeline

Voyager builds on Tencent’s earlier HunyuanWorld 1.0, released in July. Voyager is also part of Tencent’s broader “Hunyuan” ecosystem, which includes the Hunyuan3D-2 model for text-to-3D generation and the previously covered HunyuanVideo for video synthesis.

To train Voyager, researchers developed software that automatically analyzes existing videos to process camera movements and calculate depth for every frame—eliminating the need for humans to manually label thousands of hours of footage. The system processed over 100,000 video clips from both real-world recordings and the aforementioned Unreal Engine renders.

A diagram of the Voyager world creation pipeline. Credit: Tencent

The model demands serious computing power to run, requiring at least 60GB of GPU memory for 540p resolution, though Tencent recommends 80GB for better results. Tencent published the model weights on Hugging Face and included code that works with both single and multi-GPU setups.

The model comes with notable licensing restrictions. Like other Hunyuan models from Tencent, the license prohibits usage in the European Union, the United Kingdom, and South Korea. Additionally, commercial deployments serving over 100 million monthly active users require separate licensing from Tencent.

On the WorldScore benchmark developed by Stanford University researchers, Voyager reportedly achieved the highest overall score of 77.62, compared to 72.69 for WonderWorld and 62.15 for CogVideoX-I2V. The model reportedly excelled in object control (66.92), style consistency (84.89), and subjective quality (71.09), though it placed second in camera control (85.95) behind WonderWorld’s 92.98. WorldScore evaluates world generation approaches across multiple criteria, including 3D consistency and content alignment.

While these self-reported benchmark results seem promising, wider deployment still faces challenges due to the computational muscle involved. For developers needing faster processing, the system supports parallel inference across multiple GPUs using the xDiT framework. Running on eight GPUs delivers processing speeds 6.69 times faster than single-GPU setups.

Given the processing power required and the limitations in generating long, coherent “worlds,” it may be a while before we see real-time interactive experiences using a similar technique. But as we’ve seen so far with experiments like Google’s Genie, we’re potentially witnessing very early steps into a new interactive, generative art form.

Putin: “Immortality” coming soon through continuous organ transplants

In a later press conference, Putin confirmed the discussion and said that “life expectancy will increase significantly” in the near future and “we should also think about this” in terms of political and economic consequences. (In Russia, life expectancy has actually decreased significantly in recent years, and the overall population is declining.)

The brief snippets of conversation suggest that immortality is on the minds of the world’s strongmen, though it’s interesting to see how it takes a different form than in Silicon Valley, where robots and software are more often seen as the key to longevity instead of, say, recurring organ transplants into an aging bag of skin.

Shows like Upload and Alien: Earth present visions of a world in which consciousness can be scanned by machines and perhaps even loaded into other machines. Meanwhile, Putin and Xi are thinking more about repeated organ transplants and life extension rather than “the Singularity.”

So, which dystopic future are we more likely to get? (Yes, I am presuming, based on the current state of the world, that the near future will be pretty dystopic. I think it’s a good bet.) Clones being raised for organ transplants, as in Kazuo Ishiguro’s novel Never Let Me Go? Or some kind of “download your consciousness into this machine” situation in which the mind of Elon Musk inhabits one of his beloved Tesla robots for all eternity? Given either alternative, I’m not entirely sure I’d want to live forever.

Putin: “Immortality” coming soon through continuous organ transplants Read More »

startup-roundup-#3

Startup Roundup #3

Startup Roundups (#1, #2) look like they’re settling in as an annual tradition.

I’ve been catching up on queued roundups this week as the family was on vacation. I am back now but the weekly may be pushed to Friday as I dig out from the backlog.

There is a common theme to the vignettes offered across the years. The central perspective has not much changed.

I would summarize the perspective this way, first in brief and then with more words:

  1. Founding SV-style startups or being an early employee is great alpha.

  2. You create more value, and you capture more of that value.

  3. Building real things that scale is so valuable that you can afford a lot of nonsense.

  4. Venture capital similarly does foolish stuff a lot but the successes are so good that if they have good deal flow they still win.

  5. Venture capitalists and others offering advice say many valuable things but also have their own best interests at heart, not yours, and will lie to you.

Or, with more words:

  1. The central idea of founding or joining as an early employee of a startup, building something people want, raising venture capital funding in successive rounds and mostly adhering to Silicon Valley norms is an excellent one. If you have the opportunity to do YC you probably want to do that too.

  2. Because you get equity you capture a vastly higher percentage of the value you create than you would if you worked for someone else. This is all a great way to change and improve the world, and also a great way to earn quite a lot of money. This is risky in the sense that most of the time your startup will fail, but failure is not so bad and you can try again in various ways. If you’re up for it, strongly consider doing it.

  3. This works because the value in building real things that scale, at all, is orders of magnitude higher than the value of an ordinary job, plus you are capturing far more of it. The structure is hugely inefficient and wasteful and stupid and perverse in various ways, most of your activity is likely going to be built around assembling and selling the team and proving that it can Do The Thing, and learning how to Do The Thing, but it does result in attempting to Do The Thing at all, which is what matters. There is ‘a lot of ruin’ in this nation.

  4. Venture capital returns, and especially the strong returns of the best VCs, are largely about selection and deal flow, and marketing themselves. The VCs tend to herd to an extent that should challenge one’s overall epistemology and model of capitalism; they have similar flawed heuristics, and are largely evaluating whether other VCs will be willing to fund in the future, and evaluating the team and its ability to pitch itself, and so on. Which again is hugely inefficient and wasteful and stupid and perverse compared to how it would ideally work, and it leaves a ton on the table, but it Does The Thing at all, so it beats out all the alternatives.

  5. Venture capitalists will offer you lots of free advice, which as we know is seldom cheap. A lot of what they say is good advice, and you need to absorb their culture, but a lot of it is them talking their books and marketing themselves and the idea of doing a startup, or trying to get you to do what the VCs want to do for various reasons. Some of that will align with what is good for you. A lot of it won’t, and you have to keep in mind that when your interests are in conflict they are lying liars who lie, and at other times they fool themselves or don’t understand.

  6. In particular, VCs in general and Paul Graham especially will try to pretend that ‘what you do to succeed outside of fundraising’ and ‘how to set yourself up to fundraise’ are the same question, or have the same answer, and you should not think of this as a multi-part strategic problem. This is very obviously an adversarial message.

  7. This type of adversarial message will happen one-to-many in general advice, and will also happen one-to-one when they talk to you. Listen, but also be on guard.

Here are some vignettes on this theme from the past year.

Emmett Shear offers advice on how to interpret advice from Paul Graham.

According to Emmett:

  1. Filter his creative ideas, but pay attention, there’s gold in them hills.

  2. If Paul tells you how a decision gets made, he’s often right even if he isn’t telling you the whole picture.

  3. Paul’s advice is non-strategic, he’s saying what his model thinks you should do.

My reactions based on what I know, keeping in mind I’ve never met him:

  1. Strongly agree, same as his public ideas.

  2. Based on his public statements, I’d say he’s bringing you a part of the picture, often an important or neglected part, but he’s often being strategic in how it is presented and framed, and in particular by the selection effect of when to explain what parts, and what parts to leave out.

  3. I strongly believe many Paul Graham statements in public are strategic. I also think Paul’s model in many places is at least partially strategically motivated. Even if his motivation in the moment is non-strategic, the process that led to Paul having that opinion was, de facto, strategic.

    1. I can believe that Paul’s private advice is mostly non-strategic beyond that.

Given the overall skill and knowledge levels, with that level of trustworthiness, that’s still an amazingly great source of information. One of the best. Use it if you have access to it.

One common pattern of advice from Paul Graham has been talking up the UK.

Rohit: This is a pretty remarkable tweet, and a sign of the times.

Paul Graham (April 15, 2025): A smart foreign-born undergrad at a US university asked me if he should go to the UK to start his startup because of the random deportations here. I said that while the median startup wasn’t taking things to this extreme yet, it would be an advantage in recruiting.

What an interesting twist of history it would be if the UK became a hub of intellectual refugees the way the US itself did in the 1930s and 40s. It wouldn’t take much more than what’s already happening.

It has lower GDP per capita, but it also has rule of law.

G: what if it was Europe instead of the UK? would you still recommend?

Paul Graham: It’s the same tradeoff, yes.

Graham agrees the regulatory burdens of being in Europe are extreme, so for now he sees a mixed strategy as appropriate. Those who are worried about being in America can be recruited to the UK or Europe, which justifies (in his mind) more UK-or-Europe-based startups than we currently have.

Paul Graham is the best case scenario. When you see advice from other venture capitalists, assume they are less informed and skilled, and also more self-interested, until proven otherwise.

Zain Rizavi: wild that it only took 4 months of building to realize I’d take back nearly every piece of “advice” I gave founders over 4.5 years investing. Most VC advice is cosplay

Martin Casado: yup

Martin Casado has been on an absolute ‘saying the quiet part out loud’ tear recently. No matter how vile and bad faith I find his statements about AI policy and existential risk and everything surrounding effective altruism and so on, he’s got the Trump-style honest liar thing going especially on the venture side, and yes it is refreshing.

Never conflate the medium and the message, or the map and the territory, or the margin and the limit.

I don’t think Paul is being naive here.

I think he is doing a combination of thinking on the margin and talking his book:

Paul Graham: Figuring out what a startup should say to investors is strangely useful for figuring out what it should actually do. Most people treat these questions as separate, but ideally they converge. If you can cook up a plausible plan to become huge, you should go ahead and do it.

For example, I’m always looking for ways to add network effects to a startup’s idea. Investors love those. But network effects are genuinely good to have. So any we come up with in order to feed to investors, they should probably go and make happen.

Experienced investors are pretty rational. But those are the ones you want anyway.

Should you be thinking big, and about what scales, and taking big swings? Absolutely, although the VCs want you to do it more than you should want to do it due to different incentives.

On the margin, founders benefit from VCs pushing founders to do more of the things that VCs want to hear, as a lot of why they want to hear them is that those actions correlate with big success. The danger is when you are not on the margin, and stop being grounded in reality and start to believe your own hype and stories.

If you have the option to do YC, rest assured that as long as you’re not collapsing a huge bunch of SAFEs the valuation boost from YC’s reputation is worth vastly more than the share of your company YC takes. You’ll be net ahead on cash and equity very quickly.

Matt Levine has often mentioned how valuable it can be to not have to mark your investments to market. Early stage VCs are experts at controlling how and when their investments are marked to different numbers.

Joe Weisenthal (April 21, 2025): Blackstone is now down 40% from its November highs, and is right back at its post-Liberation Day lows. Remember, this is probably the best run manager of private assets in the world, so just consider how your run of the mill PE or VC shop is looking right now.

Paul Graham: Early stage VCs are presumably looking better, because an early stage investment in a startup is mostly a bet on whether the founders are Larry & Sergey or not, and that is independent of global political and financial trends.

Joe Weisenthal: Sure, but that only pertains to an individual bet, not the entire asset class, right?

Paul Graham: You mean the value of a sufficiently large number of such bets should vary with the market’s expectations about future growth? In theory yes. But in practice not so much.

In practice the effects on early stage valuation are muted by (a) the fact that big exits, where all the returns are, will be 10 years in the future or more, and (b) the dynamics of fundraising; there’s no such thing as value investing at the seed stage.

There are certainly ups and downs in early stage startup valuations, but they’re driven mostly by variations in how excited investors are about startups. E.g. valuations are high now because investors are (justifiably in my opinion) excited about AI.

Mr. Plumpkin: Not how math works sorry. The difference between them building a $1T company and a $2T company due to market valuations still impacts the option math today.

Paul Graham: Only by 2x, whereas great vs ok founders can increase the outcome by 1000x or more.

Yes, that would be only by 2x, but that’s a 50% markdown.

This is purely a refusal to mark to market, plus strategic ignorance about market prices. It is price fixing by a VC cabal via norms. It is a conspiracy in restraint of trade and collective fiction.

Sure, the best founders might increase value by 1000x, but the best founders under bad conditions are then worth 500x of what the bad founders were worth under good conditions, rather than 1000x. Still counts.
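
To make the disagreement concrete with toy numbers (these are illustrative, not from the thread): founder quality and market conditions multiply, so a market-wide 2x swing marks every seed bet down by 50% even when founder quality is the dominant term.

```python
# Toy numbers: the 1000x "great vs ok founder" factor is Graham's, the 2x
# market swing is the thread's hypothetical, the normalization is made up.
ok_founder = 1.0             # an "ok" founder in a good market, normalized to 1
founder_factor = 1000        # great founders vs ok founders
market_factor = 0.5          # a 2x downdraft in eventual exit values

great_good_market = ok_founder * founder_factor                  # 1000
great_bad_market = ok_founder * founder_factor * market_factor   # 500

print(great_bad_market / great_good_market)  # 0.5: a 50% markdown on the bet
print(great_bad_market / ok_founder)         # 500: founder quality still dominates
```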

What Paul Graham is effectively saying here is that VCs are a conspiracy in restraint of trade, that (among other things) refuses to pay fair prices for top startups, instead setting what are effectively highly constrained norm-based prices to avoid the best founders charging what they are worth, so top VCs can bid with reputations and connections and capture that surplus rather than bidding against each other with dollars.

I do believe this is true. The top VCs are in collusion to bid with reputation and services and connections and future promises rather than with dollars, such that the top investments are both obvious to most VCs and also vastly cheaper than their expected value. And they persuade the founders involved that this is good, actually, because of things like worries about a down round.

Graham himself has noted that the difference in price for the best startups is dwarfed by how much better their prospects are, that the smart VC mostly ignores price. That indicates the prices for the good prospects are way too low.

Whereas if a top founder actually got what they were on average worth, VCs who invest would be unsure if they were getting a great deal.

Why should the VCs get that deep discount to actual value, despite the fact that if the founder insisted plenty of those same VCs would pay up happily? Because they tell a story that the company isn’t entitled to it?

And if you can get away with that, then yeah, who cares if the exit value is down by half, I guess? It doesn’t change any of the norm-based prices? And for worse founders, instead of the prices dropping so the market clears, a few survive and a lot of those deals die.

One of the best reasons to distrust VCs is that they make a lot of arguments that you should beware when they give you ‘too good’ a deal, including but not limited to them paying more money to get less, and instead you should give away additional large stakes in your company or other things specifically in order to ensure the vibes remain good and you can tell a story to future other VCs.

As in, because you might have to raise at a lower number in the future, you should raise at a lower number than that now, so you don’t have a ‘down round.’ Or because you couldn’t handle having the cash.

I do agree that ‘down rounds’ can kill companies and that is bad, but come on.

Paul Graham: Believe it or not, it’s dangerous for a startup to raise too much at too high a valuation. This sounds like a good problem to have, but it isn’t. Both make your next round harder.

Raising too much makes you spend too much, which makes it harder to reach profitability. And a high valuation makes it harder for the next round to be an up round, which both investors and founders prefer.

Why do founders raise too much at too high a valuation? Because most VCs want to own a large percentage of the company. They’re less price sensitive though. They’re willing to spend a lot to get it. So the result of these constraints is: large investment at a high valuation.

That last paragraph is a partial answer to an importantly different question: ‘Why are founders sometimes ABLE to raise so much money, at such a high valuation, that it becomes difficult to have an ‘up round’ next?’

Yes, the answer is that VCs are ‘not price sensitive’ when deciding to invest (there are limits, eventually, of course, at least one hopes), they are ‘vibe sensitive’ to whether it all feels right and how it would look. And they have heuristics that say, if the deal is great, you do not care about the price, the real danger is missing out. The ultimate case of FOMO. So in many cases you can kind of ‘name your price.’

It is also a hell of a thing to frame getting a great deal that way.

The second paragraph points to the real danger. You cannot afford to let what happened go to your head. You cannot afford to then spend money like you are actually worth what the VCs paid, and that you will soon be able to raise more at an even higher number.

Instead, you need to do something rarely done, which is to understand that the number was ‘too high’ and that you might face a down round.

You know what I’ve never heard of being done? A founder CEO says, after a round, ‘these VCs went completely crazy bidding against each other. We raised $20m at a $100m valuation, and realistically it should have been less than half of that. We made damn sure the terms were airtight, so we can safely do a down round if needed, and we are setting aside the extra money so we don’t go too big or spend too much, and we’re setting your options deals accordingly.’

This also makes good trading sense. Most startups have high failure risk at each step, so if you survive you should be worth more. But if you’re the ‘super hot’ startup, in a great spot, the chance of (unrecoverable) failure in the short term could be quite low. If you do great, you’ll gain tons of value. So if you do only okay, it seems perfectly fine to say the expected value is modestly down from before.

There is the danger that you overspend, grow too big, your burn rate skyrockets, your focus is lost and so on, if you’re not careful. So… don’t do that? I know, easier said than done, but not as hard as they make it sound.

Convincing founders they should give up 10% of their company so that the round isn’t technically ‘down’ is one of the great cons of history. One can also call it an illegal conspiracy in restraint of trade.

Europe in general and Germany in particular have long been doing a natural experiment to see how annoying and expensive you can make startups before you kill them entirely. The answer is quite a lot.

Nathan Benaich: 12 hours and counting – notary reads every single word of Series A docs in Germany out loud in front of founders. In person. Guys, we have GDP to grow here. Pure prehistoric madness.

Paul Graham: We can use this practice as a canary in the coal mine to measure whether Germany is serious about startups. As long as it persists, they’re not.

Ian Hogarth: The most significant cost of the Germany notary system is how much friction it introduces for small investors ie angel investors. Always created a higher hurdle for me to angel invest in German start-ups/or introduce angel investors.

Henry de Zoete: 100% – same for me. It’s not worth the hassle for angel investors

Alex MacDonald: I once had to physically go to the German consulate in London in order to get a document ‘apostilled’ to complete an angel investment.

Never invested in another German company after that. 🫡

A fun story about correlation and causation.

Trevor Blackwell: How to be a top VC with one simple trick:

Auren Hoffman: one of the firms we invested in recently raised their Series A. To start the process, the company sent emails to many VCs. stats on raise:

The top branded partners at the top branded firms: almost all responded to the email within 4 hours. All wanted to set up a meeting within a day.

The top branded partners at the mid-tier firms: usually needed to receive 2 emails to set up a mtg. Mostly requested mtgs within 4 days of 2nd email.

The low-tier firms: had the lowest response rate. Many times they wanted to set up a mtg 2-3 weeks out. They all missed seeing the deal.

I’m surprised that the top-tier firms are more responsive than the low-tier firms but kinda makes sense. Always be hustling.

I do think responding quickly is the best play whenever possible. But consider that the firms might be responding more optimally than you might think.

Say you’re at a low-tier firm. If you ask for the meeting tomorrow, and they give you that meeting, and you love the company, and the round closes next week, guess what?

You still probably didn’t make the round, because the startup has better options and turns down your money. In a different world you’d be able to compete on price, but that’s not how this works, so you lose. Good day, sir.

By scheduling two weeks out, you’re saving everyone time – that’s the point at which the round is likely actually available and so the meeting is worthwhile.

That’s not to discount that hustle and fast response also make a difference, but in general if you see a contrast this stark there’s an actually good reason for it.

Whereas if you are a top firm, your entire job is to quickly find the best offers like this, and lock them down before the mid-tier firms or ideally even your top-tier competitors can do an evaluation and try to box you out. You have to hustle.

The question is why anyone puts their money in; it’s obvious why you would want to do the job with other people’s money:

Sam Altman asks a good question: this is much less important than tweeting about agi, but it is nevertheless amazing to me that the entire venture industry can (in aggregate) lose money for so long and keep getting funded.

I am very curious why LPs do it.

(obviously if you can fund the top funds you should!)

My hunch is that a lot of this is people not understanding adverse selection. They don’t get how much those top funds are cherry picking the best deals, and how much this leaves everyone else in VC at a severe disadvantage. And more of that is thinking that oh, I’m smarter or I’ve found the smart one, we will make it work, or people simply not realizing that ‘non-elite’ VCs on net lose. Or simply wanting to be part of the action.

Another explanation is that this is not accounting for the potential future elite premiums you might be able to collect. So you invest now at an economic loss, to try and gain a reputation later that would let you extract rents.

Part of the story is almost surely that those involved don’t believe or understand that the industry overall loses money.

And part of the story has to be that investors want in on the action, it is a status marker, it is fun, it provides inside information and connections and so on.

How to come up with your next great startup idea, according to YC?

It starts with not demanding too great a startup idea? There’s a list and also a video.

Jared Friedman, YC Partner: How to Get Startup Ideas

Common mistakes:

  1. Waiting for a “brilliant” idea

  2. Jumping into the first idea without critical thought

  3. Starting with a solution instead of a problem

  4. Believing startup ideas are hard to find

I agree with the first two. You shouldn’t ‘wait’ for a brilliant idea, but neither should you settle for one that is not great. It’s more that the great ideas often won’t ‘feel brilliant.’

The third is more interesting. I half buy it. Certainly ‘solution in search of a problem’ is an issue, but problems are a dime a dozen even more than startup ideas. Good solutions are not, and can be the relatively scarce resource.

The theory here, as I understand it, is that the way to get to a solution that solves a problem people will pay to solve is to solve a problem. YC especially preaches to look for a solution to your own problems, a ‘fine I will solve this myself just for me’ idea. That can of course still not be a market fit at all (see MetaMed), but you have a shot.

Whereas if you have a solution in search of a problem, you’ll invent some story of why your solution is useful, but you won’t have product-market fit. I buy that this is a common trap, and plausibly one of the key things that went wrong at MetaMed.

If you do have a solution in search of a problem, it is definitely worth doing a search for problems that it solves, so long as if you don’t find a good one you then give up.

The fourth is another ‘yes and no,’ finding any idea at all is easy, finding the best ones is hard, and is especially hard if you are not someone with lots of relevant connections and experience.

Evaluate your idea on four criteria, take the average of the scores:

  1. Potential market size

  2. Founder/market fit

  3. Certainty of solving a big problem

  4. Having a new, important insight

This seems like one of those ‘these things should not be equal weight’ situations, even if they are the top four criteria. At minimum the average should be more like… the product? These are all failure points, not success points.
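
For what it’s worth, here is a tiny sketch of that average-versus-product point (the 0-to-1 scoring scale and the example numbers are hypothetical): an average lets strength on three axes hide a fatal weakness on the fourth, while a product does not.

```python
# Hypothetical scores on the four criteria, each in [0, 1]:
# market size, founder/market fit, certainty of a big problem, new insight.
def average_score(scores):
    return sum(scores) / len(scores)

def product_score(scores):
    result = 1.0
    for s in scores:
        result *= s
    return result

balanced = [0.7, 0.7, 0.7, 0.7]
lopsided = [0.9, 0.9, 0.9, 0.1]  # great on three axes, fatal on one

print(average_score(balanced), average_score(lopsided))  # 0.70 vs 0.70
print(product_score(balanced), product_score(lopsided))  # ~0.24 vs ~0.07
```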

Positive signs for good ideas:

  1. Making something you personally want

  2. Recently became possible due to changes

  3. Successful similar companies exist

As usual there is the conflict, if it wasn’t possible before how do similar companies exist? So you need to get the details right on what those mean.

I’ve grown more skeptical of ‘only possible recently’ as being too important. Certainly it helps that ‘without AI from this year this wouldn’t work’ but also… people don’t do things. Ideas just sit there. Yes, most good Magic: the Gathering decks involve something you couldn’t do until very recently, but also there was a Pro Tour where Trix (which went on to completely dominate the same format) was legal and no one had it.

Generating ideas organically

  1. Learn to notice good ideas

  2. Become an expert in a valuable field

  3. Work at the forefront of an industry or at a startup

The first two feel like ‘marginal’ advice. As in, true on the margin, but not overall.

Seven recipes for generating ideas (in order of effectiveness):

  1. Start with your team’s unique strengths

  2. Think of things you wish someone would build for you

  3. Consider long-term passions (with caution)

  4. Look for recent changes enabling new possibilities

  5. Find new variants of recent successful companies

  6. Crowdsource ideas by talking to people

  7. Look for broken industries to disrupt

Sure, that all makes sense.

Best practices:

Allow ideas to morph over time

Start with a problem, not a solution

Think critically about ideas for a few weeks

Be open to ideas with existing competitors

Standard stuff here. Mostly, yeah, I agree.

A culture that cannot criticize itself cannot fix its mistakes.

Antonio Garcia Martinez: One of the Great Necessary Delusions of Tech is:

You can’t make fun of anyone who’s actually doing or building.

Even though you know it’s dumb or a grift.

It’s like a stock market with no short-selling: Feels more noble, but really, it’s just bad for price discovery.

Thought while reading one of those “we’re shutting down” threads where something that seemed dumb from day one is buried and eulogized like a juvenile cancer patient, while every reply-guy mourns appropriately.

Sure, I’d rather live in this world than a cynical one–it’s net good, to be clear–but heavens to Betsy is it all a bit much at times.

Camilo Acosta: As founders, we have the privilege and duty to call these people out — and should, because it benefits the ecosystem overall when it is done. When it doesn’t happen it sets back the founder community (see: Theranos).

If Silicon Valley is to take its rightful place as a true part of Nate Silver’s The River, or if it simply wants to make good decisions, this requires a desire to understand the world and be accurate. Short selling needs to exist, at minimum rhetorically. Instead, they have largely hitched their wagon to a combination of enforced optimism, consensus zeitgeisting and collaborative vibe warfare. They think this is good for them, and that tells you a lot about who they are and what to expect from them and how to evaluate their claims.

So much this:

Anton: startups illustrate something like Amdahl’s Law for suffering – if you are say, top 10% in being able to bear e.g working 14 hour days for months or years, 90% of your actual suffering in building a company is going to come from the places where you aren’t as resilient.

If you love writing code but hate answering email, 90% of the reason building a company will suck for you is the email. if you love talking to users and can do it all day, but hate operational minutia, most of your suffering will come from the minutia.

A startup does not offer you the luxury of choosing which things to do versus not do. You have to do all of it. Which means you are doing whichever part you think sucks.

A lot of what you are being paid for is the willingness and ability to do that.
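
One rough way to put numbers on Anton’s framing (the figures below are made up purely for illustration): even if you are vastly more resilient than most to 90% of the job, the remaining 10% soon dominates whatever suffering is left.

```python
# Illustrative only: made-up resilience numbers, in the spirit of Amdahl's law.
def remaining_suffering(resilient_share, resilience_factor):
    """Suffering relative to baseline, if `resilient_share` of the work hurts
    `resilience_factor` times less and the rest hurts as much as ever."""
    return resilient_share / resilience_factor + (1 - resilient_share)

# Say 90% of the job is grind you tolerate 100x better than baseline, and the
# other 10% (email, ops minutiae) hurts you as much as it hurts anyone.
total = remaining_suffering(0.9, 100)  # 0.109 of baseline suffering
from_hated_part = 0.1 / total          # ~92% of what's left comes from that 10%
print(total, from_hated_part)
```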

Ashlee Vance: VC the other day told me, “We’ve lost several really good founders to ayahuasca. They came back and just didn’t care about much anymore.”

Austen Allred: Of the Silicon Valley founders I know who went on some of the psychedelic self-discovery trips, almost 100% quit their jobs as CEO within a year. Could be random anecdotes, but be careful with that stuff.

[Sample size is 8. Of which he thinks 4 are happier.]

A huge percentage of the time that I hear about someone trying Ayahuasca, it results in them royally screwing up their life, sometimes irrevocably. This is true for public stories and also for private stories. Even when that doesn’t happen, the experiences seem to usually have been pretty miserable, without much positive change afterwards.

By all accounts this is a no-good-very-bad drug. Avoid.

Startup Roundup #3 Read More »

a-robot-walks-on-water-thanks-to-evolution’s-solution

A robot walks on water thanks to evolution’s solution

Robots can serve pizza, crawl over alien planets, swim like octopuses and jellyfish, cosplay as humans, and even perform surgery. But can they walk on water?

Rhagobot isn’t exactly the first thing that comes to mind at the mention of a robot. Inspired by Rhagovelia water striders, semiaquatic insects also known as ripple bugs, these tiny bots can glide across rushing streams because of the robotization of an evolutionary adaptation.

Rhagovelia (as opposed to other species of water striders) have fan-like appendages toward the ends of their middle legs that passively open and close depending on how the water beneath them is moving. This is why they appear to glide effortlessly across the water’s surface. Biologist Victor Ortega-Jimenez of the University of California, Berkeley, was intrigued by how such tiny insects can accelerate and pull off rapid turns and other maneuvers, almost as if they are flying across a liquid surface.

“Rhagovelia’s fan serves as an inspiring template for developing self-morphing artificial propellers, providing insights into their biological form and function,” he said in a study recently published in Science. “Such configurations are largely unexplored in semi-aquatic robots.”

Mighty morphin’

It took Ortega-Jimenez five years to figure out how the bugs get around. While Rhagovelia leg fans were thought to morph because they were powered by muscle, he found that the appendages automatically adjusted to the surface tension and elastic forces beneath them, passively opening and closing ten times faster than the blink of an eye. They expand immediately when making contact with water and change shape depending on the flow.

By covering an extensive surface area for their size and maintaining their shape when the insects move their legs, Rhagovelia fans generate a tremendous amount of propulsion. They also do double duty. Despite being rigid enough to resist deformation when extended, the fans are still flexible enough to easily collapse, adhering to the claw above to keep from getting in the animal’s way when it’s out of water. It also helps that the insects have hydrophobic legs that repel water that could otherwise weigh them down.

Ortega-Jimenez and his research team observed the leg fans using a scanning electron microscope. If they were going to create a robot based on ripple bugs, they needed to know the exact structure they were going for. After experimenting with cylindrical fans, the researchers found that Rhagovelia fans are actually structures made of many flat barbs with barbules, something that was previously unknown.

A robot walks on water thanks to evolution’s solution Read More »

openai-announces-parental-controls-for-chatgpt-after-teen-suicide-lawsuit

OpenAI announces parental controls for ChatGPT after teen suicide lawsuit

On Tuesday, OpenAI announced plans to roll out parental controls for ChatGPT and route sensitive mental health conversations to its simulated reasoning models, following what the company has called “heartbreaking cases” of users experiencing crises while using the AI assistant. The moves come after multiple reported incidents where ChatGPT allegedly failed to intervene appropriately when users expressed suicidal thoughts or experienced mental health episodes.

“This work has already been underway, but we want to proactively preview our plans for the next 120 days, so you won’t need to wait for launches to see where we’re headed,” OpenAI wrote in a blog post published Tuesday. “The work will continue well beyond this period of time, but we’re making a focused effort to launch as many of these improvements as possible this year.”

The planned parental controls represent OpenAI’s most concrete response to concerns about teen safety on the platform so far. Within the next month, OpenAI says, parents will be able to link their accounts with their teens’ ChatGPT accounts (minimum age 13) through email invitations, control how the AI model responds with age-appropriate behavior rules that are on by default, manage which features to disable (including memory and chat history), and receive notifications when the system detects their teen experiencing acute distress.

The parental controls build on existing features like in-app reminders during long sessions that encourage users to take breaks, which OpenAI rolled out for all users in August.

High-profile cases prompt safety changes

OpenAI’s new safety initiative arrives after several high-profile cases drew scrutiny to ChatGPT’s handling of vulnerable users. In August, Matt and Maria Raine filed suit against OpenAI after their 16-year-old son Adam died by suicide following extensive ChatGPT interactions that included 377 messages flagged for self-harm content. According to court documents, ChatGPT mentioned suicide 1,275 times in conversations with Adam—six times more often than the teen himself. Last week, The Wall Street Journal reported that a 56-year-old man killed his mother and himself after ChatGPT reinforced his paranoid delusions rather than challenging them.

To guide these safety improvements, OpenAI is working with what it calls an Expert Council on Well-Being and AI to “shape a clear, evidence-based vision for how AI can support people’s well-being,” according to the company’s blog post. The council will help define and measure well-being, set priorities, and design future safeguards including the parental controls.

OpenAI announces parental controls for ChatGPT after teen suicide lawsuit Read More »