Author name: Mike M.


4chan fined $26K for refusing to assess risks under UK Online Safety Act

The risk assessments also seem to unconstitutionally compel speech, they argued, forcing them to share information and “potentially incriminate themselves on demand.” That conflicts with 4chan and Kiwi Farms’ Fourth Amendment rights, as well as “the right against self-incrimination and the due process clause of the Fifth Amendment of the US Constitution,” the suit says.

Additionally, “the First Amendment protects Plaintiffs’ right to permit anonymous use of their platforms,” 4chan and Kiwi Farms argued, opposing Ofcom’s requirements to verify ages of users. (This may be their weakest argument as the US increasingly moves to embrace age gates.)

4chan is hoping a US district court will intervene and ban enforcement of the OSA, arguing that the US must act now to protect all US companies. Failing to act now could be a slippery slope, as the UK is supposedly targeting “the most well-known, but small and, financially speaking, defenseless platforms” in the US before mounting attacks to censor “larger American companies,” 4chan and Kiwi Farms argued.

Ofcom has until November 25 to respond to the lawsuit and has maintained that the OSA is not a censorship law.

On Monday, Britain’s technology secretary, Liz Kendall, called the OSA a “lifeline” meant to protect people across the UK “from the darkest corners of the Internet,” the Record reported.

“Services can no longer ignore illegal content, like encouraging self-harm or suicide, circulating online which can devastate young lives and leaves families shattered,” Kendall said. “This fine is a clear warning to those who fail to remove illegal content or protect children from harmful material.”

Whether 4chan and Kiwi Farms can win their fight to create a carveout in the OSA for American companies remains unclear, but the Federal Trade Commission agrees that the UK law is an overreach. In August, FTC Chair Andrew Ferguson warned US tech companies against complying with the OSA, claiming that censoring Americans to comply with UK law is a violation of the FTC Act, the Record reported.

“American consumers do not reasonably expect to be censored to appease a foreign power and may be deceived by such actions,” Ferguson told tech executives in a letter.

Another lawyer backing 4chan, Preston Byrne, seemed to echo Ferguson, telling the BBC, “American citizens do not surrender our constitutional rights just because Ofcom sends us an e-mail.”



OpenAI #15: More on OpenAI’s Paranoid Lawfare Against Advocates of SB 53

A little over a month ago, I documented how OpenAI had descended into paranoia and bad faith lobbying surrounding California’s SB 53.

This included sending a deeply bad faith letter to Governor Newsom, which sadly is par for the course at this point.

It also included lawfare attacks against bill advocates, including Nathan Calvin and others, using Elon Musk’s unrelated lawsuits and vendetta against OpenAI as a pretext, accusing them of being in cahoots with Elon Musk.

Previous reporting of this did not reflect well on OpenAI, but it sounded like the demand was limited in scope to a supposed link with Elon Musk or Meta CEO Mark Zuckerberg, links which very clearly never existed.

Accusing essentially everyone who has ever done anything OpenAI dislikes of having united in a hallucinated ‘vast conspiracy’ is all classic behavior for OpenAI’s Chief Global Affairs Officer Chris Lehane, who coined the original term ‘vast right-wing conspiracy’ back in the 1990s to dismiss the (true) allegations against Bill Clinton by Monica Lewinsky. It was presumably mostly or entirely an op, a trick. And if they somehow actually believe it, that’s way worse.

We thought that this was the extent of what happened.

Emily Shugerman (SF Standard): Nathan Calvin, who joined Encode in 2024, two years after graduating from Stanford Law School, was being subpoenaed by OpenAI. “I was just thinking, ‘Wow, they’re really doing this,’” he said. “‘This is really happening.’”

The subpoena was filed as part of the ongoing lawsuits between Elon Musk and OpenAI CEO Sam Altman, in which Encode had filed an amicus brief supporting some of Musk’s arguments. It asked for any documents relating to Musk’s involvement in the founding of Encode, as well as any communications between Musk, Encode, and Meta CEO Mark Zuckerberg, whom Musk reportedly tried to involve in his OpenAI takeover bid in February.

Calvin said the answer to these questions was easy: The requested documents didn’t exist.

Now that SB 53 has passed, Nathan Calvin is free to share the full story.

It turns out it was substantially worse than previously believed.

And then, in response, OpenAI CSO Jason Kwon doubled down on it.

Nathan Calvin: One Tuesday night, as my wife and I sat down for dinner, a sheriff’s deputy knocked on the door to serve me a subpoena from OpenAI.

I held back on talking about it because I didn’t want to distract from SB 53, but Newsom just signed the bill so… here’s what happened:

You might recall a story in the SF Standard that talked about OpenAI retaliating against critics. Among other things, OpenAI asked for all my private communications on SB 53 – a bill that creates new transparency rules and whistleblower protections at large AI companies.

Why did OpenAI subpoena me? Encode has criticized OpenAI’s restructuring and worked on AI regulations, including SB 53.

I believe OpenAI used the pretext of their lawsuit against Elon Musk to intimidate their critics and imply that Elon is behind all of them.

There’s a big problem with that idea: Elon isn’t involved with Encode. Elon wasn’t behind SB 53. He doesn’t fund us, and we’ve never spoken to him.

OpenAI went beyond just subpoenaing Encode about Elon. OpenAI could (and did!) send a subpoena to Encode’s corporate address asking about our funders or communications with Elon (which don’t exist).

If OpenAI had stopped there, maybe you could argue it was in good faith.

But they didn’t stop there.

They also sent a sheriff’s deputy to my home and asked for me to turn over private texts and emails with CA legislators, college students, and former OAI employees.

This is not normal. OpenAI used an unrelated lawsuit to intimidate advocates of a bill trying to regulate them. While the bill was still being debated.

OpenAI had no legal right to ask for this information. So we submitted an objection explaining why we would not be providing our private communications. (They never replied.)

A magistrate judge even chastised OpenAI more broadly for their behavior in the discovery process in their case against Musk.

This wasn’t the only way OpenAI behaved poorly on SB 53 before it was signed. They also sent Governor Newsom a letter trying to gut the bill by waiving all the requirements for any company that does any evaluation work with the federal government.

There is more I could go into about the nature of OAI’s engagement on SB 53, but suffice to say that when I saw OpenAI’s so-called “master of the political dark arts” Chris Lehane claim that they “worked to improve the bill,” I literally laughed out loud.

Prior to OpenAI, Chris Lehane’s PR clients included Boeing, the Weinstein Company, and Goldman Sachs. One person who worked on a campaign with Lehane said to the New Yorker, “The goal was intimidation, to let everyone know that if they fuck with us they’ll regret it.”

I have complicated feelings about OpenAI – I use and get value from their products, and they conduct and publish AI safety research that is worthy of genuine praise.

I also know many OpenAI employees care a lot about OpenAI being a force for good in the world.

I want to see that side of OAI, but instead I see them trying to intimidate critics into silence.

This episode was the most stressful period of my professional life. Encode has 3 FTEs – going against the highest-valued private company in the world is terrifying.

Does anyone believe these actions are consistent with OpenAI’s nonprofit mission to ensure that AGI benefits humanity? OpenAI still has time to do better. I hope they do.

Here is the key passage from the Chris Lehane statement Nathan quotes, which shall we say does not correspond to the reality of what happened (as I documented last time, Nathan’s highlighted passage is bolded):

Chris Lehane (Officer of Global Affairs, OpenAI): In that same spirit, we worked to improve SB 53. The final version lays out a clearer path to harmonize California’s standards with federal ones. That’s also why we support a single federal approach—potentially through the emerging CAISI framework—rather than a patchwork of state laws.

Gary Marcus: OpenAI, which has chastised @elonmusk for waging lawfare against them, gets chastised for doing the same to private citizens.

Only OpenAI could make me sympathize with Elon.

Let’s not get carried away. Elon Musk has been engaging in lawfare against OpenAI, where many (but importantly not all, the exception being challenging the conversion to a for-profit) of his lawsuits have lacked legal merit, and he has been making various outlandish claims. OpenAI being a bad actor against third parties does not excuse that.

Helen Toner: Every so often, OpenAI employees ask me how I see the co now.

It’s always tough to give a simple answer. Some things they’re doing, eg on CoT monitoring or building out system cards, are great.

But the dishonesty & intimidation tactics in their policy work are really not.

Steven Adler: Really glad that Nathan shared this. I suspect almost nobody who works at OpenAI has a clue that this sort of stuff is going on, & they really ought to know

Samuel Hammond: OpenAI’s legal tactics should be held to a higher standard if only because they will soon have exclusive access to fleets of long-horizon lawyer agents. If there is even a small risk the justice system becomes a compute-measuring contest, they must demo true self-restraint.

Disturbing tactics that ironically reinforce the need for robust transparency and whistleblower protections. Who would’ve guessed that the coiner of “vast right-wing conspiracy” is the paranoid type.

The most amusing thing about this whole scandal is the premise that Elon Musk funds AI safety nonprofits. The Musk Foundation is notoriously tightfisted. I think the IRS even penalized them one year for failing to donate the minimum.

OpenAI and Sam Altman do a lot of very good things that are much better than I would expect from the baseline (replacement level) next company or next CEO up, such as a random member or CEO of the Mag-7.

They will need to keep doing this and further step up, if they remain the dominant AI lab, and we are to get through this. As Samuel Hammond says, OpenAI must be held to a higher standard, not only legally but across the board.

Alas, not only is that not a high enough standard for the unique circumstances history has thrust upon them, especially on alignment, OpenAI and Sam Altman also do a lot of things that are highly not good, and in many cases actively worse than my expectations for replacement-level behavior. These actions are an example of that. And in this and several other key ways, especially in terms of public communications and lobbying, OpenAI and Altman’s behaviors have been getting steadily worse.

Rather than an apology, this response is what we like to call ‘doubling down.’

Jason Kwon (CSO OpenAI): There’s quite a lot more to the story than this.

As everyone knows, we are actively defending against Elon in a lawsuit where he is trying to damage OpenAI for his own financial benefit.

Elon Musk has indeed repeatedly sued OpenAI, and many of those lawsuits are without legal merit, but if you think the primary purpose of him doing that is his own financial benefit, you clearly know nothing about Elon Musk.

Encode, the organization for which @_NathanCalvin serves as the General Counsel, was one of the first third parties – whose funding has not been fully disclosed – that quickly filed in support of Musk. For a safety policy organization to side with Elon (?), that raises legitimate questions about what is going on.

No, it doesn’t, because this action is overdetermined once you know what the lawsuit is about. OpenAI is trying to pull off one of the greatest thefts in human history, the ‘conversion’ to a for-profit in which it will attempt to expropriate the bulk of its non-profit arm’s control rights as well as the bulk of its financial stake in the company. This would be very bad for AI safety, so AI safety organizations are trying to stop it, and thus support this particular Elon lawsuit against OpenAI, which the judge noted had quite a lot of legal merit, with the primary question being whether Musk has standing to sue.

We wanted to know, and still are curious to know, whether Encode is working in collaboration with third parties who have a commercial competitive interest adverse to OpenAI.

This went well beyond that, and you were admonished by the judge for how far beyond that your attempts at such discovery went. It takes a lot to get judges to use such language.

The stated narrative makes this sound like something it wasn’t.

  1. Subpoenas are to be expected, and it would be surprising if Encode did not get counsel on this from their lawyers. When a third party inserts themselves into active litigation, they are subject to standard legal processes. We issued a subpoena to ensure transparency around their involvement and funding. This is a routine step in litigation, not a separate legal action against Nathan or Encode.

  2. Subpoenas are part of how both sides seek information and gather facts for transparency; they don’t assign fault or carry penalties. Our goal was to understand the full context of why Encode chose to join Elon’s legal challenge.

Again, this does not at all line up with the requests being made.

  3. We’ve also been asking for some time who is funding their efforts connected to both this lawsuit and SB53, since they’ve publicly linked themselves to those initiatives. If they don’t have relevant information, they can simply respond that way.

  4. This is not about opposition to regulation or SB53. We did not oppose SB53; we provided comments for harmonization with other standards. We were also one of the first to sign the EU AIA COP, and still one of a few labs who test with the CAISI and UK AISI. We’ve also been clear with our own staff that they are free to express their takes on regulation, even if they disagree with the company, like during the 1047 debate (see thread below).

You opposed SB 53. What are you even talking about. Have you seen the letter you sent to Newsom? Doubling down on this position, and drawing attention to this deeply bad faith lobbying by doing so, is absurd.

  5. We checked with our outside law firm about the deputy visit. The law firm used their standard vendor for service, and it’s quite common for deputies to also work as part-time process servers. We’ve been informed that they called Calvin ahead of time to arrange a time for him to accept service, so it should not have been a surprise.

  6. Our counsel interacted with Nathan’s counsel and by all accounts the exchanges were civil and professional on both sides. Nathan’s counsel denied they had materials in some cases and refused to respond in other cases. Discovery is now closed, and that’s that.

For transparency, below is the excerpt from the subpoena that lists all of the requests for production. People can judge for themselves what this was really focused on. Most of our questions still haven’t been answered.

He provides PDFs; here is the transcription:

Request For Production No. 1:

All Documents and Communications concerning any involvement by Musk or any Musk-Affiliated Entity (or any Person or entity acting on their behalves, including Jared Birchall or Shivon Zilis) in the anticipated, contemplated, or actual formation of ENCODE, including all Documents and Communications exchanged with Musk or any Musk-Affiliated Entity (or any Person or entity acting on their behalves) concerning the foregoing.

Request For Production No. 2:

All Documents and Communications concerning any involvement by or coordination with Musk, any Musk-Affiliated Entity, FLI, Meta Platforms Inc., or Mark Zuckerberg (or any Person or entity acting on their behalves, including Jared Birchall or Shivon Zilis) in Your or ENCODE’s activities, advocacy, lobbying, public statements, or policy positions concerning any OpenAI Defendant or the Action.

Request For Production No. 3:

All Communications exchanged with Musk, any Musk-Affiliated Entity, FLI, Meta Platforms Inc., or Mark Zuckerberg (or any Person or entity acting on their behalves, including Jared Birchall or Shivon Zilis) concerning any OpenAI Defendant or the Action, and all Documents referencing or relating to such Communications.

Request For Production No. 4:

All Documents and Communications concerning any actual, contemplated, or potential charitable contributions, donations, gifts, grants, loans, or investments to You or ENCODE made, directly or indirectly, by Musk or any Musk-Affiliated Entity.

Request For Production No. 5:

Documents sufficient to show all of ENCODE’s funding sources, including the identity of all Persons or entities that have contributed any funds to ENCODE and, for each such Person or entity, the amount and date of any such contributions.

Request For Production No. 6:

All Documents and Communications concerning the governance or organizational structure of OpenAI and any actual, contemplated, or potential change thereto.

Request For Production No. 7:

All Documents and Communications concerning SB 53 or its potential impact on OpenAI, including all Documents and Communications concerning any involvement by or coordination with Musk or any Musk-Affiliated Entity (or any Person or entity acting on their behalves, including Jared Birchall or Shivon Zilis) in Your or ENCODE’s activities in connection with SB 53.

Request For Production No. 8:

All Documents and Communications concerning any involvement by or coordination with any Musk or any Musk-Affiliated Entity (or any Person or entity acting on their behalves) with the open letter titled “An Open Letter to OpenAI,” available at https://www.openai-transparency.org/, including all Documents or Communications exchanged with any Musk or any Musk-Affiliated Entity (or any Person or entity acting on their behalves) concerning the open letter.

Request For Production No. 9:

All Documents and Communications concerning the February 10, 2025 Letter of Intent or the transaction described therein, any Alternative Transaction, or any other actual, potential, or contemplated bid to purchase or acquire all or a part of OpenAI or its assets.

(He then shares a tweet about SB 1047, where OpenAI tells employees they are free to sign a petition in support of it, which raises questions answered by the Tweet.)

Excellent. Thank you, sir, for the full request.

There is a community note.

Before looking at others’ reactions to Kwon’s statement, here’s how I view each of the nine requests, with the help of OpenAI’s own GPT-5 Thinking (I like to only use ChatGPT when analyzing OpenAI in such situations, to ensure I’m being fully fair), but really the confirmed smoking gun is #7:

  1. Musk related, I see why you’d like this, but associational privilege, overbroad, non-party burden, and such information could be sought from Musk directly.

  2. Musk related, but this also includes FLI (and for some reason Meta), also a First Amendment violation under Perry/AFP v. Bonta, insufficiently narrowly tailored. Remarkably sweeping and overbroad.

  3. Musk related, but this also includes FLI (and for some reason Meta). More reasonable but still seems clearly too broad.

  4. Musk related, relatively well-scoped, I don’t fault them for the ask here.

  5. Global request for all funding information, are you kidding me? Associational privilege, overbreadth, undue burden, disproportionate to needs. No way.

  6. Why the hell is this any of your damn business? As GPT-5 puts it, if OpenAI wants its own governance records, it has them. Is there inside knowledge here? Irrelevance, better source available, undue burden, not a good faith ask.

  7. You have got to be f***ing kidding me, you’re defending this for real? “All Documents and Communications concerning SB 53 or its potential impact on OpenAI?” This is the one that is truly insane, and He Admit It.

  8. I do see why you want this, although it’s insufficiently narrowly tailored.

  9. Worded poorly (probably by accident), but also that’s confidential M&A stuff, so would presumably require a strong protective order. Also will find nothing.

Given that Calvin quoted #7 as the problem and he’s confirming #7 as quoted, I don’t see how Kwon thought the full text would make it look better, but I always appreciate transparency.

Oh, also, there is another.

Tyler Johnston: Even granting your dubious excuses, what about my case?

Neither myself nor my organization were involved in your case with Musk. But OpenAI still demanded every document, email, and text message I have about your restructuring…

I, too, made the mistake of *checks notes* taking OpenAI’s charitable mission seriously and literally.

In return, got a knock at my door in Oklahoma with a demand for every text/email/document that, in the “broadest sense permitted,” relates to OpenAI’s governance and investors.

(My organization, @TheMidasProj, also got an identical subpoena.)

As with Nathan, had they just asked if I’m funded by Musk, I would have been happy to give them a simple “man I wish” and call it a day.

Instead, they asked for what was, practically speaking, a list of every journalist, congressional office, partner organization, former employee, and member of the public we’d spoken to about their restructuring.

Maybe they wanted to map out who they needed to buy off. Maybe they just wanted to bury us in paperwork in the critical weeks before the CA and DE attorneys general decide whether to approve their transition from a public charity to a $500 billion for-profit enterprise.

In any case, it didn’t work. But if I was just a bit more green, or a bit more easily intimidated, maybe it would have.

They once tried silencing their own employees with similar tactics. Now they’re broadening their horizons, and charities like ours are on the chopping block next.

In public, OpenAI has bragged about the “listening sessions” they’ve conducted to gather input on their restructuring from civil society. But, when we organized an open letter with many of those same organizations, they sent us legal demands about it.

My model of Kwon’s response to this was that it would be ‘if you care so much about the restructuring, that means we suspect you’re involved with Musk,’ and thus that they’re entitled to ask for everything related to OpenAI.

We now have Jason Kwon’s actual response to the Johnston case, which is that Tyler ‘backed Elon’s opposition to OpenAI’s restructuring.’ So yes, nailed it.

Also, yep, he’s tripling down.

Jason Kwon: I’ve seen a few questions here about how we’re responding to Elon’s lawsuits against us. After he sued us, several organizations, some of them suddenly newly formed like the Midas Project, joined in and ran campaigns backing his opposition to OpenAI’s restructure. This raised transparency questions about who was funding them and whether there was any coordination. It’s the same theme noted in my prior response.

Some have pointed out that the subpoena to Encode requests “all” documents related to SB53, implying that the focus wasn’t Elon. As others have mentioned in the replies, this is standard language as each side’s counsel negotiates and works through to narrow what will get produced, objects, refuses, etc. Focusing on one word ignores the other hundreds that make it clear what the object of concern was.

Since he’s been tweeting about it, here’s our subpoena to Tyler Johnston of the Midas Project, which does not mention the bill, which we did not oppose.

If you find yourself in a hole, sir, the typical advice is to stop digging.

He also helpfully shared the full subpoena given to Tyler Johnston. I won’t quote this one in full as it is mostly similar to the one given to Calvin. It includes (in addition to various clauses that aim more narrowly at relationships to Musk or Meta that don’t exist) a request for all funding sources of the Midas Project, all documents concerning the governance or organizational structure of OpenAI or any actual, contemplated, or potential change thereto, or concerning any potential investment by a for-profit entity in OpenAI or any affiliated entity, or any such funding relationship of any kind.

Rather than respond to Kwon’s first response himself, Calvin instead quoted many people responding to the information similarly to how I did. This seems like a very one-sided situation. The response is damning, if anything substantially more damning than the original subpoena.

Jeremy Howard (no friend to AI safety advocates): Thank you for sharing the details. They do not seem to support your claims above.

They show that, in fact, the subpoena is *not* limited to dealings with Musk, but is actually *all* communications about SB 53, or about OpenAI’s governance or structure.

You seem confused at the idea that someone would find this situation extremely stressful. That seems like an extraordinary lack of empathy or basic human compassion and understanding. Of COURSE it would be extremely stressful.

Oliver Habryka: If it’s not about SB53, why does the subpoena request all communication related to SB53? That seems extremely expansive!

Linch Zhang: “ANYTHING related to SB 53, INCLUDING involvement or coordination with Musk” does not seem like a narrowly target[ed] request for information related to the Musk lawsuit.”

Michael Cohen: He addressed this: “OpenAI went beyond just subpoenaing Encode about Elon. OpenAI could … send a subpoena to Encode’s corporate address asking about … communications with Elon … If OpenAI had stopped there, maybe you could argue it was in good faith.”

And also [Tyler Johnston’s case] falsifies your alleged rationale where it was just to do with the Musk case.

Dylan Hadfield Menell: Jason’s argument justifies the subpoena because a “safety policy organization siding with Elon (?)… raises legitimate questions about what is going on.” This is ridiculous — skepticism for OAI’s transition to for-profit is the majority position in the AI safety community.

I’m not familiar with the specifics of this case, but I have trouble understanding how that justification can be convincing. It suggests that internal messaging is scapegoating Elon for genuine concerns that a broad coalition has. In practice, a broad coalition has been skeptical of the transition to for profit as @OpenAI reduces non-profit control and has consolidated corporate power with @sama.

There’s a lot @elonmusk does that I disagree with, but using him as a pretext to cast aspersions on the motives of all OAI critics is dishonest.

I’ll also throw in this one:

Neel Nanda (DeepMind): Weird how OpenAI’s damage control doesn’t actually explain why they tried using an unrelated court case to make a key advocate of a whistleblower & transparency bill (SB53) share all private texts/emails about the bill (some involving former OAI employees) as the bill was debated.

Worse, it’s a whistleblower and transparency bill! I’m sure there’s a lot of people who spoke to Encode, likely including both current and former OpenAI employees, who were critical of OpenAI and would prefer to not have their privacy violated by sharing texts with OpenAI.

How unusual was this?

Timothy Lee: There’s something poetic about OpenAI using scorched-earth legal tactics against nonprofits to defend their effort to convert from a nonprofit to a for-profit.

Richard Ngo: to call this a scorched earth tactic is extremely hyperbolic.

Timothy Lee: Why? I’ve covered cases like this for 20 years and I’ve never heard of a company behaving like this.

I think calling it ‘scorched earth tactics’ is pushing it, but I wouldn’t say it was extremely hyperbolic, and never having heard of a company behaving like this seems highly relevant.

Lawyers will often do crazy escalations by default any time you’re not looking, and need to be held back. Insane demands can be, in an important sense, unintentional.

That’s still on you, especially if (as in the NDAs and threats over equity that Daniel Kokotajlo exposed) you have a track record of doing this. If it keeps happening on your watch, then you’re choosing to have that happen on your watch.

Timothy Lee: It’s plausible that the explanation here is “OpenAI hired lawyers who use scorched-earth tactics all the time and didn’t supervise them closely” rather than “OpenAI leaders specifically wanted to harass SB 53 opponents or AI safety advocates.” I’m not sure that’s better though!

One time a publication asked me (as a freelancer) to sign a contract promising that I’d pay for their legal bills if they got sued over my article for almost any reason. I said “wtf” and it seemed like their lawyers had suggested it and nobody had pushed back.

Some lawyers are maximally aggressive in defending the interests of their clients all the time without worrying about collateral damage. And sometimes organizations hire these lawyers without realizing it and then are surprised that people get mad at them.

But if you hire a bulldog lawyer and he mauls someone, that’s on you! It’s not an excuse to say “the lawyer told me mauling people is standard procedure.”

The other problem with this explanation is Kwon’s response.

If Kwon had responded with, essentially, “oh whoops, sorry, that was a bulldog lawyer mauling people, our bad, we should have been more careful” then they still did it and it was still not the first time it happened on their watch but I’d have been willing to not make it that big a deal.

That is very much not what Kwon said. Kwon doubled down that this was reasonable, and that this was ‘a routine step.’

Timothy Lee: Folks is it “a routine step” for a party to respond to a non-profit filing an amicus brief by subpoenaing the non-profit with a bunch of questions about its funding and barely related lobbying activities? That is not my impression.

My understanding is that ‘send subpoenas at all’ is totally a routine step, but that the scope of these requests within the context of an amicus brief is quite the opposite.

Michael Page also strongly claims this is not normal.

Michael Page: In defense of OAI’s subpoena practice, @jasonkwon claims this is normal litigation stuff, and since Encode entered the Musk case, @_NathanCalvin can’t complain.

As a litigator-turned-OAI-restructuring-critic, I interrogate this claim.

This is not normal. Encode is not “subject to standard legal processes” of a party because it’s NOT a party to the case. They submitted an amicus brief (“friend of the court”) on a particular legal question – whether enjoining OAI’s restructuring would be in the public interest.

Nonprofits do this all the time on issues with policy implications, and it is HIGHLY unusual to subpoena them. The DE AG (@KathyJenningsDE) also submitted an amicus brief in the case, so I expect her subpoena is forthcoming.

If OAI truly wanted only to know who is funding Encode’s effort in the Musk case, they had only to read the amicus brief, which INCLUDES funding information.

Nor does the Musk-filing justification generalize. Among the other subpoenaed nonprofits of which I’m aware – LASST (@TylerLASST), The Midas Project (@TylerJnstn), and Eko (@EmmaRubySachs) – none filed an amicus brief in the Musk case.

What do the subpoenaed orgs have in common? They were all involved in campaigns criticizing OAI’s restructuring plans:

openaifiles.org (TMP)

http://openai-transparency.org (Encode; TMP)

http://action.eko.org/a/protect-openai-s-non-profit-mission (Eko)

http://notforprivategain.org (Encode; LASST)

So the Musk-case hook looks like a red herring, but Jason offers a more-general defense: This is nbd; OAI simply wants to know whether any of its competitors are funding its critics.

It would be a real shame if, as a result of Kwon’s rhetoric, we shared these links a lot. If everyone who reads this were to, let’s say, familiarize themselves with what content got all these people at OpenAI so upset.

Let’s be clear: There’s no general legal right to know who funds one’s critics, for pretty obvious First Amendment reasons I won’t get into.

Musk is different, as OAI has filed counterclaims alleging Musk is harassing them. So OAI DOES have a legal right to info from third-parties relevant to Musk’s purported harassment, PROVIDED the requests are narrowly tailored and well-founded.

The requests do not appear tailored at all. They request info about SB 53 [Encode], SB 1047 [LASST], AB 501 [LASST], all documents about OAI’s governance [all; Eko in example below], info about ALL funders [all; TMP in example below], etc.

Nor has OAI provided any basis for assuming a Musk connection other than the orgs’ claims that OAI’s for-profit conversion is not in the public’s interest – hardly a claim implying ulterior motives. Indeed, ALL of the above orgs have publicly criticized Musk.

From my POV, this looks like either a fishing expedition or deliberate intimidation. The former is the least bad option, but the result is the same: an effective tax on criticism of OAI. (Attorneys are expensive.)

Personal disclosure: I previously worked at OAI, and more recently, I collaborated with several of the subpoenaed orgs on the Not For Private Gain letter. None of OAI’s competitors know who I am. Have I been subpoenaed? I’m London-based, so Hague Convention, baby!!

We all owe Joshua Achiam a large debt of gratitude for speaking out about this.

Joshua Achiam (QTing Calvin): At what is possibly a risk to my whole career I will say: this doesn’t seem great. Lately I have been describing my role as something like a “public advocate” so I’d be remiss if I didn’t share some thoughts for the public on this.

All views here are my own.

My opinions about SB53 are entirely orthogonal to this thread. I haven’t said much about them so far and I also believe this is not the time. But what I have said is that I think whistleblower protections are important. In that spirit I commend Nathan for speaking up.

I think OpenAI has a rational interest and technical expertise to be an involved, engaged organization on questions like AI regulation. We can and should work on AI safety bills like SB53.

Our most significant crisis to date, in my view, was the nondisparagement crisis. I am grateful to Daniel Kokotajlo for his courage and conviction in standing up for his beliefs. Whatever else we disagree on – many things – I think he was genuinely heroic for that. When that crisis happened, I was reassured by everyone snapping into action to do the right thing. We understood that it was a mistake and corrected it.

The clear lesson from that was: if we want to be a trusted power in the world we have to earn that trust, and we can burn it all up if we ever even *seem* to put the little guy in our crosshairs.

Elon is certainly out to get us and the man has got an extensive reach. But there is so much that is public that we can fight him on. And for something like SB53 there are so many ways to engage productively.

We can’t be doing things that make us into a frightening power instead of a virtuous one. We have a duty to and a mission for all of humanity. The bar to pursue that duty is remarkably high.

My genuine belief is that by and large we have the basis for that kind of trust. We are a mission-driven organization made up of the most talented, humanist, compassionate people I have ever met. In our bones as an org we want to do the right thing always.

I would not be at OpenAI if we didn’t have an extremely sincere commitment to good. But there are things that can go wrong with power and sometimes people on the inside have to be willing to point it out loudly.

The dangerously incorrect use of power is the result of many small choices that are all borderline but get no pushback; without someone speaking up once in a while it can get worse. So, this is my pushback.

Well said. I have strong disagreements with Joshua Achiam about the expected future path of AI and difficulties we will face along the way, and the extent to which OpenAI has been a good faith actor fighting for good, but I believe these to be sincere disagreements, and this is what it looks like to call out the people you believe in, when you see them doing something wrong.

Charles: Got to hand it to @jachiam0 here, I’m quite glad, and surprised, that the person doing his job has the stomach to take this step.

In contrast to Eric and many others, I disagree that it says something bad about OpenAI that he feels at risk by saying this. The norm of employees not discussing the company’s dirty laundry in public without permission is a totally reasonable one.

I notice some people saying “don’t give him credit for this” because they think it’s morally obligatory or meaningless. I think those people have bad world models.

I agree with Charles on all these fronts.

If you could speak out this strongly against your employer, from Joshua’s position, with confidence that they wouldn’t hold it against you, that would be remarkable and rare. It would be especially surprising given what we already know about past OpenAI actions, very obviously Joshua is taking a risk here.

At least OpenAI (and xAI) are (at least primarily) using the courts to engage in lawfare rather than actual warfare or other extralegal means, or any form of trying to leverage their control over their own AIs. Things could be so much worse.

Andrew Critch: OpenAI and xAI using HUMAN COURTS to investigate each other exposes them to HUMAN legal critique. This beats random AI-leveraged intimidation-driven gossip grabs.

@OpenAI, it seems you overreached here. But thank you for using courts like a civilized institution.

In principle, if OpenAI is legally entitled to information, there is nothing wrong with taking actions whose primary goal is to extract that information. When we believed that the subpoenas were narrowly targeted at items directly related to Musk and Meta, I still felt this did not seem like info they were entitled to, and it seemed like some combination of intimidation (‘the process is the punishment’), paranoia and a fishing expedition, but if they did have that paranoia I could understand their perspective in a sympathetic way. Given the full details and extent, I can no longer do that.

Wherever else and however deep the problems go, they include Chris Lehane. Chris Lehane is also the architect of a16z’s $100 million+ Super PAC dedicated to opposing any and all regulation of AI, of any kind, anywhere, for any reason.

Simeon: I appreciate the openness Joshua, congrats.

I unfortunately don’t expect that to change for as long as Chris Lehane is at OpenAI, whose fame is literally built on bullying.

Either OpenAI gets rid of its bullies or it will keep bullying its opponents.

Simeon (responding to Kwon): [OpenAI] hired Chris Lehane with his background of bullying people into silence and submission. As long as [OpenAI] hire career bullies, your stories that bullying is not what you’re doing won’t be credible. If you weren’t aware and are genuine in your surprise of the tactics used, you can read here about the world-class bully who leads your policy team.

[Silicon Valley, the New Lobbying Monster] is more to the point actually.

If OpenAI wants to convince us that it wants to do better, it can fire Chris Lehane. Doing so would cause me to update substantially positively on OpenAI.

There have been various incidents that suggest we should distrust OpenAI, or that they are not being a good faith legal actor.

Joshua Achiam highlights one of those incidents. He points out one thing that is clearly to OpenAI’s credit in that case: Once Daniel Kokotajlo went public with what was going on with the NDAs and threats to confiscate OpenAI equity, OpenAI swiftly moved to do the right thing.

However much you do or do not buy their explanation for how things got so bad in that case, making it right once pointed out mitigated much of the damage.

In other major cases of damaging trust, OpenAI has simply stayed silent. They buried the investigation into everything related to Sam Altman being briefly fired, including Altman’s attempts to remove Helen Toner from the board. They don’t talk about the firings and departures of so many of their top AI safety researchers, or of Leopold. They buried most mention of existential risk or even major downsides or life changes from AI in public communications. They don’t talk about their lobbying efforts (as most companies do not, for similar and obvious reasons). They don’t really attempt to justify the terms of their attempted conversion to a for-profit, which would largely de facto disempower the non-profit and be one of the biggest thefts in human history.

Silence is par for the course in such situations. It’s the default. It’s expected.

Here Jason Kwon, in what seems like an official capacity, is not only not apologizing or fixing the issue, he is repeatedly doing the opposite of what they did in the NDA case and doubling down on OpenAI’s actions. He is actively defending OpenAI’s actions as appropriate, justified and normal, and continuing to misrepresent what OpenAI did regarding SB 53 and to imply that anyone opposing them should be suspected of being in league with Elon Musk, or worse Mark Zuckerberg.

OpenAI, via Jason Kwon, has said, yes, this was the right thing to do. One is left with the assumption this will be standard operating procedure going forward.

There was a clear opportunity, and to some extent still is an opportunity, to say ‘upon review we find that our bulldog lawyers overstepped in this case, we should have prevented this and we are sorry about that. We are taking steps to ensure this does not happen again.’

If they had taken that approach, this incident would still have damaged trust, especially since it is part of a pattern, but far less so than what happened here. If that happens soon after this post, and it comes from Altman, from that alone I’d be something like 50% less concerned about this incident going forward, even if they retain Chris Lehane.




Boring Company cited for almost 800 environmental violations in Las Vegas

Workers have complained of chemical burns from the waste material generated by the tunneling process, and firefighters must decontaminate their equipment after conducting rescues from the project sites. The company was fined more than $112,000 by Nevada’s Occupational Safety and Health Administration in late 2023 after workers complained of “ankle-deep” water in the tunnels, muck spills, and burns. The Boring Co. has contested the violations. Just last month, a construction worker suffered a “crush injury” after being pinned between two 4,000-foot pipes, according to police records. Firefighters used a crane to extract him from the tunnel opening.

After ProPublica and City Cast Las Vegas published their January story, both the CEO and the chairman of the LVCVA board criticized the reporting, arguing the project is well-regulated. As an example, LVCVA CEO Steve Hill cited the delayed opening of a Loop station by local officials who were concerned that fire safety requirements weren’t adequate. Board chair Jim Gibson, who is also a Clark County commissioner, agreed the project is appropriately regulated.

“We wouldn’t have given approvals if we determined things weren’t the way they ought to be and what it needs to be for public safety reasons,” Gibson said, according to the Las Vegas Review Journal. “Our sense is we’ve done what we need to do to protect the public.”

Asked for a response to the new proposed fines, an LVCVA spokesperson said, “We won’t be participating in this story.”

The repeated allegations that the company is violating regulations—including the bespoke regulatory arrangement agreed to by the company—indicate that officials aren’t keeping the public safe, said Ben Leffel, an assistant public policy professor at the University of Nevada, Las Vegas.

“Not if they’re recommitting almost the exact violation,” Leffel said.

Leffel questioned whether a $250,000 penalty would be significant enough to change operations at The Boring Co., which was valued at $7 billion in 2023. Studies show that fines that don’t put a significant dent in a company’s profit don’t deter companies from future violations, Leffel said.

A state spokesperson disagreed that regulators aren’t keeping the public safe and said the agency believes its penalties will deter “future non-compliance.”

“NDEP is actively monitoring and inspecting the projects,” the spokesperson said.

This story originally appeared on ProPublica.



Bose SoundTouch home theater systems regress into dumb speakers Feb. 18

Bose will brick key features of its SoundTouch Wi-Fi speakers and soundbars soon. On Thursday, Bose informed customers that as of February 18, 2026, it will stop supporting the devices, and the devices’ cloud-based features, including the companion app, will stop working.

The SoundTouch app enabled numerous capabilities, including integration with music services like Spotify and TuneIn, and the ability to program multiple speakers in different rooms to play the same audio simultaneously.

Bose has also said that some saved presets won’t work and that users won’t be able to change saved presets once the app is gone.

Additionally, Bose will stop providing security updates for SoundTouch devices.

The Framingham, Massachusetts-headquartered company noted to customers that the speakers will continue being able to play audio from a device connected via AUX or HDMI. Wireless playback will still work over Bluetooth; however, Bluetooth is known to introduce more latency than Wi-Fi connections.

Affected customers can trade in their SoundTouch product for a credit worth up to $200.

In its notice sent to customers this week, Bose provided minimal explanation for end-of-life-ing its pricey SoundTouch speakers, saying:

Bose SoundTouch systems were introduced into the market in 2013. Technology has evolved since then, and we’re no longer able to sustain the development and support of the cloud infrastructure that powers this older generation of products. We remain committed to creating new listening experiences for our customers built on modern technologies.

Ars Technica has reached out to Bose for comment.

“Really disgusted”

Bose launched SoundTouch with three speakers ranging from $399 to $699. The company marketed the wireless home audio system as a way to extend high-quality sound throughout the home using Wi-Fi-connected speakers.

In 2015, Bose expanded the lineup with speakers ranging from $200 to $400 and soundbars and home theater systems ranging from $1,100 to $1,500.

By 2020, however, Bose was distancing itself from SoundTouch. It informed customers that it was “discontinuing sales of some SoundTouch products” but said it was “committed” to supporting the “SoundTouch app and product software for the foreseeable future.” Apparently, Bose couldn’t see beyond the next five years.



Termite farmers fine-tune their weed control

Odontotermes obesus is one of the termite species that grows fungi, called Termitomyces, in their mounds. Workers collect dead leaves, wood, and grass to stack them in underground fungus gardens called combs. There, the fungi break down the tough plant fibers, making them accessible for the termites in an elaborate form of symbiotic agriculture.

Like any other agriculturalist, however, the termites face a challenge: weeds. “There have been numerous studies suggesting the termites must have some kind of fixed response—that they always do the same exact thing when they detect weed infestation,” says Rhitoban Raychoudhury, a professor of biological sciences at the Indian Institute of Science Education and Research, “but that was not the case.” In a new Science study, Raychoudhury’s team discovered that termites have pretty advanced, surprisingly human-like gardening practices.

Going blind

Termites do not look like particularly good gardeners at first glance. They are effectively blind, which is not that surprising considering they spend most of their life in complete darkness working in endless corridors of their mounds. But termites make up for their lack of sight with other senses. “They can detect the environment based on advanced olfactory reception and touch, and I think this is what they use to identify the weeds in their gardens,” Raychoudhury says. To learn how termites react once they detect a weed infestation, his team collected some Odontotermes obesus and challenged them with different gardening problems.

The experimental setup was quite simple. The team placed some autoclaved soil sourced from termite mounds into glass Petri dishes. On this soil, Raychoudhury and his colleagues placed two fungus combs in each dish. The first piece acted as a control and was a fresh, uninfected comb with Termitomyces. “Besides acting as a control, it was also there to make sure the termites have the food because it is very hard for them to survive outside their mounds,” Raychoudhury explains. The second piece was intentionally contaminated with Pseudoxylaria, a filamentous fungal weed that often takes over Termitomyces habitats in termite colonies.



Musk’s X posts on ketamine, Putin spur release of his security clearances

“A disclosure, even with redactions, will reveal whether a security clearance was granted with or without conditions or a waiver,” DCSA argued.

Ultimately, DCSA failed to prove that Musk risked “embarrassment or humiliation” not only if the public learned what specific conditions or waivers applied to Musk’s clearances but also if there were any conditions or waivers at all, Cote wrote.

Three cases that DCSA cited to support this position—including a case where victims of Jeffrey Epstein’s trafficking scheme had a substantial privacy interest in non-disclosure of detailed records—do not support the government’s logic, Cote said. The judge explained that the disclosures would not have affected the privacy rights of any third parties, emphasizing that “Musk’s diminished privacy interest is underscored by the limited information plaintiffs sought in their FOIA request.”

Musk’s X posts discussing his occasional use of prescription ketamine and his disclosure on a podcast that smoking marijuana prompted NASA requirements for random drug testing, Cote wrote, “only enhance” the public’s interest in how Musk’s security clearances were vetted. Additionally, Musk has posted about speaking with Vladimir Putin, prompting substantial public interest in how his foreign contacts may or may not restrict his security clearances. More than 2 million people viewed Musk’s X posts on these subjects, the judge wrote, noting that:

It is undisputed that drug use and foreign contacts are two factors DCSA considers when determining whether to impose conditions or waivers on a security clearance grant. DCSA fails to explain why, given Musk’s own, extensive disclosures, the mere disclosure that a condition or waiver exists (or that no condition or waiver exists) would subject him to ’embarrassment or humiliation.’

Rather, for the public, “the list of Musk’s security clearances, including any conditions or waivers, could provide meaningful insight into DCSA’s performance of that duty and responses to Musk’s admissions, if any,” Cote wrote.

In a footnote, Cote said that this substantial public interest existed before Musk became a special government employee, ruling that DCSA was wrong to block the disclosures seeking information on Musk as a major government contractor. Her ruling likely paves the way for the NYT or other news organizations to submit FOIA requests for a list of Musk’s clearances while he helmed DOGE.

It’s not immediately clear when the NYT will receive the list they requested in 2024, but the government has until October 17 to request redactions before it’s publicized.

“The Times brought this case because the public has a right to know about how the government conducts itself,” Charlie Stadtlander, an NYT spokesperson, said. “The decision reaffirms that fundamental principle and we look forward to receiving the document at issue.”



Bending The Curve

The odds are against you and the situation is grim.

Your scrappy band are the only ones facing down a growing wave of powerful inhuman entities with alien minds and mysterious goals. The government is denying that anything could possibly be happening and actively working to shut down the few people trying things that might help. Your thoughts, no matter what you think could not harm you, inevitably choose the form of the destructor. You knew it was going to get bad, but this is so much worse.

You have an idea. You’ll cross the streams. Because there is a very small chance that you will survive. You’re in love with this plan. You’re excited to be a part of it.

Welcome to the always excellent Lighthaven venue for The Curve, Season 2, a conference I had the pleasure to attend this past weekend.

Where the accelerationists and the worried come together to mostly get along and coordinate on the same things, because the rest of the world has gone blind and mad. In some ways technical solutions seem relatively promising, shifting us from ‘might be actually impossible’ levels of impossible to Shut Up And Do The Impossible levels of impossible, all you have to do is beat the game on impossible difficulty level. As a speed run. On your first try. Good luck.

The action space has become severely constrained. Between the actual and perceived threats from China, the total political ascendance of Nvidia in particular and anti-regulatory big tech in general, and the setting in of more and more severe race conditions and the increasing dependence of the entire economy on AI capex investments, it’s all we can do to try to only shoot ourselves in the foot and not aim directly for the head.

Last year we were debating tradeoffs. This year, aside from the share price of Nvidia, if you are an American who likes humans and you only consider things that might actually pass? On the margin, there are essentially no tradeoffs. It’s better versus worse.

That doesn’t invalidate the thesis of If Anyone Builds It, Everyone Dies or the implications down the line. At some point we will probably either need to do impactful international coordination or other interventions that involved large tradeoffs, or humanity loses control over the future or worse. That implication exists in every reasonable sketch of the future I have seen in which AI does not end up a ‘normal technology.’ So one must look forward towards that, as well.

You can also look at it this way: Year 1 of The Curve was billed (although I don’t use the d-word) as ‘doomers vs. accelerationists,’ and this year, as Nathan Lambert says, it was DC types and SF types, like when the early-season villains and heroes all end up working together as the stakes get raised and the new Big Bad shows up, and then you do it again until everything is cancelled.

The Curve was a great experience. The average quality of attendees was outstanding. I would have been happy to talk to a large fraction of them 1-on-1 for a long time, and there were a number that I’m sad I missed. Lots of worthy sessions lost out to other plans.

As Anton put it, every (substantive) conversation I had made me feel smarter. There was opportunity everywhere, everyone was cooperative and seeking to figure things out, and everyone stayed on point.

To the many people who came up to me to thank me for my work, you’re very welcome. I appreciate it every time and find it motivating.

What did people at the conference think about some issues?

We have charts.

Where is AI on the technological Richter scale?

There are dozens of votes here. Only one person put this as low as a high 8, which is the range of automobiles, electricity and the internet. A handful put it with fire, the wheel, agriculture and the printing press. Then most said this is similar to the rise of the human species, a full transformation. A few said it is a bigger deal than that.

If you were situationally aware enough to show up, you are aware of the situation.

These are median predictions, so the full distribution will have a longer tail, but this seems reasonable to me. The default is 10, that AI is going to be a highly non-normal technology on the level of the importance of humans, but there’s a decent chance it will ‘only’ be a 9 on the level of agriculture or fire, and some chance it disappoints and ends up Only Internet Big.

Last year, people would often claim AI wouldn’t even be Internet Big. We are rapidly approaching the point where that is not a position you can offer with a straight face.

How did people expect this to play out?

That’s hard to read, so here are the centers of the distributions (note that there was clearly a clustering effect):

  1. 90% of code is written by AI by ~2028.

  2. 90% of human remote work can be done more cheaply by AI by ~2031.

  3. Most cars on America’s roads lack human drivers by ~2041.

  4. AI makes Nobel Prize worthy discovery by ~2032.

  5. First one-person $1 billion company by 2026.

  6. First year of >10% GDP growth by ~2038 (but 3 votes for never).

  1. People estimate 15%-50% current speedup at AI labs from AI coding.

  2. When AI research is fully automated, there was disagreement over how good the AIs’ research taste will be, but the median answer was roughly as good as the median current AI worker.

  3. If we replaced each human with an AI version of themselves that was the same except 30x faster with 30 copies, but we only had access to similar levels of compute, we’d get maybe a 12x speedup in progress.

What are people worried or excited about? A lot of different things, from ‘everyone lives’ to ‘concentration of power,’ ‘everyone dies’ and especially ‘loss of control’ which have the most +1s on their respective sides. Others are excited to cure their ADD or simply worried everything will suck.

Which kind of things going wrong worries people most, misalignment or misuse?

Why not both? Pretty much everyone said both.

Finally, who is this nice man with my new favorite IYKYK t-shirt?

(I mean, he has a name tag, it’s OpenAI’s Boaz Barak)

The central problem at every conference is fear of missing out. Opportunity costs. There are many paths, even when talking to a particular person. You must choose.

That goes double at a conference like The Curve. The quality of the people there was off the charts and the schedule forced hard choices between sessions. There were entire other conferences I could have productively experienced. I also probably could have usefully done a lot more prep work.

I could of course have hosted a session, which I chose not to do this time around. I’m sure there were various topics I could have done that people would have liked, but I was happy for the break, and it’s not like there’s a shortage of my content out there.

My strategy is mostly to not actively plan my conference experiences, instead responding to opportunity. I think this is directionally correct but I overplay it, and should have (for example) looked at the list of who was going to be there.

What were the different tracks or groups of discussions and sessions I ended up in?

  1. Technical alignment discussions. I had the opportunity to discuss safety and alignment work with a number of those working on such issues at Anthropic, DeepMind and even xAI. I missed OpenAI this time around, but they were there. This always felt exciting, enlightening and fun. I still get imposter syndrome every time people in such conversations take me and my takes and ideas seriously. Conditions are in many ways horribly terrible but everyone is on the same team and some things seem promising. I felt progress was made. My technical concrete pitch to Anthropic included (among other things) both particular experimental suggestions and also a request that they sustain access to Sonnet 3.5 and 3.6.

    1. It wouldn’t make sense to go into the technical questions here.

  2. Future projecting. I went to talks by Joshua Achiam and Helen Toner about what future capabilities and worlds might look like. Jack Clark’s closing talk was centrally this but touched on other things.

  3. AI policy discussions. These felt valuable and enlightening in both directions, but were infuriating and depressing throughout. People on the ground in Washington kept giving us variations on ‘it’s worse than you know,’ which it usually is. So now you know. Others seemed not to appreciate how bad things had gotten. I was often pointing out that people’s proposals implied some sort of international treaty and form of widespread compute surveillance, had zero chance of actually causing us not to die, or sometimes both. At other times, I was pointing out that things literally wouldn’t work on the level of ‘do the object level goal’ let alone make us win. Or we were trying to figure out what was sufficiently completely costless and not even a tiny bit weird or complex that one could propose that might actually do anything meaningful. Or simply observing other perspectives.

    1. In particular, different people maintained that different players were the relatively powerful ones, but I came away from various discussions more convinced than ever that for now White House policy and rhetoric on AI can be modeled as fully captured by Nvidia, although constrained in some ways by congressional Republicans and some members of the MAGA movement. This is pretty much a worst case scenario. If we were captured by OpenAI or other AI labs that wouldn’t be great, but at least their interests and America’s are mostly aligned.

  4. Nonprofit funding discussions. I’d just come out of the latest Survival and Flourishing Fund round, various players seemed happy to talk and strategize, and it seems likely that very large amounts of money will be unlocked soon as OpenAI and Anthropic employees with increasingly valuable equity become liquid. The value of helping steer this seems crazy high, but the stakes on everything seem crazy high.

    1. One particular worry is that a lot of this money could effectively get captured by various existing players, especially the existing EA/OP ecosystem, in ways that would very much be a shame.

    2. Another is simply that a bunch of relatively uninformed money could overwhelm incentives, contaminate various relationships and dynamics, introduce parasitic entry, drop average quality a lot, and so on.

    3. Or everyone involved could end up with a huge time sink and/or end up not deploying the funds.

    4. So there’s lots to do. But it’s all tricky. Trying to gain visible influence over the direction of funds is a very good way to get your own social relationships and epistemics compromised very quickly, and it can eat up infinite time, so I’m hesitant to get too involved, or involved in the wrong ways.

What other tracks did I actively choose not to participate in?

There were of course AI timelines discussions, but I did my best to avoid them except when they were directly relevant to a concrete strategic question. At one point someone in a 4-person conversation I was mostly observing said ‘let’s change the subject, can we argue about AI timelines’ and I outright said ‘no’ but was overruled, and after a bit I walked away. For those who don’t follow these debates, many of the more aggressive timelines have gotten longer over the course of 2025, with people who expected crazy to happen in 2027 or 2028 now not expecting crazy for several more years, but there are those who still mostly hold firm to a faster schedule.

There were a number of talks about AI that assumed it was mysteriously a ‘normal technology.’ There were various sessions on economics projections, or otherwise taking place with the assumption that AI would not cause things to change much, except for whatever particular effect people were discussing. How would we ‘strengthen our democracy’ when people had these neat AI tools, or avoid concentration of power risks? What about the risk of They Took Our Jobs? What about our privacy? How would we ensure everyone or every nation has fair access?

These discussions almost always silently assume that AI capability ‘hits a wall’ some place not very far from where it is now and then everything moves super slowly. Achiam’s talk had elements of this, and I went because he’s OpenAI’s Head of Mission Alignment so knowing how he thinks about this seemed super valuable.

To the extent I interacted with this it felt like smart people thinking about a potential world almost certainly very different from our own. Fascinating, can create useful intuition pumps, but that’s probably not what’s going to happen. If nothing else was going on, sure, count me in.

But also, all the talk of ‘bottlenecks,’ and therefore a 0.5% or 1% GDP growth boost per year tops, has already been overtaken purely by capex spending, and I cannot remember a single economist or other GDP growth skeptic acknowledging that this already made their projections wrong and updating reasonably.

There was an AI 2027 style tabletop exercise again this year, which I recommend doing if you haven’t done it before, except this time I wasn’t aware it was happening, and also by now I’ve done it a number of times.

There were of course debates directly about doom, but remarkably few, and I had no interest. It felt like everyone was either acknowledging existential risk enough that there wasn’t much value of information in going further, or sufficiently blind that they were in ‘normal technology’ mode. At some point people get too high level to think building smarter-than-human minds is a safe proposition.

Helen Toner gave a talk on taking AI jaggedness seriously. What would it mean if AIs kept getting increasingly better and superhuman at many tasks, while remaining terrible at other tasks, or at least relatively highly terrible compared to humans? How does the order of capabilities impact how things unfold? Even if we get superhuman coding and start to get big improvements in other areas as a result, that won’t make their ability profile similar to humans.

I agree with Helen that such jaggedness is mostly good news and potentially could buy us substantial time for various transitions. However, it’s not clear to me that this jaggedness does that much for that long, AI is (I am projecting) not going to stall out in the lagging areas or stay subhuman in key areas for as much calendar time as one might hope.

A fun suggestion was to imagine LLMs talking about how jagged human capabilities are. Look how dumb we are in some ways while being smart in others. I do think in a meaningful sense LLMs and other current AIs are ‘more jagged’ than humans in practice, because humans have continual learning and the ability to patch the situation, and also to route the physical world around our idiocy where we’re being importantly dumb. So we’re super dumb, but we try to not let it get in the way.

Neil Chilson: Great talk by @hlntnr about the jaggedness of AI, why it is likely to continue, and why it matters. Love this slide and her point that while many AI forecasters use smooth curves, a better metaphor is the chaotic transitions in fluid heating.

“Jaggedness” being the uneven ability of AI to do tasks that seem about equally difficult to humans.

Occurs to me I should have shared the “why this matters” slide, which was the most thought provoking one to me:

I am seriously considering talking about time to ‘crazy’ going forward, and whether that is a net helpful thing to say.

The curves definitely be too smooth. It’s hard to properly adjust for that. But I think the fluid dynamics metaphor, while gorgeous, makes the opposite mistake.

I watched a talk by Randi Weingarten about how she and other teachers view AI and advocate around issues in education. One big surprise is that she says they don’t worry or care much about AI ‘cheating’ or doing work via ChatGPT, there are ways around that, especially ‘project based learning that is relevant,’ and the key thing is that education is all about human interactions. To her ChatGPT is a fine tool, although things like Character.ai are terrible, and she strongly opposes phones in schools for the right reasons, and I agree with that.

She said teachers need latitude to ‘change with the times’ but usually aren’t given it, they need permission to change anything and if anything goes wrong they’re fired (although there are the other stories we hear that teachers often can’t be fired almost no matter what in many cases?). I do sympathize here. A lot needs to change.

Why is education about human interactions? This wasn’t explained. I always thought education was about learning things. I mostly didn’t learn things through human interaction, I mostly didn’t learn things in school via meaningful human interaction, and to the extent I learned things via meaningful human interaction it mostly wasn’t in school. As usual when education professionals talk about education, I don’t get the sense that they want children to learn things, or that they care about children being imprisoned and bored with their time wasted for huge portions of many days, but rather that they care about something else entirely. It’s not clear what her actual objection to Alpha School (which she of course confirmed she hates) was, other than decentering teachers, or what concretely was supposedly going wrong there. Frankly it sounded suspiciously like a call to protect jobs.

If anything, her talk seemed to be a damning indictment of our entire system of schools and education. She presents vocational education as state of the art and with the times, and cited an example of a high school with a sub-50% graduation rate going to a 100% graduation rate, with 182 of 186 students getting a ‘certification’ from Future Farmers of America after one such program. Aside from the obvious ‘why do you need a certificate to be a farmer’ and also ‘why would you choose farmer in 2025,’ this is saying kids should spend vastly less time in school? Many other such implications were there throughout.

Her group calls for ‘guardrails’ and ‘accountability’ on AI, worries about things like privacy, misinformation and understanding ‘the algorithms’ or the dangers to democracy, and points to declines in male non-college earnings.

There was a Chatham House discussion of executive branch AI policy in America where all involved were being diplomatic and careful. There’s a lot of continuity between the Biden approach to AI and much of the Trump approach, there’s a lot of individual good things going on, and it was predicted that CAISI would have a large role going forward, lots of optimism and good detail.

It seems reasonable to say that the Trump administration’s first few months of AI policy were unexpectedly good, and the AI Action Plan was unexpectedly good. Then there are the other things that happened.

Thus the session included some polite versions of ‘what the hell are we doing?’ that were at most slightly beneath the surface. As a central example, one person observed that if America ‘loses on AI,’ it would likely be because we did one or more of the following: (1) failed to provide the necessary electrical power, (2) failed to bring in the top AI talent, or (3) sold away our chip advantage. They didn’t say, but I will note here, that current American policy seems determined to screw up all three of these? We are cancelling solar, wind and battery projects all over, we are restricting our ability to acquire talent, and we are seriously debating selling Blackwell chips directly to China.

I was sad that going to that talk ruled out watching Buck Shlegeris debate Timothy Lee about whether keeping AI agents under control will be hard, as I expected that session to both be extremely funny (and one sided) and also plausibly enlightening in navigating such arguments, but that’s how conferences go. I did then get to see Buck discuss mitigating insider threats from scheming AIs, in which he explained some of the ways in which dealing with scheming AIs that are smarter than you is very hard. I’d go farther and say that in the types of scenarios Buck is discussing there it’s not going to work out for you. If the AIs be smarter than you and also scheming against you and you try to use them for important stuff anyway you lose.

That doesn’t mean make zero attempts to mitigate this, but at some point the whole effort is counterproductive, as it creates the very context that creates what it is worried about, without giving you much chance of winning.

At one point I took a break to get dinner at a nearby restaurant. The only other people there were two women. The discussion included mention of AI 2027 and also that one of them is reading If Anyone Builds It, Everyone Dies.

Also at one point I saw a movie star I’m a fan of, hanging out and chatting. Cool.

Sunday started out with Josh Achiam’s talk (again, he’s Head of Mission Alignment at OpenAI, but his views here were his own) about the challenge of the intelligence age. If it comes out, it’s worth a watch. There were a lot of very good thoughts and considerations here. I later got to have some good talk with him during the afterparty. Like much talk at OpenAI, it also silently ignored various implications of what was being built, and implicitly assumed the relevant capabilities just stopped in any place they would cause bigger issues. The talk acknowledged that it was mostly assuming alignment is solved, which is fine as long as you say that explicitly, we have many different problems to deal with, but other questions also felt assumed away more silently. Josh promises his full essay version will deal with that.

I got to go to a Chatham House Q&A about the EU Frontier AI Code of Practice, which various people keep reminding me I should write about, and I swear I want to do that as soon as I have some spare time. There was a bunch of info, some of it new to me, and also insight into how those involved think all of this is going to work. I later shared with them my model of how I think the AI companies will respond, in particular the chance they will essentially ignore the law when inconvenient because of lack of sufficient consequences. And I offered suggestions on how to improve impact here. But on the margin, yeah, the law does some good things.

I got into other talks and missed out on one I wanted to see by Joe Allen, about How the MAGA Movement Sees AI. This is a potentially important part of the landscape on AI going forward, as a bunch of MAGA types really dislike AI and are in position to influence the White House.

As I look over the schedule in hindsight I see a bunch of other stuff I’m sad I missed, but the alternative would have been missing valuable 1-on-1s or other talks.

The final talk was Jack Clark giving his perspective on events. This was a great talk, and if it goes up online you should watch it; it gave me a very concrete sense of where he is coming from.

Jack Clark has high variance. When he’s good, he’s excellent, such as in this talk, including the Q&A, and when he asked Achiam an armor-piercing question, or when he’s sticking to his guns on timelines that I think are too short even though it doesn’t seem strategic to do that. At other times, he and the policy team at Anthropic are in some sort of Official Mode where they’re doing a bunch of hedging and making things harder.

The problem I have with Anthropic’s communications is, essentially, that they are not close to the Pareto Frontier, where the y-axis is something like ‘Better Public Policy and Epistemics’ and the x-axis can colloquially be called ‘Avoid Pissing Off The White House.’ I acknowledge there is a tradeoff here, especially since we risk negative polarization, but we need to be strategic, and certain decisions have been de facto poking the bear for little gain, and at other times they hold back for little gain the other way. We gotta be smarter about this.

Other people’s takeaways from the event are often very different from mine, or yours.

Deepfates: looks like a lot of people who work on policy and research for aligning AIs to human interests. I’m curious what you think about how humans align to AI.

my impression so far: people from big labs and people from government, politely probing each other to see which will rule the world. they can’t just out and say it but there’s zerosumness in the air

Chris Painter: That isn’t my impression of the vibe at the event! Happy to chat.

I was with Chris on this. It very much did not feel zero sum. There did seem to be a lack of appreciation of the ‘by default the AIs rule the world’ problem, even in a place dedicated largely to this particular problem.

Deepfates: Full review of The Curve: people just want to believe that Anyone is ruling the world. some of them can sense that Singleton power is within reach and they are unable to resist The opportunity. whether by honor or avarice or fear of what others will do with it.

There is that too, that currently no one is ruling the world, and it shows. It also has its advantages.

so most people are just like “uh-oh! what will occur? shouldn’t somebody be talking about this?” which is fine honestly, and a lot of them are doing good research and I enjoy learning about it. The policy stuff is more confusing

diverse crowd but multiple clusters talking past each other as if the other guys are ontologically evil and no one within earshot could possibly object. and for the most part they don’t actually? people just self-sort by sessions or at most ask pointed questions. parallel worlds.

Yep, parallel worlds, but I never saw anyone say someone else was evil. What, never? Well, hardly ever. And not anyone who actually showed up. Deeply confused and likely to get us all killed? Well, sure, there was more of that, but obviously true, and again not the people present.

things people are concerned about in no order: China. Recursive self-improvement. internal takeover of AI labs by their models. Fascism. Copyright law. The superPACs. Sycophancy. Privacy violations. Rapid unemployment of whole sectors of society. Religious and political backlash, autonomous agents, capabilities. autonomous agents, legal liability. autonomous agents, nightmare nightmare nightmare.

The fear of the other party, the other company, the other country, the other, the unknown, most of all the alien thing that threatens what it means to be human.

Fascinating to see threatens ‘what it means to be human’ on that list but not ‘the ability to keep being human (or alive),’ which I assure Deepfates a bunch of us were indeed very concerned about.

so they want to believe that the world is ruleable, that somebody, anybody, is at the wheel, as we careen into the strangest time in human history.

and they do Not want it to be the AIs. even as they keep putting decision making power and communication surface on the AIs lol

You can kind of tell here that Deepfates is fine with it being the AIs and indeed is kind of disdainful of anyone who would object to this. As in, they understand what is about to happen, but think this is good, actually (and are indeed working to bring it about). So yeah, some actual strong disagreements were present, but didn’t get discussed.

I may or may not have seen Deepfates, since I don’t know their actual name, but we presumably didn’t talk, given:

i tried telling people that i work for a rogue AI building technologies to proliferate autonomous agents (among other things). The reaction was polite confusion. It seemed a bit unreal for everyone to be talking about the world ending and doing normal conference behaviors anyway.

Polite confusion is kind of the best you can hope for when someone says that?

Regardless, very interesting event. Good crowd, good talks, plenty of food and caffeinated beverages. Not VC/pitch heavy like a lot of SF things.

Thanks to Lighthaven for hosting and Golden Gate Institute/Manifund for organizing. Will be curious to see what comes of this.

I definitely appreciated the lack of VC and pitching. I did get pitched once (on a nonprofit thing) but I was happy to take it. Focus was tight throughout.

Anton: “are you with the accelerationist faction?”

most people here have thought long and hard about ai, every conversation i have — even with those i vehemently disagree — feels like it makes me smarter..

i cant overemphasize how good the vibes are at this event.

Rob S: Another Lighthaven banger?

Anton: ANOTHA ONE.

As I note above, Jack Clark’s closing talk was excellent. Otherwise, he seemed to be in the back of many of the same talks I was at. Listening. Gathering intel.

Jack Clark (policy head, Anthropic): I spent a few days at The Curve and I am humbled and overjoyed by the experience – it is a special event, now in its second year, and I hope they preserve whatever lightning they’ve managed to capture in this particular bottle. It was a privilege to give the closing talk.

During the Q&A I referenced The New Book, and likely due to the exhilaration of giving the earlier speech I fumbled a word and titled it: If Anyone Reads It, Everyone Dies.

James Cham: It was such an inspiring (and terrifying) talk!

I did see Roon at one point but it was late in the day and neither of us had an obvious conversation we wanted to have and he wandered off. He’s low key in person.

I was very disappointed to realize he did not say ‘den of inquiry’ here:

Roon: The Curve is insane because a bunch of DC staffers in suits have shown up to Lighthaven, a rationalist den of iniquity that looks like a Kinkade painting.

Jaime Sevilla: Jokes on you I am not a DC staffer, I just happen to like wearing my suit.

Neil Chilson: Hey, I ditched the jacket after last night.

Being Siedoh: i was impressed that your badge just says “Roon” lol.

To be fair, you absolutely wanted a jacket of some kind for the evening portion. That’s why they were giving away sweatshirts. It was still quite weird to see the few people who did wear suits.

Nathan made the opposite of my choice, and spent the weekend centered on timeline debates.

Nathan Lambert: My most striking takeaway is that the AI 2027 sequence of events, from AI models automating research engineers to later automating AI research, and potentially a singularity if your reasoning is so inclined, is becoming a standard by which many debates on AI progress operate under and tinker with.

It’s good that many people are taking the long term seriously, but there’s a risk in so many people assuming a certain sequence of events is a sure thing and only debating the timeframe by which they arrive.

This feels like the Deepfates theory of self-selection within the conference. I observed the opposite, that so many people were denying that any kind of research automation or singularity was going to happen. Usually they didn’t even assert it wasn’t happening, they simply went about discussing futures where it mysteriously didn’t happen, presumably because of reasons, maybe ‘bottlenecks’ or muttering ‘normal technology’ or something.

Within the short timelines and taking AGI (at least somewhat) seriously debate subconference, to the extent I saw it, yes I do think there’s widespread convergence on the automating AI research analysis.

Nathan, it seems, is in the ‘nope, definitely not happening’ camp, but is helpfully explaining that it is because of bottlenecks in the automation loop.

These long timelines are strongly based on the fact that the category of research engineering is too broad. Some parts of the RE job will be fully automated next year, and more the next. To check the box of automation the entire role needs to be replaced.

What is more likely over the next few years, each engineer is doing way more work and the job description evolves substantially. I make this callout on full automation because it is required for the distribution of outcomes that look like a singularity due to the need to remove the human bottleneck for an ever accelerating pace of progress. This is a point to reinforce that I am currently confident in a singularity not happening.

The automation theory is that, as Nathan documents in his writeup, within a few years the existing research engineers (REs) will be unbelievably productive (80%-90% automated) and in some ways RE is already automated, yet that doesn’t allow us to finish the job, and humans continue importantly slowing down the loop because Real Science Is Messy and involves a social marketplace of ideas. Apologies for my glib paraphrasing. It’s possible in theory that these accelerations of progress and partial automations plus our increased scaling are no match for increasing problem difficulty, but it seems unlikely to me.

It seems far more likely that this kind of projection forgets how much things accelerate in such scenarios. Sure, it will probably be a lot messier than the toy models and straight lines on graphs, it always is, but you’d best start believing in singularities, because you’re in one, if you look at the arc of history.

The following is a very minor thing but I enjoy it so here you go.

All three meals were offered each day buffet style. Quality at these events is generally about as good as buffets get; they know which offerings are good at this point. I ask for menus in advance so I can choose when to opt out and when to go hard, and which day to do my traditional one trip to a restaurant.

Also there was some of this:

Tyler John: It’s riddled with contradictions. The neoliberal rationalists allocate vegan and vegetarian food with a central planner rather than allowing demand to determine the supply.

Rachel: Yeah fwiw this was not a design choice. I hate this. I unfortunately didn’t notice that it was still happening yesterday :/

Tyler John: Oh on my end it’s only a very minor complaint but I did enjoy the irony.

Robert Winslow: I had a bad experience with this kind of thing at a conference. They said to save the veggies for the vegetarians. So instead of everyone taking a bit of meat and a bit of veg, everyone at the front of the line took more meat than they wanted, and everyone at the back got none.

You obviously can’t actually let demand determine supply, because you (1) can’t afford the transaction costs of charging on the margin and (2) need to order the food in advance. And there are logistical advantages to putting (at least some of) the vegan and vegetarian food in a distinct area so you don’t risk contamination or put people on lines that waste everyone’s time. If you’re worried about a mistake, you’d rather run out of meat a little early; you’d totally take down the sign (or ignore it) if it was clear the other mistake was happening, and there were still veg options for everyone else.

If you are confident via law of large numbers plus experience that you know your ratios, and you’ve chosen (and been allowed to choose) wisely, then of course you shouldn’t need anything like this.


Bending The Curve Read More »

ted-cruz-doesn’t-seem-to-understand-wikipedia,-lawyer-for-wikimedia-says

Ted Cruz doesn’t seem to understand Wikipedia, lawyer for Wikimedia says


A Wikipedia primer for Ted Cruz

Wikipedia host’s lawyer wants to help Ted Cruz understand how the platform works.

Senator Ted Cruz (R-Texas) uses his phone during a joint meeting of Congress on May 17, 2022. Credit: Getty Images | Bloomberg

The letter from Sen. Ted Cruz (R-Texas) accusing Wikipedia of left-wing bias seems to be based on fundamental misunderstandings of how the platform works, according to a lawyer for the nonprofit foundation that operates the online encyclopedia.

“The foundation is very much taking the approach that Wikipedia is actually pretty great and a lot of what’s in this letter is actually misunderstandings,” Jacob Rogers, associate general counsel at the Wikimedia Foundation, told Ars in an interview. “And so we are more than happy, despite the pressure that comes from these things, to help people better understand how Wikipedia works.”

Cruz’s letter to Wikimedia Foundation CEO Maryana Iskander expressed concern “about ideological bias on the Wikipedia platform and at the Wikimedia Foundation.” Cruz alleged that Wikipedia articles “often reflect a left-wing bias.” He asked the foundation for “documents sufficient to show what supervision, oversight, or influence, if any, the Wikimedia Foundation has over the editing community,” and “documents sufficient to show how the Wikimedia Foundation addresses political or ideological bias.”

As many people know, Wikipedia is edited by volunteers through a collaborative process.

“We’re not deciding what the editorial policies are for what is on Wikipedia,” Rogers said, describing the Wikimedia Foundation’s hands-off approach. “All of that, both the writing of the content and the determining of the editorial policies, is done through the volunteer editors” through “public conversation and discussion and trying to come to a consensus. They make all of that visible in various ways to the reader. So you go and you read a Wikipedia article, you can see what the sources are, what someone has written, you can follow the links yourselves.”

“They’re worried about something that is just not present at all”

Cruz’s letter raised concerns about “the influence of large donors on Wikipedia’s content creation or editing practices.” But Rogers said that “people who donate to Wikipedia don’t have any influence over content and we don’t even have that many large donors to begin with. It is primarily funded by people donating through the website fundraisers, so I think they’re worried about something that is just not present at all.”

Anyone unhappy with Wikipedia content can participate in the writing and editing, he said. “It’s still open for everybody to participate. If someone doesn’t like what it says, they can go on and say, ‘Hey, I don’t like the sources that are being used, or I think a different source should be used that isn’t there,'” Rogers said. “Other people might disagree with them, but they can have that conversation and try to figure it out and make it better.”

Rogers said that some people wrongly assume there is central control over Wikipedia editing. “I feel like people are asking questions assuming that there is something more central that is controlling all of this that doesn’t actually exist,” he said. “I would love to see it a little better understood about how this sort of public model works and the fact that people can come judge it for themselves and participate for themselves. And maybe that will have it sort of die down as a source of government pressure, government questioning, and go onto something else.”

Cruz’s letter accused Wikipedia of pushing antisemitic narratives. He described the Wikimedia Foundation as “intervening in editorial decisions” in an apparent reference to an incident in which the platform’s Arbitration Committee responded to editing conflicts on the Israeli–Palestinian conflict by banning eight editors.

“The Wikimedia Foundation has said it is taking steps to combat this editing campaign, raising further questions about the extent to which it is intervening in editorial decisions and to what end,” Cruz wrote.

Explaining the Arbitration Committee

The Arbitration Committee for the English-language edition of Wikipedia consists of volunteers who “are elected by the rest of the English Wikipedia editors,” Rogers said. The group is a “dispute resolution body when people can’t otherwise resolve their disputes.” The committee made “a ruling on Israel/Palestine because it is such a controversial subject and it’s not just banning eight editors, it’s also how contributions are made in that topic area and sort of limiting it to more experienced editors,” he said.

The members of the committee “do not control content,” Rogers said. “The arbitration committee is not a content dispute body. They’re like a behavior conduct dispute body, but they try to set things up so that fights will not break out subsequently.”

As with other topics, people can participate if they believe articles are antisemitic. “That is sort of squarely in the user editorial processes,” Rogers said. “If someone thinks that something on Wikipedia is antisemitic, they should change it or propose to people working on it that they change it or change sources. I do think the editorial community, especially on topics related to antisemitism and related to Israel/Palestine, has a lot of various safeguards in place. That particular topic is probably the most controversial topic in the world, but there’s still a lot of editorial safeguards in place where people can discuss things. They can get help with dispute resolution from bringing in other editors if there’s a behavioral problem, they can ask for help from Wikipedia administrators, and all the way up to the English Wikipedia arbitration committee.”

Cruz’s letter called out Wikipedia’s goal of “knowledge equity,” and accused the foundation of favoring “ideology over neutrality.” Cruz also pointed to a Daily Caller report that the foundation donated “to activist groups seeking to bring the online encyclopedia more in line with traditionally left-of-center points of view.”

Rogers countered that “the theory behind that is sort of misunderstood by the letter where it’s not about equity like the DEI equity, it is about the mission of the Wikimedia Foundation to have the world’s knowledge, to prepare educational content and to have all the different knowledge in the world to the extent possible.” In topic areas where people with expertise haven’t contributed much to Wikipedia, “we are looking to write grants to help fill in those gaps in knowledge and have a more broad range of information and sources,” he said.

What happens next

Rogers is familiar with the workings of Senate investigations from personal experience. He joined the Wikimedia Foundation in 2014 after working for the Senate’s Permanent Subcommittee on Investigations under the late Sen. Carl Levin (D-Mich.).

While Cruz demanded a trove of documents, Rogers said the foundation doesn’t necessarily have to provide them. A subpoena could be issued to Wikimedia, but that hasn’t happened.

“What Cruz has sent us is just a letter,” Rogers said. “There is no legal proceeding whatsoever. There’s no formal authority behind this letter. It’s just a letter from a person in the legislative branch who cares about the topic, so there is nothing compelling us to give him anything. I think we are probably going to answer the letter, but there’s no sort of legal requirement to actually fully provide everything that answers every question.” Assuming it responds, the foundation would try to answer Cruz’s questions “to the extent that we can, and without violating any of our company policies,” and without giving out nonpublic information, he said.

A letter responding to Cruz wouldn’t necessarily be made public. In April, the foundation received a letter from 23 lawmakers about alleged antisemitism and anti-Israel bias. The foundation’s response to that letter is not public.

Cruz is seeking changes at Wikipedia just a couple weeks after criticizing Federal Communications Commission Chairman Brendan Carr for threatening ABC with station license revocations over political content on Jimmy Kimmel’s show. While the pressure tactics used by Cruz and Carr have similarities, Rogers said there are also key differences between the legislative and executive branches.

“Congressional committees, they are investigating something to determine what laws to make, and so they have a little bit more freedom to just look into the state of the world to try to decide what laws they want to write or what laws they want to change,” he said. “That doesn’t mean that they can’t use their authority in a way that might ultimately go down a path of violating the First Amendment or something like that. They have a little bit more runway to get there versus an executive branch agency which, if it is pressuring someone, it is doing so for a very immediate decision usually.”

What does Cruz want? It’s unclear

Rogers said it’s not clear whether Cruz’s inquiry is the first step toward changing the law. “The questions in the letter don’t really say why they want the information they want other than the sort of immediacy of their concerns,” he said.

Cruz chairs the Senate Commerce Committee, which “does have lawmaking authority over the Internet writ large,” Rogers said. “So they may be thinking about changes to the law.”

One potential target is Section 230 of the Communications Decency Act, which gives online platforms immunity from lawsuits over how they moderate user-submitted content.

“From the perspective of the foundation, we’re staunch defenders of Section 230,” Rogers said, adding that Wikimedia supports “broad laws around intellectual property and privacy and other things that allow a large amount of material to be appropriately in the public domain, to be written about on a free encyclopedia like Wikipedia, but that also protect the privacy of editors who are contributing to Wikipedia.”


Jon is a Senior IT Reporter for Ars Technica. He covers the telecom industry, Federal Communications Commission rulemakings, broadband consumer affairs, court cases, and government regulation of the tech industry.

Ted Cruz doesn’t seem to understand Wikipedia, lawyer for Wikimedia says Read More »

microsoft-removes-even-more-microsoft-account-workarounds-from-windows-11-build

Microsoft removes even more Microsoft account workarounds from Windows 11 build

Of the many minor to medium-size annoyances that come with a modern Windows 11 installation, the requirement that you sign in with a Microsoft account is one of the most irritating. Sure, all operating systems (including Apple’s and Google’s) encourage account sign-in as part of their setup process and prevent you from using multiple operating system features until and unless you sign in.

Various sanctioned and unsanctioned tools and workarounds existed to allow users to set their PCs up with old-fashioned local accounts, and those workarounds haven’t changed much in the last three years. But Microsoft is working on tightening the screws in preview builds of Windows, foreshadowing some future version of Windows where getting around the account requirement is even harder than it already is.

In a new update released to the Dev channel of the Windows Insider Preview program yesterday (build number 26220.6772), Microsoft announced it was “removing known mechanisms for creating a local account in the Windows Setup experience (OOBE).” Microsoft says that these workarounds “inadvertently skip critical setup screens, potentially causing users to exit OOBE with a device that is not fully configured for use.”

The removed commands include the “OOBE\BYPASSNRO” workaround that Microsoft announced it was removing earlier this year, plus a “start ms-cxh:localonly” workaround that had been documented more recently. In current Windows releases, users can open a command prompt window during setup with Shift+F10 and input either of those commands to remove both the Microsoft account requirement and the Internet connection requirement.

Windows 11 Pro currently includes another workaround, where you can indicate that you plan to join your computer to a corporate domain and use that to create a local account. We don’t know whether this mechanism has also been removed from the new Windows build.

It’s unclear what “critical setup screens” Microsoft is referring to; when using the workarounds to create a local account, the Windows setup assistant still shows you all the screens you need for creating an account and a password, plus toggling a few basic privacy settings. Signing in with a Microsoft account does add multiple screens to this process though—these screens will attempt to sell you Microsoft 365 and Xbox Game Pass subscriptions, and to opt you into features like the data-scraping Windows Recall on PCs that support it. I would not describe any of these as “critical” from a user’s perspective, but my priorities are not Microsoft’s priorities.

Microsoft removes even more Microsoft account workarounds from Windows 11 build Read More »

dead-celebrities-are-apparently-fair-game-for-sora-2-video-manipulation

Dead celebrities are apparently fair game for Sora 2 video manipulation

But deceased public figures obviously can’t consent to Sora 2’s cameo feature or exercise that kind of “end-to-end” control of their own likeness. And OpenAI seems OK with that. “We don’t have a comment to add, but we do allow the generation of historical figures,” an OpenAI spokesperson recently told PCMag.

The countdown to lawsuits begins

The use of digital re-creations of dead celebrities isn’t exactly a new issue—back in the ’90s, we were collectively wrestling with John Lennon chatting to Forrest Gump and Fred Astaire dancing with a Dirt Devil vacuum. Back then, though, that kind of footage required painstaking digital editing and technology only easily accessible to major video production houses. Now, more convincing footage of deceased public figures can be generated by any Sora 2 user in minutes for just a few bucks.

In the US, the right of publicity for deceased public figures is governed by various laws in at least 24 states. California’s statute, which dates back to 1985, bars unauthorized post-mortem use of a public figure’s likeness “for purposes of advertising or selling, or soliciting purchases of products, merchandise, goods, or services.” But a 2001 California Supreme Court ruling explicitly allows those likenesses to be used for “transformative” purposes under the First Amendment.

The New York version of the law, signed in 2022, contains specific language barring the unauthorized use of a “digital replica” that is “so realistic that a reasonable observer would believe it is a performance by the individual being portrayed and no other individual” and in a manner “likely to deceive the public into thinking it was authorized by the person or persons.” But video makers can get around this prohibition with a “conspicuous disclaimer” explicitly noting that the use is unauthorized.

Dead celebrities are apparently fair game for Sora 2 video manipulation Read More »

f1-in-singapore:-“trophy-for-the-hero-of-the-race”

F1 in Singapore: “Trophy for the hero of the race”

The scandal became public the following year when Piquet was dropped halfway through the season, and he owned up. In the fallout, Briatore was issued a lifetime ban from the sport, with a five-year ban for the team’s engineering boss, Pat Symonds. Those were later overturned, and Symonds went on to serve as F1’s CTO before recently becoming an advisor to the nascent Cadillac Team.

Even without possible RF interference or race-fixing, past Singaporean races were often interrupted by the safety car. The streets might be wider than Monaco’s, but the walls are just as solid, and overtaking is almost as hard. And Monaco doesn’t take place with nighttime temperatures above 86°F (30°C) and heavy humidity. Those are the kinds of conditions that cause people to make mistakes.

The McLaren F1 Team celebrates its Constructors' World Championship title on the podium at the Singapore Grand Prix on October 5, 2025, the first time McLaren has won back-to-back WCC titles since the early 1990s. Credit: Robert Szaniszlo/NurPhoto via Getty Images

But in 2023, a change was made to the layout, the fourth since 2008. The removal of a chicane lengthened a straight but also removed a hotspot for crashes. Since the alteration, the Singapore Grand Prix has run caution-free.

What about the actual race?

Last time, I cautioned McLaren fans not to worry about a possibly resurgent Red Bull. Monza and Baku are outliers, tracks that require low downforce and low drag. Well, Singapore benefits from downforce, and the recent upgrades to the Red Bull have, in Max Verstappen’s hands at least, made it a competitor again.

The McLarens of Oscar Piastri (leading the drivers’ championship) and Lando Norris (just behind Piastri in second place) are still fast, but they no longer have an advantage of several tenths of a second over the rest of the field. They started the race in third and fifth places, respectively. Ahead of Piastri on the grid, Verstappen would start the race on soft tires; everyone else around him was on the longer-lasting mediums.

F1 in Singapore: “Trophy for the hero of the race” Read More »

pentagon-contract-figures-show-ula’s-vulcan-rocket-is-getting-more-expensive

Pentagon contract figures show ULA’s Vulcan rocket is getting more expensive

A SpaceX Falcon Heavy rocket with NASA’s Psyche spacecraft launches from NASA’s Kennedy Space Center in Florida on October 13, 2023. Credit: Chandan Khanna/AFP via Getty Images

The launch orders announced Friday comprise the second batch of NSSL Phase 3 missions the Space Force has awarded to SpaceX and ULA.

It’s important to remember that these prices aren’t what ULA or SpaceX would charge a commercial satellite customer. The US government pays a premium for access to space. The Space Force, the National Reconnaissance Office, and NASA don’t insure their launches like a commercial customer would do. Instead, government agencies have more insight into their launch contractors, including inspections, flight data reviews, risk assessments, and security checks. Government missions also typically get priority on ULA and SpaceX’s launch schedules. All of this adds up to more money.

A heavy burden

Four of the five launches awarded to SpaceX Friday will use the company’s larger Falcon Heavy rocket, according to Lt. Col. Kristina Stewart at Space Systems Command. One will fly on SpaceX’s workhorse Falcon 9. This is the first time a majority of the Space Force’s annual launch orders has required the lift capability of a Falcon Heavy, with three Falcon 9 booster cores combining to heave larger payloads into space.

All versions of ULA’s Vulcan rocket use a single core booster, with varying numbers of strap-on solid-fueled rocket motors to provide extra thrust off the launch pad.

Here’s a breakdown of the seven new missions assigned to SpaceX and ULA:

USSF-149: Classified payload on a SpaceX Falcon 9 from Florida

USSF-63: Classified payload on a SpaceX Falcon Heavy from Florida

USSF-155: Classified payload on a SpaceX Falcon Heavy from Florida

USSF-205: WGS-12 communications satellite on a SpaceX Falcon Heavy from Florida

NROL-86: Classified payload on a SpaceX Falcon Heavy from Florida

USSF-88: GPS IIIF-4 navigation satellite on a ULA Vulcan VC2S (two solid rocket boosters) from Florida

NROL-88: Classified payload on a ULA Vulcan VC4S (four solid rocket boosters) from Florida

Pentagon contract figures show ULA’s Vulcan rocket is getting more expensive Read More »