By Paul Patrick

New pathway engineered into plants lets them suck up more CO₂

And, well, it worked remarkably well. The plants carrying all the genes for the McG cycle weighed two to three times as much as control plants that only had some of the genes. They had more leaves, the leaves themselves were larger, and the plants produced more seeds. In a variety of growing conditions, the plants with an intact McG cycle incorporated more carbon, and they did so without increasing their water uptake.

Having a two-carbon output also worked as expected. By feeding the plants radioactive bicarbonate, the researchers were able to trace the carbon showing up in the expected molecules. And imaging confirmed that the plants were making so many lipids that their cells formed internal pockets containing nothing but fatty materials. Triglyceride levels increased by factors of 100 or more.

So, by a variety of measures, the plants actually did better with an extra pathway for fixing carbon. There are a number of cautions, though. For starters, it’s not clear whether what we’re learning using a small weed will also apply to larger plants or crops, or really anything much beyond Arabidopsis at the moment. It could be that having excess globs of fat floating around the cell has consequences for something like a tree. Plants grown in a lab also tend to be provided with nutrient-rich soil, and it’s not clear whether all of this would apply to a range of real-world conditions.

Finally, we can’t say whether all the excess carbon these plants are sucking in from the atmosphere would end up being sequestered in any useful sense. It could be that all the fat would just get oxidized as soon as the plant dies. That said, there are a lot of approaches to making biofuel that rely on modifying the fats found in plants or algae. It’s possible that this could eventually help make biofuels efficient enough that they actually have a net positive effect on the climate.

Regardless of practical impacts, however, it’s pretty amazing that we’ve now reached the point where we can fundamentally rewire a bit of metabolism that has been in operation for billions of years without completely messing up plants.

Science, 2025. DOI: 10.1126/science.adp3528

Small, affordable, efficient: A lot to like about the 2026 Nissan Leaf


Smaller on the outside, bigger on the inside, and it goes farther on a single charge.

A Nissan Leaf in San Diego's Gaslamp District.

The color is called Seabreeze Blue Pearl, and isn’t it great it’s not silver or grey? Credit: Nissan

SAN DIEGO—The original Nissan Leaf was a car with a mission. Long before Elon Musk set his sights on Tesla selling vast numbers of electric vehicles to the masses, then-Nissan CEO Carlos Ghosn wanted Nissan to shift half a million Leafs a year in the early 2010s. That didn’t quite come to pass, but by 2020, it had sold its 500,000th EV, which went from its factory in Sunderland, England, to a customer in Norway.

Pioneering though they were, both first- and second-generation Leafs were compromised. They were adapted from existing internal combustion engine platforms, with the electric powertrains shoehorned inside. The cars’ real handicap was a lack of liquid cooling for the battery packs. Like an older Porsche 911, the Leaf was air-cooled, albeit with none of the collector value. That’s all changed for generation three.

The new Leaf is built on a dedicated EV platform shared with Nissan’s alliance partners Renault and Mitsubishi, one we have previously seen used to good effect in the Nissan Ariya. The benefits of using a platform purpose-designed for electric propulsion are obvious from the space efficiency. The new car is 3 inches (75 mm) shorter from the outside but offers nearly 9 inches (221 mm) more rear leg room (yes, really), making it a much more suitable place to put adults.

Is it a sedan? Is it a crossover? Credit: Nissan

Although the new Leaf is 0.8 inches (20 mm) wider, it sits a few millimeters lower and has a lower drag coefficient (Cd 0.26), so the overall effect is a more efficient shape. The nose bears a family resemblance to the Ariya, and the body style is sort of a crossover, sort of a fastback sedan, depending on your frame of reference.

Here and there, you’ll notice iconography that calls out the automaker’s name: two vertical stripes (ni in Japanese), then three horizontal ones (san in Japanese). I’m told that if you look, there are some ginkgo leaves as Easter eggs hidden in the design, but I did not find them during our hours with the car.

For now, there’s one powertrain option: a 214 hp (160 kW), 262 lb-ft (355 Nm) motor (packaged together with its inverter and reducer), powered by a 75 kWh (net) lithium-ion battery pack. The battery pack is integrated into the car’s thermal management system, which also loops in the chiller, the motor, and the HVAC system. It can fast-charge at up to 150 kW via the NACS port built into its right side (or using a CCS1 adapter here) and should charge from 10–80 percent in 35 minutes. On the driver’s side is a J1772 port for AC charging that can also work bidirectionally to send up to 1.5 kW of AC power to an external device via an adapter.
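Those numbers hang together, for what it’s worth: going from 10 to 80 percent of a 75 kWh pack means delivering about 52.5 kWh, and doing that in 35 minutes works out to an average rate of roughly 90 kW—comfortably below the 150 kW peak, as you’d expect given how charging tapers off at higher states of charge. A quick sanity check, using only Nissan’s figures above:

```python
# Sanity-check Nissan's quoted fast-charging figures.
pack_kwh = 75.0           # net battery capacity, from the spec above
soc_window = 0.80 - 0.10  # the quoted 10-80 percent window
minutes = 35.0            # quoted fast-charge time

energy_kwh = pack_kwh * soc_window      # ~52.5 kWh delivered
avg_kw = energy_kwh / (minutes / 60.0)  # ~90 kW average rate
print(f"{energy_kwh:.1f} kWh in {minutes:.0f} min = {avg_kw:.0f} kW average")
```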

Nissan said it kept the J1772 port because it expects to sell the new Leaf to a lot of legacy customers who already have their own home charger, and it wanted to minimize the number of adapters necessary.

Let’s talk trim levels

How far it goes on a single charge depends on which trim level you’re in. Nissan brought some preproduction Leaf Platinum+ models to the first drive. These are very highly equipped, with an electrochromic dimming roof, the LED head- and taillights you see in the images, a couple of AC outlets inside the car (with the ability to power up to 3.4 kW across two outlets), and a better sound system. But they also come on 19-inch alloy wheels, and as we all know, bigger wheels mean smaller range. Indeed, the Leaf Platinum+ has a range of 259 miles (417 km) on a single charge.

The $34,230 SV+ loses the panoramic roof and the interior V2L outlets, and you’ll have to manually open and close the tailgate at the back. And the alloy wheels are an inch smaller, which increases the range to 288 miles (464 km).

But it keeps the heated front seats and the twin 14.3-inch displays (one for your instruments, one for infotainment) with Google built in. For the Platinum+ and SV+, that means onboard Google Maps with a route planner that will take into account your state of charge and which can precondition the battery if it knows your destination is a fast charger.

19-inch Nissan Leaf wheel

Big wheels have their drawbacks. Credit: Nissan

Nissan is only including the Google connected services for the first year, though—after that, owners will have to pay a monthly fee, although Nissan wasn’t able to tell us how much that is. Conveniently, both wireless Android Auto and Apple CarPlay are included and will continue to work after the year’s trial. And you can manually precondition the battery for charging, but automatic preconditioning via the infotainment system will not work without an active subscription.

The SV+ and Platinum+ can also be optioned with a heat pump ($300).

But the $29,990 S+ cannot. And it lacks the twin displays of the car you see in the images, which means no automatic battery preconditioning, although like the more expensive trims it does still have wireless Apple CarPlay and Android Auto. You also get 18-inch steel wheels with aero hubcaps, and a range of 303 miles (487 km) on a single charge. See what I mean about wheel size and range?

How does it drive?

A Nissan Leaf

Turning over a new leaf. Credit: Nissan

I’d very much like to spend some time in an S+ and an SV+, if only to see what difference a larger tire sidewall makes to the ride comfort. On 19-inch wheels, the ride was firm and translated bumps and divots through the suspension and into the cabin. There wasn’t much body roll, but your progress will be limited by the grip available to the low rolling-resistance tires—push too hard and the result is plenty of understeer.

But this is not a “push too hard” kind of EV. With just 214 hp, it accelerates quickly enough to get out of its own way, but it’s telling that Nissan did not share a 0–60 mph time during the briefing. (If I had to guess, I’d say between 5 and 6 seconds, which used to be considered very rapid.)

It has four drive modes—Eco, Normal, Sport, and Personal—with three different throttle maps and two steering weights to choose from. And there are now four levels of lift-off regenerative braking, which you toggle on with the left steering wheel paddle and off with the right paddle. You can’t turn regen completely off, so like General Motors’ family of EVs, the Leaf will not really coast and loses a few mph even on downhill stretches, as it converts some kinetic energy to electrical energy.

There’s also an e-Step button on the dash, which turns on maximum regen braking and may add some friction braking to the mix. Unlike the paddle setting, this one should remain on the next time you start the car. But neither of the full regen settings is able to bring the car to a complete stop—we were told that the feature is viewed with suspicion in some markets, including Japan, and, as with pop-out door handles, China appears to be in the process of banning one-pedal driving entirely.

There are plenty of real buttons and switches in here. Credit: Nissan

Both e-Step and max-regen work very well in traffic or on a twisty road, where they simulate engine braking. But given the choice, I would use the paddles to control regen braking. That’s because, as in the Mercedes EQ family of EVs, in this mode the brake pedal moves toward the firewall as the car slows. The engineers’ excuse for this is that the pedal moves by the same distance it would have moved had the driver used it to slow the car by the amount it has just slowed. But my rebuttal is that the brake pedal should always be where I expect to find it in an emergency, and if it’s an inch farther away, that’s not cool.

That’s really a minor gripe, though; no one says you have to push the e-Step switch on the dash. Slightly more annoying—but only slightly—is the wind noise from the sideview mirrors, which is noticeable even at 45 mph (72 km/h), although easily drowned out if you’re listening to something on the audio system.

For a daily driver, the third-generation Leaf is rather compelling, especially the S+, although the lack of heated front seats in that model might be a deal-breaker, considering how important seat heaters are to EV efficiency in winter. (It’s more efficient to heat the driver than to warm all the air in the car.)

The SV+ is more likely to be the sweet spot—this trim level can have the Seabreeze paint you see here or a white pearl, which are alternatives to the four shades available to the S+. The Hyundai Kona EV and Kia Niro EV are probably the Leaf’s two closest rivals, both of which are compelling cars. And the forthcoming Kia EV3 will probably also be cross-shopped. All of which is good news if you’re looking for a smaller, affordable electric car.

Jonathan is the Automotive Editor at Ars Technica. He has a BSc and PhD in Pharmacology. In 2014 he decided to indulge his lifelong passion for the car by leaving the National Human Genome Research Institute and launching Ars Technica’s automotive coverage. He lives in Washington, DC.

Ex-DVD company employee gets 4 years for leaking Spider-Man Blu-ray

Hale, a 38-year-old with prior felony convictions for armed robbery, risked a potential sentence of 15 years for these crimes, but his plea deal capped his sentence at a maximum of five years. At the time, the DOJ credited him for taking “responsibility,” arguing that he deserved the maximum reduction partly because the total “infringement amount” was likely no more than $40,000—not the “tens of millions” the DOJ claimed in today’s release.

Ultimately, Hale pleaded guilty to criminal copyright infringement, while agreeing to pay restitution (the exact amount is not clarified in the release) and to return to his former employer the “approximately 1,160 stolen DVDs and Blu-rays” that the cops seized. Hale also pleaded guilty to “being a convicted felon in possession of a firearm,” the DOJ noted, after cops uncovered that he “unlawfully possessed a pistol that was loaded with one live round in the chamber and 13 rounds in the magazine.”

Combining the DVD theft and firearm charges, the US District Court in Tennessee sentenced Hale to 57 months, just short of the five-year maximum sentence he could have faced.

In the DOJ’s press release, acting Assistant Attorney General Matthew R. Galeotti claimed the win, while warning that “today’s sentencing signals our commitment to protecting American innovation from pirates that would exploit others’ work for a quick profit, which, in this case, cost one copyright owner tens of millions of dollars.”

OpenAI and Microsoft sign preliminary deal to revise partnership terms

On Thursday, OpenAI and Microsoft announced they have signed a non-binding agreement to revise their partnership, marking the latest development in a relationship that has grown increasingly complex as both companies compete for customers in the AI market and seek new partnerships for growing infrastructure needs.

“Microsoft and OpenAI have signed a non-binding memorandum of understanding (MOU) for the next phase of our partnership,” the companies wrote in a joint statement. “We are actively working to finalize contractual terms in a definitive agreement. Together, we remain focused on delivering the best AI tools for everyone, grounded in our shared commitment to safety.”

The announcement comes as OpenAI seeks to restructure from a nonprofit to a for-profit entity, a transition that requires Microsoft’s approval, as the company is OpenAI’s largest investor, with more than $13 billion committed since 2019.

The partnership has shown increasing strain as OpenAI has grown from a research lab into a company valued at $500 billion. Both companies now compete for customers, and OpenAI seeks more compute capacity than Microsoft can provide. The relationship has also faced complications over contract terms, including provisions that would limit Microsoft’s access to OpenAI technology once the company reaches so-called AGI (artificial general intelligence)—a nebulous milestone both companies now economically define as AI systems capable of generating at least $100 billion in profit.

In May, OpenAI abandoned its original plan to fully convert to a for-profit company after pressure from former employees, regulators, and critics, including Elon Musk. Musk has sued to block the conversion, arguing it betrays OpenAI’s founding mission as a nonprofit dedicated to benefiting humanity.

Gmail gets a dedicated place to track all your purchases

An update to Gmail begins rolling out soon, readying Google’s premier email app for all your upcoming holiday purchases. Gmail has been surfacing shipment tracking for some time now, but Google will now add a separate view just for remembering the things you have ordered. And if you want to buy more things, there’s a new interface for that, too. Yay, capitalism.

Gmail is quite good at recognizing purchase information in the form of receipts and shipping notifications. Currently, the app (and web interface) lists upcoming shipments at the top of the inbox. It will continue to do that when you have a delivery within the next 24 hours, but the new Purchases tab brings it all together in one glanceable view.

Purchases will be available in the navigation list alongside all the other stock Gmail labels. When selected, Gmail will filter your messages to only show receipts, order status, and shipping details. This makes it easier to peruse your recent orders and search within this subset of emails. This could be especially handy in this day and age of murky international shipping timelines.

The Promotions tab that has existed for years is also getting a makeover as we head into the holiday season. This tab collects all emails that Google recognizes as deals, marketing offers, and other bulk promos. This keeps them out of your primary inbox, which is appreciated, but venturing into the Promotions tab when the need arises can be overwhelming.

After Ukrainian testing, drone-detection radar doubles range with simple software patch

As part of its unprovoked invasion, Russia has been firing massed waves of drones and missiles into Ukraine for years, though the tempo has been raised dramatically in recent months. Barrages of 700-plus drones now regularly attack Ukraine during overnight raids. Russia also appears to have upped the ante dramatically by sending at least 19 drones into Poland last night, some of which were shot down by NATO forces.

Many of these drones are Shahed/Geran types built with technology imported from Iran, and they have recently gained the ability to fly higher, making shootdowns more difficult. Given the low cost of the drones (estimates suggest they cost a few tens of thousands of dollars apiece, and many are simply decoys without warheads), hitting them with multimillion-dollar missiles from traditional air-defense batteries makes little sense and would quickly exhaust missile stocks.

So Ukraine has adopted widespread electronic warfare to disrupt control systems and navigation. Drones not forced off their path are fought with mobile anti-aircraft guns, aircraft, and interceptor drones, many launched from mobile fire teams patrolling Ukraine during the night.

For teams like this, early detection of the attack drones is crucial—even seconds matter when it comes to relocating a vehicle and launching a counter drone or aiming a gun. Take too long to get into position and the attack drone overhead has already passed by on the way to its target.

Which brings us to Robin Radar Systems, a Dutch company that initially used radar to detect birds. (Indeed, the name “Robin” is an acronym derived from “Radar OBservation of Bird INtensity.”) This radar technology, good at detecting small flying objects and differentiating them from fauna, has proven useful in Ukraine’s drone war. Last year, the Dutch Ministry of Defence bought 51 mobile Robin Radar IRIS units that could be mounted on vehicles and used by drone defense teams.

HBO Max is “way underpriced,” Warner Bros. Discovery CEO says

Consumers in America would pay twice as much 10 years ago for content. People were spending, on average, $55 for content 10 years ago, and the quality of the content, the amount of content that we’re getting, the spend is 10 or 12 fold and they’re paying dramatically less. I think we want a good deal for consumers, but I think over time, there’s real opportunity, particularly for us, in that quality area, to raise price.

A question of quality

Zaslav is arguing that the quality of the shows and movies on HBO Max warrants an eventual price bump. But, in general, viewers find streaming services are getting less impressive. A Q4 2024 report from TiVo found that the percentage of people who think the streaming services that they use have “moderate to very good quality” has been declining since Q4 2021.

Bar graph from TiVo’s Q4 2024 Video Trends report. Credit: TiVo

Research also points to people being at their limit when it comes to TV spending. Hub Entertainment Research’s latest “Monetizing Video” study, released last month, found that for consumers, low prices “by far still matters most to the value of a TV service.”

Meanwhile, niche streaming services have been gaining in popularity as streaming subscribers grow bored with the libraries of mainstream streaming platforms and/or feel like they’ve already seen the best of what those services have to offer. Antenna, a research firm focused on consumer subscription services, reported this month that specialty streaming service subscriptions increased 12 percent year over year in 2025 thus far and grew 22 percent in the first half of 2024.

Zaslav would likely claim that HBO Max is an outlier when it comes to streaming library dissatisfaction. Although WBD’s streaming business (which includes Discovery+) turned a $293 million profit and grew subscriber-related revenue (which includes ad revenues) in its most recent earnings report, investors would likely be unhappy if the company rested on its financial laurels. WBD has one of the most profitable streaming businesses, but it still trails far behind Netflix, which posted an operating income of $3.8 billion in its most recent earnings.

Still, increasing prices is rarely welcomed by customers. With many other options for streaming these days (including free ones), HBO Max will have to do more to convince people that it is worth the extra money than merely making the claim.

Flush door handles are the car industry’s latest safety problem

China to the rescue?

In fact, the styling feature might be on borrowed time. It seems that Chinese authorities have been concerned about retractable door handles for some time now and are reportedly close to banning them from 2027. Flush-fit door handles fail far more often during side impacts than regular handles, delaying egress or rescue after a crash. During heavy rain, flush-fit door handles have short-circuited, trapping people in their cars. Chinese consumers have even reported an increase in finger injuries as fingers get trapped or pinched.

That’s plenty of safety risk, but what about the benefit to vehicle efficiency? As it turns out, it doesn’t actually help that much. Adding flush door handles cuts the drag coefficient (Cd) by around 0.01. Turning that into a drag figure also requires the car’s frontal area, but in practice it equates to perhaps a little more than a mile of EPA range, perhaps two under Europe’s Worldwide Harmonised Light vehicles Test Procedure.
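To put a number on that, here’s the back-of-the-envelope version of the claim. The figures in the sketch below are assumptions, not from the article: a typical-EV frontal area of about 2.3 m², highway consumption of around 290 Wh/mile, and a 75 kWh pack. Even at a steady 65 mph—close to a best case for aero savings—a 0.01 Cd reduction is worth only a few hundred watts:

```python
# Rough effect of a 0.01 drag-coefficient (Cd) reduction.
# All figures here are assumed typical-EV placeholders,
# not numbers from the article.
RHO = 1.2            # air density, kg/m^3
AREA = 2.3           # frontal area, m^2 (assumed)
DELTA_CD = 0.01      # Cd reduction credited to flush handles
V = 29.0             # ~65 mph, in m/s
PACK_KWH = 75.0      # assumed usable battery capacity
WH_PER_MILE = 290.0  # assumed highway consumption

# Aerodynamic drag power: P = 0.5 * rho * Cd * A * v^3
watts_saved = 0.5 * RHO * DELTA_CD * AREA * V**3
print(f"Power saved at 65 mph: {watts_saved:.0f} W")  # ~340 W

wh_per_mile_saved = watts_saved / 65.0               # ~5 Wh/mile
old_range = PACK_KWH * 1000 / WH_PER_MILE            # ~259 miles
new_range = PACK_KWH * 1000 / (WH_PER_MILE - wh_per_mile_saved)
print(f"Extra range at a steady 65 mph: {new_range - old_range:.1f} miles")
```

Even that steady-highway figure of a few extra miles is an upper bound; rated-range test cycles spend most of their time well below 65 mph, and since drag power scales with the cube of speed, the handles’ contribution shrinks toward the mile or two quoted above.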

If automakers were that serious about drag reduction, we’d see many more EVs riding on smaller wheels. The rotation of the wheels and tires is one of the greatest contributors to drag, yet the stylists’ love of huge wheels means most EVs you’ll find on the front lot of a dealership will struggle to match their official efficiency numbers (not to mention suffering from a worse ride).

China’s importance to the global EV market means that, if it follows through on this ban, we can expect to see many fewer cars arrive with flush door handles in the future.

Reddit bug caused lesbian subreddit to be labeled as a place for “straight” women

Explaining further to Ars, Reddit spokesperson Tim Rathschmidt said:

There was a small bug in a test we ran that mistakenly caused the English-to-English translation(s) you saw. That bug has been resolved. Unsurprisingly, English-to-English translations are not part of our strategy, as they aren’t necessary. English-to-English translations were not a desired or expected outcome of the test.

Reddit pulled the test it was running, but its machine learning-powered translations are still functioning, Rathschmidt said. The company plans to fix the bug and run its unspecified “test” again.

Reddit’s explanation differs from user theories floating around beforehand, which were mainly that Reddit was rewriting user-created summaries with generative AI, possibly to boost SEO. Some may still be perturbed by the problem persisting for weeks without explanation and the apparent lack of manual checks for the translation service. However, Redditors can now take comfort in knowing that Reddit is not currently using generative AI to alter user-generated content without notice.

Paige_Railstone, however, remains frustrated and wants to tell Reddit admins, “STOP. Hand off.” The translation bug, they noted, led to people posting on a subreddit for parents with autism that their child might be autistic, “and how terrible that would be for them,” Paige_Railstone recalled.

“These are the kind of unintentionally insulting posts that drive autistics into leaving a community, and it increases the workload of us moderators,” they said.

Paige_Railstone also sees the incident as a reason for moderators to be more cautious.

“This never used to be a concern, but this translation service was rolled out without any notification that I’m aware of, and no option to disable it within the mods’ control. That has the potential to cause problems, as we’ve seen over the past two weeks,” they said.

Disclosure: Advance Publications, which owns Ars Technica parent Condé Nast, is the largest shareholder in Reddit.

Software packages with more than 2 billion weekly downloads hit in supply-chain attack

Hackers planted malicious code in open source software packages with more than 2 billion weekly downloads in what is likely to be the world’s biggest supply-chain attack ever.

The attack, which compromised nearly two dozen packages hosted on the npm repository, came to public notice on Monday in social media posts. Around the same time, Josh Junon, a maintainer or co-maintainer of the affected packages, said he had been “pwned” after falling for an email that claimed his account on the platform would be closed unless he logged in to a site and updated his two-factor authentication credentials.

Defeating 2FA the easy way

“Sorry everyone, I should have paid more attention,” Junon, who uses the moniker Qix, wrote. “Not like me; have had a stressful week. Will work to get this cleaned up.”

The unknown attackers behind the account compromise wasted no time capitalizing on it. Within an hour, dozens of open source packages Junon oversees had received updates that added malicious code for transferring cryptocurrency payments to attacker-controlled wallets. Weighing in at more than 280 lines, the added code worked by monitoring infected systems for cryptocurrency transactions and changing the addresses of wallets receiving payments to ones controlled by the attacker.

The packages that were compromised, which at last count numbered 20, included some of the most foundational code driving the JavaScript ecosystem. They are used outright and also have thousands of dependents, meaning other npm packages that don’t work unless they are also installed. (npm is the official package registry for JavaScript code.)
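If you want to check a project of your own against an incident like this, the lockfile is the place to look, since it records every transitively installed version. Below is a minimal sketch; the package names and versions in it are hypothetical placeholders rather than the actual compromised list, and it assumes the npm v7+ lockfile format, which records every installed package under a top-level "packages" key:

```python
# Minimal sketch: scan an npm lockfile for known-bad versions.
# The entries in COMPROMISED are hypothetical placeholders, not
# the real list from this incident.
import json

COMPROMISED = {
    ("example-package", "1.2.3"),  # hypothetical
    ("another-package", "4.5.6"),  # hypothetical
}

with open("package-lock.json") as f:
    lock = json.load(f)

# npm v7+ lockfiles key every installed package by its path,
# e.g. "node_modules/foo" or "node_modules/a/node_modules/b".
for path, meta in lock.get("packages", {}).items():
    name = path.rsplit("node_modules/", 1)[-1]
    if (name, meta.get("version")) in COMPROMISED:
        print(f"WARNING: {name}@{meta['version']} found at {path}")
```

Once advisories are published, `npm audit` covers the same ground automatically; a manual scan like this is mainly useful in the window before the advisory databases catch up.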

“The overlap with such high-profile projects significantly increases the blast radius of this incident,” researchers from security firm Socket said. “By compromising Qix, the attackers gained the ability to push malicious versions of packages that are indirectly depended on by countless applications, libraries, and frameworks.”

The researchers added: “Given the scope and the selection of packages impacted, this appears to be a targeted attack designed to maximize reach across the ecosystem.”

The email message Junon fell for came from an address at support.npmjs.help, a domain created three days ago to mimic npm’s official npmjs.com. It said Junon’s account would be closed unless he updated information related to his 2FA—which requires users to present a physical security key or supply a one-time passcode from an authenticator app in addition to a password when logging in.

OpenAI #14: OpenAI Descends Into Paranoia and Bad Faith Lobbying

I am a little late to the party on several key developments at OpenAI:

  1. OpenAI’s Chief Global Affairs Officer Chris Lehane was central to the creation of the new $100 million PAC where they will partner with a16z to oppose any and all attempts by states to regulate AI in any way for any reason.

  2. Effectively as part of that effort, OpenAI sent a deeply bad faith letter to Governor Newsom opposing SB 53.

  3. OpenAI seemingly has embraced descending fully into paranoia around various nonprofit organizations and Effective Altruism in general, or at least is engaging in rhetoric and legal action to that effect, adopting the style of Obvious Nonsense rhetoric previously mostly used by a16z.

This is deeply troubling news. It is substantially worse than I was expecting of them. Which is presumably my mistake.

This post covers those events, along with further developments around two recent tragic suicides where ChatGPT was plausibly at fault for what went down, including harsh words from multiple attorneys general who can veto OpenAI’s conversion to a for-profit company.

In OpenAI #11: America Action Plan, I documented that OpenAI:

  1. Submitted an American AI Action Plan proposal that went full jingoist, framing AI as a race against the CCP in which we must prevail, with intentionally toxic vibes throughout.

  2. Requested immunity from all AI regulations.

  3. Attempted to ban DeepSeek using bad faith arguments.

  4. Demanded absolute fair use, for free, for all AI training, or else.

  5. Also included some reasonable technocratic proposals, such as a National Transmission Highway Act and AI Opportunity Zones, along with some I think are worse on the merits, such as their ‘national AI readiness strategy.’

Also worth remembering:

  1. This article claims both OpenAI and Microsoft were central in lobbying to take any meaningful requirements for foundation models out of the EU’s AI Act. If I was a board member, I would see this as incompatible with the OpenAI charter. This was then fleshed out further in the OpenAI Files and in this article from Corporate Europe Observatory.

  2. OpenAI lobbied against SB 1047, both reasonably and unreasonably.

  3. OpenAI’s CEO Sam Altman has over time used increasingly jingoistic language throughout his talks, has used steadily less talk about

OpenAI’s Chief Global Affairs Officer, Christopher Lehane, sent a letter to Governor Newsom urging him to gut SB 53 (or see Miles’s in-line responses included here), which is already very much a compromise bill that got compromised further by raising its ‘large AI companies’ threshold and eliminating the third-party audit requirement. That already eliminated almost all of what little burden could be claimed was being imposed by the bill.

OpenAI’s previous lobbying efforts were in bad faith. This is substantially worse.

Here is the key ask from OpenAI, bold in original:

In order to make California a leader in global, national and state-level AI policy, we encourage the state to consider frontier model developers compliant with its state requirements when they sign onto a parallel regulatory framework like the CoP or enter into a safety-oriented agreement with a relevant US federal government agency.

As in, California should abdicate its responsibilities entirely, and treat giving lip service to the EU’s Code of Practice (not even actually complying with it!) as sufficient to satisfy California on all fronts. It also says that if a company makes any voluntary agreement with the Federal Government on anything safety related, then that too should satisfy all requirements.

This is very close to saying California should have no AI safety regulations at all.

The rhetoric behind this request is what you would expect. You’ve got:

  1. The jingoism.

  2. The talk about ‘innovation.’

  3. The Obvious Nonsense threats about this slowing down progress or causing people to withdraw from California.

  4. The talk about Federal leadership on regulation without any talk of what that would look like while the only Federal proposal that ever got traction was ‘ban the states from acting and still don’t do anything on the Federal level.’

  5. The talk about burden on ‘small developers’ when to be covered by SB 53 at all you now have to spend a full $500 million in training compute, and the only substantive expense (the outside audits) is entirely gone.

  6. The false claim that California lacks the state capacity to handle this, and the false assurance that the EU and Federal Government totally have what they need.

  7. The talk of a ‘California approach’ which here means ‘do nothing.’

They even try to equate SB 53 to CEQA, which is a non-sequitur.

They equate OpenAI’s ‘commitment to work with’ the US federal government in ways that likely amount to running some bespoke tests focused on national security concerns as equivalent to being under a comprehensive regulatory regime, and as a substitute for SB 53 including its transparency requirements.

They emphasize that they are a non-profit, while trying to transform themselves into a for-profit and expropriate most of the non-profit’s wealth for private gain.

Plus we have again the important misstatement of OpenAI’s mission.

OpenAI’s actual mission: Ensure that AGI benefits all of humanity.

OpenAI says its mission is: Building AI that benefits all of humanity.

That is very importantly not the same thing. The best way to ensure AGI benefits all of humanity could importantly be to not build it.

Also as you would expect, the letter does not, anywhere, explain why even fully complying with the Code of Practice, let alone any future unspecified voluntary safety-oriented agreement, would satisfy the policy goals behind SB 53.

Because very obviously, if you read the Code of Practice and SB 53, they wouldn’t.

Miles Brundage responds to the letter in-line (which I recommend if you want to go into the details at that level) and also offers this Twitter thread:

Miles Brundage (September 1): TIL OpenAI sent a letter to Governor Newsom filled with misleading garbage about SB 53 and AI policy generally.

Unsurprising if you follow this stuff, but worth noting for those who work there and don’t know what’s being done in their name.

I don’t think it’s worth dignifying it with a line-by-line response but I’ll just say that it was clearly not written by people who know what they’re talking about (e.g., what’s in the Code of Practice + what’s in SB 53).

It also boils my blood every time that team comes up with new and creative ways to misstate OpenAI’s mission.

Today it’s “the AI Act is so strong, you should just assume that we’re following everything else” [even though the AI Act has a bunch of issues].

Tomorrow it’s “the AI Act is being enforced too stringently — it needs to be relaxed in ways A, B, and C.”

  1. The context here is OpenAI trying to water down SB 53 (which is not that strict to begin with – e.g. initially third parties would verify companies’ safety claims in *2030*, and now there is just *no* such requirement)

  2. The letter treats the Code of Practice for the AI Act, on the one hand – imperfect but real regulation – and a voluntary agreement to do some tests sometimes with a friendly government agency, on the other – as if they’re the same. They’re not, and neither is SB 53…

  3. It’s very disingenuous to act as if OpenAI is super interested in harmonious US-EU integration + federal leadership over states when they have literally never laid out a set of coherent principles for US federal AI legislation.

  4. Vague implied threats to slow down shipping products or pull out of CA/the US etc. if SB 53 went through, as if it is super burdensome… that’s just nonsense. No one who knows anything about this stuff thinks any of that is even remotely plausible.

  5. The “California solution” is basically “pretend different things are the same,” which is funny because it’d take two braincells for OpenAI to articulate an actually-distinctively-Californian or actually-distinctively-American approach to AI policy. But there’s no such effort.

  6. For example, talk about how SB 53 is stronger on actual transparency (and how the Code of Practice has a “transparency” section that basically says “tell stuff to regulators/customers, and it’d sure be real nice if you sometimes published it”). Woulda been trivial. The fact that none of that comes up suggests the real strategy is “make number of bills go down.”

  7. OpenAI’s mission is to ensure that AGI benefits all of humanity. Seems like something you’d want to get right when you have court cases about mission creep.

We also have this essay response from Nomads Vagabonds. He is if anything even less kind than Miles. He reminds us that OpenAI through Greg Brockman has teamed up with a16z to dedicate $100 million to ensuring no regulation of AI, anywhere in any state, for any reason, in a PAC that was the brainchild of OpenAI vice president of global affairs Chris Lehane.

He also goes into detail about the various bad faith provisions.

These four things can be true at once.

  1. OpenAI has several competitors that strongly dislike OpenAI and Sam Altman, for a combination of reasons with varying amounts of merit.

  2. Elon Musk’s lawsuits against OpenAI are often without legal merit, although the objections to OpenAI’s conversion to for-profit were ruled by the judge to absolutely have merit, with the question mainly being if Musk had standing.

  3. There are many other complaints about OpenAI that have a lot of merit.

  4. AI might kill everyone and you might want to work to prevent this without having it out for OpenAI in particular or being funded by OpenAI’s competitors.

OpenAI seems, by Shugerman’s reporting, to have responded to this situation by becoming paranoid that there is some sort of vast conspiracy Out To Get Them, funded and motivated by commercial rivalry, as opposed to people who care about AI not killing everyone and also this Musk guy who is Big Mad.

Of course a lot of us, as the primary example, are going to take issue with OpenAI’s attempt to convert from a non-profit to a for-profit while engaging in one of the biggest thefts in human history by expropriating most of the nonprofit’s financial assets, worth hundreds of billions, for private gain. That opposition has very little to do with Elon Musk.

Emily Dreyfuss: Inside OpenAI, there’s a growing paranoia that some of its loudest critics are being funded by Elon Musk and other billionaire competitors. Now, they are going after these nonprofit groups, but their evidence of a vast conspiracy is often extremely thin.

Emily Shugerman (SF Standard): Nathan Calvin, who joined Encode in 2024, two years after graduating from Stanford Law School, was being subpoenaed by OpenAI. “I was just thinking, ‘Wow, they’re really doing this,’” he said. “‘This is really happening.’”

The subpoena was filed as part of the ongoing lawsuits between Elon Musk and OpenAI CEO Sam Altman, in which Encode had filed an amicus brief supporting some of Musk’s arguments. It asked for any documents relating to Musk’s involvement in the founding of Encode, as well as any communications between Musk, Encode, and Meta CEO Mark Zuckerberg, whom Musk reportedly tried to involve in his OpenAI takeover bid in February.

Calvin said the answer to these questions was easy: The requested documents didn’t exist.

In media interviews, representatives for an OpenAI-affiliated super PAC have described a “vast force” working to slow down AI progress and steal American jobs.

This has long been the Obvious Nonsense a16z line, but now OpenAI is joining them via being part of the ‘Leading the Future’ super PAC. If this was merely Brockman contributing it would be one thing, but no, it’s far beyond that:

According to the Wall Street Journal, the PAC is in part the brainchild of Chris Lehane, OpenAI’s vice president of global affairs.

Meanwhile, OpenAI is treating everyone who opposes their transition to a for-profit as if they have to be part of this kind of vast conspiracy.

Around the time Musk mounted his legal fight [against OpenAI’s conversion to a for-profit], advocacy groups began to voice their opposition to the transition plan, too. Earlier this year, groups like the San Francisco Foundation, Latino Prosperity, and Encode organized open letters to the California attorney general, demanding further questioning about OpenAI’s move to a for-profit. One group, the Coalition for AI Nonprofit Integrity (CANI), helped write a California bill introduced in March that would have blocked the transition. (The assemblymember who introduced the bill suddenly gutted it less than a month later, saying the issue required further study.)

In the ensuing months, OpenAI leadership seems to have decided that these groups and Musk were working in concert.

Catherine Bracy: Based on my interaction with the company, it seems they’re very paranoid about Elon Musk and his role in all of this, and it’s become clear to me that that’s driving their strategy.

No, these groups were not (as far as I or anyone else can tell) funded by or working in concert with Musk.

The suspicions that Meta was involved, including in Encode which is attempting to push forward SB 53, are not simply paranoid, they flat out don’t make any sense. Nor does the claim about Musk, either, given how he handles opposition:

Both LASST and Encode have spoken out against Musk and Meta — the entities OpenAI is accusing them of being aligned with — and advocated against their aims: Encode recently filed a complaint with the FTC about Musk’s AI company producing nonconsensual nude images; LASST has criticized the company for abandoning its structure as a public benefit corporation. Both say they have not taken money from Musk nor talked to him. “If anything, I’m more concerned about xAi from a safety perspective than OpenAI,” Whitmer said, referring to Musk’s AI product.

I’m more concerned about OpenAI because I think they matter far more than xAI, but pound for pound xAI is by far the bigger menace acting far less responsibly, and most safety organizations in this supposed conspiracy will tell you that if you ask them, and act accordingly when the questions come up.

Miles Brundage: First it was the EAs out to get them, now it’s Elon.

The reality is just that most people think we should be careful about AI

(Elon himself is ofc actually out to get them, but most people who sometimes disagree with OpenAI have nothing to do with Elon, including Encode, the org discussed at the beginning of the article. And ironically, many effective altruists are more worried about Elon than OAI now)

OpenAI’s paranoia started with CANI, and then extended to Encode, and then to LASST.

Nathan Calvin: ​They seem to have a hard time believing that we are an organization of people who just, like, actually care about this.

Emily Shugerman: Lehane, who joined the company last year, is perhaps best known for coining the term “vast right-wing conspiracy” to dismiss the allegations against Bill Clinton during the Monica Lewinsky scandal — a line that seems to have seeped into Leading the Future’s messaging, too.

In a statement to the Journal, representatives from the PAC decried a “vast force out there that’s looking to slow down AI deployment, prevent the American worker from benefiting from the U.S. leading in global innovation and job creation, and erect a patchwork of regulation.”

The hits keep coming as a16z-level paranoia about EA being a ‘vast conspiracy’ kicks into high gear, such as the idea that Dustin Moskovitz doesn’t care about AI safety and is only going after them because of his stake in Anthropic. Can you possibly be serious right now? Why do you think he invested in Anthropic?

Of particular interest to OpenAI is the fact that both Omidyar and Moskovitz are investors in Anthropic — an OpenAI competitor that claims to produce safer, more steerable AI technology.

“Groups backed by competitors often present themselves as disinterested public voices or ‘advocates,’ when in reality their funders hold direct equity stakes in competitors in their sector – in this case worth billions of dollars,” she said. “Regardless of all the rhetoric, their patrons will undoubtedly benefit if competitors are weakened.”

Never mind that Anthropic has not supported Moskovitz on AI regulation, and that the regulatory interventions funded by Moskovitz would consistently (aside from any role in trying to stop OpenAI’s for-profit conversion) be bad for Anthropic’s commercial outlook.

Open Philanthropy (funded by Dustin Moskovitz): Reasonable people can disagree about the best guardrails to set for emerging technologies, but right now we’re seeing an unusually brazen effort by some of the biggest companies in the world to buy their way out of any regulation they don’t like. They’re putting their potential profits ahead of U.S. national security and the interests of everyday people.

Companies do this sort of thing all the time. This case is still very brazen, and very obvious, and OpenAI has now jumped into a16z levels of paranoia and bad faith between the lawfare, the funding of the new PAC and their letter on SB 53.

Suing and attacking nonprofits engaging in advocacy is a new low. Compare that to the situation with Daniel Kokotajlo, where OpenAI, to its credit, backed down once confronted with its bad behavior rather than going on a legal offensive.

Daniel Kokotajlo: Having a big corporation come after you legally, even if they are just harassing you and not trying to actually get you imprisoned, must be pretty stressful and scary. (I was terrified last year during the nondisparagement stuff, and that was just the fear of what *might* happen, whereas in fact OpenAI backed down instead of attacking.) I’m glad these groups aren’t cowed.

As in, do OpenAI and Sam Altman believe these false paranoid conspiracy theories?

I have long wondered the same thing about Marc Andreessen and a16z, and others who say there is a ‘vast conspiracy’ out there by which they mean Effective Altruism (EA), or when they claim it’s all some plot to make money.

I mean, these people are way too smart and knowledgeable to actually believe that, asks Padme, right? And certainly Sam Altman and OpenAI have to know better.

Wouldn’t the more plausible theory be that these people are simply lying? That Lehane doesn’t believe in a ‘vast EA conspiracy’ any more than he believed in a ‘vast right-wing conspiracy’ when he coined the term ‘vast right-wing conspiracy’ about the (we now know very true) allegations around Monica Lewinsky. It’s an op. It’s rhetoric. It’s people saying what they think will work to get them what they want. It’s not hard to make that story make sense.

Then again, maybe they do really believe it, or at least aren’t sure? People often believe genuinely crazy things that do not in any way map to reality, especially once politics starts to get involved. And I can see how going up against Elon Musk and being engaged in one of the biggest heists in human history in broad daylight, while trying to build superintelligence that poses existential risks to humanity that a lot of people are very worried about and that also will have more upside than anything ever, could combine to make anyone paranoid. Highly understandable and sympathetic.

Or, of course, they could have been talking to their own AIs about these questions. I hear there are some major sycophancy issues there. One must be careful.

I sincerely hope that those involved here are lying. It beats the alternatives.

It seems that OpenAI’s failures on sycophancy and dealing with suicidality might endanger its relationship with those who must approve its attempted restructuring into a for-profit, also known as one of the largest attempted thefts in human history?

Maybe they will take OpenAI’s charitable mission seriously after all, at least in this way, despite presumably not understanding the full stakes involved and having the wrong idea about what kind of safety matters?

Garrison Lovely: Scorching new letter from CA and DE AGs to OpenAI, who each have the power to block the company’s restructuring to loosen nonprofit controls.

They are NOT happy about the recent teen suicide and murder-suicide that followed prolonged and concerning interactions with ChatGPT.

Rob Bonta (California Attorney General) and Kathleen Jennings (Delaware Attorney General) in a letter: In our meeting, we conveyed in the strongest terms that safety is a non-negotiable priority, especially when it comes to children. Our teams made additional requests about OpenAI’s current safety precautions and governance. We expect that your responses to these will be prioritized and that immediate remedial measures are being taken where appropriate.

We recognize that OpenAI has sought to position itself as a leader in the AI industry on safety. Indeed, OpenAI has publicly committed itself to build safe AGI to benefit all humanity, including children. And before we get to benefiting, we need to ensure that adequate safety measures are in place to not harm.

It is our shared view that OpenAI and the industry at large are not where they need to be in ensuring safety in AI products’ development and deployment. As Attorneys General, public safety is one of our core missions. As we continue our dialogue related to OpenAI’s recapitalization plan, we must work to accelerate and amplify safety as a governing force in the future of this powerful technology.

The recent deaths are unacceptable. They have rightly shaken the American public’s confidence in OpenAI and this industry. OpenAI – and the AI industry – must proactively and transparently ensure AI’s safe deployment. Doing so is mandated by OpenAI’s charitable mission, and will be required and enforced by our respective offices.

We look forward to hearing from you and working with your team on these important issues.

Some other things said by the AGs:

Bonta: We were looking for a rapid response. They’ll know what that means, if that’s days or weeks. I don’t see how it can be months or years.

All antitrust laws apply, all consumer protection laws apply, all criminal laws apply. We are not without many tools to regulate and prevent AI from hurting the public and the children.

With a lawsuit filed that OpenAI might well lose and the two attorneys general who can veto its restructuring breathing down OpenAI’s neck, OpenAI is promising various fixes; in particular, it has decided it is time for parental controls as soon as possible, which should be within a month.

Their first announcement on August 26 included these plans:

OpenAI: While our initial mitigations prioritized acute self-harm, some people experience other forms of mental distress. For example, someone might enthusiastically tell the model they believe they can drive 24/7 because they realized they’re invincible after not sleeping for two nights. Today, ChatGPT may not recognize this as dangerous or infer play and—by curiously exploring—could subtly reinforce it.

We are working on an update to GPT‑5 that will cause ChatGPT to de-escalate by grounding the person in reality. In this example, it would explain that sleep deprivation is dangerous and recommend rest before any action.

Better late than never on that one, I suppose. That is indeed why I am relatively less worried about problems like this: we can adjust after things start to go wrong.

OpenAI: In addition to emergency services, we’re exploring ways to make it easier for people to reach out to those closest to them. This could include one-click messages or calls to saved emergency contacts, friends, or family members with suggested language to make starting the conversation less daunting.

We’re also considering features that would allow people to opt-in for ChatGPT to reach out to a designated contact on their behalf in severe cases.

We will also soon introduce parental controls that give parents options to gain more insight into, and shape, how their teens use ChatGPT. We’re also exploring making it possible for teens (with parental oversight) to designate a trusted emergency contact. That way, in moments of acute distress, ChatGPT can do more than point to resources: it can help connect teens directly to someone who can step in.

On September 2 they followed up with additional information about how they are ‘partnering with experts’ and providing more details.

OpenAI: Earlier this year, we began building more ways for families to use ChatGPT together and decide what works best in their home. Within the next month, parents will be able to:

  • Link their account with their teen’s account (minimum age of 13) through a simple email invitation.

  • Control how ChatGPT responds to their teen with age-appropriate model behavior rules, which are on by default.

  • Manage which features to disable, including memory and chat history.

  • Receive notifications when the system detects their teen is in a moment of acute distress. Expert input will guide this feature to support trust between parents and teens.

These controls add to features we have rolled out for all users including in-app reminders during long sessions to encourage breaks.

Parental controls seem like an excellent idea.

I would consider most of this to effectively be ‘on by default’ already, for everyone, in the sense that AI models have controls against things like NSFW content that largely treat us all like teens. You could certainly tighten them up more for an actual teen, and it seems fine to give parents the option, although mostly I think you’re better off not doing that.

The big new thing is the notification feature. That is a double edged sword. As I’ve discussed previously, an AI or other source of help that can ‘rat you out’ to authorities, even ‘for your own good’ or ‘in moments of acute distress’ is inherently very different from a place where your secrets are safe. There is a reason we have confidentiality for psychologists and lawyers and priests, and balancing when to break that is complicated.

Given an AI’s current level of reliability and its special role as a place free from human judgment or social consequence, I am actually in favor of it outright never alerting others without an explicit user request to do so.

Whereas things are moving in the other direction, with predictable results.

As in, OpenAI is already scanning your chats as per their posts I discussed above.

Greg Isenberg: ChatGPT is potentially leaking your private convos to the police.

People use ChatGPT because it feels like talking to a smart friend who won’t judge you. Now, people are realizing it’s more like talking to a smart friend who might snitch.

This is the same arc we saw in social media: early excitement, then paranoia, then demand for smaller, private spaces.

OpenAI (including as quoted by Futurism): When we detect users who are planning to harm others, we route their conversations to specialized pipelines where they are reviewed by a small team trained on our usage policies and who are authorized to take action, including banning accounts.

If human reviewers determine that a case involves an imminent threat of serious physical harm to others, we may refer it to law enforcement.

We are currently not referring self-harm cases to law enforcement to respect people’s privacy given the uniquely private nature of ChatGPT interactions.

Futurism: When describing its rule against “harm [to] yourself or others,” the company listed off some pretty standard examples of prohibited activity, including using ChatGPT “to promote suicide or self-harm, develop or use weapons, injure others or destroy property, or engage in unauthorized activities that violate the security of any service or system.”

They are not referring self-harm cases to law enforcement in order to protect privacy, but harm to others is deemed different. That still destroys the privacy of the interaction. And ‘harm to others’ could rapidly morph into any number of places, both with false positives and also with changes in ideas about what constitutes ‘harm.’

They’re not even talking about felonies or imminent physical harm. They’re talking about ‘engage in unauthorized activities that violate the security of any service or system,’ or ‘destroy property,’ so this could potentially extend quite far, and into places that seem far less justified than intervening in response to a potentially suicidal user. These are circumstances in which typical privileged communication would hold.

I very much do not like where that is going, and if I heard reports this was happening on the regular it would fundamentally alter my relationship to ChatGPT, even though I ‘have nothing to hide.’

What’s most weird about this is that OpenAI was recently advocating for ‘AI privilege.’

Reid Southern: OpenAI went from warning users that there’s no confidentiality when using ChatGPT, and calling for “AI privilege”, to actively scanning your messages to send to law enforcement, seemingly to protect themselves in the aftermath of the ChatGPT induced murder-suicide

This is partially a case of ‘if I’m not legally forbidden from doing [X], then I will get blamed for not doing [X], so please ban me from doing it,’ so it’s not as hypocritical as it sounds. It is still rather hypocritical and confusing to escalate like this. Why respond to suicides by warning you will be scanning for harm to others and intent to impact the security of systems, but definitely not acting if someone is suicidal?

If you think AI users deserve privilege, and I think this is a highly reasonable position, then act like it. Set a good example, set a very high bar for ratting, and confine alerting human reviewers, let alone the authorities, to cases where you catch someone on the level of trying to make a nuke or a bioweapon, or at minimum doing things that would force a psychologist to break privilege. It’s even good for business.

Otherwise people are indeed going to get furious, and there will be increasing demand to run models locally or in other ways that better preserve privacy. There’s not zero of that already, but it would escalate quickly.

Steven Byrnes notes the weirdness of seeing Ben’s essay describe OpenAI as an ‘AI safety company’ rather than a company most AI safety folks hate with a passion.

Steven Byrnes: I can’t even describe how weird it is to hear OpenAI, as a whole, today in 2025, being described as an AI safety company. Actual AI safety people HATE OPENAI WITH A PASSION, almost universally. The EA people generally hate it. The Rationalists generally hate it even more.

AI safety people have protested at the OpenAI offices with picket signs & megaphones! When the board fired Sam Altman, everyone immediately blamed EA & AI safety people! OpenAI has churned through AI safety staff b/c they keep quitting in protest! …What universe is this?

Yes, many AI safety people are angry about OpenAI being cavalier & dishonest about harm they might cause in the future, whereas you are angry about OpenAI being cavalier & dishonest about harm they are causing right now. That doesn’t make us enemies. “Why not both?”

I think that’s going too far. It’s not good to hate with a passion.

Even more than that, you could do so, so much worse than OpenAI on all of these questions (e.g. Meta, or xAI, or every major Chinese lab, basically everyone except Anthropic or Google is worse).

Certainly we think OpenAI is on net not helping and deeply inadequate to the task, their political lobbying and rhetoric is harmful, and their efforts have generally made the world a lot less safe. They still are doing a lot of good work, making a lot of good decisions, and I believe that Altman is normative, that he is far more aware of what is coming and the problems we will face than most or than he currently lets on.

I believe he is doing a much better job on these fronts than most (but not all) plausible CEOs of OpenAI would do in his place. For example, if OpenAI’s CEO of Applications Fidji Simo were in charge, or Chairman of the Board Bret Taylor were in charge, or Greg Brockman were in charge, or the CEO of any of the Magnificent Seven were in charge, I would expect OpenAI to act far less responsibly.

Thus I consider myself relatively well-inclined towards OpenAI among those worried about AI or advocating for AI safety.

I still have an entire series of posts about how terrible things have been at OpenAI and a regular section about them called ‘The Mask Comes Off.’

And I find myself forced to update my view importantly downward, towards being more concerned, in the wake of the recent events described in this post. OpenAI is steadily becoming more of a bad faith actor in the public sphere.

OpenAI #14: OpenAI Descends Into Paranoia and Bad Faith Lobbying Read More »

all-54-lost-clickwheel-ipod-games-have-now-been-preserved-for-posterity

All 54 lost clickwheel iPod games have now been preserved for posterity

Last year, we reported on the efforts of classic iPod fans to preserve playable copies of the downloadable clickwheel games that Apple sold for a brief period in the late ’00s. The community was working to get around Apple’s onerous FairPlay DRM by having people who still owned original copies of those (now unavailable) games sync their accounts to a single iTunes installation via a coordinated Virtual Machine. That “master library” would then be able to provide playable copies of those games to any number of iPods in perpetuity.

At the time, the community was still searching for iPod owners with syncable copies of the last few titles needed for their library. With today’s addition of Real Soccer 2009 to the project, though, all 54 official iPod clickwheel games are now available together in an easily accessible format for what is likely the first time.

All at once, then slowly

GitHub user Olsro, the originator of the iPod Clickwheel Games Preservation Project, tells Ars that he lucked into contact with three people who had large iPod game libraries in the first month or so after the project’s launch last October. That includes one YouTuber who had purchased and maintained copies of 39 distinct games, even repurchasing some of the upgraded versions Apple sold separately for later iPod models.

Ars’ story on the project shook out a few more iPod owners with syncable iPod game libraries, and subsequent updates in the following days left just a handful of titles unpreserved. But that’s when the project stalled, Olsro said, with months wasted on false leads and technical issues that hampered the effort to get a complete library.

“I’ve put a lot of time into coaching people that [had problems] transferring the files and authorizing the account once with me on the [Virtual Machine],” Olsro told Ars. “But I kept motivation to continue coaching anyone else coming to me (by mail/Discord) and making regular posts to increase awareness until I could find finally someone that could, this time, go with me through all the steps of the preservation process,” he added on Reddit.

Getting this working copy of Real Soccer 2009 was an “especially cursed” process, Olsro said. Credit: Olsro / Reddit

Getting working access to the final unpreserved game, Real Soccer 2009, was “especially cursed,” Olsro tells Ars. “Multiple [people] came to me during this summer and all attempts failed until a new one from yesterday,” he said. “I even had a situation when someone had an iPod Nano 5G with a playable copy of Real Soccer, but the drive was appearing empty in the Windows Explorer. He tried recovery tools & the iPod NAND just corrupted itself, asking for recovery…”

All 54 lost clickwheel iPod games have now been preserved for posterity Read More »