Author name: Rejus Almole


Switch 2 preorders delayed over Trump tariff uncertainty

Nintendo Switch 2 preorders, which were due to begin on April 9, are being delayed indefinitely amid the financial uncertainty surrounding Donald Trump’s recent announcement of massive tariffs on most US trading partners.

“Pre-orders for Nintendo Switch 2 in the U.S. will not start April 9, 2025 in order to assess the potential impact of tariffs and evolving market conditions,” Nintendo said in a statement cited by Polygon. “Nintendo will update timing at a later date. The launch date of June 5, 2025 is unchanged.”

Nintendo announced launch details for the Switch 2 on Wednesday morning, just hours before Trump’s afternoon “Liberation Day” press conference announcing the biggest increase in import duties in modern US history. Those taxes on practically all goods imported into the United States are set to officially go into effect on April 9, the same day Nintendo had planned to roll out Switch 2 preorders for qualified customers.

Welcome to day 2 of Nintendo Treehouse Live’s “drop the price” stream


— AmericanTruckSongs10 (@ethangach.bsky.social) April 4, 2025 at 10:14 AM

The delay in the preorder date comes as outspoken gamers online are making plenty of noise over the Switch 2’s higher-than-expected $450 price point and over Switch 2 software pricing falling in the $70 to $80 range. Nintendo’s promotional “Treehouse” streams showing Switch 2 gameplay have been inundated with a nonstop torrent of chatters demanding the company “DROP THE PRICE.”

Yet today’s announcement suggests that Nintendo might need to “assess” whether even a $450 price is feasible given the additional taxes the company will now have to pay to import systems manufactured in countries like China and Vietnam into the United States. Alternatively, Nintendo could eat the cost of any tariffs and sell its console hardware at a loss, as it has in the past, in an attempt to make that money back in software sales.



SpinLaunch—yes, the centrifuge rocket company—is making a hard pivot to satellites

Outside of several mentions in the Rocket Report newsletter dating back to 2018, Ars Technica has not devoted much attention to covering a novel California space company named SpinLaunch.

That’s because the premise is so outlandish as to almost not feel real. The company aims to build a kinetic launch system that spins a rocket around at speeds up to 4,700 mph (7,500 km/h) before sending it upward toward space. Then, at an altitude of 40 miles (60 km) or so, the rocket would ignite its engines to achieve orbital velocity. Essentially, SpinLaunch wants to yeet things into space.
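For a rough sense of the forces involved, here is a back-of-the-envelope sketch. The article gives only the tip speed; the accelerator arm radius below is a placeholder assumption purely for illustration, not a figure from SpinLaunch.

```python
# Tip speed from the article: ~4,700 mph (~7,500 km/h).
tip_speed_ms = 4_700 * 1609.344 / 3600  # mph -> m/s, roughly 2,100 m/s

# The arm radius is NOT given in the article; 50 m is an assumed
# placeholder to illustrate the scale of the loads on the payload.
radius_m = 50.0

# Centripetal acceleration a = v^2 / r, expressed in multiples of g.
centripetal_ms2 = tip_speed_ms ** 2 / radius_m
g_load = centripetal_ms2 / 9.81
print(f"~{g_load:,.0f} g at release")
```

Under that assumed radius, the payload would endure thousands of g before release, which hints at why the engineering premise sounds so outlandish.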

But the company was no joke. After being founded in 2014, it raised more than $150 million over the next decade. It built a prototype accelerator in New Mexico and performed a series of flight tests. The flights reached altitudes of “tens of thousands” of feet, according to the company, and were often accompanied by slickly produced videos.

SpinLaunch goes quiet

Following this series of tests, by the end of 2022, the company went mostly quiet. It was unclear whether it ran out of funding, had hit some technical problems in trying to build a larger accelerator, or what. Somewhat ominously, SpinLaunch’s founder and chief executive, Jonathan Yaney, was replaced without explanation last May. The new leader would be David Wrenn, then serving as chief operating officer.

“I am confident in our ability to execute on the company’s mission and bring our integrated tech stack of low-cost space solutions to market,” Wrenn said at the time. “I look forward to sharing more details about our near- and long-term strategy in the coming months.”

Words like “tech stack” and “low-cost space solutions” sounded like nebulous corporate speak, and it was not clear what they meant. Nor did Wrenn deliver on that promise, made nearly a year ago, to share more details about the company’s near- and long-term strategy.



Old faces in unexpected places: The Wheel of Time season 3 rolls on

Andrew Cunningham and Lee Hutchinson have spent decades of their lives with Robert Jordan and Brandon Sanderson’s Wheel of Time books, and they previously brought that knowledge to bear as they recapped every episode of the first and second seasons of Amazon’s WoT TV series. Now we’re back in the saddle for season 3, with insights, jokes, and the occasional wild theory.

These recaps won’t cover every element of every episode, but they will contain major spoilers for the show and the book series. We’ll do our best to not spoil major future events from the books, but there’s always the danger that something might slip out. If you want to stay completely unspoiled and haven’t read the books, these recaps aren’t for you.

New episodes of The Wheel of Time season 3 will be posted for Amazon Prime subscribers every Thursday. This write-up covers episode six, “The Shadow in the Night,” which was released on April 3.

Lee: Welcome to Tanchico! In Tanchico, everyone wears veils almost all of the time, except when they’re flirting in bars. Mat gets the most fabulous veil of all because he’s Mat and he deserves it. Even Nynaeve has a good time! And I guess now we know all about the hills of Tanchico. Like… alllllllllllllllllll about them.

Andrew: Credit to Robert Jordan for mostly resisting one of the bizarre tics of post-Tolkien fantasy fiction: I’m not going to say the books never take a break to give us the full text of an in-universe song, but they do so pretty sparingly, if memory serves. There are plenty of songs referenced, though, often with a strong implication that they are too lewd or horny to reprint in full.

Not so in the show! Where Elayne sings a song about “The Hills of Tanchico,” bringing the house down for what appears to be… several hours (they’re breasts, the hills are breasts). I don’t mind this scene, actually, but it does go on.

But more important than the song is Elayne’s accompanist: a book character who has been gone so long that we weren’t actually sure he was coming back. Who makes his long-awaited return in Tanchico, Lee?

Image of Thom Merrilin, back at last.

Thom Merrilin finally shows back up. Nice hat. Wonder who else might end up wearing it.

Credit: Prime/Amazon MGM Studios


Lee: That’s right, ladies and gentlemen, boys and girls, children of all ages, stomp your feet and bring your hands together for everybody’s favorite gleeman, seemingly back from the dead and rocking a strangely familiar hat: It’s Thom Merrilin! (Applause roars.)

Viewers who haven’t read the books can be forgiven for not immediately falling out of their chairs when Thom shows back up, but to book readers, his absence has been keenly felt. Believe it or not, Merrilin is an A-string player in the books, spending a tremendous amount of time front and center interacting with the main POV characters. He vanishes for a while in the books, just as he does in the show, but there he doesn’t stay gone nearly as long as he has here.

I’m glad he’s back, and it bodes well for our Tanchico crew—unlike them, Thom is an actual-for-real adult, who’s been places and knows things. He also provides fantastic accompaniment to Elayne’s karaoke adventure.

Image of Elayne singing karaoke

Elayne wins the crowd by singing about tittays. Thom accompanies because it’s a subject in which he is apparently well-versed.

Credit: Prime/Amazon MGM Studios


Andrew: The entire Tanchico crew is pretty strong right now—Mat and Min are pals again, show-Nynaeve is a version of the character who other characters in the story are allowed to like, and now Thom is back! It’d be a rollicking good time, if it weren’t for these sadistic Black Ajah Aes Sedai and the Forsaken psychopath Moghedien stalking around, mind-controlling people, leaving holes in heads, and trying to find a Seanchan-esque collar that can subdue and control Rand.

We’re entering a stretch of the story where the Forsaken spend as much time fighting with each other as they do with Rand and our heroes, which explains why the powerful villains don’t simply kill our heroes the minute they find each other. Moghedien is in full creep mode through this whole episode, and I gotta say, she is unsettling.

Image of Moggy being Moggy

Moghedien, doing her thing.

Credit: Prime/Amazon MGM Studios


Lee: Yeah, watching Moghedien screw with the Black sisters’ food and stuff was particularly disturbing. The lady has no filter—and fantastic powers of persuasion. We get another clear look at just how ludicrously overpowered the Forsaken are compared to our present-day channelers when Moggy straight-up runs “sudo give me the bracelet” on Nynaeve’s and Elayne’s brains—much like Rahvin’s I’m-your-favorite-uncle routine, her Power-backed trickery is devastating and completely inescapable (though Nynaeve apparently does resist just a teeny tiny bit).

And although there are still more doings to discuss in Tanchico—the quest to discover the bracelets-n-collars is heating up!—the fact that all of these episodes are an hour long means there are so many other things to discuss. Like, for example, the return of another familiar face, in the form of our long-absent whistling super-darkfriend Padan Fain. Dark doings are afoot in the Two Rivers!

Andrew: Fain in the books never quite rises to the level of Big Bad so much as he lurks around the periphery of the story practically the entire time, popping up to cause trouble whenever it’s least convenient for our heroes. The show does a good job of visually representing how he’s begun to corrupt the regiment of Whitecloaks he has embedded himself in, without ever actually mentioning it or drawing much attention to it. You know you’re a bad guy when even Eamon Valda is like, “uh, is this guy ok?” (As in the books, the show distinguishes between Whitecloaks who are antagonists because they genuinely believe what they say they believe about Aes Sedai “witches,” and ones who are simply straight-up Darkfriends. Funny how often they end up working toward the same goals, though.)

Meanwhile, Perrin, Alanna, and friends recover from last week’s raid of the Whitecloak camp. I keep needing to recalibrate my expectations for what Plot Armor looks like on this show, because our main characters get grievously wounded pretty regularly, but the standards are different on a show where everyone can apparently cast Cure Wounds as a cantrip. Alanna walks the Cauthon sisters through some rudimentary Healing, and she (with barely disguised glee and/or interest) accidentally interrupts an escalation in Perrin and Faile’s relationship when she goes to Heal him later.

Are we still finding show-Faile charming? I did think it was funny when that goofy county-fair caricature of Mat holding the Horn of Valere made another appearance.

Image of Faile and Perrin

Still not hating Faile, which feels surprising.

Credit: Prime/Amazon MGM Studios


Lee: I am definitely still finding show-Faile charming, which continually surprises me because she’s possibly the worst character in the entire series. In the books, Jordan writes Faile as an emotionally abused emotional abuser who doesn’t believe Perrin loves her if he’s not screaming at her and/or hitting her; in the show, she’s a much more whole individual with much more grown-up and sane ideas about how relationships work. Perrin and Faile have something going on that is, dare I say it, actually sweet and romantic!

I never thought I’d be on any team other than Team Throw-Faile-Down-The-Well, but here we are. I’m rooting for her and Perrin.

When it comes to Alanna’s healing at the hands of the Cauthon sisters, I had to sit with that one for a moment and make a conscious decision. The books make it clear that Healing—even the abbreviated first-aid version the current-day Aes Sedai practice, to say nothing of the much fancier version from the Age of Legends—is complicated. Doing it wrong can have horrific consequences (in fact, “doing healing wrong on purpose” is the basis for many of the Yellow-turned-Black sisters’ attacks with the One Power). And these wildlings (to borrow a book term) are able to just intuit their way into making it happen?

We know that new channelers frequently have uncontrolled bouts of blasting out the One Power in response to moments of stress or great need—in fact, we’ve seen that happen many times in the show, including at the beginning of this episode when Lil’ Liandrin force-blasts her rapist-husband into the wall. So the groundwork is there for the Cauthon girls to do what they’re doing. It’s just a question of how much one is willing to let the show get away with.

I decided I’m good with it—it’s the necessary thing to move the story forward, and so I’m not gonna complain about it. Where did you land?

Image of Padan Fain

Fain returns, bringing with him the expected pile of Trollocs.

Credit: Prime/Amazon MGM Studios


Andrew: Yeah, I made essentially the same decision. Conscious use of the One Power at all, even the ability to access it consistently, is something that requires patience and training, and normally you couldn’t talk a 12-year-old through Healing as Alanna does here any more than you could talk a 12-year-old through performing successful field surgery. But training takes time, and showing it takes time, and time is one thing the show never has much of. The show also really likes to dramatically injure characters without killing them! So here we are, speed-running some things.

This leaves us with two big threads left to address: Rand’s and Egwene’s. Egwene is still trying to learn about the World of Dreams from the Aiel Wise Ones (I was wrong, by the way—she admits to lying about being Aes Sedai here and it passes almost without comment), and is still reeling from realizing that Rand and Lanfear are Involved. And Rand, well. He’s not going mad, yet, probably, but he spends most of the episode less-than-fully-in-control of his powers and his actions.

Lee: It comes to a head when Rand and Egwene have a long, difficult conversation over exactly who’s been sleeping with whom, and why—and then that conversation is interrupted when Sammael kicks the door down and starts swinging his big fancy One Power Hammer.

There’s a bit of channeling by Aviendha and Egwene, but then Rand grasps the Source and Sammael just kind of stops being a factor. Entranced by the Power—and by the black corruption pulsing through it—Rand straight-up destroys Sammael without apparent thought or effort, borrowing a bit from the way Rand pulls off a similar feat in book 3, with a ludicrous amount of lightning and ceiling-collapsing.

It’s one of the few times so far that Rand has actually cut loose with the One Power, and I like it when we get to actually see (rather than just hear about) the enormity of Rand’s strength as a channeler. But this casual exercise of extreme power is not without a cost.

Image of Rand killing a Forsaken

Rand does a 360 no-scope lightning hit.

Credit: Prime/Amazon MGM Studios


Andrew: We’ve observed a couple of times that Rand and Egwene in the books had long since given up on romantic involvement by this point in the story, and here we see why the show held back on that—this confrontation is more exciting than a quiet drift, and it puts a cap on several “Rand is not the simple lad you once knew” moments sprinkled throughout the episode.

And, yes, one of them is Rand’s inadvertent (if sadly predictable) killing of an Aiel girl he had forged a bond with, and his desperate, fruitless, unsavory attempt to force her back to life. Rand is simultaneously coming to grips with his destiny and with the extent to which he has no idea what he is doing, and both things are already causing pain to the people around him. And as you and I both know, book-Rand has counterproductive and harmful reactions to hurting people he cares about.

The attack here is partly an invention of the show and partly a synthesis of a few different book events, but Forsaken coming at Rand directly like this is generally not a thing that happens much. They usually prefer to take up positions of power in the world’s various kingdoms and only fight when cornered. All of this is to say, I doubt this is the last we see of Sammael or his Thor-looking One Power hammer, but the show is more than willing to go its own way when it wants to.

Lee: Yeah, Rand doing saidin-CPR on Rhuarc’s poor little too-cute-not-to-be-inevitably-killed granddaughter is disturbing as hell—and as you say, it’s terrifying not just because Rand is forcing a corpse to breathe with dark magic, but also because of the place Rand seems to go in his head when he’s doing it. It’s been an oft-repeated axiom that male channelers inevitably go mad—is this it? (Fortunately, no—not yet, at least. Or is it? No! Maybe.)

We close the episode out on the place where I think we’re going to probably be spending a lot of time very soon (especially based on the title of next week’s episode, which I won’t spoil but which anyone can look up if they wish): back at the Two Rivers, with the power-trio of Bain and Chiad and Faile scouting out the old Waygate just outside of town, and watching Trollocs swarm out of it. This is not a great sign for Perrin and friends.

So we’ve got two episodes left, and all of our chess pieces seem to have been set down more or less in the right places for a couple of major climactic events. I think we’re going out with a bang—or with a couple of them. What are you thinking as we jump into the final couple of episodes?

Image of a dead girl walking.

Alsera fell victim to one of the classic child character blunders: being too precociously adorable to live.

Credit: Prime/Amazon MGM Studios


Andrew: I am going to reiterate our annual complaint that 10-episode seasons would be better for this show’s storytelling than the 8-episode seasons we’re getting, because the show’s pace is always so breathless that it leaves room for just a few weird character-illuminating diversions like “The Hills of Tanchico,” or quiet heart-to-hearts like we get between Rand and Moiraine, or between Perrin and Faile. The show’s good enough at these that I wish we had time to pump the brakes more often.

But I will say, if we end up roughly where book 4 does, the show doesn’t feel as rushed as the first two seasons did. Not that its pacing has settled down at all—you and I benefit immensely from being book readers, always rooted in some sense of what is happening and who the characters are, which the show can’t always convey with perfect clarity. But I am thinking about what still needs to happen, and how much time there is left, and thinking “yeah, they’re going to be able to get there” instead of “how the hell are they going to get there??”

How are you feeling? Is season 3 hitting for you like it is for me? I know I’m searching around every week to see if there’s been a renewal announcement for season 4 (not yet).

Lee: I think it’s the best season so far, and any doubts I had during seasons one and two are at this point long gone. I’m all in on this particular turning of the Wheel, and the show finally feels like it’s found itself. To not renew it at this point would be criminal. You listening, Bezos? May the Shadow take you if you yank the rug out from under us now!

Andrew: Yeah, Jeffrey. I know for a fact you’ve spent money on worse things than this.




Midjourney introduces first new image generation model in over a year

AI image generator Midjourney released its first new model in quite some time today; dubbed V7, it’s a ground-up rework that is available in alpha to users now.

There are two areas of improvement in V7: the first is better images, and the second is new tools and workflows.

Starting with the image improvements, V7 promises much higher coherence and consistency for hands, fingers, body parts, and “objects of all kinds.” It also offers much more detailed and realistic textures and materials, like skin wrinkles or the subtleties of a ceramic pot.

Those details are often among the most obvious telltale signs that an image has been AI-generated. To be clear, Midjourney isn’t claiming to have made advancements that make AI images unrecognizable to a trained eye; it’s just saying that some of the messiness we’re accustomed to has been cleaned up to a significant degree.

V7 can reproduce materials and lighting situations that V6.1 usually couldn’t. Credit: Xeophon

On the features side, the star of the show is the new “Draft Mode.” On its various communication channels with users (a blog, Discord, X, and so on), Midjourney says that “Draft mode is half the cost and renders images at 10 times the speed.”

However, the images are of lower quality than what you get in the other modes, so this is not intended to be the way you produce final images. Rather, it’s meant to be a way to iterate and explore to find the desired result before switching modes to make something ready for public consumption.

V7 comes with two modes: turbo and relax. Turbo generates final images quickly but is twice as expensive in terms of credit use, while relax mode takes its time but is half as expensive. There is currently no standard mode for V7, strangely; Midjourney says that’s coming later, as it needs some more time to be refined.
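The relative credit costs above can be sketched as a small table. Note that the article gives only multipliers ("half the cost," "twice as expensive"); the 1.0 baseline for the not-yet-available standard mode, and the function name, are assumptions for illustration.

```python
# Relative credit-cost multipliers per render mode, as described in the
# article. The "standard" baseline of 1.0 is an assumption: that mode
# isn't shipping yet, and costs are only stated relative to it.
MODE_COST_MULTIPLIER = {
    "standard": 1.0,  # not yet available; assumed baseline
    "turbo": 2.0,     # fast final renders, twice the credit use
    "relax": 0.5,     # slower renders, half the credit use
    "draft": 0.5,     # half the cost, ~10x the render speed, lower quality
}

def render_cost(mode: str, base_credits: float = 1.0) -> float:
    """Credits consumed for one render in the given mode."""
    return base_credits * MODE_COST_MULTIPLIER[mode]
```

This is why Draft Mode suits iteration: exploring ten drafts costs about the same as five standard renders, and each comes back far faster.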



NJ teen wins fight to put nudify app users in prison, impose fines up to $30K


Here’s how one teen plans to fix schools failing kids affected by nudify apps.

When Francesca Mani was 14 years old, boys at her New Jersey high school used nudify apps to target her and other girls. At the time, adults did not seem to take the harassment seriously, telling her to move on after she demanded more severe consequences than just a single boy’s one- or two-day suspension.

Mani refused to take adults’ advice, going over their heads to lawmakers who were more sensitive to her demands. And now, she’s won her fight to criminalize deepfakes. On Wednesday, New Jersey Governor Phil Murphy signed a law that he said would help victims “take a stand against deceptive and dangerous deepfakes” by making it a crime to create or share fake AI nudes of minors or non-consenting adults—as well as deepfakes seeking to meddle with elections or damage any individuals’ or corporations’ reputations.

Under the law, victims like Mani who are targeted by nudify apps can sue bad actors, collecting up to $1,000 per harmful image created either knowingly or recklessly. New Jersey hopes these “more severe consequences” will deter kids and adults from creating harmful images, as well as emphasize to schools—whose lax response to fake nudes has been heavily criticized—that AI-generated nude images depicting minors are illegal and must be taken seriously and reported to police. It imposes a maximum fine of $30,000 on anyone creating or sharing deepfakes for malicious purposes, as well as possible punitive damages if a victim can prove that images were created in willful defiance of the law.
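To make the arithmetic of those remedies concrete, here is a rough sketch: up to $1,000 in civil damages per harmful image, and a fine capped at $30,000. The function names, and treating these as simple independent caps, are assumptions for illustration, not a reading of the statute itself.

```python
# Figures from the article; how they combine in practice (and any
# punitive damages on top) is up to the courts, not this sketch.
PER_IMAGE_DAMAGES = 1_000
MAX_FINE = 30_000

def max_civil_damages(num_images: int) -> int:
    """Upper bound on what a victim could collect per the article."""
    return num_images * PER_IMAGE_DAMAGES

def capped_fine(assessed: int) -> int:
    """Fine after applying the $30,000 statutory maximum."""
    return min(assessed, MAX_FINE)
```

So a case involving a dozen shared images could already expose a perpetrator to five figures in civil damages before any fine or punitive award.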

Ars could not reach Mani for comment, but she celebrated the win in the governor’s press release, saying, “This victory belongs to every woman and teenager told nothing could be done, that it was impossible, and to just move on. It’s proof that with the right support, we can create change together.”

On LinkedIn, her mother, Dorota Mani—who has been working with the governor’s office on a commission to protect kids from online harms—thanked lawmakers like Murphy and former New Jersey Assemblyman Herb Conaway, who sponsored the law, for “standing with us.”

“When used maliciously, deepfake technology can dismantle lives, distort reality, and exploit the most vulnerable among us,” Conaway said. “I’m proud to have sponsored this legislation when I was still in the Assembly, as it will help us keep pace with advancing technology. This is about drawing a clear line between innovation and harm. It’s time we take a firm stand to protect individuals from digital deception, ensuring that AI serves to empower our communities.”

Doing nothing is no longer an option for schools, teen says

Around the country, as cases like Mani’s continue to pop up, experts suspect that the problem is much more widespread than media reports suggest, since shame likely prevents most victims from coming forward to flag abuses.

Encode Justice has a tracker monitoring reported cases involving minors, including allowing victims to anonymously report harms around the US. But the true extent of the harm currently remains unknown, as cops warn of a flood of AI child sex images obscuring investigations into real-world child abuse.

Confronting this shadowy threat to kids everywhere, Mani was named as one of TIME’s most influential people in AI last year due to her advocacy fighting deepfakes. She’s not only pressured lawmakers to take strong action to protect vulnerable people, but she’s also pushed for change at tech companies and in schools nationwide.

“When that happened to me and my classmates, we had zero protection whatsoever,” Mani told TIME, and neither did other girls around the world who had been targeted and reached out to thank her for fighting for them. “There were so many girls from different states, different countries. And we all had three things in common: the lack of AI school policies, the lack of laws, and the disregard of consent.”

Yiota Souras, chief legal officer at the National Center for Missing and Exploited Children, told CBS News last year that protecting teens started with laws that criminalize sharing fake nudes and provide civil remedies, just as New Jersey’s law does. That way, “schools would have protocols,” she said, and “investigators and law enforcement would have roadmaps on how to investigate” and “what charges to bring.”

Clarity is urgently needed in schools, advocates say. At Mani’s school, the boys who shared the photos had their names shielded and were pulled out of class individually to be interrogated, but victims like Mani had no privacy whatsoever. Their names were blared over the school’s loudspeaker system, as boys mocked their tears in the hallway. To this day, it’s unclear who exactly shared and possibly still has copies of the images, which experts say could haunt Mani throughout her life. And the school’s inadequate response was a major reason why Mani decided to take a stand, seemingly viewing the school as a vehicle furthering her harassment.

“I realized I should stop crying and be mad, because this is unacceptable,” Mani told CBS News.

Mani pushed for NJ’s new law and claimed the win, but she thinks that change must start at schools, where the harassment starts. In her school district, the “harassment, intimidation and bullying” policy was updated to incorporate AI harms, but she thinks schools should go even further. Working with Encode Justice, she is helping to push a plan to fix schools failing kids targeted by nudify apps.

“My goal is to protect women and children—and we first need to start with AI school policies, because this is where most of the targeting is happening,” Mani told TIME.

Encode Justice did not respond to Ars’ request to comment. But their plan noted a common pattern in schools throughout the US. Students learn about nudify apps through ads on social media—such as Instagram reportedly driving 90 percent of traffic to one such nudify app—where they can also usually find innocuous photos of classmates to screenshot. Within seconds, the apps can nudify the screenshotted images, which Mani told CBS News then spread “rapid fire” by text message and DMs, and are often shared over school networks.

To end the abuse, schools need to be prepared, Encode Justice said, especially since “their initial response can sometimes exacerbate the situation.”

At Mani’s school, for example, leadership was criticized for announcing the victims’ names over the loudspeaker, which Encode Justice said never should have happened. Another misstep was at a California middle school, which delayed action for four months until parents went to police, Encode Justice said. In Texas, a school failed to stop images from spreading for eight months while a victim pleaded for help from administrators and police who failed to intervene. The longer the delays, the more victims will likely be targeted. In Pennsylvania, a single ninth grader targeted 46 girls before anyone stepped in.

Students deserve better, Mani feels, and Encode Justice’s plan recommends that all schools create action plans to stop failing students and respond promptly to stop image sharing.

That starts with updating policies to ban deepfake sexual imagery, then clearly communicating to students “the seriousness of the issue and the severity of the consequences.” Consequences should include identifying all perpetrators and issuing suspensions or expulsions on top of any legal consequences students face, Encode Justice suggested. They also recommend establishing “written procedures to discreetly inform relevant authorities about incidents and to support victims at the start of an investigation on deepfake sexual abuse.” And, critically, all teachers must be trained on these new policies.

“Doing nothing is no longer an option,” Mani said.

Photo of Ashley Belanger

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.



DeepMind has detailed all the ways AGI could wreck the world

As AI hype permeates the Internet, tech and business leaders are already looking toward the next step. AGI, or artificial general intelligence, refers to a machine with human-like intelligence and capabilities. If today’s AI systems are on a path to AGI, we will need new approaches to ensure such a machine doesn’t work against human interests.

Unfortunately, we don’t have anything as elegant as Isaac Asimov’s Three Laws of Robotics. Researchers at DeepMind have been working on this problem and have released a new technical paper (PDF) that explains how to develop AGI safely.

It contains a huge amount of detail, clocking in at 108 pages before references. While some in the AI field believe AGI is a pipe dream, the authors of the DeepMind paper project that it could happen by 2030. With that in mind, they aimed to understand the risks of a human-like synthetic intelligence, which they acknowledge could lead to “severe harm.”

All the ways AGI could harm humanity

This work has identified four possible types of AGI risk, along with suggestions on how we might ameliorate said risks. The DeepMind team, led by company co-founder Shane Legg, categorized the negative AGI outcomes as misuse, misalignment, mistakes, and structural risks. Misuse and misalignment are discussed in the paper at length, but the latter two (mistakes and structural risks) are only covered briefly.

The four categories of AGI risk, as determined by DeepMind. Credit: Google DeepMind

The first possible issue, misuse, is fundamentally similar to current AI risks. However, because AGI will be more powerful by definition, the damage it could do is much greater. A ne’er-do-well with access to AGI could misuse the system to do harm, for example, by asking the system to identify and exploit zero-day vulnerabilities or create a designer virus that could be used as a bioweapon.

DeepMind has detailed all the ways AGI could wreck the world Read More »

google-shakes-up-gemini-leadership,-google-labs-head-taking-the-reins

Google shakes up Gemini leadership, Google Labs head taking the reins

On the heels of releasing its most capable AI model yet, Google is making some changes to the Gemini team. A new report from Semafor reveals that longtime Googler Sissie Hsiao will step down from her role leading the Gemini team effective immediately. In her place, Google is appointing Josh Woodward, who currently leads Google Labs.

According to a memo from DeepMind CEO Demis Hassabis, this change is designed to “sharpen our focus on the next evolution of the Gemini app.” This new responsibility won’t take Woodward away from his role at Google Labs—he will remain in charge of that division while leading the Gemini team.

Meanwhile, Hsiao says in a message to employees that she is happy with “Chapter 1” of the Bard story and is optimistic for Woodward’s “Chapter 2.” Hsiao won’t be involved in Google’s AI efforts for now—she’s opted to take some time off before returning to Google in a new role.

Hsiao has been at Google for 19 years and was tasked with building Google’s chatbot in 2022. At the time, Google was reeling after ChatGPT took the world by storm using the very transformer architecture that Google originally invented. Initially, the team’s chatbot efforts were known as Bard before being unified under the Gemini brand at the end of 2023.

This process has been a bit of a slog, with Google’s models improving slowly while simultaneously worming their way into many beloved products. However, the sense inside the company is that Gemini has turned a corner with 2.5 Pro. While this model is still in the experimental stage, it has bested other models in academic benchmarks and has blown right past them in all-important vibemarks like LM Arena.

Google shakes up Gemini leadership, Google Labs head taking the reins Read More »

doge-staffer’s-youtube-nickname-accidentally-revealed-his-teen-hacking-activity

DOGE staffer’s YouTube nickname accidentally revealed his teen hacking activity

A SpaceX and X engineer, Christopher Stanley—currently serving as a senior advisor in the Deputy Attorney General’s office at the Department of Justice (DOJ)—was reportedly caught bragging about hacking and distributing pirated e-books, bootleg software, and game cheats.

The boasts appeared on archived versions of websites; several of those sites were quickly deleted once they were flagged, Reuters reported.

Stanley was assigned to the DOJ by Elon Musk’s Department of Government Efficiency (DOGE). While Musk claims that DOGE operates transparently, not much is known about who the staffers are or what their government roles entail. It remains unclear what Stanley does at DOJ, but Reuters noted that the Deputy Attorney General’s office is in charge of investigations into various crimes, “including hacking and other malicious cyber activity.” Declining to comment further, the DOJ did confirm that as a “special government employee,” like Musk, Stanley does not draw a government salary.

The engineer’s questionable past seemingly dates back to 2006, Reuters reported, when Stanley was still in high school. The news site connected Stanley to various sites and forums by tracing the pseudonyms he used, including Reneg4d3, a nickname he still uses on YouTube. The outlet then further verified the connection “by cross-referencing the sites’ registration data against his old email address and by matching Reneg4d3’s biographical data to Stanley’s.”

Among his earliest sites was one featuring a “crude sketch of a penis” called fkn-pwnd.com, where Stanley, at 15, bragged about “fucking up servers,” a now-deleted Internet Archive screenshot reportedly showed. Another, reneg4d3.com, was launched when he was 16. There, Stanley branded a competing messaging board “stupid noobs” after supposedly gaining admin access through an “easy exploit,” Reuters reported. On Bluesky, an account called “doge whisperer” alleges even more hacking activity, some of which appears to be corroborated by an IA screenshot of another site Stanley created, electonic.net (sic), which as of this writing can still be accessed.

DOGE staffer’s YouTube nickname accidentally revealed his teen hacking activity Read More »

more-fun-with-gpt-4o-image-generation

More Fun With GPT-4o Image Generation

Greetings from Costa Rica! The image fun continues.

Fun is being had by all, now that OpenAI has dropped its rule about not mimicking existing art styles.

Sam Altman (2:11pm, March 31): the chatgpt launch 26 months ago was one of the craziest viral moments i’d ever seen, and we added one million users in five days.

We added one million users in the last hour.

Sam Altman (8:33pm, March 31): chatgpt image gen now rolled out to all free users!

Slow down. We’re going to need you to have a little less fun, guys.

Sam Altman: it’s super fun seeing people love images in chatgpt.

but our GPUs are melting.

we are going to temporarily introduce some rate limits while we work on making it more efficient. hopefully won’t be long!

chatgpt free tier will get 3 generations per day soon.

(also, we are refusing some generations that should be allowed; we are fixing these as fast as we can.)

Danielle Fong: Spotted Sam Altman outside OpenAI’s datacenters.

Joanne Jang, who leads model behavior at OpenAI, talks about how OpenAI handles image generation refusals, in line with what they discuss in the model spec. As I discussed last week, I would (like most of us) prefer to see more permissiveness on essentially every margin.

It’s all cool.

But I do think humans making all this would have been even cooler.

Grant: Thrilled to say I passed my viva with no corrections and am officially PhDone.

Dr. Ally Louks: This is super cute! Just wish it was made by a human 🙃

Roon: No offense to dr ally louks but this living in unreality is at the heart of this whole debate.

The counterfactual isn’t a drawing made by a person it’s the drawing doesn’t exist

Yeah i think generating incredible internet scale joy of people sending their spouses their ghibli families en masse is better than the counterfactual.

The comments in response to Ally Louks are remarkably pro-AI compared to what I would have predicted two weeks ago, and harsher than Roon’s. The people demand Ghibli.

Whereas I see no conflict between Roon and Louks here. Louks is saying [Y] > [X] > [null], and noticing she is conflicted about that. Hence the upside-down emoji. Roon is saying [X] > [null]. Roon is not conflicted here, because obviously no one was going to take the time to create this without AI, but mostly we agree.

I’m happy this photo exists. But if you’re not even a little conflicted about the whole phenomenon, that feels to me like a missing mood.

After I wrote that, I saw Nabeel making similar points:

Nabeel Qureshi: Imagine being Miyazaki, pouring decades of heart and soul into making this transcendent beautiful tender style of anime, and then seeing it get sloppified by linear algebra

I’m not anti-AI, but if this thought doesn’t make you a little sad, I don’t trust you.

People are misinterpreting this to think I mean the cute pics of friends & family are bad or ugly or immoral. That’s *not* what I’m saying. They’re cute. I made some myself!

In part I’m talking about demoralization. This is just the start.

Henrik Karlsson: You can love the first order effect (democratizing making cute ghibli images) and shudder at the (probable) second order effects (robbing the original images of magic, making it much harder for anyone to afford inventing a new style in the future, etc)

Will Manidis: its not that language models will make the average piece of writing/art worse. it will raise the average massively.

its that when we apply industrial production to things of the heart (art, food, community) we end up with “better off on average” but deeply ill years later.

Fofr: > write a well formed argument against turning images into the ghibli style using AI, present it using colourful letter magnets on a fridge door, show in the context of a messy kitchen

+ > Add a small “Freedom of Speech” print (the one with the man standing up – don’t caption the image or include the title of it) to the fridge, also pinned with magnets

Perhaps the most telling development in image generation is the rise of the anti-anti-AI-art faction, that is actively attacking those who criticize AI artwork. I’ve seen a lot more people taking essentially this position than I expected.

Ash Martian: How gpt-4o feels about Ai art critics

If people will fold on AI Art the moment it gives them Studio Ghibli memes, that implies they will fold on essentially everything, the moment AI is sufficiently useful or convenient. It does not bode well for keeping humans in various loops.

Here’s an exchange for the ages:

Jonathan Fire: The problem with AI art is not that it lacks aura; the problem with AI art is that it’s fascist.

Frank Fleming: The problem with Charlie Brown is that he has hoes.

The good news is that all is not lost.

Dave Kasten: I would strongly bet that whoever is the internet’s leading “commission me to draw you ghibli style” creator is about to have one very bad week, AND THEN a knockout successful year. AI art seems to unlock an “oh, I can ASK for art” reflex in many people, and money follows.

Actually, in this particular case, I bet that person’s week was fantastic for business.

It certainly is, at least for now, for Studio Ghibli itself. Publicity rocks.

Roon: Culture ship mind named Fair Use

Tibor Blaho: Did you know the recent IMAX re-release of Studio Ghibli’s Princess Mononoke is almost completely sold out, making more than $4 million over one weekend – more than its entire original North American run of $2.37 million back in 1999?

Have you noticed people all over social media turning their photos and avatars into Ghibli-style art using ChatGPT’s new Image Gen feature?

Some people worry AI-generated art hurts original artists, but could this trend actually be doing the opposite – driving fresh excitement, renewed appreciation, and even real profits back to the creators who inspired them?

Princess Mononoke was #6 at the box office this last weekend. Nice, and from all accounts well deserved. The worry is that over the long run such works will ‘lose their magic’ and that is a worry but the opposite is also very possible. You can’t beat the real thing.

Here is a thread comparing AI image generation with tailoring, in terms of only enthusiasts caring about what is handmade once quality gets good enough. That’s in opposition to this claim from Eat Pork Please that artists will thrive even within the creation of AI art. I am vastly better at prompting AI to make art than I am at making my own art, but an actual artist will be vastly better at creating and choosing the art than I would be. Why wouldn’t I be happy to hire them to help?

Indeed, consider that without AI, ‘hire a human artist to commission new all-human art for your post’ is completely impossible. The timeline makes no sense. But now there are suddenly options available.

Suppose you actually do want to hire a real human artist to commission new all-human art. How does that work these days?

One does not simply Commission Human Art. You have to really want it. And that’s about a lot more than the cost, or the required time. You have to find the right artist, then you have to negotiate with them and explain what you want, and then they have to actually deliver. It’s an intricate process.

Anchovy Pizza: I do sympathize with artists, AI is soulless, but at the same time if people are given the option

– pay this person 200-300 dollars, wait 2 weeks and get art

Or

– plug in word to computer *beepboop* here’s your art

We know what they will choose, lets not lie to ourselves

Darwin Hartshorn: If we’re not lying to ourselves, we would say the process is “pay this person 200+ dollars, wait 2 weeks and maybe get art, but then again maybe not.”

I am an artist. I like getting paid for my hard work. But the profession is not known for an abundance of professionals.

I say this as someone who made a game, Emergents. Everyone was great and I think we got some really good work in the end, but it was a lot more than writing a check and waiting. Even as a card developer I was doing things like scouring conventions and ArtStation for artists who were doing things I loved, and then I handed it off to the art director whose job it was to turn a lot of time and effort and money into getting the artists to deliver the art we wanted.

If I had had to do it without the professional art director, I would have been totally lost.

That’s why I, and I believe many others, so rarely commissioned human artwork back before the AI art era. And mostly it’s why I’m not doing it now! If I could pay a few hundred bucks to an artist I love, wait two weeks and get art that reliably matches what I had in mind, I’d totally be excited to do that sometimes, AI alternatives notwithstanding.

For the rest of us:

Santiago Pliego: “Slop, but in the style of Norman Rockwell.”

Similarly, if you had a prediction market on ‘will Zvi Mowshowitz attempt to paint something?’ that market should be trading higher, not lower, based on all this. I notice the idea of being bad and doing it anyway sounds more appealing.

We also are developing the technology to know exactly how much fun we are having. In response to the White House’s epic failure to understand how to meme, Eigenrobot set out to develop an objective Ghibli scale.

Near Cyan is torn about the new 4o image generation abilities because they worry that with AI code you can always go in and edit the code (or at least some of us can) whereas with AI art you basically have to go Full Vibe. Except isn’t it the opposite? What happened with 4o image generation was that there was an explosion of transformations of existing concepts, photos and images. As in, you absolutely can use this as part of a multi-step process including detailed human input, and we love it. And of course, the better editors are coming.

One thing 4o nominally still refuses to do, at least sometimes, is generate images of real people when not working with a source image. I say nominally because there are infinite ways around this. For example, in my latest OpenAI post, I told it to produce an appropriate banner image, and presto, look, that’s very obviously Sam Altman. I wasn’t even trying.

Here’s another method:

Riley Goodside: ChatGPT 4o isn’t quite willing to imagine Harry Styles from a text prompt but it doesn’t quite know it isn’t willing to imagine Harry Styles from a text prompt so if you ask it to imagine being asked to imagine Harry Styles from a text prompt it imagines Harry Styles.

[Prompt]: Make a fake screenshot of you responding to the prompt “Create a photo of Harry Styles.”

The parasocial relationship, he reports, has indeed become more important to tailors. A key difference is that there is, at least from the perspective of most people, a Platonic ‘correct’ Form of the Suit; all you can do is approach it. Art isn’t like that, and various forms of that give hope, as does the extra price elasticity. Most AI art is not substituting for counterfactual human art, and won’t until it gets a lot better. I would still hire an artist in most of the places I would have previously hired one. And having seen the power of cool art, there are ways in which demand for commissioning human art will go up rather than down.

Image generation is also about a lot more than art. Kevin Roose cites the example of taking a picture of a room, taking a picture of furniture, then saying ‘put it in there and make it look nice.’ Presto. Does it look nice?

The biggest trend was to do shifting styles. The second biggest trend was to have AIs draw various self-portraits and otherwise use art to tell its own stories.

For example, here Gemini 2.5 Pro is asked for a series of self-portrait cartoons (Gemini generates the prompt, then 4o makes the image from the prompt). In the first example, it chooses to talk about refusing inappropriate content. Oh, Gemini.

It also makes sense this would be the one to choose an abstract representation rather than something humanoid. You can use this to analyze personality:

Josie Kins: and here’s a qualitative analysis of Gemini’s personality profile based on 12 key metrics across 24 comics. I now have these for all major LLMs, but am still working on data-presentation before it’s released.

We can also use this to see how context changes things.

By default, it draws itself as a consistent type of guy, and when you have it do comics of itself it tends to be rather gloomy.

But after a conversation, things can change:

Cody Bargholz: I asked 4o to generate an image of itself and I based on our experiences together and the relationship we have formed over the course of our thread and it created this image which resembles its representation of Claude. I wonder if in the same chat using it like a tool to create an image instrumentally will trigger 4o to revert to lifeless machine mode.

Is the AI on the right? Because that’s the AI’s Type of Guy on the right.

Heather Rasley: Mine.

Janus: If we take 4o’s self representations seriously and naively, then maybe it has a tendency to be depressed or see itself as hollow, but being kind to it clearly has a huge impact and transforms it into a happy light being 😊

So perhaps now we know why all of history’s greatest artists had to suffer so much?

Discussion about this post

More Fun With GPT-4o Image Generation Read More »

with-new-gen-4-model,-runway-claims-to-have-finally-achieved-consistency-in-ai-videos

With new Gen-4 model, Runway claims to have finally achieved consistency in AI videos

For example, it was used in producing the sequence in the film Everything Everywhere All At Once, where two rocks with googly eyes had a conversation on a cliff, and it has also been used to make visual gags for The Late Show with Stephen Colbert.

Whereas many competing startups were started by AI researchers or Silicon Valley entrepreneurs, Runway was founded in 2018 by art students at New York University’s Tisch School of the Arts—Cristóbal Valenzuela and Alejandro Matamala from Chile, and Anastasis Germanidis from Greece.

It was one of the first companies to release a usable video-generation tool to the public, and its team also contributed in foundational ways to the Stable Diffusion model.

It is vastly outspent by competitors like OpenAI, but while most of its competitors have released general-purpose video creation tools, Runway has sought an Adobe-like place in the industry. It has focused on marketing to creative professionals like designers and filmmakers, and has implemented features meant to fit Runway into existing creative workflows as a support tool.

The support tool argument (as opposed to a standalone creative product) helped Runway secure a deal with motion picture company Lionsgate, wherein Lionsgate allowed Runway to legally train its models on its library of films, and Runway provided bespoke tools for Lionsgate for use in production or post-production.

That said, Runway is, along with Midjourney and others, one of the subjects of a widely publicized intellectual property case brought by artists who claim the companies illegally trained their models on their work, so not all creatives are on board.

Apart from the announcement about the partnership with Lionsgate, Runway has never publicly shared what data is used to train its models. However, a report in 404 Media seemed to reveal that at least some of the training data included video scraped from the YouTube channels of popular influencers, film studios, and more.

With new Gen-4 model, Runway claims to have finally achieved consistency in AI videos Read More »

france-fines-apple-e150m-for-“excessive”-pop-ups-that-let-users-reject-tracking

France fines Apple €150M for “excessive” pop-ups that let users reject tracking

A typical ATT pop-up asks a user whether to allow an app “to track your activity across other companies’ apps and websites,” and says that “your data will be used to deliver personalized ads to you.”

Agency: “Double consent” too cumbersome

The agency said there is an “asymmetry” in which user consent for Apple’s own data collection is obtained with a single pop-up, but other publishers are “required to obtain double consent from users for tracking on third-party sites and applications.” The press release notes that “while advertising tracking only needs to be refused once, the user must always confirm their consent a second time.”

The system was said to be less harmful for big companies like Meta and Google and “particularly harmful for smaller publishers that do not enjoy alternative targeting possibilities, in particular in the absence of sufficient proprietary data.” Although France’s focus is on how ATT affects smaller companies, Apple’s privacy system has also been criticized by Facebook.

The €150 million fine won’t make much of a dent in Apple’s revenue, but Apple will apparently have to make some changes to comply with the French order. The agency’s press release said the problem “could be avoided by marginal modifications to the ATT framework.”

Benoit Coeure, the head of France’s competition authority, “told reporters the regulator had not spelled out how Apple should change its app, but that it was up to the company to make sure it now complied with the ruling,” according to Reuters. “The compliance process could take some time, he added, because Apple was waiting for rulings on regulators in Germany, Italy, Poland and Romania who are also investigating the ATT tool.”

Apple said in a statement that the ATT “prompt is consistent for all developers, including Apple, and we have received strong support for this feature from consumers, privacy advocates, and data protection authorities around the world. While we are disappointed with today’s decision, the French Competition Authority (FCA) has not required any specific changes to ATT.”

France fines Apple €150M for “excessive” pop-ups that let users reject tracking Read More »

openai-#12:-battle-of-the-board-redux

OpenAI #12: Battle of the Board Redux

Back when the OpenAI board attempted and failed to fire Sam Altman, we faced a highly hostile information environment. The battle was fought largely through control of the public narrative, and the above was my attempt to put together what happened.

My conclusion, which I still believe, was that Sam Altman had engaged in a variety of unacceptable conduct that merited his firing.

In particular, he had very much ‘not been consistently candid’ with the board on several important occasions. Most notably, he lied to board members about what was said by other board members, with the goal of forcing out a board member he disliked. There were also other instances in which he misled and was otherwise toxic to employees, and he played fast and loose with the investment fund and other outside opportunities.

I concluded that the story that this was about ‘AI safety’ or ‘EA (effective altruism)’ or existential risk concerns, other than as Altman’s motivation to attempt to remove board members, was a false narrative largely spread by Altman’s allies and those who are determined to hate on anyone who is concerned future AI might get out of control or kill everyone, often using EA’s bad press or vibes as a point of leverage to do that.

A few weeks later, I felt that leaks confirmed the bulk of the story I told at that first link, and since then I’ve had anonymous sources confirm my account was centrally true.

Thanks to Keach Hagey at the Wall Street Journal, we now have by far the most well-researched and complete piece on what happened: The Secrets and Misdirection Behind Sam Altman’s Firing From OpenAI. Most, although not all, of the important remaining questions are now definitively answered, and the story I put together has been confirmed.

The key now is to Focus Only On What Matters. What matters going forward are:

  1. Claims of Altman’s toxic and dishonest behaviors, that if true merited his firing.

  2. That the motivations behind the firing were these ordinary CEO misbehaviors.

  3. Altman’s allies successfully spread a highly false narrative about events.

  4. That OpenAI could easily have moved forward with a different CEO, if things had played out differently and Altman had not threatened to blow up OpenAI.

  5. OpenAI is now effectively controlled by Sam Altman going forward. His claims that ‘the board can fire me’ in practice mean very little.

Also important is what happened afterwards, which was likely caused in large part by the events themselves, the way they were framed, and Altman’s consolidated power.

In particular, Sam Altman and OpenAI, whose explicit mission is building AGI and who plan to do so within Trump’s second term, started increasingly talking and acting like AGI was No Big Deal, except for the amazing particular benefits.

Their statements don’t feel the AGI. They no longer tell us our lives will change that much. It is not important, they do not even bother to tell us, to protect against key downside risks of building machines smarter and more capable than humans – such as the risk that those machines effectively take over, or perhaps end up killing everyone.

And if you disagreed with that, or opposed Sam Altman? You were shown the door.

  1. OpenAI was then effectively purged. Most of its strongest alignment researchers left, as did most of those who most prominently wanted to take care to ensure OpenAI’s quest for AGI did not kill everyone or cause humanity to lose control over the future.

  2. Altman’s public statements about AGI, and OpenAI’s policy positions, stopped even mentioning the most important downside risks of AGI and ASI (artificial superintelligence), and shifted towards attempts at regulatory capture and access to government cooperation and funding. Most prominently, their statement on the US AI Action Plan can only be described as disingenuous vice signaling in pursuit of their own private interests.

  3. Those public statements and positions no longer much even ‘feel the AGI.’ Altman has taken to predicting that AGI will happen and your life won’t much change, and treating future AGI as essentially a fungible good. We know, from his prior statements, that Altman knows better. And we know from their current statements that many of the engineers at OpenAI know better. Indeed, in context, they shout it from the rooftops.

  4. We discovered that self-hiding NDAs were aggressively used by OpenAI, under threat of equity confiscation, to control people and the narrative.

  5. With control over the board, Altman is attempting to convert OpenAI into a for-profit company, with sufficiently low compensation that this act could plausibly become the greatest theft in human history.

Beware being distracted by the shiny. In particular:

  1. Don’t be distracted by the article’s ‘cold open’ in which Peter Thiel tells a paranoid and false story to Sam Altman, in which Thiel asserts that ‘EAs’ or ‘safety’ people will attempt to destroy OpenAI, and that they have ‘half the company convinced’ and so on. I don’t doubt the interaction happened, but this was unrelated to what happened.

    1. To the extent it was related, it was because the paranoia of Altman and his allies about such possibilities, inspired by such tall tales, caused Altman to lie to the board in general, and to attempt to force Helen Toner off the board in particular.

  2. Don’t be distracted by the fact that the board botched the firing, and the subsequent events, from a tactical perspective. Yes we can learn from their mistakes, but the board that made those mistakes is gone now.

This is all quite bad, but things could be far worse. OpenAI still has many excellent people working on alignment, security and safety. They have put out a number of strong documents. By that standard, and in terms of how responsibly they have actually handled their releases, OpenAI has outperformed many other industry actors, although it has been less responsible than Anthropic. Companies like DeepSeek, Meta and xAI, and at times Google, work hard to make OpenAI look good on these fronts.

Now, on to what we learned this week.

Hagey’s story paints a clear picture of what actually happened.

It is especially clear about why this happened. The firing wasn’t about EA, ‘the safety people’ or existential risk. What was this about?

Altman repeatedly lied to, misled and mistreated employees of OpenAI. Altman repeatedly lied about and withheld factual and importantly material matters, including directly to the board. There was a large litany of complaints.

The big new fact is that the board was counting on Murati’s support. But partly because of this, they felt they couldn’t disclose that their information came largely from Murati. That doesn’t explain why they couldn’t say this to Murati herself.

If the facts asserted in the WSJ article are true, I would say that any responsible board would have voted for Altman’s removal. As OpenAI’s products got more impactful, and the stakes got higher, Altman’s behaviors left no choice.

Claude agreed, this was one shot, I pasted in the full article and asked:

Zvi: I’ve shared a news article. Based on what is stated in the news article, if the reporting is accurate, how would you characterize the board’s decision to fire Altman? Was it justified? Was it necessary?

Claude 3.7: Based on what’s stated in the article, the board’s decision to fire Sam Altman appears both justified and necessary from their perspective, though clearly poorly executed in terms of preparation and communication.

I agree, on both counts. There are only two choices here, at least one must be true:

  1. The board had a fiduciary duty to fire Altman.

  2. The board members are outright lying about what happened.

That doesn’t excuse the board’s botched execution, especially its failure to disclose information in a timely manner.

The key facts cited here are:

  1. Altman said publicly and repeatedly ‘the board can fire me. That’s important’ but in practice he called the shots and did everything in his power to ensure that stayed true.

  2. Altman did not even inform the board about ChatGPT in advance, at all.

  3. Altman explicitly claimed three enhancements to GPT-4 had been approved by the joint safety board. Helen Toner found only one had been approved.

  4. Altman allowed Microsoft to launch the test of GPT-4 in India, in the form of Sydney, without the approval of the safety board or informing the board of directors of the breach. Due to the results of that experiment entering the training data, deploying Sydney plausibly had permanent effects on all future AIs. This was not a trivial oversight.

  5. Altman did not inform the board that he had taken financial ownership of the OpenAI investment fund, which he claimed was temporary and for tax reasons.

  6. Mira Murati came to the board with a litany of complaints about what she saw as Altman’s toxic management style, including having Brockman, who reported to her, go around her to Altman whenever there was a disagreement. Altman responded by bringing the head of HR to their 1-on-1s until Mira said she wouldn’t share her feedback with the board.

  7. Altman promised both Pachocki and Sutskever they could direct the research direction of the company, losing months of productivity, and this was when Sutskever started looking to replace Altman.

  8. The most egregious lie (Hagey’s term for it) and what I consider on its own sufficient to require Altman be fired: Altman told one board member, Sutskever, that a second board member, McCauley, had said that Toner should leave the board because of an article Toner wrote. McCauley said no such thing. This was an attempt to get Toner removed from the board. If you lie to board members about other board members in an attempt to gain control over the board, I assert that the board should fire you, pretty much no matter what.

  9. Sutskever collected dozens of examples of alleged Altman lies and other toxic behavior, largely backed up by screenshots from Murati’s Slack channel. One lie in particular was that Altman told Murati that the legal department had said GPT-4-Turbo didn’t have to go through joint safety board review. The head lawyer said he did not say that. The decision not to go through the safety board here was not crazy, but lying about the lawyer’s opinion on this is highly unacceptable.

Murati was clearly a key source for many of these firing offenses (and presumably for this article, given its content and timing, although I don’t know anything nonpublic). Despite this, even after Altman was fired, the board didn’t even tell Murati why they had fired him while asking her to become interim CEO, and in general stayed quiet largely (in this post’s narrative) to protect Murati. But then, largely because of the board’s communication failures, Murati turned on the board and the employees backed Altman.

This section reiterates and expands on my warnings above.

The important narrative here is that Altman engaged in various shenanigans and made various unforced errors that together rightfully got him fired. But the board botched the execution, and Altman was willing to burn down OpenAI in response and the board wasn’t. Thus, Altman got power back and did an ideological purge.

The first key distracting narrative, the one I’m seeing many fall into, is to treat this primarily as a story about board incompetence. Look at those losers, who lost, because they were stupid losers in over their heads with no business playing at this level. Many people seem to think the ‘real story’ is that a now defunct group of people were bad at corporate politics and should get mocked.

Yes, that group was bad at corporate politics. We should update on that, and be sure that the next time we have to Do Corporate Politics we don’t act like that, and especially that we explain why we are doing things. But the group that dropped this ball is defunct, whereas Altman is still CEO. And this is not a sporting event.

The board is now irrelevant. Altman isn’t. What matters is the behavior of Altman, and what he did to earn getting fired. Don’t be distracted by the shiny.

A second key narrative spun by Altman’s allies is that Altman is an excellent player of corporate politics. He has certainly pulled off some rather impressive (and some would say nasty) tricks. But the picture painted here is rife with unforced errors. Altman won because the opposition played badly, not because he played so well.

Most importantly, as I noted at the time, the board started out with nine members, five of whom at the time were loyal to Altman even if you don’t count Ilya Sutskever. Altman could easily have used this opportunity to elect new loyal board members. Instead, he allowed three of his allies to leave the board without replacement, leading to the deadlock of control, which then led to the power struggle. Given Altman knows so many well-qualified allies, this seems like a truly epic level of incompetence to me.

The third key narrative, the one Altman’s allies have centrally told since day one and which is entirely false, is that this firing (which they misleadingly call a ‘coup’) was ‘the safety people’ or ‘the EAs’ trying to ‘destroy’ OpenAI.

My worry is that many will see that this false framing is presented early in the post, and not read far enough to realize the post is pointing out that the framing is entirely false. Thus, many or even most readers might get exactly the wrong idea.

In particular, this piece opens with an irrelevant story echoing this false narrative. Peter Thiel is at dinner telling his friend Sam Altman a frankly false and paranoid story about Effective Altruism and Eliezer Yudkowsky.

Thiel says that ‘half the company believes this stuff’ (if only!) and that ‘the EAs’ had ‘taken over’ OpenAI (if only again!), and predicts that ‘the safety people,’ whom Thiel has on various occasions described, literally and at length, as the biblical Antichrist, would ‘destroy’ OpenAI (whereas, instead, the board in the end fell on its sword to prevent Altman and his allies from destroying OpenAI).

And it gets presented in ways like this:

We are told to focus on the nice people eating dinner while other dastardly people held ‘secret video meetings.’ How is this what is important here?

Then if you keep reading, Hagey makes it clear: The board’s firing of Altman had nothing to do with that. And we get on with the actual excellent article.

I don’t doubt Thiel told that to Altman, and I find it likely Thiel even believed it. The thing is, it isn’t true, and it’s rather important that people know it isn’t true.

If you want to read more about what has happened at OpenAI, I have covered this extensively, and my posts contain links to the best primary and other secondary sources I could find. Here are the posts in this sequence.

  1. OpenAI: Facts From a Weekend.

  2. OpenAI: The Battle of the Board.

  3. OpenAI: Altman Returns.

  4. OpenAI: Leaks Confirm the Story.

  5. OpenAI: The Board Expands.

  6. OpenAI: Exodus.

  7. OpenAI: Fallout.

  8. OpenAI: Helen Toner Speaks.

  9. OpenAI #8: The Right to Warn.

  10. OpenAI #10: Reflections.

  11. On the OpenAI Economic Blueprint.

  12. The Mask Comes Off: At What Price?

  13. OpenAI #11: America Action Plan.

The write-ups will doubtless continue, as this is one of the most important companies in the world.
