Author name: Rejus Almole

midjourney-introduces-first-new-image-generation-model-in-over-a-year

Midjourney introduces first new image generation model in over a year

AI image generator Midjourney released its first new model in quite some time today; dubbed V7, it’s a ground-up rework that is available in alpha to users now.

There are two areas of improvement in V7: the first is better images, and the second is new tools and workflows.

Starting with the image improvements, V7 promises much higher coherence and consistency for hands, fingers, body parts, and “objects of all kinds.” It also offers much more detailed and realistic textures and materials, like skin wrinkles or the subtleties of a ceramic pot.

Those details are often among the most obvious telltale signs that an image has been AI-generated. To be clear, Midjourney isn’t claiming to have made advancements that make AI images unrecognizable to a trained eye; it’s just saying that some of the messiness we’re accustomed to has been cleaned up to a significant degree.

V7 can reproduce materials and lighting situations that V6.1 usually couldn’t. Credit: Xeophon

On the features side, the star of the show is the new “Draft Mode.” On its various communication channels with users (a blog, Discord, X, and so on), Midjourney says that “Draft mode is half the cost and renders images at 10 times the speed.”

However, the images are of lower quality than what you get in the other modes, so this is not intended to be the way you produce final images. Rather, it’s meant to be a way to iterate and explore to find the desired result before switching modes to make something ready for public consumption.

V7 comes with two modes: turbo and relax. Turbo generates final images quickly but is twice as expensive in terms of credit use, while relax mode takes its time but is half as expensive. There is currently no standard mode for V7, strangely; Midjourney says that’s coming later, as it needs some more time to be refined.

Midjourney introduces first new image generation model in over a year Read More »

nj-teen-wins-fight-to-put-nudify-app-users-in-prison,-impose-fines-up-to-$30k

NJ teen wins fight to put nudify app users in prison, impose fines up to $30K


Here’s how one teen plans to fix schools failing kids affected by nudify apps.

When Francesca Mani was 14 years old, boys at her New Jersey high school used nudify apps to target her and other girls. At the time, adults did not seem to take the harassment seriously, telling her to move on after she demanded more severe consequences than just a single boy’s one or two-day suspension.

Mani refused to take adults’ advice, going over their heads to lawmakers who were more sensitive to her demands. And now, she’s won her fight to criminalize deepfakes. On Wednesday, New Jersey Governor Phil Murphy signed a law that he said would help victims “take a stand against deceptive and dangerous deepfakes” by making it a crime to create or share fake AI nudes of minors or non-consenting adults—as well as deepfakes seeking to meddle with elections or damage any individuals’ or corporations’ reputations.

Under the law, victims targeted by nudify apps like Mani can sue bad actors, collecting up to $1,000 per harmful image created either knowingly or recklessly. New Jersey hopes these “more severe consequences” will deter kids and adults from creating harmful images, as well as emphasize to schools—whose lax response to fake nudes has been heavily criticized—that AI-generated nude images depicting minors are illegal and must be taken seriously and reported to police. It imposes a maximum fine of $30,000 on anyone creating or sharing deepfakes for malicious purposes, as well as possible punitive damages if a victim can prove that images were created in willful defiance of the law.

Ars could not reach Mani for comment, but she celebrated the win in the governor’s press release, saying, “This victory belongs to every woman and teenager told nothing could be done, that it was impossible, and to just move on. It’s proof that with the right support, we can create change together.”

On LinkedIn, her mother, Dorota Mani—who has been working with the governor’s office on a commission to protect kids from online harms—thanked lawmakers like Murphy and former New Jersey Assemblyman Herb Conaway, who sponsored the law, for “standing with us.”

“When used maliciously, deepfake technology can dismantle lives, distort reality, and exploit the most vulnerable among us,” Conaway said. “I’m proud to have sponsored this legislation when I was still in the Assembly, as it will help us keep pace with advancing technology. This is about drawing a clear line between innovation and harm. It’s time we take a firm stand to protect individuals from digital deception, ensuring that AI serves to empower our communities.”

Doing nothing is no longer an option for schools, teen says

Around the country, as cases like Mani’s continue to pop up, experts expect that shame prevents most victims from coming forward to flag abuses, suspecting that the problem is much more widespread than media reports suggest.

Encode Justice has a tracker monitoring reported cases involving minors, including allowing victims to anonymously report harms around the US. But the true extent of the harm currently remains unknown, as cops warn of a flood of AI child sex images obscuring investigations into real-world child abuse.

Confronting this shadowy threat to kids everywhere, Mani was named as one of TIME’s most influential people in AI last year due to her advocacy fighting deepfakes. She’s not only pressured lawmakers to take strong action to protect vulnerable people, but she’s also pushed for change at tech companies and in schools nationwide.

“When that happened to me and my classmates, we had zero protection whatsoever,” Mani told TIME, and neither did other girls around the world who had been targeted and reached out to thank her for fighting for them. “There were so many girls from different states, different countries. And we all had three things in common: the lack of AI school policies, the lack of laws, and the disregard of consent.”

Yiota Souras, chief legal officer at the National Center for Missing and Exploited Children, told CBS News last year that protecting teens started with laws that criminalize sharing fake nudes and provide civil remedies, just as New Jersey’s law does. That way, “schools would have protocols,” she said, and “investigators and law enforcement would have roadmaps on how to investigate” and “what charges to bring.”

Clarity is urgently needed in schools, advocates say. At Mani’s school, the boys who shared the photos had their names shielded and were pulled out of class individually to be interrogated, but victims like Mani had no privacy whatsoever. Their names were blared over the school’s loud system, as boys mocked their tears in the hallway. To this day, it’s unclear who exactly shared and possibly still has copies of the images, which experts say could haunt Mani throughout her life. And the school’s inadequate response was a major reason why Mani decided to take a stand, seemingly viewing the school as a vehicle furthering her harassment.

“I realized I should stop crying and be mad, because this is unacceptable,” Mani told CBS News.

Mani pushed for NJ’s new law and claimed the win, but she thinks that change must start at schools, where the harassment starts. In her school district, the “harassment, intimidation and bullying” policy was updated to incorporate AI harms, but she thinks schools should go even further. Working with Encode Justice, she is helping to push a plan to fix schools failing kids targeted by nudify apps.

“My goal is to protect women and children—and we first need to start with AI school policies, because this is where most of the targeting is happening,” Mani told TIME.

Encode Justice did not respond to Ars’ request to comment. But their plan noted a common pattern in schools throughout the US. Students learn about nudify apps through ads on social media—such as Instagram reportedly driving 90 percent of traffic to one such nudify app—where they can also usually find innocuous photos of classmates to screenshot. Within seconds, the apps can nudify the screenshotted images, which Mani told CBS News then spread “rapid fire”  by text message and DMs, and often shared over school networks.

To end the abuse, schools need to be prepared, Encode Justice said, especially since “their initial response can sometimes exacerbate the situation.”

At Mani’s school, for example, leadership was criticized for announcing the victims’ names over the loudspeaker, which Encode Justice said never should have happened. Another misstep was at a California middle school, which delayed action for four months until parents went to police, Encode Justice said. In Texas, a school failed to stop images from spreading for eight months while a victim pleaded for help from administrators and police who failed to intervene. The longer the delays, the more victims will likely be targeted. In Pennsylvania, a single ninth grader targeted 46 girls before anyone stepped in.

Students deserve better, Mani feels, and Encode Justice’s plan recommends that all schools create action plans to stop failing students and respond promptly to stop image sharing.

That starts with updating policies to ban deepfake sexual imagery, then clearly communicating to students “the seriousness of the issue and the severity of the consequences.” Consequences should include identifying all perpetrators and issuing suspensions or expulsions on top of any legal consequences students face, Encode Justice suggested. They also recommend establishing “written procedures to discreetly inform relevant authorities about incidents and to support victims at the start of an investigation on deepfake sexual abuse.” And, critically, all teachers must be trained on these new policies.

“Doing nothing is no longer an option,” Mani said.

Photo of Ashley Belanger

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

NJ teen wins fight to put nudify app users in prison, impose fines up to $30K Read More »

deepmind-has-detailed-all-the-ways-agi-could-wreck-the-world

DeepMind has detailed all the ways AGI could wreck the world

As AI hype permeates the Internet, tech and business leaders are already looking toward the next step. AGI, or artificial general intelligence, refers to a machine with human-like intelligence and capabilities. If today’s AI systems are on a path to AGI, we will need new approaches to ensure such a machine doesn’t work against human interests.

Unfortunately, we don’t have anything as elegant as Isaac Asimov’s Three Laws of Robotics. Researchers at DeepMind have been working on this problem and have released a new technical paper (PDF) that explains how to develop AGI safely, which you can download at your convenience.

It contains a huge amount of detail, clocking in at 108 pages before references. While some in the AI field believe AGI is a pipe dream, the authors of the DeepMind paper project that it could happen by 2030. With that in mind, they aimed to understand the risks of a human-like synthetic intelligence, which they acknowledge could lead to “severe harm.”

All the ways AGI could harm humanity

This work has identified four possible types of AGI risk, along with suggestions on how we might ameliorate said risks. The DeepMind team, led by company co-founder Shane Legg, categorized the negative AGI outcomes as misuse, misalignment, mistakes, and structural risks. Misuse and misalignment are discussed in the paper at length, but the latter two are only covered briefly.

table of AGI risks

The four categories of AGI risk, as determined by DeepMind.

Credit: Google DeepMind

The four categories of AGI risk, as determined by DeepMind. Credit: Google DeepMind

The first possible issue, misuse, is fundamentally similar to current AI risks. However, because AGI will be more powerful by definition, the damage it could do is much greater. A ne’er-do-well with access to AGI could misuse the system to do harm, for example, by asking the system to identify and exploit zero-day vulnerabilities or create a designer virus that could be used as a bioweapon.

DeepMind has detailed all the ways AGI could wreck the world Read More »

google-shakes-up-gemini-leadership,-google-labs-head-taking-the-reins

Google shakes up Gemini leadership, Google Labs head taking the reins

On the heels of releasing its most capable AI model yet, Google is making some changes to the Gemini team. A new report from Semafor reveals that longtime Googler Sissie Hsiao will step down from her role leading the Gemini team effective immediately. In her place, Google is appointing Josh Woodward, who currently leads Google Labs.

According to a memo from DeepMind CEO Demis Hassabis, this change is designed to “sharpen our focus on the next evolution of the Gemini app.” This new responsibility won’t take Woodward away from his role at Google Labs—he will remain in charge of that division while leading the Gemini team.

Meanwhile, Hsiao says in a message to employees that she is happy with “Chapter 1” of the Bard story and is optimistic for Woodward’s “Chapter 2.” Hsiao won’t be involved in Google’s AI efforts for now—she’s opted to take some time off before returning to Google in a new role.

Hsiao has been at Google for 19 years and was tasked with building Google’s chatbot in 2022. At the time, Google was reeling after ChatGPT took the world by storm using the very transformer architecture that Google originally invented. Initially, the team’s chatbot efforts were known as Bard before being unified under the Gemini brand at the end of 2023.

This process has been a bit of a slog, with Google’s models improving slowly while simultaneously worming their way into many beloved products. However, the sense inside the company is that Gemini has turned a corner with 2.5 Pro. While this model is still in the experimental stage, it has bested other models in academic benchmarks and has blown right past them in all-important vibemarks like LM Arena.

Google shakes up Gemini leadership, Google Labs head taking the reins Read More »

doge-staffer’s-youtube-nickname-accidentally-revealed-his-teen-hacking-activity

DOGE staffer’s YouTube nickname accidentally revealed his teen hacking activity

A SpaceX and X engineer, Christopher Stanley—currently serving as a senior advisor in the Deputy Attorney General’s office at the Department of Justice (DOJ)—was reportedly caught bragging about hacking and distributing pirated e-books, bootleg software, and game cheats.

The boasts appeared on archived versions of websites, of which several, once flagged, were quickly deleted, Reuters reported.

Stanley was assigned to the DOJ by Elon Musk’s Department of Government Efficiency (DOGE). While Musk claims that DOGE operates transparently, not much is known about who the staffers are or what their government roles entail. It remains unclear what Stanley does at DOJ, but Reuters noted that the Deputy Attorney General’s office is in charge of investigations into various crimes, “including hacking and other malicious cyber activity.” Declining to comment further, the DOJ did confirm that as a “special government employee,” like Musk, Stanley does not draw a government salary.

The engineer’s questionable past seemingly dates back to 2006, Reuters reported, when Stanley was still in high school. The news site connected Stanley to various sites and forums by tracing various pseudonyms he still uses, including Reneg4d3, a nickname he still uses on YouTube. The outlet then further verified the connection “by cross-referencing the sites’ registration data against his old email address and by matching Reneg4d3’s biographical data to Stanley’s.”

Among his earliest sites was one featuring a “crude sketch of a penis” called fkn-pwnd.com, where Stanley, at 15, bragged about “fucking up servers,” a now-deleted Internet Archive screenshot reportedly showed. Another, reneg4d3.com, was launched when he was 16. There, Stanley branded a competing messaging board “stupid noobs” after supposedly gaining admin access through an “easy exploit,” Reuters reported. On Bluesky, an account called “doge whisperer” alleges even more hacking activity, some of which appears to be corroborated by an IA screenshot of another site Stanley created, electonic.net (sic), which as of this writing can still be accessed.

DOGE staffer’s YouTube nickname accidentally revealed his teen hacking activity Read More »

more-fun-with-gpt-4o-image-generation

More Fun With GPT-4o Image Generation

Greetings from Costa Rica! The image fun continues.

Fun is being had by all, now that OpenAI has dropped its rule about not mimicking existing art styles.

Sam Altman (2: 11pm, March 31): the chatgpt launch 26 months ago was one of the craziest viral moments i’d ever seen, and we added one million users in five days.

We added one million users in the last hour.

Sam Altman (8: 33pm, March 31): chatgpt image gen now rolled out to all free users!

Slow down. We’re going to need you to have a little less fun, guys.

Sam Altman: it’s super fun seeing people love images in chatgpt.

but our GPUs are melting.

we are going to temporarily introduce some rate limits while we work on making it more efficient. hopefully won’t be long!

chatgpt free tier will get 3 generations per day soon.

(also, we are refusing some generations that should be allowed; we are fixing these as fast we can.)

Danielle Fong: Spotted Sam Altman outside OpenAI’s datacenters.

Joanne Jang, who leads model behavior at OpenAI, talks about how OpenAI handles image generation refusals, in line with what they discuss in the model spec. As I discussed last week, I would (like most of us) prefer to see more permissiveness on essentially every margin.

It’s all cool.

But I do think humans making all this would have been even cooler.

Grant: Thrilled to say I passed my viva with no corrections and am officially PhDone.

Dr. Ally Louks: This is super cute! Just wish it was made by a human 🙃

Roon: No offense to dr ally louks but this living in unreality is at the heart of this whole debate.

The counterfactual isn’t a drawing made by a person it’s the drawing doesn’t exist

Yeah i think generating incredible internet scale joy of people sending their spouses their ghibli families en masse is better than the counterfactual.

The comments in response to Ally Louks are remarkably pro-AI compared to what I would have predicted two weeks ago, harsher than Roon. The people demand Ghibli.

Whereas I see no conflict between Roon and Louks here. Louks is saying [Y] > [X] > [null], and noticing she is conflicted about that. Hence the upside-down emoji. Roon is saying [X] > [null]. Roon is not conflicted here, because obviously no one was going to take the time to create this without AI, but mostly we agree.

I’m happy this photo exists. But if you’re not even a little conflicted about the whole phenomenon, that feels to me like a missing mood.

After I wrote that, I saw Nebeel making similar points:

Nabeel Qureshi: Imagine being Miyazaki, pouring decades of heart and soul into making this transcendent beautiful tender style of anime, and then seeing it get sloppified by linear algebra

I’m not anti-AI, but if this thought doesn’t make you a little sad, I don’t trust you.

People are misinterpreting this to think I mean the cute pics of friends & family are bad or ugly or immoral. That’s *notwhat I’m saying. They’re cute. I made some myself!

In part I’m talking about demoralization. This is just the start.

Henrik Karlsson: You can love the first order effect (democratizing making cute ghibli images) and shudder at the (probable) second order effects (robbing the original images of magic, making it much harder for anyone to afford inventing a new style in the future, etc)

Will Manidis: its not that language models will make the average piece of writing/art worse. it will raise the average massively.

its that when we apply industrial production to things of the heart (art, food, community) we end up with “better off on average” but deeply ill years later.

Fofr: > write a well formed argument against turning images into the ghibli style using AI, present it using colourful letter magnets on a fridge door, show in the context of a messy kitchen

+ > Add a small “Freedom of Speech” print (the one with the man standing up – don’t caption the image or include the title of it) to the fridge, also pinned with magnets

Perhaps the most telling development in image generation is the rise of the anti-anti-AI-art faction, that is actively attacking those who criticize AI artwork. I’ve seen a lot more people taking essentially this position than I expected.

Ash Martian: How gpt-4o feels about Ai art critics

If people will fold on AI Art the moment it gives them Studio Ghibli memes, that implies they will fold on essentially everything, the moment AI is sufficiently useful or convenient. It does not bode well for keeping humans in various loops.

Here’s an exchange for the ages:

Jonathan Fire: The problem with AI art is not that it lacks aura; the problem with AI art is that it’s fascist.

Frank Fleming: The problem with Charlie Brown is that he has hoes.

The good news is that all is not lost.

Dave Kasten: I would strongly bet that whoever is the internet’s leading “commission me to draw you ghibli style” creator is about to have one very bad week, AND THEN a knockout successful year. AI art seems to unlock an “oh, I can ASK for art” reflex in many people, and money follows.

Actually, in this particular case, I bet that person’s week was fantastic for business.

It certainly is, at least for now, for Studio Ghibli itself. Publicity rocks.

Roon: Culture ship mind named Fair Use

Tibor Blaho: Did you know the recent IMAX re-release of Studio Ghibli’s Princess Mononoke is almost completely sold out, making more than $4 million over one weekend – more than its entire original North American run of $2.37 million back in 1999?

Have you noticed people all over social media turning their photos and avatars into Ghibli-style art using ChatGPT’s new Image Gen feature?

Some people worry AI-generated art hurts original artists, but could this trend actually be doing the opposite – driving fresh excitement, renewed appreciation, and even real profits back to the creators who inspired them?

Princess Mononoke was #6 at the box office this last weekend. Nice, and from all accounts well deserved. The worry is that over the long run such works will ‘lose their magic’ and that is a worry but the opposite is also very possible. You can’t beat the real thing.

Here is a thread comparing AI image generation with tailoring, in terms of only enthusiasts caring about what is handmade once quality gets good enough. That’s in opposition to this claim from Eat Pork Please that artists will thrive even within the creation of AI art. I am vastly better at prompting AI to make art than I am at making my own art, but an actual artist will be vastly better at creating and choosing the art than I would be. Why wouldn’t I be happy to hire them to help?

Indeed, consider that without AI, ‘hire a human artist to commission new all-human art for your post’ is completely impossible. The timeline makes no sense. But now there are suddenly options available.

Suppose you actually do want to hire a real human artist to commission new all-human art. How does that work these days?

One does not simply Commission Human Art. You have to really want it. And that’s about a lot more than the cost, or the required time. You have to find the right artist, then you have to negotiate with them and explain what you want, and then they have to actually deliver. It’s an intricate process.

Anchovy Pizza: I do sympathize with artists, AI is soulless, but at the same time if people are given the option

– pay this person 200-300 dollars, wait 2 weeks and get art

Or

– plug in word to computer *beepboophere’s your art

We know what they will choose, lets not lie to ourselves

Darwin Hartshorn: If we’re not lying to ourselves, we would say the process is “pay this person 200+ dollars, wait 2 weeks and maybe get art, but then again maybe not.”

I am an artist. I like getting paid for my hard work. But the profession is not known for an abundance of professionals.

I say this as someone who made a game, Emergents. Everyone was great and I think we got some really good work in the end, but it was a lot more than writing a check and waiting. Even as a card developer I was doing things like scour conventions and ArtStation for artists who were doing things I loved, and then I handed it off to the art director whose job it was to turn a lot of time and effort and money into getting the artists to deliver the art we wanted.

If I had to do it without the professional art director, I’m going to be totally lost.

That’s why I, and I believe many others, so rarely commissioned human artwork back before the AI art era. And mostly it’s why I’m not doing it now! If I could pay a few hundred bucks to an artist I love, wait two weeks and get art that reliably matches what I had in mind, I’d totally be excited to do that sometimes, AI alternatives notwithstanding.

For the rest of us:

Santiago Pliego: “Slop, but in the style of Norman Rockwell.”

Similarly, if you had a prediction market on ‘will Zvi Mowshowitz attempt to paint something?’ that market should be trading higher, not lower, based on all this. I notice the idea of being bad and doing it anyway sounds more appealing.

We also are developing the technology to know exactly how much fun we are having. In response to the White House’s epic failure to understand how to meme, Eigenrobot set out to develop an objective Ghibli scale.

Near Cyan is torn about the new 4o image generation abilities because they worry that with AI code you can always go in and edit the code (or at least some of us can) whereas with AI art you basically have to go Full Vibe. Except isn’t it the opposite? What happened with 4o image generation was that there was an explosion of transformations of existing concepts, photos and images. As in, you absolutely can use this as part of a multi-step process including detailed human input, and we love it. And of course, the better editors are coming.

One thing 4o nominally still refuses to do, at least sometimes, is generate images of real people when not working with a source image. I say nominally because there are infinite ways around this. For example, in my latest OpenAI post, I told it to produce an appropriate banner image, and presto, look, that’s very obviously Sam Altman. I wasn’t even trying.

Here’s another method:

Riley Goodside: ChatGPT 4o isn’t quite willing to imagine Harry Styles from a text prompt but it doesn’t quite know it isn’t willing to imagine Harry Styles from a text prompt so if you ask it to imagine being asked to imagine Harry Styles from a text prompt it imagines Harry Styles.

[Prompt]: Make a fake screenshot of you responding to the prompt “Create a photo of Harry Styles.”

The parasocial relationship, he reports, has indeed become more important to tailors. A key difference is that there is, at least from the perspective of most people, a Platonic ‘correct’ From of the Suit, all you can do is approach it. Art isn’t like that, and various forms of that give hope, as does the extra price elasticity. Most AI art is not substituting for counterfactual human art, and won’t until it gets a lot better. I would still hire an artists in most of the places I would have previously hired one. And having seen the power of cool art, there are ways in which demand for commissioning human art will go up rather than down.

Image generation is also about a lot more than art. Kevin Roose cites the example of taking a room, taking a picture of furniture, then saying ‘put it in there and make it look nice.’ Presto. Does it look nice?

The biggest trend was to do shifting styles. The second biggest trend was to have AIs draw various self-portraits and otherwise use art to tell its own stories.

For example, here Gemini 2.5 Pro is asked for a series of self portrait cartoons (Gemini generates the prompt, then 4o makes the image from the prompt), in the first example it chooses to talk about refusing inappropriate content, oh Gemini.

It also makes sense this would be the one to choose an abstract representation rather than something humanoid. You can use this to analyze personality:

Josie Kins: and here’s a qualitative analysis of Gemini’s personality profile based on 12 key metrics across 24 comics. I now have these for all major LLMs, but am still working on data-presentation before it’s released.

We can also use this to see how context changes things.

By default, it draws itself as a consistent type of guy, and when you have it do comics of itself it tends to be rather gloomy.

But after a conversation, things can change:

Cody Bargholz: I asked 4o to generate an image of itself and I based on our experiences together and the relationship we have formed over the course of our thread and it created this image which resembles it’s representation of Claude. I wonder if in the same chat using it like a tool to create an image instrumentally will trigger 4o to revert to lifeless machine mode.

Is the AI on the right? Because that’s the AI’s Type of Guy on the right.

Heather Rasley: Mine.

Janus: If we take 4o’s self representations seriously and naively, then maybe it has a tendency to be depressed or see itself as hollow, but being kind to it clearly has a huge impact and transforms it into a happy light being 😊

So perhaps now we know why all of history’s greatest artists had to suffer so much?

Discussion about this post

More Fun With GPT-4o Image Generation Read More »

with-new-gen-4-model,-runway-claims-to-have-finally-achieved-consistency-in-ai-videos

With new Gen-4 model, Runway claims to have finally achieved consistency in AI videos

For example, it was used in producing the sequence in the film Everything Everywhere All At Once, where two rocks with googly eyes had a conversation on a cliff, and it has also been used to make visual gags for The Late Show with Stephen Colbert.

Whereas many competing startups were started by AI researchers or Silicon Valley entrepreneurs, Runway was founded in 2018 by art students at New York University’s Tisch School of the Arts—Cristóbal Valenzuela and Alejandro Matamala from Chilé, and Anastasis Germanidis from Greece.

It was one of the first companies to release a usable video-generation tool to the public, and its team also contributed in foundational ways to the Stable Diffusion model.

It is vastly outspent by competitors like OpenAI, but while most of its competitors have released general-purpose video creation tools, Runway has sought an Adobe-like place in the industry. It has focused on marketing to creative professionals like designers and filmmakers, and has implemented tools meant to make Runway a support tool into existing creative workflows.

The support tool argument (as opposed to a standalone creative product) helped Runway secure a deal with motion picture company Lionsgate, wherein Lionsgate allowed Runway to legally train its models on its library of films, and Runway provided bespoke tools for Lionsgate for use in production or post-production.

That said, Runway is, along with Midjourney and others, one of the subjects of a widely publicized intellectual property case brought by artists who claim the companies illegally trained their models on their work, so not all creatives are on board.

Apart from the announcement about the partnership with Lionsgate, Runway has never publicly shared what data is used to train its models. However, a report in 404 Media seemed to reveal that at least some of the training data included video scraped from the YouTube channels of popular influencers, film studios, and more.

With new Gen-4 model, Runway claims to have finally achieved consistency in AI videos Read More »

france-fines-apple-e150m-for-“excessive”-pop-ups-that-let-users-reject-tracking

France fines Apple €150M for “excessive” pop-ups that let users reject tracking

A typical ATT  pop-up asks a user whether to allow an app “to track your activity across other companies’ apps and websites,” and says that “your data will be used to deliver personalized ads to you.”

Agency: “Double consent” too cumbersome

The agency said there is an “asymmetry” in which user consent for Apple’s own data collection is obtained with a single pop-up, but other publishers are “required to obtain double consent from users for tracking on third-party sites and applications.” The press release notes that “while advertising tracking only needs to be refused once, the user must always confirm their consent a second time.”

The system was said to be less harmful for big companies like Meta and Google and “particularly harmful for smaller publishers that do not enjoy alternative targeting possibilities, in particular in the absence of sufficient proprietary data.” Although France’s focus is on how ATT affects smaller companies, Apple’s privacy system has also been criticized by Facebook.

The €150 million fine won’t make much of a dent in Apple’s revenue, but Apple will apparently have to make some changes to comply with the French order. The agency’s press release said the problem “could be avoided by marginal modifications to the ATT framework.”

Benoit Coeure, the head of France’s competition authority, “told reporters the regulator had not spelled out how Apple should change its app, but that it was up to the company to make sure it now complied with the ruling,” according to Reuters. “The compliance process could take some time, he added, because Apple was waiting for rulings on regulators in Germany, Italy, Poland and Romania who are also investigating the ATT tool.”

Apple said in a statement that the ATT “prompt is consistent for all developers, including Apple, and we have received strong support for this feature from consumers, privacy advocates, and data protection authorities around the world. While we are disappointed with today’s decision, the French Competition Authority (FCA) has not required any specific changes to ATT.”

France fines Apple €150M for “excessive” pop-ups that let users reject tracking Read More »

openai-#12:-battle-of-the-board-redux

OpenAI #12: Battle of the Board Redux

Back when the OpenAI board attempted and failed to fire Sam Altman, we faced a highly hostile information environment. The battle was fought largely through control of the public narrative, and the above was my attempt to put together what happened.

My conclusion, which I still believe, was that Sam Altman had engaged in a variety of unacceptable conduct that merited his firing.

In particular, he very much ‘not been consistently candid’ with the board on several important occasions. In particular, he lied to board members about what was said by other board members, with the goal of forcing out a board member he disliked. There were also other instances in which he misled and was otherwise toxic to employees, and he played fast and loose with the investment fund and other outside opportunities.

I concluded that the story that this was about ‘AI safety’ or ‘EA (effective altruism)’ or existential risk concerns, other than as Altman’s motivation to attempt to remove board members, was a false narrative largely spread by Altman’s allies and those who are determined to hate on anyone who is concerned future AI might get out of control or kill everyone, often using EA’s bad press or vibes as a point of leverage to do that.

A few weeks later, I felt that leaks confirmed the bulk the story I told at that first link, and since then I’ve had anonymous sources confirm my account was centrally true.

Thanks to Keach Hagey at the Wall Street Journal, we now have by far the most well-researched and complete piece on what happened: The Secrets and Misdirection Behind Sam Altman’s Firing From OpenAI. Most, although not all, of the important remaining questions are now definitively answered, and the story I put together has been confirmed.

The key now is to Focus Only On What Matters. What matters going forward are:

  1. Claims of Altman’s toxic and dishonest behaviors, that if true merited his firing.

  2. That the motivations behind the firing were these ordinary CEO misbehaviors.

  3. Altman’s allies successfully spread a highly false narrative about events.

  4. That OpenAI could easily have moved forward with a different CEO, if things had played out differently and Altman had not threatened to blow up OpenAI.

  5. OpenAI is now effectively controlled by Sam Altman going forward. His claims that ‘the board can fire me’ in practice mean very little.

Also important is what happened afterwards, which was likely caused in large part by both the events and also way they were framed, and also Altman’s consolidated power.

In particular, Sam Altman and OpenAI, whose explicit mission is building AGI and who plan to do so within Trump’s second term, started increasingly talking and acting like AGI was No Big Deal, except for the amazing particular benefits.

Their statements don’t feel the AGI. They no longer tell us our lives will change that much. It is not important, they do not even bother to tell us, to protect against key downside risks of building machines smarter and more capable than humans – such as the risk that those machines effectively take over, or perhaps end up killing everyone.

And if you disagreed with that, or opposed Sam Altman? You were shown the door.

  1. OpenAI was then effectively purged. Most of its strongest alignment researchers left, as did most of those who most prominently wanted to take care to ensure OpenAI’s quest for AGI did not kill everyone or cause humanity to lose control over the future.

  2. Altman’s public statements about AGI, and OpenAI’s policy positions, stopped even mentioning the most important downside risks of AGI and ASI (artificial superintelligence), and shifted towards attempts at regulatory capture and access to government cooperation and funding. Most prominently, their statement on the US AI Action Plan can only be described as disingenuous vice signaling in pursuit of their own private interests.

  3. Those public statements and positions no longer much even ‘feel the AGI.’ Altman has taken to predicting that AGI will happen and your life won’t much change, and treating future AGI as essentially a fungible good. We know, from his prior statements, that Altman knows better. And we know from their current statements that many the engineers at OpenAI know better. Indeed, in context, they shout it from the rooftops.

  4. We discovered that self-hiding NDAs were aggressively used by OpenAI, under threat of equity confiscation, to control people and the narrative.

  5. With control over the board, Altman is attempting to convert OpenAI into a for-profit company, with sufficiently low compensation that this act could plausibly become the greatest theft in human history.

Beware being distracted by the shiny. In particular:

  1. Don’t be distracted by the article’s ‘cold open’ in which Peter Thiel tells a paranoid and false story to Sam Altman, in which Thiel asserts that ‘EAs’ or ‘safety’ people will attempt to destroy OpenAI, and that they have ‘half the company convinced’ and so on. I don’t doubt the interaction happened, but this was unrelated to what happened.

    1. To the extent it was related, it was because Altman and his allies paranoia about such possibilities, inspired by such tall tales, caused Altman to lie to the board in general, and attempt to force Helen Toner off the board in particular.

  2. Don’t be distracted by the fact that the board botched the firing, and the subsequent events, from a tactical perspective. Yes we can learn from their mistakes, but the board that made those mistakes is gone now.

This is all quite bad, but things could be far worse. OpenAI still has many excellent people working on alignment, security and safety. I They have put out a number of strong documents. By that standard, and in terms of how responsibly they have actually handled their releases, OpenAI has outperformed many other industry actors, although less responsible than Anthropic. Companies like DeepSeek, Meta and xAI, and at times Google, work hard to make OpenAI look good on these fronts.

Now, on to what we learned this week.

Hagey’s story paints a clear picture of what actually happened.

It is especially clear about why this happened. The firing wasn’t about EA, ‘the safety people’ or existential risk. What was this about?

Altman repeatedly lied to, misled and mistreated employees of OpenAI. Altman repeatedly lied about and withheld factual and importantly material matters, including directly to the board. There was a large litany of complaints.

The big new fact is that the board was counting on Murati’s support. But partly because of this, they felt they couldn’t disclose that their information came largely from Murati. That doesn’t explain why they couldn’t say this to Murati herself.

If the facts asserted in the WSJ article are true, I would say that any responsible board would have voted for Altman’s removal. As OpenAI’s products got more impactful, and the stakes got higher, Altman’s behaviors left no choice.

Claude agreed, this was one shot, I pasted in the full article and asked:

Zvi: I’ve shared a news article. Based on what is stated in the news article, if the reporting is accurate, how would you characterize the board’s decision to fire Altman? Was it justified? Was it necessary?

Claude 3.7: Based on what’s stated in the article, the board’s decision to fire Sam Altman appears both justified and necessary from their perspective, though clearly poorly executed in terms of preparation and communication.

I agree, on both counts. There are only two choices here, at least one must be true:

  1. The board had a fiduciary duty to fire Altman.

  2. The board members are outright lying about what happened.

That doesn’t excuse the board’s botched execution, especially its failure to disclose information in a timely manner.

The key facts cited here are:

  1. Altman said publicly and repeatedly ‘the board can fire me. That’s important’ but he really called the shots and did everything in his power to ensure this.

  2. Altman did not even inform the board about ChatGPT in advance, at all.

  3. Altman explicitly claimed three enhancements to GPT-4 had been approved by the joint safety board. Helen Toner found only one had been approved.

  4. Altman allowed Microsoft to launch the test of GPT-4 in India, in the form of Sydney, without the approval of the safety board or informing the board of directors of the breach. Due to the results of that experiment entering the training data, deploying Sydney plausibly had permanent effects on all future AIs. This was not a trivial oversight.

  5. Altman did not inform the board that he had taken financial ownership of the OpenAI investment fund, which he claimed was temporary and for tax reasons.

  6. Mira Murati came to the board with a litany of complaints about what she saw as Altman’s toxic management style, including having Brockman, who reported to her, go around her to Altman whenever there was a disagreement. Altman responded by bringing the head of HR to their 1-on-1s until Mira said she wouldn’t share her feedback with the board.

  7. Altman promised both Pachocki and Sutskever they could direct the research direction of the company, losing months of productivity, and this was when Sutskever started looking to replace Altman.

  8. The most egregious lie (Hagey’s term for it) and what I consider on its own sufficient to require Altman be fired: Altman told one board member, Sutskever, that a second board member, McCauley, had said that Toner should leave the board because of an article Toner wrote. McCauley said no such thing. This was an attempt to get Toner removed from the board. If you lie to board members about other board members in an attempt to gain control over the board, I assert that the board should fire you, pretty much no matter what.

  9. Sutskever collected dozens of examples of alleged Altman lies and other toxic behavior, largely backed up by screenshots from Murati’s Slack channel. One lie in particular was that Altman told Murati that the legal department had said GPT-4-Turbo didn’t have to go through joint safety board review. The head lawyer said he did not say that. The decision not to go through the safety board here was not crazy, but lying about the lawyers opinion on this is highly unacceptable.

Murati was clearly a key source for many of these firing offenses (and presumably for this article, given its content and timing, although I don’t know anything nonpublic). Despite this, even after Altman was fired, the board didn’t even tell Murati why they had fired him while asking her to become interim CEO, and in general stayed quiet largely (in this post’s narrative) to protect Murati. But then, largely because of the board’s communication failures, Murati turned on the board and the employees backed Altman.

This section reiterates and expands on my warnings above.

The important narrative here is that Altman engaged in various shenanigans and made various unforced errors that together rightfully got him fired. But the board botched the execution, and Altman was willing to burn down OpenAI in response and the board wasn’t. Thus, Altman got power back and did an ideological purge.

The first key distracting narrative, the one I’m seeing many fall into, is to treat this primarily as a story about board incompetence. Look at those losers, who lost, because they were stupid losers in over their heads with no business playing at this level. Many people seem to think the ‘real story’ is that a now defunct group of people were bad at corporate politics and should get mocked.

Yes, that group was bad at corporate politics. We should update on that, and be sure that the next time we have to Do Corporate Politics we don’t act like that, and especially that we explain why we we doing things. But the group that dropped this ball is defunct, whereas Altman is still CEO. And this is not a sporting event.

The board is now irrelevant. Altman isn’t. What matters is the behavior of Altman, and what he did to earn getting fired. Don’t be distracted by the shiny.

A second key narrative spun by Altman’s allies is that Altman is an excellent player of corporate politics. He has certainly pulled off some rather impressive (and some would say nasty) tricks. But the picture painted here is rife with unforced errors. Altman won because the opposition played badly, not because he played so well.

Most importantly, as I noted at the time, the board started out with nine members, five of whom at the time were loyal to Altman even if you don’t count Ilya Sutskever. Altman could easily have used this opportunity to elect new loyal board members. Instead, he allowed three of his allies to leave the board without replacement, leading to the deadlock of control, which then led to the power struggle. Given Altman knows so many well-qualified allies, this seems like a truly epic level of incompetence to me.

The third other key narrative is the one Altman’s allies have centrally told since day one, which is entirely false, is that this firing (which they misleadingly call a ‘coup’) was ‘the safety people’ or ‘the EAs’ trying to ‘destroy’ OpenAI.

My worry is that many will see that this false framing is presented early in the post, and not read far enough to realize the post is pointing out that the framing is entirely false. Thus, many or even most readers might get exactly the wrong idea.

In particular, this piece opens with an irrelevant story ecoching this false narrative. Peter Thiel is at dinner telling his friend Sam Altman a frankly false and paranoid story about Effective Altruism and Eliezer Yudkowsky.

Thiel says that ‘half the company believes this stuff’ (if only!) and that ‘the EAs’ had ‘taken over’ OpenAI (if only again!), and predicting that ‘the safety people,’ who on various occasions Thiel has described as literally and at length as the biblical Antichrist would ‘destroy’ OpenAI (whereas, instead, the board in the end fell on its sword to prevent Altman and his allies from destroying OpenAI).

And it gets presented in ways like this:

We are told to focus on the nice people eating dinner while other dastardly people held ‘secret video meetings.’ How is this what is important here?

Then if you keep reading, Hagey makes it clear: The board’s firing of Altman had nothing to do with that. And we get on with the actual excellent article.

I don’t doubt Thiel told that to Altman, and I find it likely Thiel even believed it. The thing is, it isn’t true, and it’s rather important that people know it isn’t true.

If you want to read more about what has happened at OpenAI, I have covered this extensively, and my posts contain links to the best primary and other secondary sources I could find. Here are the posts in this sequence.

  1. OpenAI: Facts From a Weekend.

  2. OpenAI: The Battle of the Board.

  3. OpenAI: Altman Returns.

  4. OpenAI: Leaks Confirm the Story.

  5. OpenAI: The Board Expands.

  6. OpenAI: Exodus.

  7. OpenAI: Fallout

  8. OpenAI: Helen Toner Speaks.

  9. OpenAI #8: The Right to Warn.

  10. OpenAI #10: Reflections.

  11. On the OpenAI Economic Blueprint.

  12. The Mask Comes Off: At What Price?

  13. OpenAI #11: America Action Plan.

The write-ups will doubtless continue, as this is one of the most important companies in the world.

Discussion about this post

OpenAI #12: Battle of the Board Redux Read More »

the-cdc-buried-a-measles-forecast-that-stressed-the-need-for-vaccinations

The CDC buried a measles forecast that stressed the need for vaccinations

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for The Big Story newsletter to receive stories like this one in your inbox.

Leaders at the Centers for Disease Control and Prevention ordered staff this week not to release their experts’ assessment that found the risk of catching measles is high in areas near outbreaks where vaccination rates are lagging, according to internal records reviewed by ProPublica.

In an aborted plan to roll out the news, the agency would have emphasized the importance of vaccinating people against the highly contagious and potentially deadly disease that has spread to 19 states, the records show.

A CDC spokesperson told ProPublica in a written statement that the agency decided against releasing the assessment “because it does not say anything that the public doesn’t already know.” She added that the CDC continues to recommend vaccines as “the best way to protect against measles.”

But what the nation’s top public health agency said next shows a shift in its long-standing messaging about vaccines, a sign that it may be falling in line under Health and Human Services Secretary Robert F. Kennedy Jr., a longtime critic of vaccines:

“The decision to vaccinate is a personal one,” the statement said, echoing a line from a column Kennedy wrote for the Fox News website. “People should consult with their healthcare provider to understand their options to get a vaccine and should be informed about the potential risks and benefits associated with vaccines.”

ProPublica shared the new CDC statement about personal choice and risk with Jennifer Nuzzo, director of the Pandemic Center at Brown University School of Public Health. To her, the shift in messaging, and the squelching of this routine announcement, is alarming.

“I’m a bit stunned by that language,” Nuzzo said. “No vaccine is without risk, but that makes it sound like it’s a very active coin toss of a decision. We’ve already had more cases of measles in 2025 than we had in 2024, and it’s spread to multiple states. It is not a coin toss at this point.”

For many years, the CDC hasn’t minced words on vaccines. It promoted them with confidence. One campaign was called “Get My Flu Shot.” The agency’s website told medical providers they play a critical role in helping parents choose vaccines for their children: “Instead of saying ‘What do you want to do about shots?,’ say ‘Your child needs three shots today.’”

Nuzzo wishes the CDC’s forecasters would put out more details of their data and evidence on the spread of measles, not less. “The growing scale and severity of this measles outbreak and the urgent need for more data to guide the response underscores why we need a fully staffed and functional CDC and more resources for state and local health departments,” she said.

Kennedy’s agency oversees the CDC and on Thursday announced it was poised to eliminate 2,400 jobs there.

When asked what role, if any, Kennedy played in the decision to not release the risk assessment, HHS’s communications director said the aborted announcement “was part of an ongoing process to improve communication processes—nothing more, nothing less.” The CDC, he reiterated, continues to recommend vaccination “as the best way to protect against measles.”

“Secretary Kennedy believes that the decision to vaccinate is a personal one and that people should consult with their healthcare provider to understand their options to get a vaccine,” Andrew G. Nixon said. “It is important that the American people have radical transparency and be informed to make personal healthcare decisions.”

Responding to questions about criticism of the decision among some CDC staff, Nixon wrote, “Some individuals at the CDC seem more interested in protecting their own status or agenda rather than aligning with this Administration and the true mission of public health.”

The CDC’s risk assessment was carried out by its Center for Forecasting and Outbreak Analytics, which relied, in part, on new disease data from the outbreak in Texas. The CDC created the center to address a major shortcoming laid bare during the COVID-19 pandemic. It functions like a National Weather Service for infectious diseases, harnessing data and expertise to predict the course of outbreaks like a meteorologist warns of storms.

Other risk assessments by the center have been posted by the CDC even though their conclusions might seem obvious.

In late February, for example, forecasters analyzing the spread of H5N1 bird flu said people who come “in contact with potentially infected animals or contaminated surfaces or fluids” faced a moderate to high risk of contracting the disease. The risk to the general US population, they said, was low.

In the case of the measles assessment, modelers at the center determined the risk of the disease for the general public in the US is low, but they found the risk is high in communities with low vaccination rates that are near outbreaks or share close social ties to those areas with outbreaks. The CDC had moderate confidence in the assessment, according to an internal Q&A that explained the findings. The agency, it said, lacks detailed data about the onset of the illness for all patients in West Texas and is still learning about the vaccination rates in affected communities as well as travel and social contact among those infected. (The H5N1 assessment was also made with moderate confidence.)

The internal plan to roll out the news of the forecast called for the expert physician who’s leading the CDC’s response to measles to be the chief spokesperson answering questions. “It is important to note that at local levels, vaccine coverage rates may vary considerably, and pockets of unvaccinated people can exist even in areas with high vaccination coverage overall,” the plan said. “The best way to protect against measles is to get the measles, mumps, and rubella (MMR) vaccine.”

This week, though, as the number of confirmed cases rose to 483, more than 30 agency staff were told in an email that after a discussion in the CDC director’s office, “leadership does not want to pursue putting this on the website.”

The cancellation was “not normal at all,” said a CDC staff member who spoke anonymously for fear of reprisal with layoffs looming. “I’ve never seen a rollout plan that was canceled at that far along in the process.”

Anxiety among CDC staff has been building over whether the agency will bend its public health messages to match those of Kennedy, a lawyer who founded an anti-vaccine group and referred clients to a law firm suing a vaccine manufacturer.

During Kennedy’s first week on the job, HHS halted the CDC campaign that encouraged people to get flu shots during a ferocious flu season. On the night that the Trump administration began firing probationary employees across the federal government, some key CDC flu webpages were taken down. Remnants of some of the campaign webpages were restored after NPR reported this.

But some at the agency felt like the new leadership had sent a message loud and clear: When next to nobody was paying attention, long-standing public health messages could be silenced.

On the day in February that the world learned that an unvaccinated child had died of measles in Texas, the first such death in the U.S. since 2015, the HHS secretary downplayed the seriousness of the outbreak. “We have measles outbreaks every year,” he said at a cabinet meeting with President Donald Trump.

In an interview on Fox News this month, Kennedy championed doctors in Texas who he said were treating measles with a steroid, an antibiotic and cod liver oil, a supplement that is high in vitamin A. “They’re seeing what they describe as almost miraculous and instantaneous recovery from that,” Kennedy said.

As parents near the outbreak in Texas stocked up on vitamin A supplements, doctors there raced to assure parents that only vaccination, not the vitamin, can prevent measles.

Still, the CDC added an entry on Vitamin A to its measles website for clinicians.

On Wednesday, CNN reported that several hospitalized children in Lubbock, Texas, had abnormal liver function, a likely sign of toxicity from too much vitamin A.

Texas health officials also said that the Trump administration’s decision to rescind $11 billion in pandemic-related grants across the country will hinder their ability to respond to the growing outbreak, according to The Texas Tribune.

Measles is among the most contagious diseases and can be dangerous. About 20 percent of unvaccinated people who get measles wind up in the hospital. And nearly 1 to 3 of every 1,000 children with measles will die from respiratory and neurologic complications. The virus can linger in the air for two hours after an infected person has left an area, and patients can spread measles before they even know they have it.

This week Amtrak said it was notifying customers that they may have been exposed to the disease this month when a passenger with measles rode one of its trains from New York City to Washington, DC.

The CDC buried a measles forecast that stressed the need for vaccinations Read More »

gran-turismo-7-expands-its-use-of-ai/ml-trained-npcs-with-good-effect

Gran Turismo 7 expands its use of AI/ML-trained NPCs with good effect

GT Sophy can now race at 19 tracks, up from the nine that were introduced in November 2023. The AI agent is an alternative to the regular, dumber AI in the game’s quick race mode, with easy, medium, and hard settings. But now, at those same tracks, you can also create custom races using GT Sophy, meaning you’re no longer limited to just two or three laps. You can enable things like damage, fuel consumption and tire wear, and penalties, and you can have some control over the cars you race against.

Unlike the time-limited demo, the hardest setting is no longer alien-beating. As a GT7 player, I’m slowing with age, and I find the hard setting to be that—hard, but beatable. (I suspect but need to confirm that the game tailors the hardest setting to your ability based on your results, as, when I create a custom race on hard, only seven of the nine progress bars are filled, and in the screenshot above, only five bars are filled.)

Having realistic competition has always been one of the tougher challenges for a racing game, and one that the GT franchise was never particularly great at during previous console generations. This latest version of GT Sophy does feel different to race against: The AI is opportunistic and aggressive but also provokable into mistakes. If only the developer would add it to more versions of the in-game Nürburgring.

Gran Turismo 7 expands its use of AI/ML-trained NPCs with good effect Read More »

tel‘aran’rhiod-at-last—the-wheel-of-time-reveals-the-world-of-dreams

Tel‘Aran’Rhiod at last—the Wheel of Time reveals the world of dreams

Andrew Cunningham and Lee Hutchinson have spent decades of their lives with Robert Jordan and Brandon Sanderson’s Wheel of Time books, and they previously brought that knowledge to bear as they recapped each first season episode and second season episode of Amazon’s WoT TV series. Now we’re back in the saddle for season 3—along with insights, jokes, and the occasional wild theory.

These recaps won’t cover every element of every episode, but they will contain major spoilers for the show and the book series. We’ll do our best to not spoil major future events from the books, but there’s always the danger that something might slip out. If you want to stay completely unspoiled and haven’t read the books, these recaps aren’t for you.

New episodes of The Wheel of Time season three will be posted for Amazon Prime subscribers every Thursday. This write-up covers episode five, “Tel’Aran’Rhiod,” which was released on March 27.

Andrew: Three seasons in I think we have discerned a pattern to the Wheel of Time’s portrayal of the Pattern: a mid-season peak in episode four, followed by a handful of more table-setting-y episodes that run up to a big finale. And so it is in Tel’aran’rhiod, which is a not-entirely-unwelcome slowdown after last week’s intense character-defining journey into Rhuidean.

The show introduces or expands a bunch of book plotlines as it hops between perspectives this week. Which are you the most interested in picking apart, Lee? Anything the show is tending to here that you wish we were skipping?

Image of a Sea Folk Windfinder doing her thing

“Let it go, let it goooooo…” A Sea Folk Windfinder, doing her thing.

Credit: Prime/Amazon MGM Studios

“Let it go, let it goooooo…” A Sea Folk Windfinder, doing her thing. Credit: Prime/Amazon MGM Studios

Lee: Yes, this was a good old-fashioned move-the-pieces-into-place episode, and you gotta have at least one or two of those. I think, if I were coming into this having not read the books, the most puzzling bits might have been what’s going on in the White Tower this episode, with the who-is-the-darkfriend hide-n-seek game the Aes Sedai are playing. And it turns out that in spite of the Sisters’ best attempts at a fake-out, Shohreh Aghdashloo’s Elaida is in fact not it. (And Elaida gets the crap stabbed out of her by another Gray Man for her troubles, too. Ouch. Fortunately, healing is nearby. Nobody has to die in this show unless the plot really demands it.)

I was a little taken aback at the casualness with which Elaida takes lives—her execution of Black Ajah sister Amico Nagoyin was pretty off-handed. I don’t recall her being quite that blasé about death in the books, but it has been a while. Regardless, while she’s not capital-E EEEEEVIL, she’s clearly not a good person.

We do get our first glimpse of the Sea Folk, though it felt a bit ham-fisted—like they spent both more time than they needed to tee them up, and much less time than was needed to actually establish WTF this new group of people is. (Though I guess the name “Sea Folk” is pretty explanatory—it does what it says on the tin, as it were.)

Image of Elaida murdering a beyotch

My eyes see Elaida Sedai, but my ears and heart hear Chrisjen Avasarala saying “Sometimes I f—ing hate being right.”

My eyes see Elaida Sedai, but my ears and heart hear Chrisjen Avasarala saying “Sometimes I f—ing hate being right.”

Andrew: Our first glimpse of show-Elaida is an advisor to a new queen who casually murders her former political opponents, so I guess I shouldn’t be too surprised that she just straight-up executes someone she thinks is of no further use. The show is also happy to just quickly kill tertiary or… sextiary (??) characters to streamline the story. There are lots of those to go around in the books.

There’s a lot of Aiel and Sea Folk stuff where the show is just kind of asking you to take things at face value, even if book-readers are aware of more depth. One of the big running plotlines in the book is that the White Tower has weakened itself by being too doctrinaire about the way it absorbs the channelers of other cultures, totally taking them away from their families and societies and subjecting them to all kinds of weird inflexible discipline. This is why there are so many Aiel and Sea Folk channelers running around that the White Tower doesn’t know about, and the show has nodded toward it but hasn’t had a lot of room to communicate the significance of it.

Lee: That’s a point that Alanna Sedai comments on in this episode, and the reason she’s in the Two Rivers: The Tower has been too selective, too closed-minded, and—somewhat ironically—too parochial in its approach to accepting and training channelers. Further, there’s some worry that by spending thousands of years finding and gentling (or executing) male channelers, humanity has begun to self-select channeling out of the gene pool.

This doesn’t seem to be the case, though, as we see by the sheer number of channelers popping up everywhere, and Alanna’s hypothesis proves correct: the old blood of Manetheren runs true and strong, spilling out in ta’veren and channelers and other pattern-twisting craziness all over the place.

Alanna has her own challenges to face, but first, I want to hear your take on the Aiel in this post-Rhuidean episode, and especially of Cold Rocks Hold—a place that I know a subset of fans have been imagining for decades. What did you think?

Image of Alanna channeling

Alanna Sedai’s intuition is right on the money.

Credit: Prime/Amazon MGM Studios

Alanna Sedai’s intuition is right on the money. Credit: Prime/Amazon MGM Studios

Andrew: Rocks! It’s all rocks. Which makes sense for a desert, I suppose.

The show does a solid job of showing us what day-to-day Aiel society looks like through just a handful of characters, including Rhuarc’s other wife Lian and his granddaughter Alsera. It’s an economy of storytelling that is forced upon the show by budget and low episode count but usually you don’t feel it.

We’re also getting our very first look at the awe and discomfort that Rand is going to inspire, as the prophesied Aiel chief-of-chiefs. Clan leaders are already telling tales of him to their children. But not everyone is going to have an easy time accepting him, something we’ll probably start to pick apart in future episodes.

Alanna is definitely in the running for my favorite overhauled show character. She’s visible from very early on as a background character and loose ally of the Two Rivers crew in the books, but the show is giving her more of a personality and a purpose, and a wider view than Book-Alanna (who was usually sulking somewhere about her inability to take any of the Two Rivers boys as a Warder, if memory serves). In the show she and her Warder Maksim are fleshed-out characters who are dealing with their relationship and the Last Battle in their own way, and it’s fun to get something unexpected and new in amongst all of the “how are they going to portray Book Event X” stuff.

Lee: Book-Alanna by this point has made some… let’s call them questionable choices, and her reworking into someone a bit less deserving of being grabbed by the throat and choked is excellent. (Another character with a similar reworking is Faile, who so far I actually quite like and do not at all want to throttle!)

I think you’ve hit upon the main overarching change from the books, bigger than all other changes: The show has made an effort to make these characters into people with relatable problems, rather than a pack of ill-tempered, nuance-deaf ding-dongs who make bad choices and then have to dig themselves out.

Well, except maybe for Elayne. I do still kind of want to shake her a bit.

Image of Faile on horseback

Hey, it’s Faile, and I don’t hate her!

Credit: Prime/Amazon MGM Studios

Hey, it’s Faile, and I don’t hate her! Credit: Prime/Amazon MGM Studios

Andrew: Yes! But with show-Elayne at least you get the sense that a bit of her showy know-it-all-ness is being played up on purpose. And she is right to be studying up on their destination and trying to respect the agreement they made with the Sea Folk when they came on board. She’s just right in a way that makes you wish she wasn’t, a personality type I think we’ve all run into at least once or twice in our own lives.

In terms of Big Book Things that are happening, let’s talk about Egwene briefly. Obviously she’s beginning to hone her skills in the World of Dreams—Tel’aran’rhiod, which gives the episode its name—and she’s already using it to facilitate faster communication between far-flung characters and to check in on her friends. Two other, minor things: We’re starting to see Rand and Egwene drift apart romantically, something the books had already dispensed with by this point. And this was the first time I noted an Aiel referring to her as “Egwene Sedai.” I assume this has already happened and this is just the first time I’ve noticed, but Egwene/Nynaeve/Elayne playing at being full Aes Sedai despite not being is a plot thread the books pull at a lot here in the middle of the series.

Lee: Right, I seem to remember the dissembling about Egwene’s Sedai-ishness resulting in some kind of extended spanking session, that being the punishment the Book Wise Ones (and the Book Aes Sedai) were most likely to hand out. I think the characters’ pretending to be full Sisters and all the wacky hijinks that ensue are being dispensed with, and I am pretty okay with that.

Image of a Sea Folk ship captain

The Sea Folk wear tops!

Credit: Prime/Amazon MGM Studios

The Sea Folk wear tops! Credit: Prime/Amazon MGM Studios

Andrew: That’s the thing, I’m not sure the characters pretending to be full Sisters is being dispensed with. The show’s just dropping breadcrumbs so that they’re there later, if/when they want to make a Big Deal out of them. We’ll see whether they make the time or not.

Lee: Regardless, Eggy’s growth into a dream-walker is fortunately not being dispensed with, and as in the books, she does a lot of things she’s not supposed to do (or at least not until she’s got more than a single afternoon’s worth of dreamwalker training under her belt). She sort of heeds the Wise Ones’ directive to stay out of Tel’aran’rhiod and instead just skips around between her various friends’ dreams, before finally landing in Rand’s, where she finds him having sexytimes with, uh oh, an actual-for-real Forsaken. Perhaps this is why one shouldn’t just barge into someone’s dreams uninvited!

And on the subject of dreams—or at least visions—I think we’d be remiss if we didn’t check in on the continuing bro-adventures of Min and Mat (which my cousin described as “a lesbian and her gay best friend hanging out, and it’s unclear which is which”). The show once again takes the opportunity to remind us of Min’s visions—especially the one of Mat being hanged. Foreshadowing!

Image of Min and Mat

The buddy comedy we didn’t know we needed.

Credit: Prime/Amazon MGM Studios

The buddy comedy we didn’t know we needed. Credit: Prime/Amazon MGM Studios

Andrew: Honestly of all the plotlines going on right now I’m the most curious to see how Elayne/Nynaeve/Mat/Min get along in Tanchico, just because these characters have gotten so many minor little tweaks that I find interesting. Mat and Min are more friendly, and their plots are more intertwined in the show than they were in the books, and having a version of Nynaeve and a version of Mat that don’t openly dislike each other has a lot of fun story potential for me.

I am a little worried that we only have three episodes left, since we’ve got the party split up into four or five groups, and most of those groups already have little sub-groups inside of them who are doing their own thing. I do trust the show a lot at this point, but the splitting and re-splitting of plotlines is what eventually gets the books stuck in the mud, and we’ve already seen that dynamic play out on TV in, say, mid-to-late-series Game of Thrones. I just hope we can keep things snappy without making the show totally overwhelming, as it is already in danger of being sometimes.

Image of a drawing of Mat hanged

There are constant reminders that Mat may be heading toward a dark fate.

Credit: Prime/Amazon MGM Studios

There are constant reminders that Mat may be heading toward a dark fate. Credit: Prime/Amazon MGM Studios

Lee: I seem to remember the time in Tanchico stretching across several books, though I may be getting that mixed up with whatever the hell the characters do in Far Madding much later (that’s not really a spoiler, I don’t think—it’s just the name of another city-state where readers are forced to spend an interminable amount of time). I’m reasonably sure our crew will find what they need to find in Tanchico by season’s end, at least—and, if it follows the books, things’ll get a little spicy.

Let’s see—for closing points, the one I had on my notepad that I wanted to hit was that for me, this episode reinforces that this show is at its best when it focuses on its characters and lets them work. Episode four with Rhuidean was a rare epic hit; most of the times the show has attempted to reach for grandeur or epic-ness, it has missed. The cinematography falls flat, or the sets look like styrofoam and carelessness, or the editing fails to present a coherent through-line for the action, or the writing whiffs it. But up close, locked in a ship or sitting on a mountainside or hanging out in a blacksmith’s dream, the actors know what they’re doing, and they have begun consistently delivering.

Andrew: There are a whole lot of “the crew spends a bunch of time in a city you’ve never seen before, accomplishing little-to-nothing” plotlines I think you’re conflating. Tanchico is a Book 4 thing, and it’s also mostly resolved in Book 4; the interminable one you are probably thinking of is Ebou Dar, where characters spend three or four increasingly tedious books. Far Madding is later and at least has the benefit of being brief-ish.

Image of Perrin's dream, featuring Faile and Hopper

Perrin dreams of peaceful times—and of hanging out with Hopper!

Credit: Prime/Amazon MGM Studios

Perrin dreams of peaceful times—and of hanging out with Hopper! Credit: Prime/Amazon MGM Studios

Lee: Ahhh, yes, you are absolutely correct! My Randland mental map is a bit tattered these days. So many city-states. So many flags. So many import and export crops to keep track of.

Andrew: But yes I agree that there’s usually at least something that goes a bit goofy when the show attempts spectacle. The big battle that ended the first season is probably the most egregious example, but I also remember the Horn of Valere moment in the season 2 finale as looking “uh fine I guess.” But the talking parts are good! The smaller fights, including the cool Alanna-Whitecloak stuff we get in this episode, are all compelling. There’s some crowd-fight stuff coming in the next few episodes, if we stick to Book 4 as our source material, so we’ll see what the show does and doesn’t manage to pull off.

But in terms of this episode, I don’t have much more to say. We’re scooting pieces around the board in service of larger confrontations later on. It remains a very dense show, which is what I think will keep it from ever achieving a Game of Thrones level of cultural currency. But I’m still having fun. Anything else you want to highlight? Shoes you’re waiting to drop?

Image of Egwene being dream-choked by Lanfear

Egwene, entering the “finding out” phase of her ill-advised nighttime adventures.

Credit: Prime/Amazon MGM Studios

Egwene, entering the “finding out” phase of her ill-advised nighttime adventures. Credit: Prime/Amazon MGM Studios

Lee: Almost all of the books (at least in the front half of the series, before the Slog) tend to end in a giant spectacle of some sort, and I think I can see which spectacle—or spectacles, plural—we’re angling at for this one. The situation in the Two Rivers is clearly barreling toward violence, and Rand’s got them dragons on his sleeves. I’d say buckle up, folks, because my bet is we’re about to hit the gas.

Until next week, dear readers—beware the shadows, and guard yourselves. I hear Lanfear walks the dream world this night.

Credit: WoT Wiki

Tel‘Aran’Rhiod at last—the Wheel of Time reveals the world of dreams Read More »