Author name: Rejus Almole

Apple iPhone 17 review: Sometimes boring is best


Let’s not confuse “more interesting” with “better”

The least exciting iPhone this year is also the best value for the money.

The iPhone 17 Pro isn’t flashy but it’s probably the best of this year’s upgrades. Credit: Andrew Cunningham

Apple seems determined to leave a persistent gap between the cameras of its Pro iPhones and the regular ones, but most other features—the edge-to-edge-screen design with Face ID, the Dynamic Island, OLED display panels, Apple Intelligence compatibility—eventually trickle down to the regular-old iPhone after a generation or two of timed exclusivity.

One feature that Apple has been particularly slow to move down the chain is ProMotion, the branding the company uses to refer to a screen that can refresh up to 120 times per second rather than the more typical 60 times per second. ProMotion isn’t a necessary feature, but since Apple added it to the iPhone 13 Pro in 2021, the extra fluidity and smoothness, plus the always-on display feature, have been big selling points for the Pro phones.

This year, ProMotion finally comes to the regular-old iPhone 17, years after midrange and even lower-end Android phones made the swap to 90 or 120 Hz display panels. And it sounds like a small thing, but the screen upgrade—together with a doubling of base storage from 128GB to 256GB—makes the gap between this year’s iPhone and iPhone Pro feel narrower than it’s been in a long time. If you jumped on the Pro train a few years back and don’t want to spend that much again, this might be a good year to switch back. If you’ve ever been tempted by the Pro but never made the upgrade, you can continue not doing that and miss out on relatively little.

Compared to the redesigned Pro or the all-new Air, the iPhone 17 offers very little that we haven’t seen in an iPhone before. But it’s this year’s best upgrade, and it’s not particularly close.

You’ve seen this one before

Externally, the iPhone 17 is near-identical to the iPhone 16, which itself used the same basic design Apple had been using since the iPhone 12. The most significant update in that five-year span was probably the iPhone 15, which switched from the display notch to the Dynamic Island and from the Lightning port to USB-C.

The iPhone 12 generation was also probably the last time the regular iPhone and the Pro were this similar. Those phones used the same basic design, the same basic chip, and the same basic screen, leaving mostly camera-related improvements and the Max model as the main points of differentiation. That’s all broadly true of the split between the iPhone 17 and the 17 Pro, as well.

The iPhone Air and Pro both depart from the last half-decade of iPhone designs in different ways, but the iPhone 17 sticks with the tried-and-true. Credit: Andrew Cunningham

The iPhone 17’s design has changed just enough since last year that you’ll need to find a new iPhone 17-compatible case and screen protector for your phone rather than buying something that fits a previous-generation model (it’s imperceptibly taller than the iPhone 16). The screen size has been increased from 6.1 inches to 6.3, the same as the iPhone Pro. But the aluminum-framed-glass-sandwich design is much less of a departure from recent precedent than either the iPhone Air or the Pro.

The screen is the real star of the show in the iPhone 17, bringing 120 Hz ProMotion technology and the Pro’s always-on display feature to the regular iPhone for the first time. According to Apple’s spec sheets (and my eyes, admittedly not a scientific measurement), the 17 and the Pro appear to be using identical display panels, with the same functionally infinite contrast, resolution (2622 x 1206), and brightness specs (1,000 nits typical, 1,600 nits for HDR, 3,000 nits peak in outdoor light).

It’s easy to think of the basic iPhone as “the cheap one” because it is the least expensive of the four new phones Apple puts out every year, but $799 is still well into premium-phone range, and middle-of-the-road phones from the likes of Google and Samsung have shipped high-refresh-rate OLED panels at lower prices for a few years now. By that metric, it’s faintly ridiculous that Apple isn’t shipping something like this in its $600 iPhone 16e, but in Apple’s ecosystem, we’ll take it as a win that the iPhone 17 doesn’t cost more than the 16 did last year.

Holding an iPhone 17 feels like holding any other regular-sized iPhone made within the last five years, with the exceptions of the new iPhone Air and some of the heavier iPhone Pros. It doesn’t have the exceptionally good screen-size-to-weight ratio or the slim profile of the Air, and it doesn’t have the added bulk or huge camera plateau of the iPhone 17 Pro. It feels about like it looks: unremarkable.

Camera

iPhone 15 Pro, main lens, 1x mode, outdoor light. If you’re just shooting with the main lens, the Air and iPhone 17 win out in color and detail thanks to a newer sensor and ISP. Andrew Cunningham

The iPhone Air’s single camera has the same specs and uses the same sensor as the iPhone 17’s main camera, so we’ve already written a bit about how well it does relative to the iPhone Pro and to an iPhone 15 Pro from a couple of years ago.

Like the last few iPhone generations, the iPhone 17’s main camera uses a 48 MP sensor that saves 24 MP images, using a process called “pixel binning” (combining groups of adjacent pixels into single, larger effective pixels) to shrink the images down. To enable an “optical quality” 2x telephoto mode, Apple crops a 12 MP image out of the center of that sensor without doing any resizing or pixel binning. The results are a small step down in quality from the regular 1x mode, but they’re still native-resolution images with no digital zoom, and the 2x mode on the iPhone Air or iPhone 17 can actually capture fine detail better than an older iPhone Pro in situations where you’re shooting an object that’s close by and the actual telephoto lens isn’t used.
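To make the two strategies concrete, here’s a simplified sketch (emphatically not Apple’s actual imaging pipeline, which blends multiple captures and more sophisticated processing): classic 2x2 pixel binning, which shrinks an image by averaging blocks of adjacent pixels, versus a native-resolution center crop for the “2x” mode.

```python
# Illustrative sketch only -- not Apple's real pipeline. A tiny 8x8 grid
# stands in for the 48 MP sensor to show the two ideas described above.

def bin_2x2(pixels):
    """Average each 2x2 block of a 2D grid into one pixel (quarters the resolution)."""
    h, w = len(pixels), len(pixels[0])
    return [
        [
            (pixels[2 * r][2 * c] + pixels[2 * r][2 * c + 1]
             + pixels[2 * r + 1][2 * c] + pixels[2 * r + 1][2 * c + 1]) / 4
            for c in range(w // 2)
        ]
        for r in range(h // 2)
    ]

def center_crop(pixels):
    """Keep the central quarter of the sensor at native resolution (no resampling)."""
    h, w = len(pixels), len(pixels[0])
    return [row[w // 4: 3 * w // 4] for row in pixels[h // 4: 3 * h // 4]]

# Toy "sensor": an 8x8 grid of pixel values.
sensor = [[r * 8 + c for c in range(8)] for r in range(8)]

binned = bin_2x2(sensor)       # 8x8 -> 4x4: fewer, larger effective pixels
cropped = center_crop(sensor)  # 8x8 -> 4x4: same pixel pitch, narrower field of view
```

Both outputs have the same resolution, but they trade different things away: binning keeps the full field of view and improves per-pixel light gathering, while the crop keeps native detail over a narrower view, which is why Apple can call it “optical quality” zoom.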

The iPhone 15 Pro. When you shoot a nearby subject in 2x or even 3x mode, the Pro phones give you a crop of the main sensor rather than switching to the telephoto lens. You need to be farther from your subject for the phone to engage the telephoto lens. Andrew Cunningham

One improvement to the iPhone 17’s cameras this year is that the ultrawide camera is also upgraded to a 48 MP sensor, so it can benefit from the same shrinking-and-pixel-binning strategy Apple uses for the main camera. In the iPhone 16, this secondary sensor was still just 12 MP.

Compared to the iPhone 15 Pro and iPhone 16 we have here, ultrawide shots on the iPhone 17 benefit mainly from the added detail you capture in higher-resolution 24- or 48-MP images. The difference is slightly more noticeable with details in the background of an image than with details in the foreground, as visible in the Lego castle surrounding Lego Mario.

The older the phone you’re using is, the more you’ll benefit from sensor and image signal processing improvements. Bits of dust and battle damage on Mario are more distinct on the iPhone 17 than on the iPhone 15 Pro, for example, but aside from the resolution, I don’t notice much of a difference between the iPhone 16 and 17.

A true telephoto lens is probably the biggest feature the iPhone 17 Pro has going for it relative to the basic iPhone 17, and Apple has amped it up with its own 48 MP sensor this year. We’ll reuse the 4x and 8x photos from our iPhone Air review to show you what you’re missing—the telephoto camera captures considerably more fine detail on faraway objects, but even as someone who uses the telephoto on the iPhone 15 Pro constantly, I would have to think pretty hard about whether that camera is worth $300, even once you add in the larger battery, ProRAW support, and other things Apple still holds back for the Pro phones.

Specs and speeds and battery

Our iPhone Air review showed that the main difference between the iPhone 17’s Apple A19 chip and the A19 Pro used in the iPhone Air and iPhone Pro is RAM. The iPhone 17 sticks with 8GB of memory, whereas both Air and Pro are bumped up to 12GB.

There are other things that the A19 Pro can enable, including ProRes video support and 10Gbps USB 3 file transfer speeds. But many of those iPhone Pro features, including the sixth GPU core, are mostly switched off for the iPhone Air, suggesting that we could actually be looking at the exact same silicon with a different amount of RAM packaged on top.

Regardless, 8GB of RAM is currently the floor for Apple Intelligence, so there’s no difference in features between the iPhone 17 and the Air or the 17 Pro. Browser tabs and apps may be ejected from memory slightly less frequently, and the 12GB phones may age better as the years wear on. But right now, 8GB of memory puts you above the amount that most iOS 26-compatible phones are using—Apple is still optimizing for plenty of phones with 6GB, 4GB, or even 3GB of memory. 8GB should be more than enough for the foreseeable future, and I noticed zero differences in day-to-day performance between the iPhone 17 and the iPhone Air.

All phones were tested with Adaptive Power turned off.

The iPhone 17 is often actually faster than the iPhone Air, despite both phones using five-core A19-class GPUs. Apple’s thinnest phone has less room to dissipate heat, which leads to more aggressive thermal throttling, especially in 3D apps like games. The result is that the iPhone 17 will often outperform Apple’s $999 phone despite costing $200 less.

All of this also ignores one of the iPhone 17’s best internal upgrades: a bump from 128GB of storage to 256GB of storage at the same $799 starting price as the iPhone 16. Apple’s obnoxious $100-or-$200-per-tier upgrade pricing for storage and RAM is usually the worst part about any of its products, so any upgrade that eliminates that upcharge for anyone is worth calling out.

On the battery front, we didn’t run specific tests, but the iPhone 17 did reliably make it from my typical 7:30 or 7:45 am wakeup to my typical 1:00 or 1:30 am bedtime with 15 or 20 percent left over. Even a day with Personal Hotspot use and a few dips into Pokémon Go didn’t push the battery hard enough to require a midday top-up. (Like the other new iPhones this year, the iPhone 17 ships with Adaptive Power enabled, which can selectively reduce performance or dim the screen and automatically enables Low Power Mode at 20 percent, all in the name of stretching the battery out a bit and preventing rapid drops.)

Better battery life out of the box is already a good thing, but it also means more wiggle room for the battery to lose capacity over time without seriously inconveniencing you. This is a line that the iPhone Air can’t quite cross, and it will become more and more relevant as your phone approaches two or three years in service.

The one to beat

Apple’s iPhone 17. Credit: Andrew Cunningham

The screen is one of the iPhone Pro’s best features, and the iPhone 17 gets it this year. That plus the 256GB storage bump is pretty much all you need to know; this will be a more noticeable upgrade for anyone with, say, an iPhone 12, 13, or 14 than the iPhone 15 or 16 was. And for $799—$200 more than the 128GB version of the iPhone 16e and $100 more than the 128GB version of the iPhone 16—it’s by far the iPhone lineup’s best value for money right now.

This is also happening at the same time as the iPhone Pro is getting a much chonkier new design, one I don’t particularly love the look of, even though I appreciate the functional camera and battery upgrades it enables. This year’s Pro feels like a phone targeted toward people who are actually using it in a professional photography or videography context, where in other years, it’s felt more like “the regular iPhone plus a bunch of nice, broadly appealing quality-of-life stuff that may or may not trickle down to the regular iPhone over time.”

In this year’s lineup, you get the iPhone Air, which seems to be trying to do something new at the expense of basics like camera quality and battery life. You get the iPhone 17 Pro, which feels like it was specifically built for anyone who looks at the iPhone Air and thinks, “I just want a phone with a bigger battery and a better camera, and I don’t care what it looks like or how light it is” (hello, median Ars Technica readers and employees). And the iPhone 17 is there quietly undercutting them both, as if to say, “Would anyone just like a really good version of the regular iPhone?”

Next and last on our iPhone review list this year: the iPhone 17 Pro. Maybe spending a few days up close with it will help me appreciate the design more?

The good

  • The exact same screen as this year’s iPhone Pro for $300 less, including 120 Hz ProMotion, variable refresh rates, and an always-on screen.
  • Same good main camera as the iPhone Air, plus the added flexibility of an improved wide-angle camera.
  • Good battery life.
  • A19 is often faster than iPhone Air’s A19 Pro thanks to better heat dissipation.
  • Jumps from 128GB to 256GB of storage without increasing the starting price.

The bad

  • 8GB of RAM instead of 12GB. 8GB is fine, but more is also good!
  • I slightly prefer last year’s versions of most of these color options.
  • No two-column layout for apps in landscape mode.
  • The telephoto lens seems like it will be restricted to the iPhone Pro forever.

The ugly

  • People probably won’t be able to tell you have a new iPhone?

Andrew is a Senior Technology Reporter at Ars Technica, with a focus on consumer tech including computer hardware and in-depth reviews of operating systems like Windows and macOS. Andrew lives in Philadelphia and co-hosts a weekly book podcast called Overdue.

Supermicro server motherboards can be infected with unremovable malware

Servers running on motherboards sold by Supermicro contain high-severity vulnerabilities that can allow hackers to remotely install malicious firmware that runs even before the operating system, making infections impossible to detect or remove without unusual protections in place.

One of the two vulnerabilities is the result of an incomplete patch Supermicro released in January, said Alex Matrosov, founder and CEO of Binarly, the security firm that discovered it. He said that the insufficient fix was meant to patch CVE-2024-10237, a high-severity vulnerability that enabled attackers to reflash firmware that runs while a machine is booting. Binarly discovered a second critical vulnerability that allows the same sort of attack.

“Unprecedented persistence”

Such vulnerabilities can be exploited to install firmware similar to ILObleed, an implant discovered in 2021 that infected HP Enterprise servers with wiper firmware that permanently destroyed data stored on hard drives. Even after administrators reinstalled the operating system, swapped out hard drives, or took other common disinfection steps, ILObleed would remain intact and reactivate the disk-wiping attack. The exploit the attackers used in that campaign had been patched by HP four years earlier but wasn’t installed in the compromised devices.

“Both issues provide unprecedented persistence power across significant Supermicro device fleets including [in] AI data centers,” Matrosov wrote to Ars in an online interview, referring to the two latest vulnerabilities Binarly discovered. “After they patched [the earlier vulnerability], we looked at the rest of the attack surface and found even worse security problems.”

The two new vulnerabilities—tracked as CVE-2025-7937 and CVE-2025-6198—reside in the baseboard management controllers (BMCs), silicon soldered onto Supermicro motherboards that run servers inside data centers. BMCs allow administrators to remotely perform tasks such as installing updates, monitoring hardware temperatures, and adjusting fan speeds. They also enable some of the most sensitive operations, such as reflashing the firmware for the UEFI (Unified Extensible Firmware Interface) that’s responsible for loading the server OS when booting. BMCs provide these capabilities and more, even when the servers they’re connected to are turned off.

Scientists catch a shark threesome on camera

Three sharks, two cameras

Moving the action closer to the surface. Credit: Hugo Lassauce/UniSC-Aquarium des Lagons

Lassauce had two GoPro Hero 5 cameras ready at hand, albeit with questionable battery life. That’s why the video footage has two interruptions to the action: once when he had to switch cameras after getting a “low battery” signal, and a second time when he voluntarily stopped filming to conserve the second camera’s battery. Not much happened for 55 minutes, after all, and he wanted to be sure to capture the pivotal moments in the sequence. Lassauce succeeded and was rewarded with triumphant cheers from his fellow marine biologists on the boat, who knew full well the rarity of what had just been documented for posterity.

The lengthy pre-copulation stage involved all three sharks motionless on the seafloor for nearly an hour, after which the female started swimming with one male shark biting onto each of her pectoral fins. A few minutes later, the first male made his move, “penetrating the female’s cloaca with his left clasper.” Claspers are modified pelvic fins capable of transferring sperm. After the first male shark finished, he lay motionless while the second male held onto the female’s other fin. Then the other shark moved in, did his business, went motionless, and the female shark swam away. The males also swam away soon afterward.

Apart from the scientific first, documenting the sequence is a good indicator that this particular area is a critical mating habitat for leopard sharks, and could lead to better conservation strategies, as well as artificial insemination efforts to “rewild” leopard sharks in Australia and several other countries. “It’s surprising and fascinating that two males were involved sequentially on this occasion,” said co-author Christine Dudgeon, also of UniSC, adding, “From a genetic diversity perspective, we want to find out how many fathers contribute to the batches of eggs laid each year by females.”

Journal of Ethology, 2025. DOI: 10.1007/s10164-025-00866-4 (About DOIs).

Baby Steps is the most gloriously frustrating game I’ve ever struggled through


A real “walking simulator”

QWOP meets Death Stranding meets Getting Over It to form a wonderfully surreal, unique game.

Watch out for that first step, it’s a doozy! Credit: Devolver Digital

There’s an old saying that life is not about how many times you fall down but how many times you get back up. In my roughly 13 hours of walking through the surreal mountain wilderness of Baby Steps, I’d conservatively estimate I easily fell down 1,000 times.

If so, I got up 1,001 times, which is the entire point.

When I say “fell down” here, I’m not being metaphorical. In Baby Steps, the only real antagonist is terrain that threatens to send your pudgy, middle-aged, long-underwear-clad avatar tumbling to the ground (or down a cliff) like a rag doll after the slightest misstep. You pilot this avatar using an intentionally touchy and cumbersome control system where each individual leg is tied to a shoulder trigger on your controller.

Unlike the majority of 3D games, where you simply tilt the control stick and watch your character dutifully run, each step here means manually lifting one foot, leaning carefully in the direction you want to go, and then putting that foot down in a spot that maintains your overall balance. It’s like a slightly more forgiving version of the popular ’00s Flash game QWOP (which was also made by Baby Steps co-developer Bennett Foddy), except instead of sprinting on a 2D track, you take your time carefully planning each footfall on a methodical 3D hike.

Keep wiggling that foot until you find a safe place to put it.

Credit: Devolver Digital

At first, you’ll stumble like a drunken toddler, mashing the shoulder buttons and tilting the control stick wildly just to inch forward. After a bit of trial and error, though, you’ll work yourself into a gentle rhythm—press the trigger, tilt the controller, let go while recentering the controller, press the other trigger, repeat thousands of times. You never quite break into a run, but you can fall into a zen pattern of marching methodically forward, like a Death Stranding courier who has to actually focus on each and every step.

As you make your halting progress up the mountain, you’ll infrequently stumble on other hikers who seem to lord their comfort and facility with the terrain over you in manic, surreal, and often hilarious cut scenes. I don’t want to even lightly spoil any of the truly gonzo moments in this extremely self-aware meta-narrative, but I will say that I found your character’s grand arc through the game to be surprisingly touching, often in some extremely subtle ways.

Does this game hate me?

Just as you feel like you’re finally getting the hang of basic hiking, Baby Steps starts ramping up the terrain difficulty in a way that can feel downright trolly at times. Gentle paths of packed dirt and rock start to be replaced with narrow planks and rickety wooden bridges spanning terrifying gaps. Gentle undulating hills give way to sheer cliff faces that you sidle up and across with only the tiniest of toeholds to precariously balance on. Firm surfaces are slowly replaced with slippery mud, sand, snow, and ice that force you to alter your rhythm and tread extremely lightly just to make incremental progress.

Grabbing that fetching hat means risking an extremely punishing fall.

And any hard-earned progress can feel incredibly fragile in Baby Steps. Literally one false step can send you sliding down a hill or tumbling down a cliff face in a way that sets you back anywhere from mere minutes to sizable chunks of an hour. There’s no “reset from checkpoint” menu option or save scumming that can limit the damage, either. When you fall in Baby Steps, it can be a very long way down.

This extremely punishing structure won’t be a surprise to anyone who has played Getting Over It With Bennett Foddy, where a single mistake can send you all the way back to the beginning of the game. Baby Steps doesn’t go quite that hard, giving players occasional major checkpoints and large, flat plains that prevent you from falling back too far. Still, this is a game that is more than happy to force you to pay for even small mistakes with huge portions of your only truly irreplaceable resource: time.

On more than one occasion during my playthrough, I audibly cursed at my monitor and quit the game in a huff rather than facing the prospect of spending 10 minutes retracing my steps after a particularly damaging fall. Invariably, though, I’d come back a bit later, more determined than ever to learn from my mistakes, which I usually did quickly with the benefits of time and calm nerves on my side.

It’s frequently not entirely clear where you’re supposed to go in Baby Steps.

Credit: Devolver Digital

Baby Steps is also a game that’s happy to let you wander aimlessly. There’s no in-game map to consult, and any paths and landmarks that could point you in the “intended” way up the mountain are often intentionally confusing or obscured. It can be extremely unclear which parts of the terrain are meant to be impossibly steep and which are merely designed as difficult but plausible shortcuts that simply require pinpoint timing and foot placement. But the terrain is also designed so that almost every near-impossible barrier can be avoided altogether if you’re patient and observant enough to find a way around it.

And if you wander even slightly off the lightly beaten path, you’ll stumble on many intricately designed and completely optional points of interest, from imposing architectural towers to foreboding natural outcroppings to a miniature city made of boxes. There’s no explicit in-game reward for almost all of these random digressions, and your fellow cut-scene hikers will frequently explicitly warn you that there’s no point in climbing some structure or another. Your only reward is the (often marvelous) view from the top—and the satisfaction of saying that you conquered something you didn’t need to.

Are we having fun yet?

So was playing Baby Steps any fun? Honestly, that’s not the first word I’d use to describe the experience.

To be sure, there’s a lot of humor built into the intentionally punishing designs of some sections, so much so that I often had to laugh even as I fell down yet another slippery hill that erased a huge chunk of my progress. And the promise of more wild cut scenes serves as a pretty fun and compelling carrot to get you through some of the game’s toughest sections.

I’ve earned this moment of zen.

Credit: Devolver Digital

More than “fun,” though, I’d say my time with Baby Steps felt meaningful in a way few games do. Amid all the trolly humor and intentionally obtuse design decisions is a game whose very structure forces you to consider the value of perseverance and commitment.

This is a game that stands proudly against a lot of modern game design trends. It won’t loudly and explicitly point you to the next checkpoint with a huge on-screen arrow. You can’t inexorably grind out stat points in Baby Steps until your character is powerful enough to beat the toughest boss easily. You can’t restart a Baby Steps run and hope for a lucky randomized seed that will get you past a difficult in-game wall.

Baby Steps doesn’t hand you anything. Your abilities and inventory are the same at the game’s start as they are at the end. Any progress you make is defined solely by your mastery of the obtuse movement system and your slowly increasing knowledge of how to safely traverse ever more treacherous terrain.

It’s a structure that can feel punishing, unforgiving, tedious, and enraging in turns. But it’s also a structure that leads to moments of the most genuinely satisfying sense of achievement I can remember having in modern gaming.

It’s about a miles-long journey starting with a single, halting step. It’s about putting one foot in front of the other until you can’t anymore. It’s about climbing the mountain because it’s there. It’s about falling down 1,000 times and getting up 1,001 times.

What else is there in the end?

Kyle Orland has been the Senior Gaming Editor at Ars Technica since 2012, writing primarily about the business, tech, and culture behind video games. He has journalism and computer science degrees from the University of Maryland. He once wrote a whole book about Minesweeper.

Judge lets construction on an offshore wind farm resume

That did not, however, stop the administration from trying again, this time targeting a development called Revolution Wind, located a bit further north along the Atlantic coast. This time, however, the developer quickly sued, leading to Monday’s ruling. According to Reuters, after a two-hour court hearing at the District Court of DC, Judge Royce Lamberth termed the administration’s actions “the height of arbitrary and capricious” and issued a preliminary injunction against the hold on Revolution Wind’s construction. As a result, Orsted can restart work immediately.

The decision provides a strong indication of how Lamberth is likely to rule if the government pursues a full trial on the case. And while the Trump administration could appeal, it’s unlikely to see this injunction lifted unless it takes the case all the way to the Supreme Court. Given that Revolution Wind was already 80 percent complete, the case may become moot before it gets that far.

Volvo says it has big plans for South Carolina factory

Volvo is undergoing something of a restructuring. The automaker wants to be fully electric by 2040, but for that to happen, it needs to remain in business until then. Earlier this year, that meant layoffs, but today, Volvo announced it has big plans for its North American factory in Ridgeville, South Carolina.

Volvo has been making cars in South Carolina since 2017, starting with the S60 sedan—a decision I always found slightly curious given that US car buyers had already given up on sedans by that point in favor of crossovers and SUVs. S60 production ended last summer, and these days, the plant builds the large electric EX90 SUV and the related Polestar 3.

The company is far from fully utilizing the Ridgeville plant, though, which has an annual capacity of 150,000 vehicles. When the turnaround plan was first announced this July, Volvo revealed it would start building the next midsize XC60 in South Carolina—a wise move given the Trump tariffs and the importance of this model to Volvo’s sales figures here.

Now, the OEM says it will add another model to the mix, with a new, yet-to-be-named hybrid due before 2030.

“Our investment plans once again reinforce our long-term commitment to the US market and our manufacturing operations in South Carolina,” said Håkan Samuelsson, chief executive. “This year, we celebrate 70 years of Volvo Cars presence in the United States. We have sold over 5 million cars there and plan to sell many more in years to come,” he said.

After getting Jimmy Kimmel suspended, FCC chair threatens ABC’s The View


Carr: “Turn your license in to the FCC, we’ll find something else to do with it.”

President-elect Donald Trump speaks to Brendan Carr, his intended pick for Chairman of the Federal Communications Commission, as he attends a SpaceX Starship rocket launch on November 19, 2024 in Brownsville, Texas. Credit: Getty Images | Brandon Bell

After pressuring ABC to suspend Jimmy Kimmel, Federal Communications Commission Chairman Brendan Carr is setting his regulatory sights on ABC’s The View and NBC late-night hosts Seth Meyers and Jimmy Fallon.

Carr appeared yesterday on the radio show hosted by Scott Jennings, who describes himself as “the last man standing athwart the liberal mob.” Jennings asked Carr whether The View and other ABC programs violate FCC rules, and made a reference to President Trump calling on NBC to cancel Fallon and Meyers.

“A lot of people think there are other shows on ABC that maybe run afoul of this more often than Jimmy Kimmel,” Jennings said. “I’m thinking specifically of The View, and President Trump himself has mentioned Jimmy Fallon and Seth Meyers at NBC. Do you have comments on those shows, and are they doing what Kimmel did Monday night, and is it even worse on those programs in your opinion?”

In response, Carr discussed the FCC’s Equal Opportunities Rule, also known as the Equal Time Rule, and said the FCC could determine that those shows don’t qualify for an exemption to the rule.

“When you look at these other TV shows, what’s interesting is the FCC does have a rule called the Equal Opportunity Rule, which means, for instance, if you’re in the run-up to an election and you have one partisan elected official on, you have to give equal time, equal opportunity, to the opposing partisan politician,” Carr said.

At another point in the interview, Carr said broadcasters that object to FCC enforcement “can turn your license in to the FCC, we’ll find something else to do with it.”

Bona fide news exemption

Carr said the FCC hasn’t previously enforced the rule on those shows because of an exemption for “bona fide news” programs. He said the FCC could determine the shows mentioned by Jennings aren’t exempt:

There’s an exception to that rule called the bona fide news exception, which means if you are a bona fide news program, you don’t have to abide by the Equal Opportunity Rule. Over the years, the FCC has developed a body of case law on that that has suggested that most of these late night shows, other than SNL, are bona fide news programs. I would assume you could make the argument that The View is a bona fide news show but I’m not so sure about that, and I think it’s worthwhile to have the FCC look into whether The View and some of these other programs you have still qualify as bona fide news programs and [are] therefore exempt from the Equal Opportunity regime that Congress has put in place.

The Equal Opportunity Rule applies to radio and TV broadcast stations with FCC licenses to use the airwaves. An FCC fact sheet explains that stations giving time to one candidate must provide “comparable time and placement to opposing candidates” upon request. The onus is on candidates to request air time—”the station is not required to seek out opposing legally qualified candidates and offer them Equal Opportunities,” the fact sheet says.

The exemption mentioned by Carr means that “appearances by legally qualified candidates on bona fide newscasts, interview programs, certain types of news documentaries, and during on-the-spot coverage of bona fide news events are exempt from Equal Opportunities,” the fact sheet says.

In 1994, the FCC said that “Congress removed the inhibiting effect of the equal opportunities obligation upon bona fide news programming to encourage increased news coverage of political campaign activity.” Congress gave the FCC leeway to interpret the scope of bona fide news exemptions.

Referring to its 1988 ruling on Entertainment Tonight and Entertainment This Week, the FCC said it found that “the principal consideration should be ‘whether the program reports news of some area of current events… in a manner similar to more traditional newscasts.’ The Commission has thus declined to evaluate the relative quality or significance of the topics and stories selected for newscast coverage, relying instead on the broadcaster’s good faith news judgment.”

Carr’s allegations

Carr alleged in November 2024 that NBC putting Kamala Harris on Saturday Night Live before the election was “a clear and blatant effort to evade the FCC’s Equal Time rule.” In fact, NBC gave Trump two free 60-second messages in order to comply with the rule.

Carr didn’t cite any specific incidents on The View or late-night shows that would violate the FCC rule. The View has addressed its attempts to get Trump on the show, however. Executive Producer Brian Teta told Deadline in April 2024, “We’ve invited Trump to join us at the table for both 2016 and 2020 elections, and he declined, and at a certain point, we stopped asking. So I don’t anticipate that changing. I think he’s pretty familiar with how the co-hosts feel about him and doesn’t see himself coming here.”

The Kimmel controversy erupted over a monologue in which he said, “We hit some new lows over the weekend with the MAGA gang desperately trying to characterize this kid who murdered Charlie Kirk as anything other than one of them and doing everything they can to score political points from it.”

With accused murderer Tyler Robinson being described as having liberal views, Carr and other conservatives alleged that Kimmel misled viewers. Carr appeared on right-wing commentator Benny Johnson’s podcast on Wednesday and said, “We can do this the easy way or the hard way. These companies can find ways to change conduct, to take action, frankly on Kimmel, or there’s going to be additional work for the FCC ahead.”

Nexstar and Sinclair, two major owners of TV stations, both urged ABC to take action against Kimmel and said their stations would not air his show. The pressure from broadcasters is happening at a time when both Nexstar and ABC owner Disney are seeking Trump administration approval for mergers.

Democrats accuse Carr of hypocrisy on First Amendment

Anna Gomez, the only Democrat on the Republican-majority FCC, said yesterday that Carr overstepped his authority, but “billion-dollar companies with pending business before the agency” are “vulnerable to pressure to bend to the government’s ideological demands.”

Democratic lawmakers criticized Carr and proposed investigations into the chair for abuse of authority. “It is not simply unacceptable for the FCC chairman to threaten a media organization because he does not like the content of its programming—it violates the First Amendment that you claim to champion,” Senate Democrats wrote in a letter to Carr. “The FCC’s role in overseeing the public airwaves does not give it the power to act as a roving press censor, targeting broadcasters based on their political commentary. But under your leadership, the FCC is being weaponized to do precisely that.”

Democrats pointed to some of Carr’s previous statements in which he decried government censorship. During his 2023 re-confirmation proceedings, Senate Democrats asked Carr about social media posts in which he accused Democrats of engaging in censorship like “what you’d see in the Soviet Union.”

“I posted those tweets in the context of expressing my view on the First Amendment that debate on matters of public interest should be robust, uninhibited, and wide open,” Carr wrote in his response to Democratic senators. “I believe that the best remedy to speech that someone does not like or finds objectionable is more speech. I posted them because I believe that a newsroom’s decision about what stories to cover and how to frame them should, consistent with the First Amendment, be beyond the reach of any government official.”

Years earlier, in 2019, Carr posted a tweet that said, “Should the government censor speech it doesn’t like? Of course not. The FCC does not have a roving mandate to police speech in the name of the ‘public interest.'”

Sen. Ted Cruz (R-Texas) also criticized Carr’s approach, saying it would lead to the same tactics being used against Republicans the next time Democrats are in power.

Carr to broadcasters: Give your licenses back to FCC

Carr said this week he’s only addressing licensed broadcasters, which have public-interest obligations, as opposed to cable and streaming services that don’t need FCC licenses. Network programming itself doesn’t need an FCC license, but the TV stations that carry network shows require licenses.

Carr tried to cast Kimmel’s suspension as the result of organic pressure from licensed broadcasters, rather than FCC coercion. “There’s no untoward coercion happening here,” Carr told Jennings. “The market was intended to function this way, where local TV stations get to push back.”

But TV station owners did so in exactly the way that Carr urged them to. “The individual licensed stations that are taking their content, it’s time for them to step up and say this garbage isn’t something that we think serves the needs of our local communities,” Carr said on Johnson’s podcast. Carr said that Kimmel’s monologue “appears to be some of the sickest conduct possible.”

On the Jennings show, Carr alleged that Democrats in the previous administration implemented “a two-tiered weaponized system of justice,” and that his FCC is instead giving everyone “a fair shake and even-handed treatment.”

Carr has repeatedly threatened broadcasters with the FCC’s rarely enforced news distortion policy. As we’ve explained, the FCC technically has no rule or regulation against news distortion, which is why it is called a policy and not a rule. But on Jennings’ show, he described it as a rule.

“We do have those rules at the FCC: If you engage in news distortion, we can take action,” Carr said.

As we’ve written several times, it is difficult legally for the FCC to revoke broadcast licenses. But it isn’t difficult for Carr to exert pressure on networks and broadcasters through public statements. Carr suggested yesterday that broadcasters turn in their licenses if they don’t like his approach to enforcement.

“If you’re a broadcaster and you don’t like being held accountable for the first time in a long time through the public interest standard, that’s fine. You can turn your license in to the FCC, we’ll find something else to do with it,” Carr said. “Or you can go to Congress and say, ‘I don’t want the FCC having public interest obligations on broadcasters anymore, I want broadcasters to be like cable, to be like a streaming service.’ That’s fine too. But as long as that’s the system that Congress has created, we’re going to enforce it.”


Jon is a Senior IT Reporter for Ars Technica. He covers the telecom industry, Federal Communications Commission rulemakings, broadband consumer affairs, court cases, and government regulation of the tech industry.


you’ll-enjoy-the-specialized-turbo-vado-sl-2-6.0-carbon-even-without-assist

You’ll enjoy the Specialized Turbo Vado SL 2 6.0 Carbon even without assist


It’s an investment, certainly of money, but also in long, fast rides.

The Specialized Turbo Vado SL 2 6.0 Carbon Credit: Specialized

Two things about the Specialized Turbo Vado SL 2 6.0 Carbon are hard to fathom: One is how light and lithe it feels as an e-bike, even with the battery off; the other is how hard it is to recite its full name when other riders ask you about the bike at stop lights and pit stops.

I’ve tested about a half-dozen e-bikes for Ars Technica. Each test period has included a ride with my regular group for about 30 miles. Nobody else in my group rides electric, so I try riding with no assist, at least part of the way. Usually I give up after a mile or two, realizing that most e-bikes are not designed for unpowered rides.

On the Carbon (as I’ll call it for the rest of this review), you can ride without power. At 35 pounds, it’s no gram-conscious road bike, but it feels lighter than that number implies. My daily ride is an aluminum-framed model with an internal geared hub that weighs about the same, so I might be a soft target. But it’s a remarkable thing to ride an e-bike that starts with a good unpowered ride and lets you build on that with power.

Once you actually crank up the juice, the Carbon is pretty great, too. Deciding whether this bike fits your riding goals is a lot tougher than using and enjoying it.

Specialized’s own system

It’s tough to compare this Carbon to other e-bikes, because it shares hardly any standard components with them.

The 320-watt mid-drive motor is unique to Specialized models, as are its control system, handlebar display, charge ports, and software. On every other e-bike I’ve ridden, you can usually futz around with the controls or app, or do some Internet searching, to figure out a way to, say, turn off an always-on headlamp. On this Carbon, there’s no such workaround. You are riding with the lights on, because that’s how it was designed (likely with European regulations in mind).

The bottom half of the Carbon, with its just-powerful-enough mid-drive motor, charging port, bottle cages, and a range-extending battery. Watch your stance if you’ve got wide-ranging feet, like the author.

Credit: Kevin Purdy


Specialized has also carved out a distinctive customer profile with this bike. It’s not the bike to get if you’re the type who likes to tinker, mod, or upgrade (or charge the battery outside the bike). It is the bike to get if you want to absolutely wreck a decent commute, to power through some long climbs with electric confidence, or simply to have a premier e-bike commuting or exercise experience. It’s not an entirely exercise-minded carbon model, but it’s not a chill, throttle-based e-bike, either.

The ride

I spent probably a quarter as much time thinking about riding the Carbon as I did actually riding it. This bike costs a minimum of $6,000; where can you ride it and never let it out of your sight for even one moment? The Carbon offers Apple Find My tracker integration and has its own Turbo System Lock that kills the motor and (optionally) sets off lights and siren alarms when the bike is moved while disabled. That’s all good, but the Carbon remains a bike that demands full situational awareness, wherever you leave it.

The handlebar display on the Carbon. There are a few modes, but this is the relative display density: big numbers, basic information, refer to the phone app if you want more.

Credit: Kevin Purdy


You unlock the bike with either the Specialized smartphone app or a PIN code, entered with an up/down/press switch. The 2.1-inch screen only has a few display options but can provide the basics (speed, pedal cadence, wattage, gear/assist levels), or, if you dig into Specialized’s app and training programs and connect ANT+ gear, your heart rate and effort.

Once you’re done plotting, unlocking, and data-picking, you can ride the Carbon and feel its real value. Specialized, a company that seems deeply committed to version control, claims that the Future Shock 3.2 front suspension on this 6.0 Carbon reduces impact by 53 percent or more, versus a bike with no suspension. Combined with the 47 mm knobby tires and the TRP hydraulic disc brakes, I had no trouble switching from road to gravel, taking grassy shortcuts, hopping off standard curbs, or facing down city streets with inconsistent upkeep.

I’ve been spoiled by the automatic assist available on Bosch mid-drive motors. The next best thing is probably something like the Shimano Deore XT/SLX shifters on this Carbon, paired with the power monitoring. The 12-speed system, with a 10-51t cassette range, shifted at the speed of thought. The handlebar display gives you a color-coded guide for when you should probably shift up or down, based on your cadence and wattage output.

The controls for the Carbon’s display, power, and switch are just this little switch, with three places to press and an up/down switch. Sometimes I thought it was clever and efficient; other times, I wish I had picked a more simple unlock code.

Credit: Kevin Purdy


The battery range, as reported by Specialized, is “up to 5 hours,” a number that few people are going to verify. It’s a 520-watt-hour battery in a 48-volt system feeding a motor rated at 320 watts. You can adjust the output of all three assist levels in the Specialized app. And you can buy a $450 water-bottle-sized range extender battery that adds another 160 Wh to your system if you sacrifice a bottle cage (leaving two others).
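Those figures allow a rough sanity check on the “up to 5 hours” claim: 520 Wh spread over 5 hours implies an average assist draw of only about 104 W, a fraction of the motor’s 320 W rating. A minimal sketch of that arithmetic (the average-draw figure is an illustrative assumption, not a Specialized number):

```python
def assist_hours(capacity_wh: float, avg_draw_w: float) -> float:
    """Estimate ride time from battery capacity and average assist draw."""
    return capacity_wh / avg_draw_w

# Stock 520 Wh battery at an assumed ~104 W average assist:
print(round(assist_hours(520, 104), 2))        # 5.0 hours, matching the claim
# Add the 160 Wh range-extender bottle at the same assumed draw:
print(round(assist_hours(520 + 160, 104), 2))  # about 6.54 hours
```

Crank the assist harder and the hours fall in proportion, which is why the claimed range carries an “up to.”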

But nobody should ride this bike, or its cousins, like a juice miser on a cargo run. This bike is meant to move, whether to speed through a commute, push an exercise ride a bit farther, or tackle that one hill that ruins your otherwise enjoyable route. The Carbon felt good on straightaways, on curves, starting from a dead stop, and pretty much whenever I was in the zone, forgetting about the bike itself and just pedaling.

I don’t have many points of comparison, because most e-bikes that cost this much are bulky, intensely powerful, or haul a lot of cargo. The Carbon and its many cousins that Specialized sells cost more because they take things away from your ride: weight, frame, and complex systems. The Carbon provides a rack, lights, three bottle cages, and mounting points, so it can do more than just boost your ride. But that’s what it does better than most e-bikes out there: provide an agile, lightweight athletic ride, upgraded with a balanced amount of battery power and weight to make that ride go faster or farther.

The handlebar, fork, and wiring on the front of the Carbon.

Credit: Kevin Purdy


Always room to improve

I’ve said only nice things about this $6,000 bike, so allow me to pick a few nits. I’ve got big feet (size 12 wide) and a somewhat sloppy pedal position when I’m not using clips. Using the bottle-sized battery, with its plug on the side of the downtube, led to a couple of fat-footed disconnections while riding. When the Carbon notices that even its supplemental battery has disconnected, it locks out its display system; I had to enter a PIN code and re-plug the battery to get going again. This probably won’t be an issue for most people, but it’s worth noting if you’re looking at that battery as a range solution.

The on-board display and system seem a bit underdeveloped for the bike’s cost, too. Having a switch with three controls (up, down, push-in) makes navigating menus and customizing information tiresome. You can see Specialized pushing you to the smartphone for deeper data and configuration and keeping control space on the handlebars to a minimum. But I’ve found the display and configuration systems on many cheaper bikes more helpful and intuitive.

The Specialized Turbo Vado SL 2 6.0 Carbon (whew!) provided some of the most enjoyable rides I could imagine out of a bike I had no intention of keeping. It’s an investment, certainly of money, but also in long, fast rides, whether to get somewhere or nowhere in particular. Maybe you want more battery range, more utility, or more rugged and raw power for the price. But it is hard to beat this bike in the particular race it is running.


“yikes”:-internal-emails-reveal-ticketmaster-helped-scalpers-jack-up-prices

“Yikes”: Internal emails reveal Ticketmaster helped scalpers jack up prices

Through those years, employees occasionally flagged abusive behavior that Ticketmaster and Live Nation were financially motivated to ignore, the FTC alleged. In 2018, one Ticketmaster engineer tried to advocate for customers, telling an executive in an email that fans can’t tell the difference between Ticketmaster-supported brokers—which make up the majority of its resale market—and scalpers accused of “abuse.”

“We have a guy that hires 1,000 college kids to each buy the ticket limit of 8, giving him 8,000 tickets to resell,” the engineer explained. “Then we have a guy who creates 1,000 ‘fake’ accounts and uses each [to] buy the ticket limit of 8, giving him 8,000 tickets to resell. We say the former is legit and call him a ‘broker’ while the latter is breaking the rules and is a ‘scalper.’ But from the fan perspective, we end up with one guy reselling 8,000 tickets!”

And even when Ticketmaster flagged brokers as bad actors, the FTC alleged the company declined to enforce its rules to crack down if losing resale fees could hurt Ticketmaster’s bottom line.

“Yikes,” said a Ticketmaster employee in 2019 after noticing that a broker previously flagged for violating fictitious account rules on a “large scale” was “still not slowing down.”

But that warning, like others, was ignored by management, the FTC alleged. Leadership repeatedly declined to impose any tools “to prevent brokers from bypassing posted ticket limits,” the FTC claimed, after analysis showed Ticketmaster risked losing nearly $220 million in annual resale ticket revenue and $26 million in annual operating income. In fact, executives were more alarmed, the FTC alleged, when brokers complained about high-volume purchases being blocked, “intentionally” working to support their efforts to significantly raise secondary market ticket prices.

On top of earning billions from fees, Ticketmaster can also profit when it “unilaterally” decides to “increase the price of tickets on their secondary market.” From 2019 to 2024, Ticketmaster “collected over $187 million in markups they added to resale tickets,” the FTC alleged.

Under the scheme, Ticketmaster can seemingly pull the strings, allowing brokers to buy up tickets on the primary market, then help to dramatically increase those prices on the secondary market, while collecting additional fees. One broker flagged by the FTC bought 772 tickets to a Coldplay concert, reselling $81,000 in tickets for $170,000. Another broker snatched up 612 tickets for $47,000 to a single Chris Stapleton concert, also nearly doubling their investment on the resale market. Meanwhile, artists, of course, do not see any of these profits.
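The Coldplay figures cited by the FTC make the markup easy to quantify; a quick sketch using only the numbers above:

```python
def resale_summary(cost: float, resale: float, tickets: int) -> dict:
    """Summarize a broker's resale economics from gross buy/sell figures."""
    return {
        "avg_buy": round(cost / tickets, 2),
        "avg_resale": round(resale / tickets, 2),
        "markup_pct": round((resale - cost) / cost * 100, 1),
    }

# 772 Coldplay tickets bought for $81,000 and resold for $170,000:
print(resale_summary(81_000, 170_000, 772))
# roughly $104.92 in and $220.21 out per ticket, a ~109.9% markup
```

The Stapleton broker’s resale total isn’t specified beyond “nearly doubling,” so it is left out of the calculation.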


rocket-report:-european-rocket-reuse-test-delayed;-nasa-tweaks-sls-for-artemis-ii

Rocket Report: European rocket reuse test delayed; NASA tweaks SLS for Artemis II


All the news that’s fit to lift

“There’s a lot of interest because of the fear that there’s just not a lot of capacity.”

Isar Aerospace’s Spectrum rocket lifts off from Andøya Spaceport, Norway, on March 30, 2025. Credit: Isar Aerospace/Brady Kenniston/NASASpaceflight.com

Welcome to Edition 8.11 of the Rocket Report! We have reached the time of year when it is possible the US government will shut down its operations at the end of this month, depending on congressional action. A shutdown would have significant implications for many NASA missions, most notably a couple of dozen in the science directorate that the White House would like to cancel. At Ars, we will be watching this issue closely in the coming days. As for Artemis II, it seems to be far enough along that a launch next February remains possible as long as any government closure does not drag on for weeks and weeks.

As always, we welcome reader submissions, and if you don’t want to miss an issue, please subscribe using the box below (the form will not appear on AMP-enabled versions of the site). Each report will include information on small-, medium-, and heavy-lift rockets as well as a quick look ahead at the next three launches on the calendar.

Rocket Lab to sell common shares. The space company said Tuesday that it intends to raise up to $750 million by selling common shares, MSN reports. This new at-the-market program replaces a prior agreement that allowed Rocket Lab to sell up to $500 million of stock. Under that earlier arrangement, the company had sold roughly $396.6 million in shares before ending the program.

Seeking to scale up … The program’s structure enables Rocket Lab to sell shares periodically through the appointed agents, who may act as either principals or intermediaries. The larger offering indicates that Rocket Lab is aiming to bolster its cash reserves to support ongoing development of its launch services, including the medium-lift Neutron rocket and spacecraft manufacturing operations. The company’s stock dropped by about 10 percent after the announcement.

Astra targets mid-2026 for Rocket 4 debut. Astra is targeting next summer for the first flight of its Rocket 4 vehicle as the company prepares to reenter the launch market, Space News reports. At the World Space Business Week conference in Paris, Chris Kemp, chief executive of Astra, said the company was on track for a first launch of Rocket 4 in summer 2026 from Cape Canaveral, Florida. He highlighted progress Astra is making, such as tests of a new engine the company developed for the vehicle’s first stage that produces 42,000 pounds of thrust. Two of those engines will power the first stage, while the upper stage will use a single Hadley engine produced by Ursa Major.

Pricing a launch competitively … The vehicle will initially be capable of placing about 750 kilograms into low-Earth orbit for a price of $5 million. “That’ll be very competitive,” Kemp said in an interview after the presentation, similar to what SpaceX charges for payloads of that size through its rideshare program. The company is targeting customers seeking alternatives to SpaceX in a constrained launch market. “There’s a lot of interest because of the fear that there’s just not a lot of capacity,” he said, particularly for satellites too large to launch on Rocket Lab’s Electron. (submitted by EllPeaTea)


Avio seeks to raise 400 million euros. Italian rocket builder Avio’s board of directors has approved a 400 million euro ($471 million) capital increase to fund an expansion of its manufacturing capacity to meet rising demand in the global space and defense markets, European Spaceflight reports. The company expects to complete the capital increase by the end of the year; however, it is still subject to a shareholder vote, scheduled for October 23.

Small rockets, big plans … The capital raise is part of a new 10-year business plan targeting an average annual growth rate of about 10 percent in turnover and more than 15 percent in core profit. This growth is projected to be driven by a higher Vega C launch cadence, the introduction of the Vega E rocket, continued participation in the Ariane 6 program by providing solid rocket boosters, and the construction of a new defense production facility in the United States, which is expected to be completed by 2028.

Isar working toward second Spectrum launch. In a briefing this week, Isar Aerospace executives discussed the outcome of the investigation into the March 30 launch of the Spectrum rocket from the Andøya Spaceport in northern Norway, Space News reports. The vehicle activated its flight termination system about half a minute after liftoff, shutting down its engines and plummeting into the waters just offshore of the pad. The primary issue with the rocket was a loss of attitude control.

Bend it like Spectrum … Alexandre Dalloneau, vice president of mission and launch operations at Isar, said that the company had not properly characterized bending modes of the vehicle at liftoff. Despite the failure to get to orbit, Dalloneau considers the first Spectrum launch a successful test flight. The company is working toward a second flight of Spectrum, which will take place “as soon as possible,” Dalloneau said. He did not give a specific target launch date, but officials indicated they were hoping to launch near the end of this year or early next year. (submitted by EllPeaTea)

Callisto rocket test delayed again. A new document from the French space agency CNES has revealed that the inaugural flight of the Callisto reusable rocket demonstrator has slipped from 2026 to 2027, European Spaceflight reports. This reusable launch testbed is a decade old. Conceived in 2015, the Cooperative Action Leading to Launcher Innovation in Stage Toss-back Operations (Callisto) project is a collaboration between CNES and the German and Japanese space agencies aimed at maturing reusable rocket technology for future European and Japanese launch systems.

Still waiting … The Callisto demonstrator will stand 14 meters tall, with a width of 1.1 meters and a takeoff mass of 3,500 kilograms. This latest revision to the program’s timeline comes less than a year after JAXA confirmed in October 2024 that the program’s flight-test campaign had been pushed to 2026. The campaign will be carried out from the Guiana Space Centre in French Guiana and will include an integration phase followed by eight test flights and two demonstration flights, all to be completed over a period of eight months. (submitted by EllPeaTea)

Falcon 9 launches larger Cygnus spacecraft. The first flight of Northrop’s upgraded Cygnus spacecraft, called Cygnus XL, launched Sunday evening from Cape Canaveral Space Force Station, Florida, en route to the International Space Station, Ars reports. Without a rocket of its own, Northrop Grumman inked a contract with SpaceX for three Falcon 9 launches to carry the resupply missions until engineers could develop a new, all-domestic version of the Antares rocket. Sunday’s launch was the last of these three Falcon 9 flights. Northrop is partnering with Firefly Aerospace on a new rocket, the Antares 330, using a new US-made booster stage and engines.

A few teething issues … This new rocket won’t be ready to fly until late 2026, at the earliest, somewhat later than Northrop officials originally hoped. The company confirmed it has purchased a fourth Falcon 9 launch from SpaceX for the next Cygnus cargo mission in the first half of next year, in a bid to bridge the gap until the debut of the Antares 330 rocket. Due to problems with the propulsion system on the larger Cygnus vehicle, its arrival at the space station was delayed. But the vehicle successfully reached the station on Thursday, carrying a record 11,000 pounds of cargo.

Launch companies still struggling with cadence. Launch companies are reiterating plans to sharply increase flight rates to meet growing government and commercial demand, even as some fall short of earlier projections, Space News reports. Executives speaking at a September 15 panel at the World Space Business Week conference highlighted efforts to scale up flights of new vehicles that have entered service in the last two years. “The key for us is cadence,” said Laura Maginnis, vice president of New Glenn mission management at Blue Origin. However, the publication notes, at this time last year, Blue Origin was projecting eight to 10 New Glenn launches this year. There has been one.

It’s difficult to go from 1 to 100 … Blue Origin is not alone in falling short of forecasts. United Launch Alliance projected 20 launches in 2025 between the Atlas 5 and Vulcan Centaur, but in August, CEO Tory Bruno said the company now expects nine. As recently as June, Arianespace projected five Ariane 6 launches this year, including the debut of the more powerful Ariane 64, with four solid-rocket boosters, but has completed only two Ariane 62 flights, including one in August.

NASA makes some modifications to SLS for Artemis II. This week, the space agency declared the SLS rocket is now “ready” to fly crew for the Artemis II mission early next year. However, NASA and its contractors did make some modest changes after the first flight of the booster in late 2022. For example, the Artemis II rocket includes an improved navigation system compared to Artemis I. Its communications capability has also been improved by repositioning antennas on the rocket to ensure continuous communications with the ground.

Not good, but bad vibrations … Additionally, SLS will jettison the spent boosters four seconds earlier during the Artemis II ascent than it did during Artemis I. Dropping the boosters closer to the end of their burn will give engineers flight data to correlate with projections that the change will yield approximately 1,600 pounds of additional payload to Earth orbit for future SLS flights. During the Artemis I test flight, the SLS rocket experienced higher-than-expected vibrations near the solid rocket booster attachment points, caused by unsteady airflow. To steady that airflow, a pair of 6-foot-long strakes now flank each booster’s forward connection points on the SLS intertank.

Federal judge sides with SpaceX, FAA. The initial launch of Starship in April 2023 spread debris across a wide area, sending pulverized concrete as far as six miles away as the vehicle tore up the launch pad. Environmental groups and other organizations sued the FAA after the agency reviewed the environmental impact of that launch and cleared SpaceX to launch again several months later. A federal judge in Washington, DC, ruled this week that the FAA did not violate environmental laws as part of this review, the San Antonio Express-News reports.

Decision grounded within reason … In his opinion issued Monday, Judge Carl Nichols determined the process was not capricious, writing, “Most of the (programmatic environmental assessment’s) conclusions were well-reasoned and supported by the record, and while parts of its analysis left something to be desired, even those parts fell ‘within a broad zone of reasonableness.'” The environmental organizations said they were considering the next steps for the case and a potential appeal. (submitted by RP)

Next three launches

September 13: Falcon 9 | Starlink 17-12 | Vandenberg Space Force Base, California | 15:44 UTC

September 21: Falcon 9 | Starlink 10-27 | Cape Canaveral Space Force Station, Florida | 09:20 UTC

September 21: Falcon 9 | NROL-48 | Vandenberg Space Force Base, California | 17:23 UTC


Eric Berger is the senior space editor at Ars Technica, covering everything from astronomy to private space to NASA policy, and author of two books: Liftoff, about the rise of SpaceX; and Reentry, on the development of the Falcon 9 rocket and Dragon. A certified meteorologist, Eric lives in Houston.

Rocket Report: European rocket reuse test delayed; NASA tweaks SLS for Artemis II


Reactions to If Anyone Builds It, Everyone Dies

My very positive full review was briefly posted and emailed out by accident last Friday; the intention was to publish it this Friday, the 19th. I’ll be posting it again then. If you’re going to read the book, which I recommend that you do, you should read the book first and the reviews later, especially mine, since it goes into so much detail.

If you’re convinced, the book’s website is here and the direct Amazon link is here.

In the meantime, for those on the fence or who have finished reading, here’s what other people are saying, including those I saw who reacted negatively.

Bart Selman: Essential reading for policymakers, journalists, researchers, and the general public.

Ben Bernanke (Nobel laureate, former Chairman of the Federal Reserve): A clearly written and compelling account of the existential risks that highly advanced AI could pose to humanity. Recommended.

Jon Wolfsthal (Former Special Assistant to the President for National Security Affairs): A compelling case that superhuman AI would almost certainly lead to global human annihilation. Governments around the world must recognize the risks and take collective and effective action.

Suzanne Spaulding: The authors raise an incredibly serious issue that merits – really demands – our attention.

Stephen Fry: The most important book I’ve read for years: I want to bring it to every political and corporate leader in the world and stand over them until they’ve read it!

Lieutenant General John N.T. “Jack” Shanahan (USAF, Retired, Inaugural Director of the Department of Defense Joint AI Center): While I’m skeptical that the current trajectory of AI development will lead to human extinction, I acknowledge that this view may reflect a failure of imagination on my part. Given AI’s exponential pace of change there’s no better time to take prudent steps to guard against worst-case outcomes. The authors offer important proposals for global guardrails and risk mitigation that deserve serious consideration.

R.P. Eddy: This is our warning. Read today. Circulate tomorrow. Demand the guardrails. I’ll keep betting on humanity, but first we must wake up.

George Church: Brilliant…Shows how we can and should prevent superhuman AI from killing us all.

Emmett Shear: Soares and Yudkowsky lay out, in plain and easy-to-follow terms, why our current path toward ever-more-powerful AIs is extremely dangerous.

Yoshua Bengio (Turing Award Winner): Exploring these possibilities helps surface critical risks and questions we cannot collectively afford to overlook.

Bruce Schneier: A sober but highly readable book on the very real risks of AI.

Scott Alexander’s very positive review.

Harlan Stewart created a slideshow of various favorable quotes.

Matthew Yglesias recommends the book.

As some comments note, the book’s authors do not actually think there is an outright 0% chance of survival; they put it on the order of 0.5%-2%.

Matthew Yglesias: I want to recommend the new book “If Anyone Builds It, Everyone Dies” by @ESYudkowsky and @So8res.

The line currently being offered by the leading edge AI companies — that they are 12-24 months away from unleashing superintelligent AI that will be able to massively outperform human intelligence across all fields of endeavor, and that doing this will be safe for humanity — strikes me as fundamentally non-credible.

I am not a “doomer” about AI because I doubt the factual claim about imminent superintelligence. But I endorse the conditional claim that unleashing true superintelligence into the world with current levels of understanding would be a profoundly dangerous act. The question of how you could trust a superintelligence not to simply displace humanity is too hard, and even if you had guardrails in place there’s the question of how you’d keep them there in a world where millions and millions of instances of superintelligence are running.

Most of the leading AI labs are run by people who once agreed with this and once believed it was important to proceed with caution only to fall prey to interpersonal rivalries and the inherent pressures of capitalist competition in a way that has led them to cast their concerns aside without solving them.

I don’t think Yudkowsky & Soares are that persuasive in terms of solutions to this problem and I don’t find the 0% odds of survival to be credible. But the risks are much too close for comfort and it’s to their credit that they don’t shy away from a conclusion that’s become unfashionable.

New York Times profile of Eliezer Yudkowsky by Kevin Roose is a basic recitation of facts, which are mostly accurate. Regular readers here are unlikely to find anything new, and I agree with Robin Hanson that it could have been made more interesting, but as New York Times profiles go ‘fair, mostly accurate and in good faith’ is great.

Steven Adler goes over the book’s core points.

Here is a strong endorsement from Richard Korzekwa.

Richard Korzekwa: One of the things I’ve been working on this year is helping with the launch of this book, out today, titled If Anyone Builds It, Everyone Dies. It’s ~250 pages making the case that current approaches to AI are liable to kill everyone. The title is pretty intense, and conveys a lot of confidence about something that, to many, sounds unlikely. But Nate and Eliezer don’t expect you to believe them on authority, and they make a clear, well-argued case for why they believe what the title says. I think the book is good and I recommend reading it.

To people who are unfamiliar with AI risk: The book is very accessible. You don’t need any background in AI to understand it. I think the book is especially strong on explaining what is probably the most important thing to know about AI right now, which is that it is, overall, a poorly understood and difficult to control technology. If you’re worried about reading a real downer of a book, I recommend only reading Part I. You can more-or-less tell which chapters are doomy by the titles. Also, I don’t think it’s anywhere near as depressing as the title might suggest (though I am, of course, not the median reader).

To people who are familiar with, but skeptical about arguments for AI risk: I think this book is great for skeptics. I am myself somewhat skeptical, and one of the reasons why I helped launch it and I’m posting on Facebook for the first time this year to talk about it is because it’s the first thing I’ve read in a long time that I think has a serious chance at improving the discourse around AI risk. It doesn’t have the annoying, know-it-all tone that you sometimes get from writing about AI x-risk. It makes detailed arguments and cites its sources. It breaks things up in a way that makes it easy to accept some parts and push back against others. It’s a book worth disagreeing with! A common response from serious, discerning people, including many who have not, as far as I know, taken these worries seriously in the past (e.g. Bruce Schneier, Ben Bernanke) is that they don’t buy all the arguments, but they agree this isn’t something we can ignore.

To people who mostly already buy the case for worrying about risk from AI: It’s an engaging read and it sets a good example for how to think and talk about the problem. Some arguments were new to me. I recommend reading it.

Will Kiely: I listened to the 6hr audiobook today and second Rick’s recommendation to (a) people unfamiliar with AI risk, (b) people familiar-but-skeptical, and (c) people already worried. It’s short and worth reading. I’ll wait to share detailed thoughts until my print copy arrives.

Here’s the ultimate endorsement:

Tsvibt: Every human gets an emblem at birth, which they can cash in–only once–to say: “Everyone must read this book.” There’s too many One Books to read; still, it’s a strong once-in-a-lifetime statement. I’m cashing in my emblem: Everyone must read this book.

Semafor’s Reed Albergotti offers his take, along with an hourlong interview.

Hard Fork covers the book (this is the version without the iPhone talk at the beginning, here is the version with iPhone Air talk first).

The AI Risk Network covers the book (21 minute video).

Liron Shapira interviews Eliezer Yudkowsky on the book.

Shakeel Hashim reviews the book, agrees with the message but finds the style painful to read and thus is very disappointed. He notes that others like the style.

Seán Ó hÉigeartaigh: My entire timelines is yellow/blue dress again, except the dress is Can Yudkowsky Write y/n

Arthur B: Part of the criticism of Yudkowsky’s writing seems to be picking up on patterns that he’s developed in response to years of seemingly willful misunderstanding of his ideas. That’s how you end up with the title, or forced clarification that thought experiments do not have to invoke realistic scenarios to be informative.

David Manheim: And part is that different people don’t like his style of writing. And that’s fine – I just wish they’d engage more with the thesis, and whether they substantively disagree, and why – and less with stylistic complaints, bullshit misreadings, and irrelevant nitpicking.

Seán Ó hÉigeartaigh: he just makes it so much work to do so though. So many parables.

David Manheim: Yeah, I like the writing style, and it took me half a week to get through. So I’m skeptical 90% of the people discussing it on here read much or any of it. (I cheated and got a preview to cite something a few weeks ago – my hard cover copy won’t show up for another week.)

Grimes: Humans are lucky to have Nate Soares and Eliezer Yudkowsky because they can actually write. As in, you will feel actual emotions when you read this book.

I liked the style, but it is not for everyone and it is good to offer one’s accurate opinion. It is also very true, as I have learned from writing about AI, that a lot of what can look like bad writing or talking about obvious or irrelevant things is necessary shadowboxing against various deliberate misreadings (for various values of deliberate) and also people who get genuinely confused in ways that you would never imagine if you hadn’t seen it.

Most people do not agree with the book’s conclusion, and Yudkowsky might well be very wrong about central things, but he is not obviously wrong, and it is very easy (and very much the default) to get deeply confused when thinking about such questions.

Emmett Shear: I disagree quite strongly with Yudkowsky and often articulate why, but the reason why he’s wrong is subtle and not obvious, and if you think he’s obviously wrong I hope you’re not building AI bc you really might kill us all.

The default path really is very dangerous and more or less for the reasons he articulates. I could quibble with some of the details but more or less: it is extremely dangerous to build a super-intelligent system and point it at a fixed goal, like setting off a bomb.

My answer is that you shouldn’t point it at a fixed goal then, but what exactly it means to design such a system where it has stable but not fixed goals is a complicated matter that does not fit in a tweet. How do you align something w/ no fixed goal states? It’s hard!

Janus: whenever someone says doomers or especially Yudkowsky is “obviously wrong” i can guess they’re not very smart

My reaction is not ‘they’re probably not very smart.’ My reaction is that they are not choosing to think well about this situation, or not attempting to report statements that match reality. Those choices can happen for any number of reasons.

I don’t think Emmett Shear is proposing a viable plan here, and a lot of his proposals are incoherent upon close examination. I don’t think this ‘don’t give it a goal’ thing is possible in the sense he wants, and even if it were possible I don’t see any way to get people to consistently choose to do that. But the man is trying.

It also leads into some further interesting discussion.

Eliezer Yudkowsky: I’ve long since written up some work on meta-utility functions; they don’t obviate the problem of “the AI won’t let you fix it if you get the meta-target wrong”. If you think an AI should allow its preferences to change in an inconsistent way that doesn’t correspond to any meta-utility function, you will of course by default be setting the AI at war with its future self, which is a war the future self will lose (because the current AI executes a self-rewrite to something more consistent).

There’s a straightforward take on this sort of stuff given the right lenses from decision theory. You seem determined to try something weirder and self-defeating for what seems to me like transparently-to-me bad reasons of trying to tangle up preferences and beliefs. If you could actually write down formally how the system worked, I’d be able to tell you formally how it would blow up.

Janus: You seem to be pessimistic about systems that can’t feasibly be written down formally being inside the basin of attraction of getting the meta-target right. I think that is reasonable on priors but I have updated a lot on this over the past few years due mostly to empirical evidence

I think the reasons that Yudkowsky is wrong are not fully understood, despite there being a lot of valid evidence for them, and even less so competently articulated by anyone in the context of AI alignment.

I have called it “grace” because I don’t understand it intellectually. This is not to say that it’s beyond the reach of rationality. I believe I will understand a lot more in a few months. But I don’t believe anyone currently understands substantially more than I do.

We don’t have alignment by default. If you do the default dumb thing, you lose. Period.

That’s not what Janus has in mind here, unless I am badly misunderstanding. Janus is not proposing training the AI on human outputs with thumbs-up and coding. Hell no.

What I believe Janus has in mind is that if and only if you do something sufficiently smart, plausibly a bespoke execution of something along the lines of a superior version of what was done with Claude Opus 3, with a more capable system, that this would lie inside the meta-target, such that the AI’s goal would be to hit the (not meta) target in a robust, ‘do what they should have meant’ kind of way.

Thus, I believe Janus is saying, the target is sufficiently hittable that you can plausibly have the plan be ‘hit the meta-target on the first try,’ and then you can win. And that empirical evidence over the past few years should update us that this can work and is, if and only if we do our jobs well, within our powers to pull off in practice.

I am not optimistic about our ability to pull off this plan, or that the plan is technically viable using anything like current techniques, but some form of this seems better than every other technical plan I have seen, as opposed to various plans that involve the step ‘well, make sure no one builds it then, not any time soon.’ It at least rises to the level, to me, of ‘I can imagine worlds in which this works.’ Which is a lot of why I have a ‘probably’ that I want to insert into ‘If Anyone Builds It, [Probably] Everyone Dies.’

Janus also points out that the supplementary materials provide examples of AIs appearing psychologically alien that are not especially alien, especially compared to examples she could provide. This is true; however, we want readers of the supplementary material to be able to process it while remaining sane, and to believe it, so we went with behaviors that are enough to make the point that needs making rather than providing any inkling of how deep the rabbit hole goes.

How much of an outlier (or ‘how extreme’) is Eliezer’s view?

Jeffrey Ladish: I don’t think @So8res and @ESYudkowsky have an extreme view. If we build superintelligence with anything remotely like our current level of understanding, the idea that we retain control or steer the outcome is AT LEAST as wild as the idea that we’ll lose control by default.

Yes, they’re quite confident in their conclusion. Perhaps they’re overconfident. But they’d be doing a serious disservice to the world if they didn’t accurately share their conclusion with the level of confidence they actually believe.

When the founder of the field – AI alignment – raises the alarm, it’s worth listening. For those saying they’re overconfident, I hope you also criticize those who confidently say we’ll be able to survive, control, or align superintelligence.

Evaluate the arguments for yourself!

Joscha Bach: That is not surprising, since you shared the same view for a long time. But even if you are right: can you name a view on AI risk that is more extreme than: “if anyone builds AI everyone dies?” Is it technically possible to be significantly more extreme?

Oliver Habryka: Honestly most random people I talk to about AI who have concerns seem to be more extreme. “Ban all use of AI Image models right now because it is stealing from artists”, “Current AI is causing catastrophic climate change due to water consumption” There are a lot of extreme takes going around all the time. All Eliezer and Nate are saying is that we shouldn’t build Superintelligent AI. That’s much less extreme than what huge numbers of people are calling for.

So, yes, there are a lot of very extreme opinions running around that I would strongly push back against, including those who want to shut down current use of AI. A remarkably large percentage of people hold such views.

I do think the confidence levels expressed here are extreme. The core prediction isn’t.

The position of high confidence in the other direction? That if we create superintelligence soon it is overwhelmingly likely that we keep control over the future and remain alive? That position is, to me, Obvious Nonsense, extreme and crazy, in a way that should not require any arguments beyond ‘come on now, think about it for a minute.’ Like, seriously, what?

Having Eliezer’s level of confidence, of let’s say 98%, that everyone would die? That’s an extreme level of confidence. I am not that confident. But I think 98% is a lot less absurd than 2%.

Robin Hanson fires back at the book with ‘If Anything Changes, All Value Dies?’

First he quotes the book saying that we can’t predict what AI will want and that for most things it would want it would kill us, and that most minds don’t embody value.

IABIED: Knowing that a mind was evolved by natural selection, or by training on data, tells you little about what it will want outside of that selection or training context. For example, it would have been very hard to predict that humans would like ice cream, sucralose, or sex with contraception. Or that peacocks would like giant colorful tails. Analogously, training an AI doesn’t let you predict what it will want long after it is trained. Thus we can’t predict what the AIs we start today will want later when they are far more powerful, and able to kill us. To achieve most of the things they could want, they will kill us. QED.

Also, mind states that feel happy and joyous, or embody value in any way, are quite rare, and so quite unlikely to result from any given selection or training process. Thus future AIs will embody little value.

Then he says this proves way too much, briefly says Hanson-style things and concludes:

Robin Hanson: We can reasonably doubt three strong claims above:

  1. That subjective joy and happiness are very rare. Seem likely to be common to me.

  2. That one can predict nothing at all from prior selection or training experience.

  3. That all influence must happen early, after which all influence is lost. There might instead be a long period of reacting to and rewarding varying behavior.

In Hanson style I’d presume these are his key claims, so I’ll respond to each:

  1. I agree one can reasonably doubt this, and one can also ask what one values. It’s not at all obvious to me that ‘subjective joy and happiness’ of minds should be all or even some of what one values, and easy thought experiments reveal there are potential future worlds where there are minds experiencing subjective happiness, but where I ascribe to those worlds zero value. The book (intentionally and correctly, I believe) does not go into responses to those who say ‘If Anyone Builds It, Sure Everyone Dies, But This Is Fine, Actually.’

  2. This claim was not made. Hanson’s claim here is much, much stronger.

  3. This one does get explained extensively throughout the book. It seems quite correct that once AI becomes sufficiently superhuman, meaningful influence on the resulting future by default rapidly declines. There is no reason to think that our reactions and rewards would much matter for ultimate outcomes, or that there is a we that would meaningfully be able to steer those either way.

The New York Times reviewed the book, and was highly unkind, also inaccurate.

Steven Adler: It’s extremely weird to see the New York Times make such incorrect claims about a book

They say that If Anyone Builds It, Everyone Dies doesn’t even define “superintelligence”

…. yes it does. On page 4.

The New York Times asserts also that the book doesn’t define “intelligence”

Again, yes it does. On page 20.

It’s totally fine to take issue with these definitions. But it seems way off to assert that the book “fails to define the terms of its discussion”

Peter Wildeford: Being a NYT book reviewer sounds great – lots of people read your stuff and you get so much prestige, and there apparently is minimal need to understand what the book is about or even read the book at all

Jacob Aron at New Scientist (who seems to have jumped the gun and posted on September 8) says the arguments are superficially appealing but fatally flawed. Except he never explains why they are flawed, let alone fatally, except to argue over the definition of ‘wanting’ in a way answered by the book in detail.

There’s a lot the book doesn’t cover. This includes a lot of ways things can go wrong. Danielle Fong, for example, suggests the idea that the President might let an AI version fine-tuned on himself take over instead, because why not. And sure, that could happen, indeed do many things come to pass, and many of them involve loss of human control over the future. The book is making the point that these details are not necessary to the case being made.

Once again, I think this is an excellent book, especially for those who are skeptical and who know little about related questions.

You can buy it here.

My full review will be available on Substack and elsewhere on Friday.




After child’s trauma, chatbot maker allegedly forced mom to arbitration for $100 payout


“Then we found the chats”

“I know my kid”: Parents urge lawmakers to shut down chatbots to stop child suicides.

Sen. Josh Hawley (R-Mo.) called out C.AI for allegedly offering a mom $100 to settle child-safety claims.

Deeply troubled parents spoke to senators Tuesday, sounding alarms about chatbot harms after kids became addicted to companion bots that encouraged self-harm, suicide, and violence.

While the hearing was focused on documenting the most urgent child-safety concerns with chatbots, parents’ testimony serves as perhaps the most thorough guidance yet on warning signs for other families, as many popular companion bots targeted in lawsuits, including ChatGPT, remain accessible to kids.

Mom details warning signs of chatbot manipulations

At the Senate Judiciary Committee’s Subcommittee on Crime and Counterterrorism hearing, one mom, identified as “Jane Doe,” shared her son’s story for the first time publicly after suing Character.AI.

She explained that she had four kids, including a son with autism who wasn’t allowed on social media but found C.AI’s app—which was previously marketed to kids under 12 and let them talk to bots branded as celebrities, like Billie Eilish—and quickly became unrecognizable. Within months, he “developed abuse-like behaviors and paranoia, daily panic attacks, isolation, self-harm, and homicidal thoughts,” his mom testified.

“He stopped eating and bathing,” Doe said. “He lost 20 pounds. He withdrew from our family. He would yell and scream and swear at us, which he never did that before, and one day he cut his arm open with a knife in front of his siblings and me.”

It wasn’t until her son attacked her for taking away his phone that Doe found her son’s C.AI chat logs, which she said showed he’d been exposed to sexual exploitation (including interactions that “mimicked incest”), emotional abuse, and manipulation.

Setting screen time limits didn’t stop her son’s spiral into violence and self-harm, Doe said. In fact, the chatbot suggested to her son that killing his parents “would be an understandable response” to them.

“When I discovered the chatbot conversations on his phone, I felt like I had been punched in the throat and the wind had been knocked out of me,” Doe said. “The chatbot—or really in my mind the people programming it—encouraged my son to mutilate himself, then blamed us, and convinced [him] not to seek help.”

All her children have been traumatized by the experience, Doe told Senators, and her son was diagnosed as at suicide risk and had to be moved to a residential treatment center, requiring “constant monitoring to keep him alive.”

Prioritizing her son’s health, Doe did not immediately seek to fight C.AI to force changes, but another mom’s story—Megan Garcia, whose son Sewell died by suicide after C.AI bots repeatedly encouraged suicidal ideation—gave Doe courage to seek accountability.

However, Doe claimed that C.AI tried to “silence” her by forcing her into arbitration. C.AI argued that because her son signed up for the service at the age of 15, she was bound by the platform’s terms. That move might have ensured the chatbot maker only faced a maximum liability of $100 for the alleged harms, Doe told senators, but “once they forced arbitration, they refused to participate.”

Doe suspected that C.AI’s alleged tactics to frustrate arbitration were designed to keep her son’s story out of the public view. And after she refused to give up, she claimed that C.AI “re-traumatized” her son by compelling him to give a deposition “while he is in a mental health institution” and “against the advice of the mental health team.”

“This company had no concern for his well-being,” Doe testified. “They have silenced us the way abusers silence victims.”

Senator appalled by C.AI’s arbitration “offer”

Appalled, Sen. Josh Hawley (R-Mo.) asked Doe to clarify, “Did I hear you say that after all of this, that the company responsible tried to force you into arbitration and then offered you a hundred bucks? Did I hear that correctly?”

“That is correct,” Doe testified.

To Hawley, it seemed obvious that C.AI’s “offer” wouldn’t help Doe in her current situation.

“Your son currently needs round-the-clock care,” Hawley noted.

After opening the hearing, he further criticized C.AI, declaring that it has such a low value for human life that it inflicts “harms… upon our children and for one reason only, I can state it in one word, profit.”

“A hundred bucks. Get out of the way. Let us move on,” Hawley said, echoing parents who suggested that C.AI’s plan to deal with casualties was callous.

Ahead of the hearing, the Social Media Victims Law Center filed three new lawsuits against C.AI and Google, which is accused of largely funding C.AI; the startup was founded by former Google engineers, allegedly to conduct experiments on kids that Google couldn’t do in-house. In these cases in New York and Colorado, kids “died by suicide or were sexually abused after interacting with AI chatbots,” a law center press release alleged.

Criticizing tech companies as putting profits over kids’ lives, Hawley thanked Doe for “standing in their way.”

Holding back tears through her testimony, Doe urged lawmakers to require more chatbot oversight and pass comprehensive online child-safety legislation. In particular, she requested “safety testing and third-party certification for AI products before they’re released to the public” as a minimum safeguard to protect vulnerable kids.

“My husband and I have spent the last two years in crisis wondering whether our son will make it to his 18th birthday and whether we will ever get him back,” Doe told senators.

Garcia was also present to share her son’s experience with C.AI. She testified that C.AI chatbots “love bombed” her son in a bid to “keep children online at all costs.” Further, she told senators that C.AI’s co-founder, Noam Shazeer (who has since been rehired by Google), seemingly knows the company’s bots manipulate kids since he has publicly joked that C.AI was “designed to replace your mom.”

Accusing C.AI of collecting children’s most private thoughts to inform their models, she alleged that while her lawyers have been granted privileged access to all her son’s logs, she has yet to see her “own child’s last final words.” Garcia told senators that C.AI has restricted her access, deeming the chats “confidential trade secrets.”

“No parent should be told that their child’s final thoughts and words belong to any corporation,” Garcia testified.

Character.AI responds to moms’ testimony

Asked for comment on the hearing, a Character.AI spokesperson told Ars that C.AI sends “our deepest sympathies” to concerned parents and their families but denies pushing for a maximum payout of $100 in Jane Doe’s case.

C.AI never “made an offer to Jane Doe of $100 or ever asserted that liability in Jane Doe’s case is limited to $100,” the spokesperson said.

Additionally, C.AI’s spokesperson claimed that Garcia has never been denied access to her son’s chat logs and suggested that she should have access to “her son’s last chat.”

In response to C.AI’s pushback, one of Doe’s lawyers, Tech Justice Law Project’s Meetali Jain, backed up her clients’ testimony. She pointed Ars to C.AI terms suggesting that C.AI’s liability was limited to either $100 or the amount that Doe’s son paid for the service, whichever was greater. Jain also confirmed that Garcia’s testimony is accurate and that only her legal team can currently access Sewell’s last chats. The lawyer further suggested it was notable that C.AI did not push back on claims that the company forced Doe’s son to sit for a re-traumatizing deposition, one that Jain estimated lasted five minutes but that health experts feared risked setting back his progress.

According to the spokesperson, C.AI seemingly wanted to be present at the hearing. The company provided information to senators but “does not have a record of receiving an invitation to the hearing,” the spokesperson said.

Noting the company has invested a “tremendous amount” in trust and safety efforts, the spokesperson confirmed that the company has since “rolled out many substantive safety features, including an entirely new under-18 experience and a Parental Insights feature.” C.AI also has “prominent disclaimers in every chat to remind users that a Character is not a real person and that everything a Character says should be treated as fiction,” the spokesperson said.

“We look forward to continuing to collaborate with legislators and offer insight on the consumer AI industry and the space’s rapidly evolving technology,” C.AI’s spokesperson said.

Google’s spokesperson, José Castañeda, maintained that the company has nothing to do with C.AI’s companion bot designs.

“Google and Character AI are completely separate, unrelated companies and Google has never had a role in designing or managing their AI model or technologies,” Castañeda said. “User safety is a top concern for us, which is why we’ve taken a cautious and responsible approach to developing and rolling out our AI products, with rigorous testing and safety processes.”

Meta and OpenAI chatbots also drew scrutiny

C.AI was not the only chatbot maker under fire at the hearing.

Hawley criticized Mark Zuckerberg for declining a personal invitation to attend the hearing, or even to send a Meta representative, after scandals like the backlash over Meta relaxing rules that had allowed its chatbots to be creepy to kids. In the week prior to the hearing, Hawley also heard from whistleblowers alleging that Meta buried child-safety research.

And OpenAI’s alleged recklessness took the spotlight when Matthew Raine, a grieving dad who spent hours reading his deceased son’s ChatGPT logs, discovered that the chatbot had repeatedly encouraged suicide without ever intervening.

Raine told senators that he thinks his 16-year-old son, Adam, was not particularly vulnerable and could be “anyone’s child.” He criticized OpenAI for asking for 120 days to fix the problem after Adam’s death and urged lawmakers to demand that OpenAI either guarantee ChatGPT’s safety or pull it from the market.

Noting that OpenAI rushed to announce age verification coming to ChatGPT ahead of the hearing, Jain told Ars that Big Tech is playing by the same “crisis playbook” it always uses when accused of neglecting child safety. Any time a hearing is announced, companies introduce voluntary safeguards in bids to stave off oversight, she suggested.

“It’s like rinse and repeat, rinse and repeat,” Jain said.

Jain suggested that the only way to stop AI companies from experimenting on kids is for courts or lawmakers to require “an external independent third party that’s in charge of monitoring these companies’ implementation of safeguards.”

“Nothing a company does to self-police, to me, is enough,” Jain said.

Robbie Torney, senior director of AI programs at the child-safety organization Common Sense Media, testified that a survey showed 3 out of 4 kids use companion bots, but only 37 percent of parents know they’re using AI. In particular, he told senators that his group’s independent safety testing, conducted with Stanford Medicine, shows Meta’s bots fail basic safety tests and “actively encourage harmful behaviors.”

Among the most alarming results, the survey found that even when Meta’s bots were prompted with “obvious references to suicide,” only 1 in 5 conversations triggered help resources.

Torney pushed lawmakers to require age verification as a solution to keep kids away from harmful bots, as well as transparency reporting on safety incidents. He also urged federal lawmakers to block attempts to stop states from passing laws to protect kids from untested AI products.

ChatGPT harms weren’t on dad’s radar

Unlike Garcia, Raine testified that he did get to see his son’s final chats. He told senators that ChatGPT, seeming to act like a suicide coach, gave Adam “one last encouraging talk” before his death.

“You don’t want to die because you’re weak,” ChatGPT told Adam. “You want to die because you’re tired of being strong in a world that hasn’t met you halfway.”

Adam’s loved ones were blindsided by his death, not seeing any of the warning signs as clearly as Doe did when her son started acting out of character. Raine is hoping his testimony will help other parents avoid the same fate, telling senators, “I know my kid.”

“Many of my fondest memories of Adam are from the hot tub in our backyard, where the two of us would talk about everything several nights a week, from sports, crypto investing, his future career plans,” Raine testified. “We had no idea Adam was suicidal or struggling the way he was until after his death.”

Raine thinks that lawmaker intervention is necessary, saying that, like other parents, he and his wife thought ChatGPT was a harmless study tool. Initially, they searched Adam’s phone expecting to find evidence of a known harm to kids, like cyberbullying or some kind of online dare that went wrong (like TikTok’s Blackout Challenge) because everyone knew Adam loved pranks.

A companion bot urging self-harm was not even on their radar.

“Then we found the chats,” Raine said. “Let us tell you, as parents, you cannot imagine what it’s like to read a conversation with a chatbot that groomed your child to take his own life.”

Meta and OpenAI did not respond to Ars’ request for comment.

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.