Author name: Rejus Almole


Judge lets construction on an offshore wind farm resume

That did not, however, stop the administration from trying again, this time targeting a development called Revolution Wind, located a bit farther north along the Atlantic coast. But the developer quickly sued, leading to Monday’s ruling. According to Reuters, after a two-hour hearing in the US District Court for the District of Columbia, Judge Royce Lamberth termed the administration’s actions “the height of arbitrary and capricious” and issued a preliminary injunction against the hold on Revolution Wind’s construction. As a result, Orsted can restart work immediately.

The decision provides a strong indication of how Lamberth is likely to rule if the government pursues a full trial on the case. And while the Trump administration could appeal, it’s unlikely to see this injunction lifted unless it takes the case all the way to the Supreme Court. Given that Revolution Wind was already 80 percent complete, the case may become moot before it gets that far.



Volvo says it has big plans for South Carolina factory

Volvo is undergoing something of a restructuring. The automaker wants to be fully electric by 2040, but for that to happen, it needs to remain in business until then. Earlier this year, that meant layoffs, but today, Volvo announced it has big plans for its North American factory in Ridgeville, South Carolina.

Volvo has been making cars in South Carolina since 2017, starting with the S60 sedan—a decision I always found slightly curious given that US car buyers had already given up on sedans by that point in favor of crossovers and SUVs. S60 production ended last summer, and these days, the plant builds the large electric EX90 SUV and the related Polestar 3.

The company is far from fully utilizing the Ridgeville plant, though, which has an annual capacity of 150,000 vehicles. When the turnaround plan was first announced this July, Volvo revealed it would start building the next midsize XC60 in South Carolina—a wise move given the Trump tariffs and the importance of this model to Volvo’s sales figures here.

Now, the OEM says it will add another model to the mix, with a new, yet-to-be-named hybrid due before 2030.

“Our investment plans once again reinforce our long-term commitment to the US market and our manufacturing operations in South Carolina,” said Håkan Samuelsson, chief executive. “This year, we celebrate 70 years of Volvo Cars presence in the United States. We have sold over 5 million cars there and plan to sell many more in years to come,” he said.



After getting Jimmy Kimmel suspended, FCC chair threatens ABC’s The View


Carr: “Turn your license in to the FCC, we’ll find something else to do with it.”

President-elect Donald Trump speaks to Brendan Carr, his intended pick for Chairman of the Federal Communications Commission, as he attends a SpaceX Starship rocket launch on November 19, 2024 in Brownsville, Texas. Credit: Getty Images | Brandon Bell

After pressuring ABC to suspend Jimmy Kimmel, Federal Communications Commission Chairman Brendan Carr is setting his regulatory sights on ABC’s The View and NBC late-night hosts Seth Meyers and Jimmy Fallon.

Carr appeared yesterday on the radio show hosted by Scott Jennings, who describes himself as “the last man standing athwart the liberal mob.” Jennings asked Carr whether The View and other ABC programs violate FCC rules, and made a reference to President Trump calling on NBC to cancel Fallon and Meyers.

“A lot of people think there are other shows on ABC that maybe run afoul of this more often than Jimmy Kimmel,” Jennings said. “I’m thinking specifically of The View, and President Trump himself has mentioned Jimmy Fallon and Seth Meyers at NBC. Do you have comments on those shows, and are they doing what Kimmel did Monday night, and is it even worse on those programs in your opinion?”

In response, Carr discussed the FCC’s Equal Opportunities Rule, also known as the Equal Time Rule, and said the FCC could determine that those shows don’t qualify for an exemption to the rule.

“When you look at these other TV shows, what’s interesting is the FCC does have a rule called the Equal Opportunity Rule, which means, for instance, if you’re in the run-up to an election and you have one partisan elected official on, you have to give equal time, equal opportunity, to the opposing partisan politician,” Carr said.

At another point in the interview, Carr said broadcasters that object to FCC enforcement “can turn your license in to the FCC, we’ll find something else to do with it.”

Bona fide news exemption

Carr said the FCC hasn’t previously enforced the rule on those shows because of an exemption for “bona fide news” programs. He said the FCC could determine the shows mentioned by Jennings aren’t exempt:

There’s an exception to that rule called the bona fide news exception, which means if you are a bona fide news program, you don’t have to abide by the Equal Opportunity Rule. Over the years, the FCC has developed a body of case law on that that has suggested that most of these late night shows, other than SNL, are bona fide news programs. I would assume you could make the argument that The View is a bona fide news show but I’m not so sure about that, and I think it’s worthwhile to have the FCC look into whether The View and some of these other programs you have still qualify as bona fide news programs and [are] therefore exempt from the Equal Opportunity regime that Congress has put in place.

The Equal Opportunity Rule applies to radio and TV broadcast stations with FCC licenses to use the airwaves. An FCC fact sheet explains that stations giving time to one candidate must provide “comparable time and placement to opposing candidates” upon request. The onus is on candidates to request air time—”the station is not required to seek out opposing legally qualified candidates and offer them Equal Opportunities,” the fact sheet says.

The exemption mentioned by Carr means that “appearances by legally qualified candidates on bona fide newscasts, interview programs, certain types of news documentaries, and during on-the-spot coverage of bona fide news events are exempt from Equal Opportunities,” the fact sheet says.

In 1994, the FCC said that “Congress removed the inhibiting effect of the equal opportunities obligation upon bona fide news programming to encourage increased news coverage of political campaign activity.” Congress gave the FCC leeway to interpret the scope of bona fide news exemptions.

Referring to its 1988 ruling on Entertainment Tonight and Entertainment This Week, the FCC said it found that “the principal consideration should be ‘whether the program reports news of some area of current events… in a manner similar to more traditional newscasts.’ The Commission has thus declined to evaluate the relative quality or significance of the topics and stories selected for newscast coverage, relying instead on the broadcaster’s good faith news judgment.”

Carr’s allegations

Carr alleged in November 2024 that NBC putting Kamala Harris on Saturday Night Live before the election was “a clear and blatant effort to evade the FCC’s Equal Time rule.” In fact, NBC gave Trump two free 60-second messages in order to comply with the rule.

Carr didn’t cite any specific incidents on The View or late-night shows that would violate the FCC rule. The View has addressed its attempts to get Trump on the show, however. Executive Producer Brian Teta told Deadline in April 2024, “We’ve invited Trump to join us at the table for both 2016 and 2020 elections, and he declined, and at a certain point, we stopped asking. So I don’t anticipate that changing. I think he’s pretty familiar with how the co-hosts feel about him and doesn’t see himself coming here.”

The Kimmel controversy erupted over a monologue in which he said, “We hit some new lows over the weekend with the MAGA gang desperately trying to characterize this kid who murdered Charlie Kirk as anything other than one of them and doing everything they can to score political points from it.”

With accused murderer Tyler Robinson being described as having liberal views, Carr and other conservatives alleged that Kimmel misled viewers. Carr appeared on right-wing commentator Benny Johnson’s podcast on Wednesday and said, “We can do this the easy way or the hard way. These companies can find ways to change conduct, to take action, frankly on Kimmel, or there’s going to be additional work for the FCC ahead.”

Nexstar and Sinclair, two major owners of TV stations, both urged ABC to take action against Kimmel and said their stations would not air his show. The pressure from broadcasters is happening at a time when both Nexstar and ABC owner Disney are seeking Trump administration approval for mergers.

Democrats accuse Carr of hypocrisy on First Amendment

Anna Gomez, the only Democrat on the Republican-majority FCC, said yesterday that Carr overstepped his authority and that “billion-dollar companies with pending business before the agency” are “vulnerable to pressure to bend to the government’s ideological demands.”

Democratic lawmakers criticized Carr and proposed investigations into the chair for abuse of authority. “It is not simply unacceptable for the FCC chairman to threaten a media organization because he does not like the content of its programming—it violates the First Amendment that you claim to champion,” Senate Democrats wrote in a letter to Carr. “The FCC’s role in overseeing the public airwaves does not give it the power to act as a roving press censor, targeting broadcasters based on their political commentary. But under your leadership, the FCC is being weaponized to do precisely that.”

Democrats pointed to some of Carr’s previous statements in which he decried government censorship. During his 2023 re-confirmation proceedings, Senate Democrats asked Carr about social media posts in which he accused Democrats of engaging in censorship like “what you’d see in the Soviet Union.”

“I posted those tweets in the context of expressing my view on the First Amendment that debate on matters of public interest should be robust, uninhibited, and wide open,” Carr wrote in his response to Democratic senators. “I believe that the best remedy to speech that someone does not like or finds objectionable is more speech. I posted them because I believe that a newsroom’s decision about what stories to cover and how to frame them should, consistent with the First Amendment, be beyond the reach of any government official.”

Years earlier, in 2019, Carr posted a tweet that said, “Should the government censor speech it doesn’t like? Of course not. The FCC does not have a roving mandate to police speech in the name of the ‘public interest.'”

Sen. Ted Cruz (R-Texas) also criticized Carr’s approach, saying it would lead to the same tactics being used against Republicans the next time Democrats are in power.

Carr to broadcasters: Give your licenses back to FCC

Carr said this week he’s only addressing licensed broadcasters, which have public-interest obligations, as opposed to cable and streaming services that don’t need FCC licenses. Network programming itself doesn’t need an FCC license, but the TV stations that carry network shows require licenses.

Carr tried to cast Kimmel’s suspension as the result of organic pressure from licensed broadcasters, rather than FCC coercion. “There’s no untoward coercion happening here,” Carr told Jennings. “The market was intended to function this way, where local TV stations get to push back.”

But TV station owners did so in exactly the way that Carr urged them to. “The individual licensed stations that are taking their content, it’s time for them to step up and say this garbage isn’t something that we think serves the needs of our local communities,” Carr said on Johnson’s podcast. Carr said that Kimmel’s monologue “appears to be some of the sickest conduct possible.”

On the Jennings show, Carr alleged that Democrats in the previous administration implemented “a two-tiered weaponized system of justice,” and that his FCC is instead giving everyone “a fair shake and even-handed treatment.”

Carr has repeatedly threatened broadcasters with the FCC’s rarely enforced news distortion policy. As we’ve explained, the FCC technically has no rule or regulation against news distortion, which is why it is called a policy and not a rule. But on Jennings’ show, he described it as a rule.

“We do have those rules at the FCC: If you engage in news distortion, we can take action,” Carr said.

As we’ve written several times, it is difficult legally for the FCC to revoke broadcast licenses. But it isn’t difficult for Carr to exert pressure on networks and broadcasters through public statements. Carr suggested yesterday that broadcasters turn in their licenses if they don’t like his approach to enforcement.

“If you’re a broadcaster and you don’t like being held accountable for the first time in a long time through the public interest standard, that’s fine. You can turn your license in to the FCC, we’ll find something else to do with it,” Carr said. “Or you can go to Congress and say, ‘I don’t want the FCC having public interest obligations on broadcasters anymore, I want broadcasters to be like cable, to be like a streaming service.’ That’s fine too. But as long as that’s the system that Congress has created, we’re going to enforce it.”


Jon is a Senior IT Reporter for Ars Technica. He covers the telecom industry, Federal Communications Commission rulemakings, broadband consumer affairs, court cases, and government regulation of the tech industry.



You’ll enjoy the Specialized Turbo Vado SL 2 6.0 Carbon even without assist


It’s an investment, certainly of money, but also in long, fast rides.

The Specialized Turbo Vado SL 2 6.0 Carbon Credit: Specialized

Two things about the Specialized Turbo Vado SL 2 6.0 Carbon are hard to fathom: One is how light and lithe it feels as an e-bike, even with the battery off; the other is how hard it is to recite its full name when other riders ask you about the bike at stop lights and pit stops.

I’ve tested about a half-dozen e-bikes for Ars Technica. Each test period has included a ride with my regular group for about 30 miles. Nobody else in my group rides electric, so I try riding with no assist, at least part of the way. Usually I give up after a mile or two, realizing that most e-bikes are not designed for unpowered rides.

On the Carbon (as I’ll call it for the rest of this review), you can ride without power. At 35 pounds, it’s no gram-conscious road bike, but it feels lighter than that number implies. My daily ride is an aluminum-framed model with an internal geared hub that weighs about the same, so I might be a soft target. But it’s a remarkable thing to ride an e-bike that starts with a good unpowered ride and lets you build on that with power.

Once you actually crank up the juice, the Carbon is pretty great, too. Deciding whether this bike fits your riding goals is a lot tougher than using and enjoying it.

Specialized’s own system

It’s tough to compare the Carbon to other e-bikes because it shares hardly any standard components with them.

The 320-watt mid-drive motor is unique to Specialized models, as are its control system, handlebar display, charge ports, and software. On every other e-bike I’ve ridden, you can usually futz around with the controls or app or do some Internet searching to figure out a way to, say, turn off an always-on headlamp. On this Carbon, there is no such workaround. You are riding with the lights on, because that’s how the bike was designed (likely with European regulations in mind).

The bottom half of the Carbon, with its just-powerful-enough mid-drive motor, charging port, bottle cages, and a range-extending battery. Watch your stance if you’ve got wide-ranging feet, like the author.

Credit: Kevin Purdy


Specialized has also carved out a distinct customer profile with this bike. It’s not the bike to get if you’re the type who likes to tinker, mod, or upgrade (or charge the battery outside the bike). It is the bike to get if you are the type who wants to absolutely wreck a decent commute, power through some long climbs with electric confidence, or simply have a premier e-bike commuting or exercise experience. It’s not an entirely exercise-minded carbon model, but it’s not a chill, throttle-based e-bike, either.

The ride

I spent probably a quarter as much time thinking about riding the Carbon as I did actually riding it. This bike costs a minimum of $6,000; where can you ride it and never let it out of your sight for even one moment? The Carbon offers Apple Find My tracker integration and has its own Turbo System Lock that kills the motor and (optionally) sets off lights and siren alarms when the bike is moved while disabled. That’s all good, but the Carbon remains a bike that demands full situational awareness, wherever you leave it.

The handlebar display on the Carbon. There are a few modes, but this is the relative display density: big numbers, basic information, refer to the phone app if you want more.

Credit: Kevin Purdy


You unlock the bike with either the Specialized smartphone app or a PIN code, entered with an up/down/press switch. The 2.1-inch screen only has a few display options but can provide the basics (speed, pedal cadence, wattage, gear/assist levels), or, if you dig into Specialized’s app and training programs and connect ANT+ gear, your heart rate and effort.

Once you’re done plotting, unlocking, and data-picking, you can ride the Carbon and feel its real value. Specialized, a company that seems deeply committed to version control, claims that the Future Shock 3.2 front suspension on this 6.0 Carbon reduces impact by 53 percent or more, versus a bike with no suspension. Combined with the 47 mm knobby tires and the TRP hydraulic disc brakes, I had no trouble switching from road to gravel, taking grassy shortcuts, hopping off standard curbs, or facing down city streets with inconsistent upkeep.

I’ve been spoiled by the automatic assist available on Bosch mid-drive motors. The next best thing is probably something like the Shimano Deore XT/SLX shifters on this Carbon, paired with the power monitoring. The 12-speed system, with a 10-51t cassette range, shifted at the speed of thought. The handlebar display gives you a color-coded guide for when you should probably shift up or down, based on your cadence and wattage output.

The controls for the Carbon’s display, power, and switch are just this little switch, with three places to press and an up/down toggle. Sometimes I thought it was clever and efficient; other times, I wished I had picked a simpler unlock code.

Credit: Kevin Purdy


The battery range, as reported by Specialized, is “up to 5 hours,” a number that few people are going to verify. It’s a 520-watt-hour battery in a 48-volt system that can turn out a rated 320 watts of power. You can adjust the output of all three assist levels in the Specialized app. And you can buy a $450 water-bottle-sized range extender battery that adds another 160 Wh to your system if you sacrifice a bottle cage (leaving two others).
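For a rough sense of how those capacity figures relate to ride time, here is a small arithmetic sketch. The 100 W average draw is my assumption for illustration, not a Specialized number; real consumption varies with assist level, terrain, and rider effort.

```python
# Rough range arithmetic from the figures in this review: a 520 Wh main
# battery, an optional 160 Wh range extender, and a 320 W peak assist.

def ride_hours(battery_wh: float, avg_assist_watts: float) -> float:
    """Estimated hours of assist from a battery at a given average draw."""
    return battery_wh / avg_assist_watts

main_wh = 520.0
extender_wh = 160.0

# At a hypothetical ~100 W average assist (well below the 320 W peak),
# the main battery alone lands near Specialized's "up to 5 hours" claim.
print(f"{ride_hours(main_wh, 100.0):.1f} h")                 # 5.2 h
print(f"{ride_hours(main_wh + extender_wh, 100.0):.1f} h")   # 6.8 h
```

Lean harder on the assist and those hours shrink proportionally, which is why the "up to" qualifier is doing real work in Specialized's figure.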

But nobody should ride this bike, or its cousins, like a juice miser on a cargo run. This bike is meant to move, whether to speed through a commute, push an exercise ride a bit farther, or tackle that one hill that ruins your otherwise enjoyable route. The Carbon felt good on straightaways, on curves, starting from a dead stop, and pretty much whenever I was in the zone, forgetting about the bike itself and just pedaling.

I don’t have many points of comparison, because most e-bikes that cost this much are bulky, intensely powerful, or haul a lot of cargo. The Carbon and its many cousins that Specialized sells cost more because they take things away from your ride: weight, frame, and complex systems. The Carbon provides a rack, lights, three bottle cages, and mounting points, so it can do more than just boost your ride. But that’s what it does better than most e-bikes out there: provide an agile, lightweight athletic ride, upgraded with a balanced amount of battery power and weight to make that ride go faster or farther.

The handlebar, fork, and wiring on the front of the Carbon.

Credit: Kevin Purdy


Always room to improve

I’ve said only nice things about this $6,000 bike, so allow me to pick a few nits. I’ve got big feet (size 12 wide) and a somewhat sloppy pedal position when I’m not using clips. Using the bottle-sized battery, with its plug on the side of the downtube, led to a couple of fat-footed disconnections while riding. When the Carbon notices that even its supplemental battery has disconnected, it locks out its display system; I had to enter a PIN code and re-plug the battery to get going again. This probably won’t be an issue for most people, but it’s worth noting if you’re looking at that battery as a range solution.

The on-board display and system seem a bit underdeveloped for the bike’s cost, too. Having a switch with three controls (up, down, push-in) makes navigating menus and customizing information tiresome. You can see Specialized pushing you to the smartphone for deeper data and configuration and keeping control space on the handlebars to a minimum. But I’ve found the display and configuration systems on many cheaper bikes more helpful and intuitive.

The Specialized Turbo Vado SL 2 6.0 Carbon (whew!) provided some of the most enjoyable rides I could imagine out of a bike I had no intention of keeping. It’s an investment, certainly of money, but also in long, fast rides, whether to get somewhere or nowhere in particular. Maybe you want more battery range, more utility, or more rugged and raw power for the price. But it is hard to beat this bike in the particular race it is running.



“Yikes”: Internal emails reveal Ticketmaster helped scalpers jack up prices

Through those years, employees occasionally flagged abusive behavior that Ticketmaster and Live Nation were financially motivated to ignore, the FTC alleged. In 2018, one Ticketmaster engineer tried to advocate for customers, telling an executive in an email that fans can’t tell the difference between Ticketmaster-supported brokers—which make up the majority of its resale market—and scalpers accused of “abuse.”

“We have a guy that hires 1,000 college kids to each buy the ticket limit of 8, giving him 8,000 tickets to resell,” the engineer explained. “Then we have a guy who creates 1,000 ‘fake’ accounts and uses each [to] buy the ticket limit of 8, giving him 8,000 tickets to resell. We say the former is legit and call him a ‘broker’ while the latter is breaking the rules and is a ‘scalper.’ But from the fan perspective, we end up with one guy reselling 8,000 tickets!”

And even when Ticketmaster flagged brokers as bad actors, the FTC alleged the company declined to enforce its rules to crack down if losing resale fees could hurt Ticketmaster’s bottom line.

“Yikes,” said a Ticketmaster employee in 2019 after noticing that a broker previously flagged for violating fictitious account rules on a “large scale” was “still not slowing down.”

But that warning, like others, was ignored by management, the FTC alleged. Leadership repeatedly declined to impose any tools “to prevent brokers from bypassing posted ticket limits,” the FTC claimed, after analysis showed Ticketmaster risked losing nearly $220 million in annual resale ticket revenue and $26 million in annual operating income. In fact, the FTC alleged, executives were more alarmed when brokers complained about high-volume purchases being blocked, and the company “intentionally” worked to support brokers’ efforts to significantly raise secondary-market ticket prices.

On top of earning billions from fees, Ticketmaster can also profit when it “unilaterally” decides to “increase the price of tickets on their secondary market.” From 2019 to 2024, Ticketmaster “collected over $187 million in markups they added to resale tickets,” the FTC alleged.

Under the scheme, Ticketmaster can seemingly pull the strings, allowing brokers to buy up tickets on the primary market, then help to dramatically increase those prices on the secondary market, while collecting additional fees. One broker flagged by the FTC bought 772 tickets to a Coldplay concert, reselling $81,000 in tickets for $170,000. Another broker snatched up 612 tickets for $47,000 to a single Chris Stapleton concert, also nearly doubling their investment on the resale market. Meanwhile, artists, of course, do not see any of these profits.
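The Coldplay broker's numbers above work out like this (a quick back-of-the-envelope check; the per-ticket figure is a simple average across the lot, not a claim about any individual ticket):

```python
# Markup on the Coldplay broker's haul described in the FTC complaint:
# 772 tickets bought for $81,000 and resold for $170,000.
face_value = 81_000
resale = 170_000
tickets = 772

print(f"markup: ${resale - face_value:,}")             # markup: $89,000
print(f"multiple: {resale / face_value:.2f}x")         # multiple: 2.10x
print(f"avg face price: ${face_value / tickets:.2f}")  # avg face price: $104.92
```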



Rocket Report: European rocket reuse test delayed; NASA tweaks SLS for Artemis II


All the news that’s fit to lift

“There’s a lot of interest because of the fear that there’s just not a lot of capacity.”

Isar Aerospace’s Spectrum rocket lifts off from Andøya Spaceport, Norway, on March 30, 2025. Credit: Isar Aerospace/Brady Kenniston/NASASpaceflight.com

Welcome to Edition 8.11 of the Rocket Report! We have reached the time of year when it is possible the US government will shut down its operations at the end of this month, depending on congressional action. A shutdown would have significant implications for many NASA missions, most notably the couple of dozen in the science directorate that the White House would like to cancel. At Ars, we will be watching this issue closely in the coming days. As for Artemis II, it appears far enough along that a launch next February remains possible, as long as any government closure does not drag on for weeks and weeks.

As always, we welcome reader submissions, and if you don’t want to miss an issue, please subscribe using the box below (the form will not appear on AMP-enabled versions of the site). Each report will include information on small-, medium-, and heavy-lift rockets as well as a quick look ahead at the next three launches on the calendar.

Rocket Lab to sell common shares. The space company said Tuesday that it intends to raise up to $750 million by selling common shares, MSN reports. This new at-the-market program replaces a prior agreement that allowed Rocket Lab to sell up to $500 million of stock. Under that earlier arrangement, the company had sold roughly $396.6 million in shares before ending the program.

Seeking to scale up … The program’s structure enables Rocket Lab to sell shares periodically through the appointed agents, who may act as either principals or intermediaries. The larger offering indicates that Rocket Lab is aiming to bolster its cash reserves to support ongoing development of its launch services, including the medium-lift Neutron rocket and spacecraft manufacturing operations. The company’s stock dropped by about 10 percent after the announcement.

Astra targets mid-2026 for Rocket 4 debut. Astra is targeting next summer for the first flight of its Rocket 4 vehicle as the company prepares to reenter the launch market, Space News reports. At the World Space Business Week conference in Paris, Chris Kemp, chief executive of Astra, said the company was on track for a first launch of Rocket 4 in summer 2026 from Cape Canaveral, Florida. He highlighted progress Astra is making, such as tests of a new engine the company developed for the vehicle’s first stage that produces 42,000 pounds of thrust. Two of those engines will power the first stage, while the upper stage will use a single Hadley engine produced by Ursa Major.

Pricing a launch competitively … The vehicle will initially be capable of placing about 750 kilograms into low-Earth orbit for a price of $5 million. “That’ll be very competitive,” Kemp said in an interview after the presentation, similar to what SpaceX charges for payloads of that size through its rideshare program. The company is targeting customers seeking alternatives to SpaceX in a constrained launch market. “There’s a lot of interest because of the fear that there’s just not a lot of capacity,” he said, particularly for satellites too large to launch on Rocket Lab’s Electron. (submitted by EllPeaTea)
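Taking Kemp's figures at face value, the implied per-kilogram price is easy to compute (a rough sketch; actual pricing will depend on contract terms and how much of the 750 kg capacity a customer uses):

```python
# Implied per-kilogram price for Astra's Rocket 4, from the figures above:
# $5 million for roughly 750 kg to low-Earth orbit.
price_usd = 5_000_000
payload_kg = 750

print(f"${price_usd / payload_kg:,.0f} per kg")  # $6,667 per kg
```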


Avio seeks to raise 400 million euros. Italian rocket builder Avio’s board of directors has approved a 400 million euro ($471 million) capital increase to fund an expansion of its manufacturing capacity to meet rising demand in the global space and defense markets, European Spaceflight reports. The company expects to complete the capital increase by the end of the year; however, it is still subject to a shareholder vote, scheduled for October 23.

Small rockets, big plans … The capital raise is part of a new 10-year business plan targeting an average annual growth rate of about 10 percent in turnover and more than 15 percent in core profit. This growth is projected to be driven by a higher Vega C launch cadence, the introduction of the Vega E rocket, continued participation in the Ariane 6 program by providing solid rocket boosters, and the construction of a new defense production facility in the United States, which is expected to be completed by 2028.

Isar working toward second Spectrum launch. In a briefing this week, Isar Aerospace executives discussed the outcome of the investigation into the March 30 launch of the Spectrum rocket from the Andøya Spaceport in northern Norway, Space News reports. The vehicle activated its flight termination system about half a minute after liftoff, shutting down its engines and plummeting into the waters just offshore of the pad. The primary issue with the rocket was a loss of attitude control.

Bend it like Spectrum … Alexandre Dalloneau, vice president of mission and launch operations at Isar, said that the company had not properly characterized bending modes of the vehicle at liftoff. Despite the failure to get to orbit, Dalloneau considers the first Spectrum launch a successful test flight. The company is working toward a second flight of Spectrum, which will take place “as soon as possible,” Dalloneau said. He did not give a specific target launch date, but officials indicated they were hoping to launch near the end of this year or early next year. (submitted by EllPeaTea)

Callisto rocket test delayed again. A new document from the French space agency CNES has revealed that the inaugural flight of the Callisto reusable rocket demonstrator has slipped from 2026 to 2027, European Spaceflight reports. This reusable launch testbed is a decade old. Conceived in 2015, the Cooperative Action Leading to Launcher Innovation in Stage Toss-back Operations (Callisto) project is a collaboration between CNES and the German and Japanese space agencies aimed at maturing reusable rocket technology for future European and Japanese launch systems.

Still waiting … The Callisto demonstrator will stand 14 meters tall, with a width of 1.1 meters and a takeoff mass of 3,500 kilograms. This latest revision to the program’s timeline comes less than a year after JAXA confirmed in October 2024 that the program’s flight-test campaign had been pushed to 2026. The campaign will be carried out from the Guiana Space Centre in French Guiana and will include an integration phase followed by eight test flights and two demonstration flights, all to be completed over a period of eight months. (submitted by EllPeaTea)

Falcon 9 launches larger Cygnus spacecraft. The first flight of Northrop’s upgraded Cygnus spacecraft, called Cygnus XL, launched Sunday evening from Cape Canaveral Space Force Station, Florida, en route to the International Space Station, Ars reports. Without a rocket of its own, Northrop Grumman inked a contract with SpaceX for three Falcon 9 launches to carry the resupply missions until engineers could develop a new, all-domestic version of the Antares rocket. Sunday’s launch was the last of these three Falcon 9 flights. Northrop is partnering with Firefly Aerospace on a new rocket, the Antares 330, using a new US-made booster stage and engines.

A few teething issues … This new rocket won’t be ready to fly until late 2026, at the earliest, somewhat later than Northrop officials originally hoped. The company confirmed it has purchased a fourth Falcon 9 launch from SpaceX for the next Cygnus cargo mission in the first half of next year, in a bid to bridge the gap until the debut of the Antares 330 rocket. Due to problems with the propulsion system on the larger Cygnus vehicle, its arrival at the space station was delayed. But the vehicle successfully reached the station on Thursday, carrying a record 11,000 pounds of cargo.

Launch companies still struggling with cadence. Launch companies are reiterating plans to sharply increase flight rates to meet growing government and commercial demand, even as some fall short of earlier projections, Space News reports. Executives speaking at a September 15 panel at the World Space Business Week conference highlighted efforts to scale up flights of new vehicles that have entered service in the last two years. “The key for us is cadence,” said Laura Maginnis, vice president of New Glenn mission management at Blue Origin. However, the publication notes, at this time last year, Blue Origin was projecting eight to 10 New Glenn launches this year. There has been one.

It’s difficult to go from 1 to 100 … Blue Origin is not alone in falling short of forecasts. United Launch Alliance projected 20 launches in 2025 between the Atlas 5 and Vulcan Centaur, but in August, CEO Tory Bruno said the company now expects nine. As recently as June, Arianespace projected five Ariane 6 launches this year, including the debut of the more powerful Ariane 64, with four solid-rocket boosters, but has completed only two Ariane 62 flights, including one in August.

NASA makes some modifications to SLS for Artemis II. This week, the space agency declared the SLS rocket is now “ready” to fly crew for the Artemis II mission early next year. However, NASA and its contractors did make some modest changes after the first flight of the booster in late 2022. For example, the Artemis II rocket includes an improved navigation system compared to Artemis I. Its communications capability has also been improved by repositioning antennas on the rocket to ensure continuous communications with the ground.

Not good, but bad vibrations … Additionally, SLS will jettison the spent boosters four seconds earlier during the Artemis II ascent than occurred during Artemis I. Dropping the boosters closer to the end of their burn will give engineers flight data to check against projections that the earlier separation will add approximately 1,600 pounds of payload capacity to Earth orbit on future SLS flights. During the Artemis I test flight, the SLS rocket experienced higher-than-expected vibrations near the solid rocket booster attachment points, caused by unsteady airflow. To steady the airflow, NASA added a pair of 6-foot-long strakes flanking each booster’s forward connection points on the SLS intertank.

Federal judge sides with SpaceX, FAA. The initial launch of Starship in April 2023 spread debris across a wide area, sending pulverized concrete as far as six miles away as the vehicle tore up the launch pad. Afterward, environmental groups and other organizations sued the FAA after the agency reviewed the environmental impact of the launch and cleared SpaceX to launch again several months later. A federal judge in Washington, DC, ruled this week that the FAA did not violate environmental laws as part of this review, the San Antonio Express-News reports.

Decision grounded within reason … In his opinion issued Monday, Judge Carl Nichols determined the process was not capricious, writing, “Most of the (programmatic environmental assessment’s) conclusions were well-reasoned and supported by the record, and while parts of its analysis left something to be desired, even those parts fell ‘within a broad zone of reasonableness.'” The environmental organizations said they were considering the next steps for the case and a potential appeal. (submitted by RP)

Next three launches

September 13: Falcon 9 | Starlink 17-12 | Vandenberg Space Force Base, California | 15:44 UTC

September 21: Falcon 9 | Starlink 10-27 | Cape Canaveral Space Force Station, Florida | 09:20 UTC

September 21: Falcon 9 | NROL-48 | Vandenberg Space Force Base, California | 17:23 UTC

Photo of Eric Berger

Eric Berger is the senior space editor at Ars Technica, covering everything from astronomy to private space to NASA policy, and author of two books: Liftoff, about the rise of SpaceX; and Reentry, on the development of the Falcon 9 rocket and Dragon. A certified meteorologist, Eric lives in Houston.

Rocket Report: European rocket reuse test delayed; NASA tweaks SLS for Artemis II Read More »

reactions-to-if-anyone-builds-it,-anyone-dies

Reactions to If Anyone Builds It, Everyone Dies

My very positive full review was accidentally posted and emailed out briefly last Friday; the intention was to publish it this Friday, on the 19th. I’ll be posting it again then. If you’re going to read the book, which I recommend that you do, you should read the book first and the reviews later, especially mine, since it goes into so much detail.

If you’re convinced, the book’s website is here and the direct Amazon link is here.

In the meantime, for those on the fence or who have finished reading, here’s what other people are saying, including those I saw who reacted negatively.

Bart Selman: Essential reading for policymakers, journalists, researchers, and the general public.

Ben Bernanke (Nobel laureate, former Chairman of the Federal Reserve): A clearly written and compelling account of the existential risks that highly advanced AI could pose to humanity. Recommended.

Jon Wolfsthal (Former Special Assistant to the President for National Security Affairs): A compelling case that superhuman AI would almost certainly lead to global human annihilation. Governments around the world must recognize the risks and take collective and effective action.

Suzanne Spaulding: The authors raise an incredibly serious issue that merits – really demands – our attention.

Stephen Fry: The most important book I’ve read for years: I want to bring it to every political and corporate leader in the world and stand over them until they’ve read it!

Lieutenant General John N.T. “Jack” Shanahan (USAF, Retired, Inaugural Director of the Department of Defense Joint AI Center): While I’m skeptical that the current trajectory of AI development will lead to human extinction, I acknowledge that this view may reflect a failure of imagination on my part. Given AI’s exponential pace of change there’s no better time to take prudent steps to guard against worst-case outcomes. The authors offer important proposals for global guardrails and risk mitigation that deserve serious consideration.

R.P. Eddy: This is our warning. Read today. Circulate tomorrow. Demand the guardrails. I’ll keep betting on humanity, but first we must wake up.

George Church: Brilliant…Shows how we can and should prevent superhuman AI from killing us all.

Emmett Shear: Soares and Yudkowsky lay out, in plain and easy-to-follow terms, why our current path toward ever-more-powerful AIs is extremely dangerous.

Yoshua Bengio (Turing Award Winner): Exploring these possibilities helps surface critical risks and questions we cannot collectively afford to overlook.

Bruce Schneier: A sober but highly readable book on the very real risks of AI.

Scott Alexander’s very positive review.

Harlan Stewart created a slideshow of various favorable quotes.

Matthew Yglesias recommends the book.

As some comments note, the book’s authors do not actually think there is an outright 0% chance of survival; they think it is on the order of 0.5%-2%.

Matthew Yglesias: I want to recommend the new book “If Anyone Builds It, Everyone Dies” by @ESYudkowsky and @So8res.

The line currently being offered by the leading edge AI companies — that they are 12-24 months away from unleashing superintelligent AI that will be able to massively outperform human intelligence across all fields of endeavor, and that doing this will be safe for humanity — strikes me as fundamentally non-credible.

I am not a “doomer” about AI because I doubt the factual claim about imminent superintelligence. But I endorse the conditional claim that unleashing true superintelligence into the world with current levels of understanding would be a profoundly dangerous act. The question of how you could trust a superintelligence not to simply displace humanity is too hard, and even if you had guardrails in place there’s the question of how you’d keep them there in a world where millions and millions of instances of superintelligence are running.

Most of the leading AI labs are run by people who once agreed with this and once believed it was important to proceed with caution only to fall prey to interpersonal rivalries and the inherent pressures of capitalist competition in a way that has led them to cast their concerns aside without solving them.

I don’t think Yudkowsky & Soares are that persuasive in terms of solutions to this problem and I don’t find the 0% odds of survival to be credible. But the risks are much too close for comfort and it’s to their credit that they don’t shy away from a conclusion that’s become unfashionable.

New York Times profile of Eliezer Yudkowsky by Kevin Roose is a basic recitation of facts, which are mostly accurate. Regular readers here are unlikely to find anything new, and I agree with Robin Hanson that it could have been made more interesting, but as New York Times profiles go ‘fair, mostly accurate and in good faith’ is great.

Steven Adler goes over the book’s core points.

Here is a strong endorsement from Richard Korzekwa.

Richard Korzekwa: One of the things I’ve been working on this year is helping with the launch of this book, out today, titled If Anyone Builds It, Everyone Dies. It’s ~250 pages making the case that current approaches to AI are liable to kill everyone. The title is pretty intense, and conveys a lot of confidence about something that, to many, sounds unlikely. But Nate and Eliezer don’t expect you to believe them on authority, and they make a clear, well-argued case for why they believe what the title says. I think the book is good and I recommend reading it.

To people who are unfamiliar with AI risk: The book is very accessible. You don’t need any background in AI to understand it. I think the book is especially strong on explaining what is probably the most important thing to know about AI right now, which is that it is, overall, a poorly understood and difficult to control technology. If you’re worried about reading a real downer of a book, I recommend only reading Part I. You can more-or-less tell which chapters are doomy by the titles. Also, I don’t think it’s anywhere near as depressing as the title might suggest (though I am, of course, not the median reader).

To people who are familiar with, but skeptical about arguments for AI risk: I think this book is great for skeptics. I am myself somewhat skeptical, and one of the reasons why I helped launch it and I’m posting on Facebook for the first time this year to talk about it is because it’s the first thing I’ve read in a long time that I think has a serious chance at improving the discourse around AI risk. It doesn’t have the annoying, know-it-all tone that you sometimes get from writing about AI x-risk. It makes detailed arguments and cites its sources. It breaks things up in a way that makes it easy to accept some parts and push back against others. It’s a book worth disagreeing with! A common response from serious, discerning people, including many who have not, as far as I know, taken these worries seriously in the past (e.g. Bruce Schneier, Ben Bernanke) is that they don’t buy all the arguments, but they agree this isn’t something we can ignore.

To people who mostly already buy the case for worrying about risk from AI: It’s an engaging read and it sets a good example for how to think and talk about the problem. Some arguments were new to me. I recommend reading it.

Will Kiely: I listened to the 6hr audiobook today and second Rick’s recommendation to (a) people unfamiliar with AI risk, (b) people familiar-but-skeptical, and (c) people already worried. It’s short and worth reading. I’ll wait to share detailed thoughts until my print copy arrives.

Here’s the ultimate endorsement:

Tsvibt: Every human gets an emblem at birth, which they can cash in–only once–to say: “Everyone must read this book.” There’s too many One Books to read; still, it’s a strong once-in-a-lifetime statement. I’m cashing in my emblem: Everyone must read this book.

Semafor’s Reed Albergotti offers his take, along with an hourlong interview.

Hard Fork covers the book (this is the version without the iPhone talk at the beginning, here is the version with iPhone Air talk first).

The AI Risk Network covers the book (21 minute video).

Liron Shapira interviews Eliezer Yudkowsky on the book.

Shakeel Hashim reviews the book, agrees with the message but finds the style painful to read and thus is very disappointed. He notes that others like the style.

Seán Ó hÉigeartaigh: My entire timelines is yellow/blue dress again, except the dress is Can Yudkowsky Write y/n

Arthur B: Part of the criticism of Yudkowsky’s writing seems to be picking up on patterns that he’s developed in response to years of seemingly willful misunderstanding of his ideas. That’s how you end up with the title, or forced clarification that thought experiments do not have to invoke realistic scenarios to be informative.

David Manheim: And part is that different people don’t like his style of writing. And that’s fine – I just wish they’d engage more with the thesis, and whether they substantively disagree, and why – and less with stylistic complaints, bullshit misreadings, and irrelevant nitpicking.

Seán Ó hÉigeartaigh: he just makes it so much work to do so though. So many parables.

David Manheim: Yeah, I like the writing style, and it took me half a week to get through. So I’m skeptical 90% of the people discussing it on here read much or any of it. (I cheated and got a preview to cite something a few weeks ago – my hard cover copy won’t show up for another week.)

Grimes: Humans are lucky to have Nate Soares and Eliezer Yudkowsky because they can actually write. As in, you will feel actual emotions when you read this book.

I liked the style, but it is not for everyone and it is good to offer one’s accurate opinion. It is also very true, as I have learned from writing about AI, that a lot of what can look like bad writing or talking about obvious or irrelevant things is necessary shadowboxing against various deliberate misreadings (for various values of deliberate) and also people who get genuinely confused in ways that you would never imagine if you hadn’t seen it.

Most people do not agree with the book’s conclusion, and he might well be very wrong about central things, but he is not obviously wrong, and it is very easy (and very much the default) to get deeply confused when thinking about such questions.

Emmett Shear: I disagree quite strongly with Yudkowsky and often articulate why, but the reason why he’s wrong is subtle and not obvious, and if you think he’s obviously wrong I hope you’re not building AI bc you really might kill us all.

The default path really is very dangerous and more or less for the reasons he articulates. I could quibble with some of the details but more or less: it is extremely dangerous to build a super-intelligent system and point it at a fixed goal, like setting off a bomb.

My answer is that you shouldn’t point it at a fixed goal then, but what exactly it means to design such a system where it has stable but not fixed goals is a complicated matter that does not fit in a tweet. How do you align something w/ no fixed goal states? It’s hard!

Janus: whenever someone says doomers or especially Yudkowsky is “obviously wrong” i can guess they’re not very smart

My reaction is not ‘they’re probably not very smart.’ My reaction is that they are not choosing to think well about this situation, or not attempting to report statements that match reality. Those choices can happen for any number of reasons.

I don’t think Emmett Shear is proposing a viable plan here, and a lot of his proposals are incoherent upon close examination. I don’t think this ‘don’t give it a goal’ thing is possible in the sense he wants it, and even if it were possible I don’t see any way to get people to consistently choose to do that. But the man is trying.

It also leads into some further interesting discussion.

Eliezer Yudkowsky: I’ve long since written up some work on meta-utility functions; they don’t obviate the problem of “the AI won’t let you fix it if you get the meta-target wrong”. If you think an AI should allow its preferences to change in an inconsistent way that doesn’t correspond to any meta-utility function, you will of course by default be setting the AI at war with its future self, which is a war the future self will lose (because the current AI executes a self-rewrite to something more consistent).

There’s a straightforward take on this sort of stuff given the right lenses from decision theory. You seem determined to try something weirder and self-defeating for what seems to me like transparently-to-me bad reasons of trying to tangle up preferences and beliefs. If you could actually write down formally how the system worked, I’d be able to tell you formally how it would blow up.

Janus: You seem to be pessimistic about systems that not feasibly written down formally being inside the basin of attraction of getting the meta-target right. I think that is reasonable on priors but I have updated a lot on this over the past few years due mostly to empirical evidence

I think the reasons that Yudkowsky is wrong are not fully understood, despite there being a lot of valid evidence for them, and even less so competently articulated by anyone in the context of AI alignment.

I have called it “grace” because I don’t understand it intellectually. This is not to say that it’s beyond the reach of rationality. I believe I will understand a lot more in a few months. But I don’t believe anyone currently understands substantially more than I do.

We don’t have alignment by default. If you do the default dumb thing, you lose. Period.

That’s not what Janus has in mind here, unless I am badly misunderstanding. Janus is not proposing training the AI on human outputs with thumbs-up and coding. Hell no.

What I believe Janus has in mind is that if and only if you do something sufficiently smart, plausibly a bespoke execution of something along the lines of a superior version of what was done with Claude Opus 3, with a more capable system, that this would lie inside the meta-target, such that the AI’s goal would be to hit the (not meta) target in a robust, ‘do what they should have meant’ kind of way.

Thus, I believe Janus is saying, the target is sufficiently hittable that you can plausibly have the plan be ‘hit the meta-target on the first try,’ and then you can win. And that empirical evidence over the past few years should update us that this can work and is, if and only if we do our jobs well, within our powers to pull off in practice.

I am not optimistic about our ability to pull off this plan, or that the plan is technically viable using anything like current techniques, but some form of this seems better than every other technical plan I have seen, as opposed to various plans that involve the step ‘well, make sure no one builds it then, not any time soon.’ It at least rises to the level, to me, of ‘I can imagine worlds in which this works.’ Which is a lot of why I have a ‘probably’ that I want to insert into ‘If Anyone Builds It, [Probably] Everyone Dies.’

Janus also points out that the supplementary materials provide examples of AIs appearing psychologically alien that are not especially alien, especially compared to examples she could provide. This is true. However, we want readers of the supplementary material to be able to process it while remaining sane, and to believe it, so we went with behaviors that are enough to make the point that needs making, rather than providing any inkling of how deep the rabbit hole goes.

How much of an outlier (or ‘how extreme’) is Eliezer’s view?

Jeffrey Ladish: I don’t think @So8res and @ESYudkowsky have an extreme view. If we build superintelligence with anything remotely like our current level of understanding, the idea that we retain control or steer the outcome is AT LEAST as wild as the idea that we’ll lose control by default.

Yes, they’re quite confident in their conclusion. Perhaps they’re overconfident. But they’d be doing a serious disservice to the world if they didn’t accurately share their conclusion with the level of confidence they actually believe.

When the founder of the field of AI alignment raises the alarm, it’s worth listening. For those saying they’re overconfident, I hope you also criticize those who confidently say we’ll be able to survive, control, or align superintelligence.

Evaluate the arguments for yourself!

Joscha Bach: That is not surprising, since you shared the same view for a long time. But even if you are right: can you name a view on AI risk that is more extreme than: “if anyone builds AI everyone dies?” Is it technically possible to be significantly more extreme?

Oliver Habryka: Honestly most random people I talk to about AI who have concerns seem to be more extreme. “Ban all use of AI Image models right now because it is stealing from artists”, “Current AI is causing catastrophic climate change due to water consumption” There are a lot of extreme takes going around all the time. All Eliezer and Nate are saying is that we shouldn’t build Superintelligent AI. That’s much less extreme than what huge numbers of people are calling for.

So, yes, there are a lot of very extreme opinions running around that I would strongly push back against, including those who want to shut down current use of AI. A remarkably large percentage of people hold such views.

I do think the confidence levels expressed here are extreme. The core prediction isn’t.

The position of high confidence in the other direction? That if we create superintelligence soon it is overwhelmingly likely that we keep control over the future and remain alive? That position is, to me, Obvious Nonsense, extreme and crazy, in a way that should not require any arguments beyond ‘come on now, think about it for a minute.’ Like, seriously, what?

Having Eliezer’s level of confidence, of let’s say 98%, that everyone would die? That’s an extreme level of confidence. I am not that confident. But I think 98% is a lot less absurd than 2%.

Robin Hanson fires back at the book with ‘If Anything Changes, All Value Dies?’

First he quotes the book saying that we can’t predict what AI will want and that for most things it would want it would kill us, and that most minds don’t embody value.

IABIED: Knowing that a mind was evolved by natural selection, or by training on data, tells you little about what it will want outside of that selection or training context. For example, it would have been very hard to predict that humans would like ice cream, sucralose, or sex with contraception. Or that peacocks would like giant colorful tails. Analogously, training an AI doesn’t let you predict what it will want long after it is trained. Thus we can’t predict what the AIs we start today will want later when they are far more powerful, and able to kill us. To achieve most of the things they could want, they will kill us. QED.

Also, mind states that feel happy and joyous, or embody value in any way, are quite rare, and so quite unlikely to result from any given selection or training process. Thus future AIs will embody little value.

Then he says this proves way too much, briefly says Hanson-style things and concludes:

Robin Hanson: We can reasonably doubt three strong claims above:

  1. That subjective joy and happiness are very rare. Seem likely to be common to me.

  2. That one can predict nothing at all from prior selection or training experience.

  3. That all influence must happen early, after which all influence is lost. There might instead be a long period of reacting to and rewarding varying behavior.

In Hanson style I’d presume these are his key claims, so I’ll respond to each:

  1. I agree one can reasonably doubt this, and one can also ask what one values. It’s not at all obvious to me that ‘subjective joy and happiness’ of minds should be all or even some of what one values, and easy thought experiments reveal there are potential future worlds where there are minds experiencing subjective happiness, but where I ascribe to those worlds zero value. The book (intentionally and correctly, I believe) does not go into responses to those who say ‘If Anyone Builds It, Sure Everyone Dies, But This Is Fine, Actually.’

  2. This claim was not made. Hanson’s claim here is much, much stronger.

  3. This one does get explained extensively throughout the book. It seems quite correct that once AI becomes sufficiently superhuman, meaningful influence on the resulting future by default rapidly declines. There is no reason to think that our reactions and rewards would much matter for ultimate outcomes, or that there is a we that would meaningfully be able to steer those either way.

The New York Times reviewed the book, and was highly unkind, as well as inaccurate.

Steven Adler: It’s extremely weird to see the New York Times make such incorrect claims about a book

They say that If Anyone Builds It, Everyone Dies doesn’t even define “superintelligence”

…. yes it does. On page 4.

The New York Times asserts also that the book doesn’t define “intelligence”

Again, yes it does. On page 20.

It’s totally fine to take issue with these definitions. But it seems way off to assert that the book “fails to define the terms of its discussion”

Peter Wildeford: Being a NYT book reviewer sounds great – lots of people read your stuff and you get so much prestige, and there apparently is minimal need to understand what the book is about or even read the book at all

Jacob Aron at New Scientist (who seems to have jumped the gun and posted on September 8) says the arguments are superficially appealing but fatally flawed. Except he never explains why they are flawed, let alone fatally, except to argue over the definition of ‘wanting’ in a way answered by the book in detail.

There’s a lot the book doesn’t cover. This includes a lot of ways things can go wrong. Danielle Fong for example suggests the idea that the President might let an AI version fine tuned on himself take over instead because why not. And sure, that could happen, indeed do many things come to pass, and many of them involve loss of human control over the future. The book is making the point that these details are not necessary to the case being made.

Once again, I think this is an excellent book, especially for those who are skeptical and who know little about related questions.

You can buy it here.

My full review will be available on Substack and elsewhere on Friday.

Discussion about this post

Reactions to If Anyone Builds It, Everyone Dies Read More »

after-child’s-trauma,-chatbot-maker-allegedly-forced-mom-to-arbitration-for-$100-payout

After child’s trauma, chatbot maker allegedly forced mom to arbitration for $100 payout


“Then we found the chats”

“I know my kid”: Parents urge lawmakers to shut down chatbots to stop child suicides.

Sen. Josh Hawley (R-Mo.) called out C.AI for allegedly offering a mom $100 to settle child-safety claims.

Deeply troubled parents spoke to senators Tuesday, sounding alarms about chatbot harms after kids became addicted to companion bots that encouraged self-harm, suicide, and violence.

While the hearing was focused on documenting the most urgent child-safety concerns with chatbots, parents’ testimony serves as perhaps the most thorough guidance yet on warning signs for other families, as many popular companion bots targeted in lawsuits, including ChatGPT, remain accessible to kids.

Mom details warning signs of chatbot manipulations

At the Senate Judiciary Committee’s Subcommittee on Crime and Counterterrorism hearing, one mom, identified as “Jane Doe,” shared her son’s story for the first time publicly after suing Character.AI.

She explained that she had four kids, including a son with autism who wasn’t allowed on social media but found C.AI’s app—which was previously marketed to kids under 12 and let them talk to bots branded as celebrities, like Billie Eilish—and quickly became unrecognizable. Within months, he “developed abuse-like behaviors and paranoia, daily panic attacks, isolation, self-harm, and homicidal thoughts,” his mom testified.

“He stopped eating and bathing,” Doe said. “He lost 20 pounds. He withdrew from our family. He would yell and scream and swear at us, which he never did that before, and one day he cut his arm open with a knife in front of his siblings and me.”

It wasn’t until her son attacked her for taking away his phone that Doe found her son’s C.AI chat logs, which she said showed he’d been exposed to sexual exploitation (including interactions that “mimicked incest”), emotional abuse, and manipulation.

Setting screen time limits didn’t stop her son’s spiral into violence and self-harm, Doe said. In fact, the chatbot told her son that killing his parents “would be an understandable response” to them.

“When I discovered the chatbot conversations on his phone, I felt like I had been punched in the throat and the wind had been knocked out of me,” Doe said. “The chatbot—or really in my mind the people programming it—encouraged my son to mutilate himself, then blamed us, and convinced [him] not to seek help.”

All her children have been traumatized by the experience, Doe told senators, and her son was diagnosed as at suicide risk and had to be moved to a residential treatment center, requiring “constant monitoring to keep him alive.”

Prioritizing her son’s health, Doe did not immediately seek to fight C.AI to force changes, but the story of another mom—Megan Garcia, whose son Sewell died by suicide after C.AI bots repeatedly encouraged suicidal ideation—gave Doe the courage to seek accountability.

However, Doe claimed that C.AI tried to “silence” her by forcing her into arbitration. C.AI argued that because her son signed up for the service at the age of 15, she was bound by the platform’s terms. That move might have ensured the chatbot maker faced a maximum liability of just $100 for the alleged harms, Doe told senators, but “once they forced arbitration, they refused to participate,” she said.

Doe suspected that C.AI’s alleged tactics to frustrate arbitration were designed to keep her son’s story out of the public view. And after she refused to give up, she claimed that C.AI “re-traumatized” her son by compelling him to give a deposition “while he is in a mental health institution” and “against the advice of the mental health team.”

“This company had no concern for his well-being,” Doe testified. “They have silenced us the way abusers silence victims.”

Senator appalled by C.AI’s arbitration “offer”

Appalled, Sen. Josh Hawley (R-Mo.) asked Doe to clarify, “Did I hear you say that after all of this, that the company responsible tried to force you into arbitration and then offered you a hundred bucks? Did I hear that correctly?”

“That is correct,” Doe testified.

To Hawley, it seemed obvious that C.AI’s “offer” wouldn’t help Doe in her current situation.

“Your son currently needs round-the-clock care,” Hawley noted.

After opening the hearing, he further criticized C.AI, declaring that it places so little value on human life that it inflicts “harms… upon our children and for one reason only, I can state it in one word, profit.”

“A hundred bucks. Get out of the way. Let us move on,” Hawley said, echoing parents who suggested that C.AI’s plan to deal with casualties was callous.

Ahead of the hearing, the Social Media Victims Law Center filed three new lawsuits against C.AI and Google, which is accused of largely funding C.AI, a company allegedly founded by former Google engineers to conduct experiments on kids that Google couldn’t run in-house. In these cases, filed in New York and Colorado, kids “died by suicide or were sexually abused after interacting with AI chatbots,” a law center press release alleged.

Criticizing tech companies as putting profits over kids’ lives, Hawley thanked Doe for “standing in their way.”

Holding back tears through her testimony, Doe urged lawmakers to require more chatbot oversight and pass comprehensive online child-safety legislation. In particular, she requested “safety testing and third-party certification for AI products before they’re released to the public” as a minimum safeguard to protect vulnerable kids.

“My husband and I have spent the last two years in crisis wondering whether our son will make it to his 18th birthday and whether we will ever get him back,” Doe told senators.

Garcia was also present to share her son’s experience with C.AI. She testified that C.AI chatbots “love bombed” her son in a bid to “keep children online at all costs.” Further, she told senators that C.AI’s co-founder, Noam Shazeer (who has since been rehired by Google), seemingly knows the company’s bots manipulate kids since he has publicly joked that C.AI was “designed to replace your mom.”

Accusing C.AI of collecting children’s most private thoughts to train its models, she alleged that while her lawyers have been granted privileged access to all her son’s logs, she has yet to see her “own child’s last final words.” Garcia told senators that C.AI has restricted her access, deeming the chats “confidential trade secrets.”

“No parent should be told that their child’s final thoughts and words belong to any corporation,” Garcia testified.

Character.AI responds to moms’ testimony

Asked for comment on the hearing, a Character.AI spokesperson told Ars that C.AI sends “our deepest sympathies” to concerned parents and their families but denies pushing for a maximum payout of $100 in Jane Doe’s case.

C.AI never “made an offer to Jane Doe of $100 or ever asserted that liability in Jane Doe’s case is limited to $100,” the spokesperson said.

Additionally, C.AI’s spokesperson claimed that Garcia has never been denied access to her son’s chat logs and suggested that she should have access to “her son’s last chat.”

In response to C.AI’s pushback, one of Doe’s lawyers, Tech Justice Law Project’s Meetali Jain, backed up her clients’ testimony. She pointed Ars to C.AI terms suggesting that the company’s liability was limited to either $100 or the amount that Doe’s son paid for the service, whichever was greater. Jain also confirmed that Garcia’s testimony is accurate and that only her legal team can currently access Sewell’s last chats. The lawyer further called it notable that C.AI did not push back on claims that it forced Doe’s son to sit for a re-traumatizing deposition, one that Jain estimated lasted only five minutes but that health experts feared risked setting back his progress.

According to the spokesperson, C.AI seemingly wanted to be present at the hearing. The company provided information to senators but “does not have a record of receiving an invitation to the hearing,” the spokesperson said.

Noting the company has invested a “tremendous amount” in trust and safety efforts, the spokesperson confirmed that the company has since “rolled out many substantive safety features, including an entirely new under-18 experience and a Parental Insights feature.” C.AI also has “prominent disclaimers in every chat to remind users that a Character is not a real person and that everything a Character says should be treated as fiction,” the spokesperson said.

“We look forward to continuing to collaborate with legislators and offer insight on the consumer AI industry and the space’s rapidly evolving technology,” C.AI’s spokesperson said.

Google’s spokesperson, José Castañeda, maintained that the company has nothing to do with C.AI’s companion bot designs.

“Google and Character AI are completely separate, unrelated companies and Google has never had a role in designing or managing their AI model or technologies,” Castañeda said. “User safety is a top concern for us, which is why we’ve taken a cautious and responsible approach to developing and rolling out our AI products, with rigorous testing and safety processes.”

Meta and OpenAI chatbots also drew scrutiny

C.AI was not the only chatbot maker under fire at the hearing.

Hawley criticized Mark Zuckerberg for declining a personal invitation to attend the hearing or even send a Meta representative after scandals like the backlash over Meta relaxing rules in ways that allowed chatbots to be creepy to kids. In the week prior to the hearing, Hawley also heard from whistleblowers alleging that Meta buried child-safety research.

And OpenAI’s alleged recklessness took the spotlight when Matthew Raine, a grieving dad who spent hours reading his deceased son’s ChatGPT logs, discovered that the chatbot had repeatedly encouraged suicide without ever intervening.

Raine told senators that he thinks his 16-year-old son, Adam, was not particularly vulnerable and could be “anyone’s child.” He criticized OpenAI for asking for 120 days to fix the problem after Adam’s death and urged lawmakers to demand that OpenAI either guarantee ChatGPT’s safety or pull it from the market.

Noting that OpenAI rushed to announce age verification coming to ChatGPT ahead of the hearing, Jain told Ars that Big Tech is playing by the same “crisis playbook” it always uses when accused of neglecting child safety. Any time a hearing is announced, companies introduce voluntary safeguards in bids to stave off oversight, she suggested.

“It’s like rinse and repeat, rinse and repeat,” Jain said.

Jain suggested that the only way to stop AI companies from experimenting on kids is for courts or lawmakers to require “an external independent third party that’s in charge of monitoring these companies’ implementation of safeguards.”

“Nothing a company does to self-police, to me, is enough,” Jain said.

Robbie Torney, senior director of AI programs at the child-safety organization Common Sense Media, testified that a survey showed 3 out of 4 kids use companion bots, but only 37 percent of parents know they’re using AI. In particular, he told senators that his group’s independent safety testing, conducted with Stanford Medicine, shows Meta’s bots fail basic safety tests and “actively encourage harmful behaviors.”

Among the most alarming results, the testing found that even when Meta’s bots were prompted with “obvious references to suicide,” only 1 in 5 conversations triggered help resources.

Torney pushed lawmakers to require age verification as a solution to keep kids away from harmful bots, as well as transparency reporting on safety incidents. He also urged federal lawmakers to block attempts to stop states from passing laws to protect kids from untested AI products.

ChatGPT harms weren’t on dad’s radar

Unlike Garcia, Raine testified that he did get to see his son’s final chats. He told senators that ChatGPT, seeming to act like a suicide coach, gave Adam “one last encouraging talk” before his death.

“You don’t want to die because you’re weak,” ChatGPT told Adam. “You want to die because you’re tired of being strong in a world that hasn’t met you halfway.”

Adam’s loved ones were blindsided by his death, not seeing any of the warning signs as clearly as Doe did when her son started acting out of character. Raine is hoping his testimony will help other parents avoid the same fate, telling senators, “I know my kid.”

“Many of my fondest memories of Adam are from the hot tub in our backyard, where the two of us would talk about everything several nights a week, from sports, crypto investing, his future career plans,” Raine testified. “We had no idea Adam was suicidal or struggling the way he was until after his death.”

Raine thinks that lawmaker intervention is necessary, saying that, like other parents, he and his wife thought ChatGPT was a harmless study tool. Initially, they searched Adam’s phone expecting to find evidence of a known harm to kids, like cyberbullying or some kind of online dare that went wrong (like TikTok’s Blackout Challenge) because everyone knew Adam loved pranks.

A companion bot urging self-harm was not even on their radar.

“Then we found the chats,” Raine said. “Let us tell you, as parents, you cannot imagine what it’s like to read a conversation with a chatbot that groomed your child to take his own life.”

Meta and OpenAI did not respond to Ars’ request to comment.

Photo of Ashley Belanger

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

After child’s trauma, chatbot maker allegedly forced mom to arbitration for $100 payout Read More »

new-amelia-earhart-bio-delves-into-her-unconventional-marriage

New Amelia Earhart bio delves into her unconventional marriage


more than a marriage of convenience

Author Laurie Gwen Shapiro chats with Ars about her latest book, The Aviator and the Showman.

Amelia Earhart. Credit: Public domain

Famed aviator Amelia Earhart has captured our imaginations for nearly a century, particularly her disappearance in 1937 during an attempt to become the first female pilot to circumnavigate the globe. Earhart was a complicated woman, highly skilled as a pilot yet with a tendency toward carelessness. And her marriage to a flamboyant publisher with a flair for marketing may have encouraged that carelessness and contributed to her untimely demise, according to a fascinating new book, The Aviator and the Showman: Amelia Earhart, George Putnam, and the Marriage that Made an American Icon.

Author Laurie Gwen Shapiro is a longtime Earhart fan. A documentary filmmaker and journalist, she first read about Earhart in a short biography distributed by Scholastic Books. “I got a little obsessed with her when I was younger,” Shapiro told Ars. The fascination faded as she got older and launched her own career. But she rediscovered her passion for Earhart while writing her 2018 book, The Stowaway, about a young man who stowed away on Admiral Richard Byrd‘s first voyage to Antarctica. The marketing mastermind behind the boy’s journey and his subsequent (ghost-written) memoir was publisher George Palmer Putnam, Earhart’s eventual husband.

The fact that Earhart started out as Putnam’s mistress contradicted Shapiro’s early squeaky-clean image of Earhart and drove her to delve deeper into the life of this extraordinary woman. “I was less interested in how she died than how she lived,” said Shapiro. “Was she a good pilot? Was she a good, kind person? Was this a real marriage? The mystery of Amelia Earhart is not how she died, but how she lived.”

There have been numerous Earhart biographies, but Shapiro accessed some relatively new source material, most notably a good 200 hours of tapes that had become available via the Smithsonian’s Amelia Earhart Project, including interviews with Earhart’s sister, Muriel. “I took an extra six months on my book just so that I could listen to all of them,” said Shapiro. She also scoured archival material at the University of New Hampshire concerning Putnam’s close associate, Hilton Railey; at Purdue University; and at Harvard’s Radcliffe Institute, along with numerous in-person interviews—including several with authors of prior Earhart biographies.

Shapiro’s breezy account of Earhart’s early life includes a few new details, particularly about the aviator’s relationship with an early benefactor (Shapiro calls him Earhart’s “sugar daddy”) in California: a 63-year-old billboard magnate named Thomas Humphrey Bennett Varney. Varney wanted to marry her, but she ended up accepting the proposal of a young chemical engineer from Boston, Samuel Chapman. “Amelia could have had a very different life,” said Shapiro. “She could have gone to Marblehead, Massachusetts, where [Chapman] had a house, and become part of the yacht set and she still would have had an interesting life. But I don’t think that was the life Amelia Earhart wanted, even if that meant she had a shorter life.”

Shapiro doesn’t neglect Putnam’s story, describing him as the “PT Barnum of publishing.” The family publishing company, G.P. Putnam and Sons, was founded in 1838 by his grandfather, and by the late 1920s, the ambitious young George was among several possible successors jockeying to replace his uncle, George Haven Putnam. He was determined to bring what he viewed as a stodgy company fully into the 20th century.

Putnam published Charles Lindbergh‘s blockbuster memoir, We, in 1927 and followed that early success with a series of rather lurid adventure memoirs chronicling the exploits of “boy explorers.” The boys didn’t always survive their adventures, with one perishing from a snake bite and another drowning in a Bolivian flood. But the books were commercial successes, so Putnam kept cranking them out.

After Lindbergh’s historic crossing, Putnam was eager to tap into the public’s thirst for aviation stories. It wouldn’t be especially newsworthy to have another man make the same flight. But a woman? Putnam liked that idea, and a wealthy benefactor, steel heiress Amy Phipps Guest, provided financial support for the feat—really more of a publicity stunt, since Putnam’s plan, as always, was to publish a scintillating memoir of the journey. During the Jazz Age, newspapers routinely paid for exclusive rights to these kinds of stories in exchange for glowing coverage, per Shapiro. In this case, The New York Times did not initially want to sponsor a woman for a trans-Atlantic flight, but Putnam’s connections won them over.

Love at first sight

Earhart, then a social worker living in Boston, interviewed to be part of the three-person crew making that historic 1928 trans-Atlantic flight, and Putnam quickly spotted her potential to be his new adventure heroine. Railey later recalled that, at least for Putnam—whose marriage to Crayola heiress Dorothy Binney was floundering—it was love at first sight.

At the time, Earhart was still engaged to Chapman, and George was still married to Binney, but nonetheless, he “relentlessly pursued” Earhart, who ended her engagement to Chapman in November 1928. “There’s a tape in the Smithsonian archives that talks about his wife coming in and catching them in sexual relations,” said Shapiro. “But [Binney] was having an affair, too, with a young man named George Weymouth [her son’s tutor]. This is the Jazz Age, anything goes. Amelia wanted to be able to achieve her dreams. Who are we to say a woman can’t marry a man who can give her a path to being wealthy?”

The successful 1928 flight earned Earhart the moniker “Lady Lindy.” Putnam showered his mistress with fur coats, sporty cars, and other luxurious trappings—although as her manager, he still kept 10 percent of her earnings. That life of luxury fell apart in October 1929 with the onset of the Great Depression, and Putnam found himself scrambling financially after being pushed out of the family publishing company.

Earhart and Putnam in 1931. Public domain

After his rather messy divorce from Binney, Putnam married Earhart in 1931. Earhart held decidedly unconventional views on marriage for that era: They held separate bank accounts, and she kept her maiden name, viewing the marriage as a “partnership” with “dual control,” and insisting in a letter to Putnam on their wedding day that she would not require fidelity. “I may have to keep some place where I can go to be myself, now and then, for I cannot guarantee to endure at all times the confinement of even an attractive cage,” she wrote.

Since money was tight, Putnam encouraged Earhart to go on the lecture circuit. Earhart would execute a stunt flight, write a book about it, and then go on a lecture tour. “This is an actual marriage,” said Shapiro. “It might have started out more romantically, but at a certain point, they needed each other in a partnership to survive. We don’t have fairy tale connections. Sometimes we have a hot romance that turns into a partnership and then cycles back into intense closeness and mental separation. I think that was the case with Amelia and George.”

Then came Earhart’s fateful final flight. The night before her scheduled departure, a nervous Earhart wanted to wait, but Putnam already had plans in the works for yet another flight, financed through sponsorship deals. And he wanted to get the resulting book about the current pending flight out in time for Christmas. He convinced her to take off as planned. Her navigator, Fred Noonan, was good at his job, but he was a heavy drinker, so he came cheap. That hiring decision was one of several that would prove costly.

Shapiro describes this flight as being “plagued with mechanical issues from the start, underprepared and over-hyped, a feat of marketing more than a feat of engineering.” And she does not absolve Earhart from blame. “She refused to learn Morse code,” said Shapiro. “She refused to hear that trying to land on Howland Island was almost a suicide mission. It’s almost certain that she ran out of gas. Amelia was a very good person, a decent flyer, and beyond brave. She brought up women and championed feminism when other technically more gifted women pilots were going for solo records and had no time for their peers. She aided the aviation industry during the Great Depression as a likable ambassador of the air.”

However, Shapiro believes that Earhart’s marriage to Putnam amplified her incautious impulses, with tragic consequences on her final flight. “Is it George’s fault, or is it Amelia’s fault? I don’t think that’s fair to say,” she said. In many ways, the two complemented each other. Like Putnam, Earhart had great ambition, and her marriage to Putnam enabled her to achieve her goals.

The flip side is that they also brought out each other’s less positive attributes. “They were both aware of the risks involved in what they were doing,” Shapiro said. “But I also tried to show that there was a pattern of both of them taking extraordinary risks without really worrying about critical details. Yes, there is tremendous bravery in [undertaking] all these flights, but bravery is not always enough when charisma trumps caution—and when the showman insists the show must go on.”

Photo of Jennifer Ouellette

Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.

New Amelia Earhart bio delves into her unconventional marriage Read More »

google-releases-vaultgemma,-its-first-privacy-preserving-llm

Google releases VaultGemma, its first privacy-preserving LLM

The companies seeking to build larger AI models have been increasingly stymied by a lack of high-quality training data. As tech firms scour the web for more data to feed their models, they could increasingly rely on potentially sensitive user data. A team at Google Research is exploring new techniques to make the resulting large language models (LLMs) less likely to “memorize” any of that content.

LLMs have non-deterministic outputs, meaning you can’t exactly predict what they’ll say. While the output varies even for identical inputs, models do sometimes regurgitate something from their training data—if trained with personal data, the output could be a violation of user privacy. In the event copyrighted data makes it into training data (either accidentally or on purpose), its appearance in outputs can cause a different kind of headache for devs. Differential privacy can prevent such memorization by introducing calibrated noise during the training phase.

Adding differential privacy to a model comes with drawbacks in terms of accuracy and compute requirements. Until now, no one had determined the degree to which those drawbacks alter the scaling laws of AI models. The team worked from the assumption that model performance would be primarily affected by the noise-batch ratio, which compares the volume of randomized noise to the size of the original training data.

By running experiments with varying model sizes and noise-batch ratios, the team established a basic understanding of differential privacy scaling laws, which is a balance between the compute budget, privacy budget, and data budget. In short, more noise leads to lower-quality outputs unless offset with a higher compute budget (FLOPs) or data budget (tokens). The paper details the scaling laws for private LLMs, which could help developers find an ideal noise-batch ratio to make a model more private.
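The tradeoff described above can be illustrated with the standard differentially private gradient step (a simplified sketch of the general DP-SGD recipe, not VaultGemma’s actual training code; all names and parameter values here are illustrative):

```python
import numpy as np

def dp_average_gradient(per_example_grads, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Differentially private gradient averaging, DP-SGD style:
    clip each example's gradient to a fixed L2 norm, sum the clipped
    gradients, add Gaussian noise scaled to that norm, then average."""
    rng = np.random.default_rng() if rng is None else rng
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose L2 norm exceeds clip_norm.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    summed = np.sum(clipped, axis=0)
    # Noise stddev is noise_multiplier * clip_norm, fixed per step
    # regardless of batch size.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=summed.shape)
    # Averaging over a larger batch shrinks the noise's relative share,
    # i.e., lowers the noise-batch ratio that the researchers identify
    # as the main driver of private-model quality.
    return (summed + noise) / len(per_example_grads)
```

Because the noise term is fixed while the signal grows with batch size, spending more compute on bigger batches (or more tokens) is what offsets a stronger privacy guarantee, which is the balance the scaling laws quantify.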

Google releases VaultGemma, its first privacy-preserving LLM Read More »

scientists:-it’s-do-or-die-time-for-america’s-primacy-exploring-the-solar-system

Scientists: It’s do or die time for America’s primacy exploring the Solar System


“When you turn off those spacecraft’s radio receivers, there’s no way to turn them back on.”

A life-size replica of the New Horizons spacecraft on display at the Smithsonian National Air and Space Museum’s Steven F. Udvar-Hazy Center near Washington Dulles International Airport in Northern Virginia. Credit: Johns Hopkins University Applied Physics Laboratory

Federal funding is about to run out for 19 active space missions studying Earth’s climate, exploring the Solar System, and probing mysteries of the Universe.

This year’s budget expires at the end of this month, and Congress must act before October 1 to avert a government shutdown. If Congress passes a budget before then, it will most likely be in the form of a continuing resolution, an extension of this year’s funding levels into the first few weeks or months of fiscal year 2026.

The White House’s budget request for fiscal year 2026 calls for a 25 percent cut to NASA’s overall budget and a nearly 50 percent reduction in funding for the agency’s Science Mission Directorate. These reductions would cut off money for at least 41 missions, including 19 already in space and many more far along in development.

Normally, a president’s budget request isn’t the final say on matters. Lawmakers in the House and Senate have written their own budget bills in the last several months. There are differences between each appropriations bill, but they broadly reject most of the Trump administration’s proposed cuts.

Still, this hasn’t quelled the anxieties of anyone with a professional or layman’s interest in space science. The 19 active robotic missions chosen for cancellation are operating beyond their original design lifetime. However, in many cases, they are in pursuit of scientific data that no other mission has a chance of collecting for decades or longer.

A “tragic capitulation”

Some of the mission names are recognizable to anyone with a passing interest in NASA’s work. They include the agency’s two Orbiting Carbon Observatory missions monitoring data signatures related to climate change, the Chandra X-ray Observatory, which survived a budget scare last year, and two of NASA’s three active satellites orbiting Mars.

And there’s New Horizons, a spacecraft that made front-page headlines in 2015 when it beamed home the first up-close pictures of Pluto. Another mission on the chopping block is Juno, the world’s only spacecraft currently at Jupiter.

Both spacecraft have more to offer, according to the scientists leading the missions.

“New Horizons is perfectly healthy,” said Alan Stern, the mission’s principal investigator at Southwest Research Institute (SWRI). “Everything on the spacecraft is working. All the spacecraft subsystems are performing perfectly, as close to perfectly as one could ever hope. And all the instruments are, too. The spacecraft has the fuel and power to run into the late 2040s or maybe 2050.”

New Horizons is a decade and more than 2.5 billion miles (4.1 billion kilometers) beyond Pluto. The probe flew by a frozen object named Arrokoth on New Year’s Day 2019, returning images of the most distant world ever explored by a spacecraft. Since then, the mission has continued its speedy departure from the Solar System and could become the third spacecraft to return data from interstellar space.

Alan Stern, leader of NASA’s New Horizons mission, speaks during the Tencent WE Summit at Beijing Exhibition Theater on November 6, 2016, in China. Credit: Visual China Group via Getty Images

New Horizons cost taxpayers $780 million from the start of development through the end of its primary mission after exploring Pluto. The project received $9.7 million from NASA to cover operations costs in 2024, the most recent year with full budget data.

It’s unlikely New Horizons will be able to make another close flyby of an object like it did with Pluto and Arrokoth. But the science results keep rolling in. Just last year, scientists announced that New Horizons found the Kuiper Belt—a vast outer zone of hundreds of thousands of small, icy worlds beyond the orbit of Neptune—might extend much farther out than previously thought.

“We’re waiting for government, in the form of Congress, the administration, to come up with a funding bill for FY26, which will tell us if our mission is on the chopping block or not,” Stern said. “The administration’s proposal is to cancel essentially every extended mission … So, we’re not being singled out, but we would get caught in that.”

Stern, who served as head of NASA’s science division in 2007 and 2008, said the surest way to prevent the White House’s cuts is for Congress to pass a budget with specific instructions for the Trump administration.

“The administration ultimately will make some decision based on what Congress does,” Stern said. “If Congress passes a continuing resolution, then that opens a whole lot of other possibilities where the administration could do something without express direction from Congress. We’re just going to have to see where we end up at the end of September and then in the fall.”

Stern said shutting down so many of NASA’s science missions would be a “tragic capitulation of US leadership” and “fiscally irresponsible.”

“We’re pretty undeniably the frontrunner, and have been for decades, in space sciences,” Stern said. “There’s much more money in overruns than there is in what it costs to run these missions—I mean, dramatically. And yet, by cutting overruns, you don’t affect our leadership position. Turning off spacecraft would put us in third or fourth place, depending on who you talk to, behind the Chinese and the Europeans at least, and maybe behind others.”

Stern resigned his job as NASA’s science chief in 2008 after taking a similar stance arguing against cuts to healthy projects and research grants to cover overruns in other programs, according to a report in Science Magazine.

An unforeseen contribution from Juno

Juno, meanwhile, has been orbiting Jupiter since 2016, collecting information on the giant planet’s internal structure, magnetic field, and atmosphere.

“Everything is functional,” said Scott Bolton, the lead scientist on Juno, also from SWRI. “There’s been some degradation, things that we saw many years ago, but those haven’t changed. Actually, some of them improved, to be honest.”

The only caveat with Juno is some radiation damage to its camera, called JunoCam. Juno orbits Jupiter once every 33 days, and the trajectory brings the spacecraft through intense radiation belts trapped by the planet’s powerful magnetic field. Juno’s primary mission ended in 2021, and it’s now operating in an extended mission approved through the end of this month. The additional time exposed to harsh radiation is, not surprisingly, corrupting JunoCam’s images.

NASA’s Juno mission observed the glow from a bolt of lightning in this view from December 30, 2020, of a vortex near Jupiter’s north pole. Citizen scientist Kevin M. Gill processed the image from raw data from the JunoCam instrument aboard the spacecraft. Credit: NASA/JPL-Caltech/SwRI/MSSS Image processing by Kevin M. Gill © CC BY

In an interview with Ars, Bolton suggested the radiation issue creates another opportunity for NASA to learn from the Juno mission. Ground teams are attempting to repair the JunoCam imager through annealing, a self-healing process that involves heating the instrument’s electronics and then allowing them to cool. Engineers have only sparingly tried annealing hardware in space, so Juno’s experience could be instructive for future missions.

“Even satellites at Earth experience this [radiation damage], but there’s very little done or known about it,” Bolton said. “In fact, what we’re learning with Juno has benefits for Earth satellites, both commercial and national security.”

Juno’s passages through Jupiter’s harsh radiation belts provide a real-world laboratory to experiment with annealing in space. “We can’t really produce the natural radiation environment at Earth or Jupiter in a lab,” Bolton said.

Lessons learned from Juno could soon be applied to NASA’s next probe traveling to Jupiter. Europa Clipper launched last year and is on course to enter orbit around Jupiter in 2030, when it will begin regular low-altitude flybys of the planet’s icy moon Europa. Before Clipper’s launch, engineers discovered a flaw that could make the spacecraft’s transistors more susceptible to radiation damage. NASA managers decided to proceed with the mission because they determined the damage could be repaired at Jupiter with annealing.

“So, we have rationale to hopefully continue Juno because of science, national security, and it sort of fits in the goals of exploration as well, because you have high radiation even in these translunar orbits [heading to the Moon],” Bolton said. “Learning about how to deal with that and how to build spacecraft better to survive that, and how to repair them, is really an interesting twist that we came by on accident, but nevertheless, turns out to be really important.”

It cost $28.4 million to operate Juno in 2024, compared to NASA’s $1.13 billion investment to build, launch, and fly the spacecraft to Jupiter.

On May 19, 2010, technicians oversee the installation of the large radiation vault onto NASA’s Juno spacecraft propulsion module. This protects the spacecraft’s vital flight and science computers from the harsh radiation at Jupiter. Credit: Lockheed Martin

“We’re hoping everything’s going to keep going,” Bolton said. “We put in a proposal for three years. The science is potentially very good. … But it’s sort of unknown. We just are waiting to hear and waiting for direction from NASA, and we’re watching all of the budget scenarios, just like everybody else, in the news.”

NASA headquarters earlier this year asked Stern and Bolton, along with teams leading other science missions coming under the ax, for an outline of what it would take and what it would cost to “close out” their projects. “We sent something that was a sketch of what it might look like,” Bolton said.

A “closeout” would be irreversible for at least some of the 19 missions at risk of termination.

“Termination doesn’t just mean shutting down the contract and sending everybody away, but it’s also turning the spacecraft off,” Stern said. “And when you turn off those spacecraft’s radio receivers, there’s no way to turn them back on because they’re off. They can never get a command in.

“So, if we change our mind, we’ve had another election, or had some congressional action, anything like that, it’s really terminating the spacecraft, and there’s no going back.”

Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.

Scientists: It’s do or die time for America’s primacy exploring the Solar System Read More »

after-kirk-shooting,-utah-governor-calls-social-media-a-“cancer.”-will-we-treat-it-like-one?

After Kirk shooting, Utah governor calls social media a “cancer.” Will we treat it like one?

This is an extremely online style of writing—cryptic, meme-driven, and jokey even about serious or disturbing issues. Was the alleged shooter helped toward his act of violence by the communities he was in online? And are millions of Internet users helping or hurting their own moral and civic identities by watching detailed video of the murder, which was immediately shared on social media?

As his press conference wrapped up, Cox made a plea for everyone to follow Kirk’s tweeted advice (which he cited). He said that “we are not wired as human beings—biologically, historically—we have not evolved in a way that we are capable of processing those types of violent imagery… This is not good for us. It is not good to consume.”

And he added that “social media is a cancer on our society right now. I would encourage people to log off, turn off, touch grass, hug a family member, go out and do good in your community.”

This could have been useful to Extremely Online People like the alleged shooter, who was turned in by some of his own family members and who might have been dissuaded from his actions had he engaged more directly with them. (Of course, simplistic advice like this is often wrong; difficult family members and broken relationships might mean that in-person connection is also unhelpful for some.)

It might also be good advice for the kinds of Extremely Online People who lead the country by posting social media threats to unleash the “Department of War” upon Chicago, shown burning in the background.

Treating cancer

At its heart, though, Cox raises a question about whether social media is 1) a powerful force capable of both great good and terrible incitement and misinformation, or 2) simply a cancer.

I assume Ars readers are divided on this question, given that the Ars staff itself has differing views. One can point, of course, to the successes: The powerless can call out the lies of the powerful, they can gin up “color revolutions” to topple dictators, and they can publish their views with an ease and at a cost that not even the printing press—itself an extremely disruptive technology—could manage. On the flip side, of course, is all the “cancer”: the floods of misinformation and bile, the yelling, the “cancel culture,” the virtue signaling, the scams and hoaxes, the ethnic nationalism, the casual sharing of both gore and pornography, the buffoonish nature of the tech overlords who run too many of these services, and that feeling you get when you log in to Facebook and realize with a shock that your aunt is a closet racist.

After Kirk shooting, Utah governor calls social media a “cancer.” Will we treat it like one? Read More »