Author name: Paul Patrick


As White House talks about impounding NASA funding, Congress takes the threat seriously

This year, given the recent action on budget measures, it is possible that Congress could pass appropriations legislation for most of the federal government, including NASA, before October 1.

Certainly there is motivation to do so, because the White House and its Office of Management and Budget, led by Russ Vought, have indicated that, in the absence of appropriations legislation, they plan to take measures that would implement the President’s Budget Request, which set significantly lower spending levels for NASA and other federal agencies.

For example, as Ars reported earlier this month, the principal investigators of NASA science missions that the White House seeks to kill have been told to create termination plans that could be implemented within three months, beginning as soon as October 1.

Whether there is a continuing resolution or a shutdown, then, the White House appears likely to go to court to implement its spending priorities at federal agencies, including NASA.

Congress acknowledges the threat

This week, the ranking members of the House committee with oversight over NASA publicly raised the alarm about this in a letter to Sean Duffy, the Secretary of Transportation who was recently also named interim administrator of NASA.

NASA appears to be acting in accordance with a fringe, extremist ideology emanating from the White House Office of Management and Budget that asserts a right to impound funds appropriated by Congress for the sake of executive branch priorities. Moreover, it now appears that the agency intends to implement funding cuts that were never enacted by Congress in order to “align” the agency’s present-day budget with the Trump Administration’s slash-and-burn proposed budget for the next fiscal year, with seemingly no concern for the devastation that will be caused by mass layoffs, widespread program terminations, and the possible closure of critical centers and facilities. These decisions are wrong, and they are not yours to make.

The letter reminds Duffy that Congress sets the budget, and federal agencies work toward those budget levels. However, the legislators say, NASA is moving ahead with funding freezes for various programs and is reducing employees across the agency. Approximately 2,700 employees have left the agency since the beginning of the Trump Administration.



Netflix’s first show with generative AI is a sign of what’s to come in TV, film

Netflix used generative AI in an original, scripted series that debuted this year, it revealed this week. Producers used the technology to create a scene in which a building collapses, hinting at the growing use of generative AI in entertainment.

During a call with investors yesterday, Netflix co-CEO Ted Sarandos revealed that Netflix’s Argentine show The Eternaut, which premiered in April, is “the very first GenAI final footage to appear on screen in a Netflix, Inc. original series or film.” Sarandos further explained, per a transcript of the call:

The creators wanted to show a building collapsing in Buenos Aires. So our iLine team, [which is the production innovation group inside the visual effects house at Netflix effects studio Scanline], partnered with their creative team using AI-powered tools. … And in fact, that VFX sequence was completed 10 times faster than it could have been completed with visual, traditional VFX tools and workflows. And, also, the cost of it would just not have been feasible for a show in that budget.

Sarandos claimed that viewers have been “thrilled with the results,” although that likely has much to do with how the rest of the series, based on a comic, plays out, not just one AI-crafted scene.

More generative AI on Netflix

Still, Netflix seems open to using generative AI in shows and movies more, with Sarandos saying the tech “represents an incredible opportunity to help creators make films and series better, not just cheaper.”

“Our creators are already seeing the benefits in production through pre-visualization and shot planning work and, certainly, visual effects,” he said. “It used to be that only big-budget projects would have access to advanced visual effects like de-aging.”



Experts lay into Tesla safety in federal autopilot trial

For example, Tesla “clearly recognized that mode confusion is an issue—this is where people, for example, think the car is in Autopilot and don’t understand that the Autopilot has disengaged,” she told the court.

Cummings also referred to the deposition of Tesla Autopilot firmware engineer Ajshay Phatak. According to Phatak’s deposition, the company did not keep good track of Autopilot crashes prior to 2018, and Cummings pointed out that “it was clear they knew that they had a big problem with people ignoring the warnings. Ignoring the hands-on requests. And…as you know, prior to this accident. It was known to Tesla that they were having problems with people ignoring their warnings.”

Tesla’s abuse of statistics to make misleading claims about safety is nothing new: In 2017, Ars found out that Tesla’s claim about Autopilot reducing crashes was not at all backed by data, which, in fact, showed the driver assist increased crash rates.

Mendel Singer, a statistician at Case Western Reserve University School of Medicine, was very unimpressed with Tesla’s approach to crash data statistics in his testimony. Singer noted that he was “not aware of any published study, any reports that are done independently… where [Tesla] actually had raw data and could validate it to see does it tend to make sense” and that the car company was not comparing like with like.

“Non-Tesla crashes are counted based on police reports, regardless of safety system deployment,” Singer said. Further, Tesla kept misleading claims about safety on its website for years, Singer pointed out. When asked whether he would have accepted a paper for peer review from Tesla regarding its reports, “that would have been a really quick and easy rejection,” he said.

While it’s possible that Tesla will still settle this case, we may also see the trial carried out to its conclusion.

“The plaintiffs in this instance have already received compensation from the driver of the Tesla in question, apparently in a decent amount. My understanding is that this makes them much less likely to take the kinds of offers Tesla has been making for settlements, and this is more about the justice,” said Edward Niedermeyer, author and long-time Tesla-watcher.

“That said, the judge in the case has made some frustrating rulings around confidentiality on key issues, so it’s possible that may be in Tesla’s favor. They could also just up their settlement offer enough to be impossible to refuse,” Niedermeyer said.



Apple sues YouTuber who leaked iOS 26’s new “Liquid Glass” software redesign

“Defendants’ misconduct was brazen and egregious,” says Apple’s filing. “After Mr. Prosser learned that Mr. Ramacciotti needed money, and that his friend Ethan Lipnik worked at Apple on unreleased software designs, Defendants jointly planned to access Apple’s confidential and trade secret information through Mr. Lipnik’s Apple-owned development iPhone.”

Apple’s main source of information appears to be an audio message sent to Lipnik by Ramacciotti, which Lipnik then provided to Apple. An April 4 email from an anonymous source, also shared in the filing, named Lipnik as the source of the leaks and alleged the involvement of Ramacciotti and three other names that are blacked out.

According to the filing, Lipnik has been fired from Apple “for failing to follow Apple’s policies designed to protect its confidential information, including development devices and unreleased software and features.” The filing also accuses Lipnik of failing to report “multiple prior breaches” to Apple.

For his part, Prosser claims that Apple’s timeline of events is incorrect.

“This is not how the situation played out on my end,” Prosser posted to social media late yesterday. “Luckily have receipts for that. I did not ‘plot’ to access anyone’s phone. I did not have any passwords. I was unaware of how the information was obtained. Looking forward to speaking with Apple on this.”

Prosser then posted a screenshot from a messaging app, dated to February, which implies that he had been sent the information about the Liquid Glass redesign unsolicited.

Apple’s suit is seeking damages from Prosser and Ramacciotti, and it wants “to protect its trade secrets” and “prevent Messrs. Ramacciotti and Prosser from continuing to act unlawfully.” Even though the company has already publicly announced iOS 26 and the Liquid Glass design, Apple describes Prosser and Ramacciotti as “an ongoing threat” because Lipnik’s phone “contained other announced design elements that remain confidential.”



Rocket Report: SpaceX won’t land at Johnston Atoll; new North Sea launch site


All the news that’s fit to lift

“Europe is seizing the opportunity to lead.”

NASA astronauts Mike Fincke (left) and Zena Cardman (right), the pilot and commander of NASA’s SpaceX Crew-11 mission to the International Space Station, view a Falcon 9 rocket ahead of their spaceflight. Credit: SpaceX

Welcome to Edition 8.03 of the Rocket Report! We are at an interesting stage in Europe’s efforts to commercialize spaceflight. Finally, it seems the long-slumbering continent is waking up to the need to leverage private capital to drive down the costs of space access, and we are seeing more investment flow into European companies. But it is critical that European policymakers make strategic investments across the industry, or companies like PLD Space, which outlined big plans this week, will struggle to get off the launch pad.

As always, we welcome reader submissions, and if you don’t want to miss an issue, please subscribe using the box below (the form will not appear on AMP-enabled versions of the site). Each report will include information on small-, medium-, and heavy-lift rockets, as well as a quick look ahead at the next three launches on the calendar.

Avio celebrates freedom from Arianespace. Representatives from Italy, Germany, and France met at the European Space Agency headquarters last week to sign the Launcher Exploitation Declaration, which officially began the transfer of Vega C launch operation responsibilities from Arianespace to the rocket’s builder, Avio, European Spaceflight reports. “It is a historic step that reinforces our nation’s autonomy in access to space and assigns us a strategic responsibility towards Europe,” said Avio CEO Giulio Ranzo. “We are ready to meet this challenge with determination, and we are investing in technologies, expertise, and infrastructure to ensure a competitive service.”

A breaking of long-term partnerships … In addition to securing control over the full exploitation of the Vega launch vehicle family, Italy, through Avio, is also investing in what comes next. The country has committed more than 330 million euros to the development of the MR60 methalox rocket engine and two demonstrator vehicles. These, along with the MR10 engine being developed under the Vega E programme, will support Avio’s preparation of a future reusable launch vehicle. Historically, France, Germany, and Italy have worked together on European launch vehicles. This appears to be another step in breaking up that long-term partnership toward more nationalistic efforts.

PLD Space outlines grand ambitions. PLD Space, Spain’s sole contestant in the European Launcher Challenge, unveiled its long-term strategy at the company’s Industry Days event this week, Payload reports. The company is targeting a production rate of 32 Miura 5 launchers annually by 2030. To achieve this output, PLD plans to deepen its vertical integration, consolidate its supplier network, and begin to serialize its manufacturing process beginning in 2027.

Building up the supply chain … The company’s production plans also call for the parallel development of Miura Next, a heavy-lift vehicle capable of bringing 13 tons to orbit. However, the company will start with the Miura 5 vehicle, which PLD expects to launch for the first time from French Guiana in 2026. Since the beginning of 2024, PLD has invested a total of 50 million euros in its Miura 5 supply chain, consisting of 397 industrial partners, many of which are located in Spain and other European countries.  These plans are great, but sooner or later, the 14-year-old company needs to start putting rockets in space. (submitted by EllPeaTea)

The easiest way to keep up with Eric Berger’s and Stephen Clark’s reporting on all things space is to sign up for our newsletter. We’ll collect their stories and deliver them straight to your inbox.


New consortium will study space plane. A UK-based space and defense consultant group, Frazer-Nash, will lead a program to design a vehicle and its integrated systems with the goal of building and flying a Mach 5-capable aircraft at the edge of space by early 2031. This so-called INVICTUS program was funded with a 7 million-euro grant from the European Space Agency and is seen as a stepping stone toward developing a reusable space plane that takes off and lands horizontally from a runway.

Seeking to lead a new era of flight … Over 12 months, INVICTUS has been tasked to deliver the concept and elements of the preliminary design of the full flight system. It will attempt to demonstrate the efficacy of hydrogen-fueled, precooled air-breathing propulsion at hypersonic speeds, technology that will ultimately enable horizontal take-off. “With INVICTUS, Europe is seizing the opportunity to lead in technologies that will redefine how we move across the planet and reach beyond it,” said Tommaso Ghidini, head of the Mechanical Department at the European Space Agency. (submitted by Jid)

ESA backs North Sea launch site. A private company developing a launch site in the North Sea, EuroSpaceport, has secured support from the European Space Agency. The company, founded five years ago, is developing a sea-based launch platform built on a repurposed offshore wind turbine service vessel, European Spaceflight reports. Rockets are envisioned to launch from a position 50 to 100 km offshore from the port of Esbjerg, in Denmark.

Seeing the forest for the trees … On Wednesday, EuroSpaceport announced that it had signed an agreement with the European Space Agency and Polish rocket builder SpaceForest to support the first launch from its Spaceport North Sea platform. The company will receive support from the agency through its Boost! Program. SpaceForest has been a recipient of Boost! funding, receiving 2.4 million euros in October 2024. SpaceForest said the mission will be used to verify the launch procedures of its Perun rocket under nominal suborbital conditions. (submitted by EllPeaTea)

Amazon and SpaceX, best frenemies? Maybe not, but for the time being, they appear to be friends of convenience. A Falcon 9 rocket launched from Florida’s Space Coast early on Wednesday with a batch of Internet satellites for Amazon’s Project Kuiper network, thrusting a rival one step closer to competing with SpaceX’s Starlink broadband service. With this launch, Amazon now has 78 Kuiper satellites in orbit, Ars reports. The full Kuiper constellation will consist of 3,232 satellites to provide broadband Internet service to most of the populated world, bringing Amazon into competition with SpaceX’s Starlink network.

Launch is not cheap … Kuiper is an expensive undertaking, estimated at between $16.5 billion and $20 billion by the industry analytics firm Quilty Space. Quilty has concluded that Amazon is spending $10 billion on launch alone, exceeding the company’s original cost estimate for the entire program. Amazon has booked more than 80 launches to deploy the Kuiper constellation, but the company didn’t turn to SpaceX until it had to. A shareholder lawsuit filed in 2023 accused Amazon founder Jeff Bezos and the company’s board of directors of breaching their “fiduciary duty” by not considering SpaceX as an option for launching Kuiper satellites. The plaintiffs in the lawsuit alleged Amazon didn’t consider the Falcon 9 due to an intense and personal rivalry between Bezos and SpaceX founder Elon Musk. Amazon bowed to the allegations and announced a contract with SpaceX for three Falcon 9 launches in December 2023 to provide “additional capacity” for deploying the Kuiper network.

NASA targets end of July for Crew-11. NASA said Monday that it and SpaceX were targeting July 31 for the flight of SpaceX’s Crew-11 mission to the orbiting outpost, Spaceflight Now reports. The mission is led by NASA astronaut Zena Cardman. She will be flying along with fellow NASA astronaut Mike Fincke, Japan Aerospace Exploration Agency (JAXA) astronaut Kimiya Yui and Roscosmos cosmonaut Oleg Platonov.

Pushing Dragon reuse … The mission was moved up from its previously planned August launch window to create more room in the manifest for the arrival of the Cargo Dragon flying the CRS-33 mission. That Dragon will perform a boost of the space station as a demonstration of some of the capabilities SpaceX will use on its US Deorbit Vehicle currently in work. Crew-11 will fly to the orbiting outpost on Crew Dragon Endeavour, which will be its sixth trip to the ISS. This will be the first Crew Dragon spacecraft to fly for a sixth time.

SpaceX won’t use Johnston Atoll for rocket cargo tests. Johnston Atoll, an unincorporated US territory and Pacific island wildlife refuge with a complicated military history, will no longer become a SpaceX reusable rocket test site, Popular Science reports. “The Department of the Air Force has elected to hold the preparation of the Johnston Atoll Environmental Assessment for a proposed rocket cargo landing demonstration on Johnston Atoll in abeyance while the service explores alternative options for implementation,” Air Force spokesperson Laurel Falls said.

Taking a toll on the atoll … Located roughly 860 miles southwest of Hawaii, Johnston Atoll has served as a base for numerous US military operations for over 90 years. Despite this, the atoll remains a home for 14 tropical bird species as part of the Pacific Remote Islands Marine National Monument. The site had been under consideration for tests as part of a military program to deliver cargo around the planet, using suborbital missions on rockets such as SpaceX’s Starship vehicle. The Johnston Atoll plans included the construction of two landing pads that were met with public backlash from wildlife experts and Indigenous representatives. (submitted by Tfargo04)

Blue Origin confirms ESCAPADE is up next. On Thursday, Blue Origin said on social media that the second launch of its New Glenn rocket will carry NASA’s ESCAPADE mission as its primary payload. This launch will support ESCAPADE’s science objectives as the twin spacecraft progress on their journey to the Red Planet. Also onboard is a technology demonstration from Viasat in support of NASA’s Communications Services Project.

Left unsaid was when the launch will occur … The social media post confirms a report from Ars in June, which said the ESCAPADE spacecraft was up next on New Glenn. Previously, the company said this second launch would take place no earlier than August 15. However, that is less than one month away. Late September is probably the earliest realistic launch date, with October or November more likely for the second flight of the company’s large rocket.

Next three launches

July 19: Falcon 9 | Starlink 17-3 | Vandenberg Space Force Base, California | 03:44 UTC

July 21: Falcon 9 | O3b mPOWER 9 & 10 | Cape Canaveral Space Force Station, Florida | 21:00 UTC

July 22: Falcon 9 | NASA’s Tandem Reconnection and Cusp Electrodynamics Reconnaissance Satellites | Vandenberg Space Force Base, California | 18:05 UTC


Eric Berger is the senior space editor at Ars Technica, covering everything from astronomy to private space to NASA policy, and author of two books: Liftoff, about the rise of SpaceX; and Reentry, on the development of the Falcon 9 rocket and Dragon. A certified meteorologist, Eric lives in Houston.



The Pixel Watch 4 might not become e-waste if you damage it

This would be an important upgrade, even if it’s not flashy. That glass dome on Google’s watches is just begging to get cracked or scratched—it’s standard Gorilla Glass rather than the sapphire glass you see on Apple and Samsung watches. For the first three models, a cracked screen or dissolving glue on the bottom has meant that the watch was effectively e-waste. Even if you were willing to pay Google to repair it, that was not an option.

Google never had a satisfactory answer for this approach to wearables. In lieu of repairs, it offered a protection plan, which guaranteed you a replacement watch if yours was damaged. For $4 per month and $49 per incident, Google would simply send a new watch. And it would actually be new; there are no officially refurbished Pixel Watches because Google doesn’t repair them.

Not supporting repairs was a bizarre decision for a company that so often promotes its commitment to sustainability. Google has, at times, fallen short of those ideals, perhaps most notably with the defective batteries in its A-series Pixel phones. But that’s at least theoretically an unforeseeable outcome. Google intentionally designed the Pixel Watches such that they could not be repaired.

Google should never have released a single watch that couldn’t be repaired, let alone three of them. Hopefully, this report is accurate, and Google will right this wrong with the Pixel Watch 4.



Everything we learned from a week with Apple CarPlay Ultra


CarPlay Ultra takes over the main instrument display as well as the infotainment.


Aston Martin is the first automaker to adopt Apple’s CarPlay Ultra, which takes over all the displays in the car. Credit: Michael Teo Van Runkle


For the 2025 model year, Aston Martin’s user interface took a major step forward across the lineup, with improvements to the physical controls and digital infotainment, as well as updated gauge cluster layouts. However, the big news dropped in the spring, when Aston and Apple announced the launch of CarPlay Ultra, the next generation of Apple’s nearly ubiquitous automotive operating system.

Ultra extends beyond the strictly “phone” functions of traditional CarPlay to now encompass more robust vehicular integration, including climate control, drive modes, and the entire gauge cluster readout. Running Ultra, therefore, requires a digital gauge cluster. So far, not many automakers other than Aston have signaled their intent to join the revolution: Kia/Hyundai/Genesis will adopt Ultra next, and Porsche may come after that.

Before future partnerships come to fruition, I spent a week with a DB12 Volante to test Ultra’s use cases and conceptual failure points, most critically to discover whether this generational leap actually enhances or detracts from an otherwise stellar driving experience.

Setup

The following gallery will take you through the setup process. Michael Teo Van Runkle

Connecting to Ultra via Bluetooth takes a minute or two longer than traditional CarPlay and includes more consent screens to cover the additional legal ramifications of the operating system sharing data with the car, and vice versa. Apple restricts this data to multimedia info, plus real-time speed and engine status, vehicle lights, and similar functions. Specifically, neither the iPhone nor third-party apps store any vehicle data after disconnecting from the car, and the car doesn’t keep personal data once the iPhone disconnects, either.

What about Siri? I generally keep Siri turned off so that accidental “Hey, Siri” activations don’t constantly interrupt my life—but by pushing the DB12’s steering wheel button, I could test simple tasks that went just about as well as typical for Siri (read: don’t expect much “Apple Intelligence” quite yet). Standard Siri data sharing with Apple therefore applies when used with Ultra.

I tested Ultra with an iPhone 16 Pro, but the software requires an iPhone 12 or newer and the latest iOS 18.5 update. As a type of simple failure exercise, I turned my phone off while driving more than once. Doing so reverts both the gauge cluster and infotainment screen to Aston’s native UI, the former almost instantly and the latter just a few seconds later. However, once I turned my phone back on, I struggled to reactivate either traditional CarPlay or Ultra until I forgot the device in my Bluetooth settings and started over from scratch. This held true for every attempt.

We didn’t love the fact that there was some latency with the needles on the dials. Michael Teo Van Runkle

Once initiated, though, Ultra fired up straightaway every time, much faster than the typical lag of booting up traditional CarPlay. In fact, as soon as I unlocked the doors but before entering the DB12, the gauge cluster showed Ultra’s Apple-style readouts. These configurable designs, which Apple developed with Aston’s input, include a classic analog-style gauge view as well as layouts that allow for minimized data, navigation, and stylistic choices selectable through the center console screen or by swiping the haptic button on the DB12’s steering wheel.

Call me old-fashioned, but I still enjoy seeing a tachometer, speedometer, drive modes, and fuel level versus range remaining and a digital speed—especially on an engaging performance vehicle like the DB12 Volante. Apple might be skilled at making new tech easy to use, but it’s hard to beat the power of millions of minds adapting to analog gauges over the past century or so. And in this case, Ultra’s tach(s) showed a bit of latency or lag while ripping that 671-hp twin-turbo V8 up through the revs, something I never noticed in the native UI.

It’s much more holistic now

Ultra’s biggest improvements over preceding CarPlay generations are in the center console infotainment integration. Being able to access climate controls, drive modes, and traction settings without leaving the intuitive suite of CarPlay makes life much easier. In fact, changing between drive modes and turning traction control off or down via Aston’s nifty adjustable system caused less latency and lagging in the displays in Ultra. And for climate, Ultra actually brings up a much better screen after spinning the physical rotaries on the center console than you get through Aston’s UI—plus, I found a way to make the ventilated seats blow stronger, which I never located through the innate UI despite purposefully searching for a similar menu page.

There are different main instrument UIs to choose from, like this one. Michael Teo Van Runkle

Some specific functions do require dipping out of Ultra, though, including changing any audio settings for the spectacular Bowers & Wilkins sound system. I also found two glitches. Trying to bring down the DB12 Volante’s convertible top cued up a “Close trunk separator” alert, but the only way to close the trunk separator is via the same button as the convertible top. So instead, the windows only went up and down repeatedly as I tried to enjoy open-top motoring. This happened both in Ultra and without, however, so it could just be an Aston issue that Ultra couldn’t fix.

Plus, over the course of my eight days with Ultra, I experienced one moment where both the infotainment and gauge cluster went totally black. This resembled GM’s Ultium screen issues and lasted about 30 seconds or so before both flickered to life again. At first, I suspected an inadvertent attempt to activate nighttime driving mode. But again, this could have been an Aston issue, an Apple issue, or both.

Running around Los Angeles, I never found a spot with zero reception (I run e-sims, both Verizon and AT&T simultaneously, for this very reason), but I did purposefully enter airplane mode. This time, Ultra stayed active, and regardless, Apple assured me that essential functions, including navigation, can pre-load offline data for planned route guidance. But at the very worst, as with the phone turning off or battery dying, Ultra can simply revert to the onboard navigation.

Using Ultra regularly seemed to deplete my iPhone’s battery slightly more quickly than normal, and I noticed some warming of the iPhone—though without a controlled experiment, I can’t say with certainty whether these two symptoms happened quicker than simply running traditional CarPlay or Bluetooth. And in reality, most cars running Ultra (for Aston and beyond) should come equipped with wireless charge pads and plenty of USB-C ports anyhow to keep those batteries topped up. On hot summer days in LA, though, my iPhone seemed to get warmest while using inductive charging and Ultra simultaneously, to my admittedly unscientific touch.

Apple Maps is the only map that is allowed to go here in CarPlay Ultra. Michael Teo Van Runkle

For commuters who brave traffic using Advanced Driver Assistance Systems (ADAS), Ultra seemed to work smoothly with the DB12’s lane departure warnings, steering corrections, and adaptive cruise control—though I typically turn all this off via Aston’s handy single button, which helps to stave off frustration. This introduces a potential loophole or gap in regulations, however: it is unclear whether CarPlay Ultra needs to meet the ISO’s ASIL-D standards or achieve some kind of National Highway Traffic Safety Administration certification.

Traditional CarPlay stuck with infotainment and basic “phone” functions, but now that the iPhone essentially accesses and displays ADAS, drive modes, and traction setting information, where does regulated consumer safety come in? And where does liability rest, in the event of a driver aid or corrective maneuver going awry? Somehow, this question seems most likely to wind up on the desk of an insurance adjuster sooner rather than later.

Can we try it in an EV?

For me, some disappointment arose from being unable to cue up either Waze or Google Maps in Ultra’s gauge cluster navigation screens rather than strictly Apple Maps. But in many ways, I suspect that Ultra might work even better when (or if) Hyundai/Kia/Genesis introduce compatible EVs, rather than Aston’s (so far) more classic ICE vehicles. And not just because the modern futurist aesthetic matches better, either, but more so thanks to the improved accuracy of range, charging, and navigation features.

The center infotainment screen’s integration with vehicular functions, therefore, stands out as much more of a pro for Aston Martins than Ultra’s gauge cluster readout, enhancing the driving experience through a more intuitive UI that decreases time spent glancing away from the road. For those who want to skip Ultra, it’s also worth noting that the iPhone still allows the choice to stick with traditional CarPlay. However, I suspect car buyers will eventually begin to expect Ultra, even if the added jump to vehicular control represents somewhat less of a massive leap than simply picking between models equipped with CarPlay or not.

It’s unclear whether other automakers will find the advantages worthy of converting to Ultra, including Rivian, which offers neither CarPlay nor Android Auto, or GM, which skipped out on CarPlay for EVs. On the other hand, automakers may also decide to hesitate before handing over further control to Apple now that the Apple Car is officially dead. And in that regard, Ultra might just represent the final straw that inspires further improvements to proprietary user interfaces across the industry as well.



Kimi K2

While most people focused on Grok, there was another model release that got uniformly high praise: Kimi K2 from Moonshot.ai.

It’s definitely a good model, sir, especially for a cheap-to-run open model.

It is plausibly the best model for creative writing, outright. It is refreshingly different, and opens up various doors through which one can play. And it proves the value of its new architecture.

It is not an overall SoTA frontier model, but it is not trying to be one.

The reasoning model version is coming. Price that in now.

Introducing the latest model that matters, Kimi K2.

🚀 Hello, Kimi K2! Open-Source Agentic Model!

🔹 1T total / 32B active MoE model

🔹 SOTA on SWE Bench Verified, Tau2 & AceBench among open models

🔹Strong in coding and agentic tasks

🐤 Multimodal & thought-mode not supported for now

With Kimi K2, advanced agentic intelligence is more open and accessible than ever. We can’t wait to see what you build!

API is here: https://platform.moonshot.ai

– $0.15 / million input tokens (cache hit)

– $0.60 / million input tokens (cache miss)

– $2.50 / million output tokens

[Tech blog here, weights & code here, Github here.]

Try it now at http://Kimi.ai or via API!

Simeon: These costs 👀
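To put those prices in perspective, here is a quick back-of-the-envelope calculation; the request sizes below are made up purely for illustration, only the per-million-token rates come from the announcement above.

```python
# Hypothetical request: 6,000 cached input tokens, 2,000 uncached input tokens,
# and 1,000 output tokens, priced at the per-million-token rates quoted above.
cached_in, uncached_in, out = 6_000, 2_000, 1_000
cost = (cached_in * 0.15 + uncached_in * 0.60 + out * 2.50) / 1_000_000
print(f"${cost:.4f}")  # roughly half a cent for the whole request
```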

K2 was trained with the Muon optimizer, so it’s a unique offering. There were claims that the method would not scale or would be unstable; Kimi seems to have proven this false.

K2 takes DeepSeek’s extreme mixture of experts (MoE) with 671B total parameters and goes a bit further, taking the total size to 1T.
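For readers less familiar with the sparse mixture-of-experts idea, here is a toy sketch of why a model can have a huge total parameter count while only a small slice is active per token. This is illustrative only, not Kimi’s actual architecture; the dimensions, expert count, and top-k value are made up.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToySparseMoE(nn.Module):
    """Toy sparse MoE layer: each token is routed to only top_k of n_experts
    feed-forward blocks, so the parameters touched per token are a small
    fraction of the layer's total parameter count."""

    def __init__(self, d_model=64, d_ff=256, n_experts=8, top_k=2):
        super().__init__()
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        ])
        self.router = nn.Linear(d_model, n_experts)
        self.top_k = top_k

    def forward(self, x):  # x: (num_tokens, d_model)
        probs = F.softmax(self.router(x), dim=-1)
        top_p, top_i = probs.topk(self.top_k, dim=-1)
        top_p = top_p / top_p.sum(dim=-1, keepdim=True)  # renormalize over chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = top_i[:, slot] == e
                if mask.any():
                    out[mask] += top_p[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

layer = ToySparseMoE()
tokens = torch.randn(5, 64)
print(layer(tokens).shape)  # torch.Size([5, 64])
```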

Despite that size, you can get it running on Groq. Teortaxes reports you can get it to 185 tokens/second there at full context, and Aarush Sah says they then made it even faster than that.

By all accounts Kimi K2 is excellent for its size and cost, and at least competitive with DeepSeek’s v3, with many saying K2 is clearly ahead.

Presumably a reasoning model is coming. Please adjust your expectations (and if desired your stock portfolio) in advance of that event, and do not lose your head if they release an app with it and it gets popular for a time. Remember all the ways in which the DeepSeek Moment was misleading, and also the underreaction to v3. We do not want another massive overreaction to the wrong news.

I also once again warn against saying a release means a lab or country has ‘caught up’ if, at the time of the release, there are some aspects where the model is state of the art. There are those who actively prefer Kimi K2 over other models, even without reference to cost, especially for purposes related to creative writing. I can totally believe that the new method is excellent for that. A remarkable achievement. But keep that achievement in perspective.

Once again, an impressive result was made on the cheap by a modest team.

Teortaxes: Kimi is 200 people, very few of them with “frontier experience”, a platform (but you can buy such data) and a modest GPU budget. In theory there are many dozens of business entities that could make K2 in the West. It’s telling how none did. Not sure what it’s telling tho.

DeepSeek has redefined the LLM landscape, R1-0528 is substantially better than R1, V4 will redefine it again most likely.

Kimi will keep releasing strong models too.

My guess is that we primarily don’t do it because we don’t do it, but also because restrictions breed creativity and we don’t have to do it, and because we don’t have the incentive, or especially the felt incentive, to do it.

As in, if you are in China, then building a cheap (to train, and to run) model is on top of a short list of candidates for The Thing You Do in the space. Then you release it, with a basic clean implementation, and let others worry about features. A huge part of the motivation behind releasing these models is national prestige and national competition. Everyone around you is egging you on as is the government. That is a highly asymmetrical motivation.

Whereas in America, you could try to do that, but why would you? If you can do this, you can get a better valuation, and make more money, doing something else. The profit margins on the ultimate offering are very low and usually zero. Your lunch could get eaten by a top lab at any time, since ultimately no one cares what it cost to train the model, and your lunch will expire quickly regardless. If you are one of the cracked engineers that would join such a team, you’ll get a better offer to join a different team doing something else. Even if you got close you’d likely do better getting acqui-hired. There’s no need to skimp on compute.

It will be interesting to see how well OpenAI does when they release an open model.

Some basic evals:

Lech Mazur put Kimi through its paces. It did poorly on hallucinations, thematic generalization, and extended word connections, and downright terribly in the elimination game of social skills. The system isn’t tuned for that sort of thing, but on short-story creative writing it is the new champion.

Harvard Ihle is there with WeirdML; it does well for its price point as a non-reasoning open model, although grok-3-mini (high) is cheaper and scores higher, and r1-0528 keeps the open model high score. But this metric favors reasoning models, so there’s a lot of room to improve here by adding reasoning.

This isn’t a benchmark, but it also sort of is one and it’s pretty cool:

Hardmaru: Every ML Engineer’s dream loss curve:

“Kimi K2 was pre-trained on 15.5T tokens using MuonClip with zero training spike, demonstrating MuonClip as a robust solution for stable, large-scale LLM training.”

Paper Abstract: Recently, the Muon optimizer based on matrix orthogonalization has demonstrated strong results in training small-scale language models, but the scalability to larger models has not been proven.

We identify two crucial techniques for scaling up Muon: (1) adding weight decay and (2) carefully adjusting the per-parameter update scale.

These techniques allow Muon to work out-of-the-box on large-scale training without the need of hyper-parameter tuning. Scaling law experiments indicate that Muon achieves computational efficiency compared to AdamW with compute optimal training.
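For a sense of what that update looks like, here is a rough sketch of a Muon-style step for a single 2D weight matrix. It loosely follows the public open-source Muon reference rather than Moonshot’s code; the Newton-Schulz coefficients, the shape-based update scale, and the hyperparameter values are assumptions carried over from that lineage, and the clipping that gives MuonClip its name is omitted.

```python
import torch

def newton_schulz_orthogonalize(G, steps=5, eps=1e-7):
    # Quintic Newton-Schulz iteration that pushes the singular values of G toward 1.
    # Coefficients follow the open-source Muon reference implementation.
    a, b, c = 3.4445, -4.7750, 2.0315
    X = G / (G.norm() + eps)
    transposed = X.shape[0] > X.shape[1]
    if transposed:
        X = X.T
    for _ in range(steps):
        A = X @ X.T
        X = a * X + (b * A + c * A @ A) @ X
    return X.T if transposed else X

@torch.no_grad()
def muon_step(param, grad, momentum_buf, lr=0.02, beta=0.95, weight_decay=0.01):
    # One Muon-style update for a 2D weight matrix: momentum accumulation,
    # orthogonalization of the update, a shape-dependent rescale (assumed here),
    # and decoupled weight decay -- the two scaling fixes the abstract mentions.
    momentum_buf.mul_(beta).add_(grad)
    update = newton_schulz_orthogonalize(momentum_buf)
    scale = 0.2 * max(param.shape) ** 0.5   # assumed per-parameter update scale
    param.mul_(1 - lr * weight_decay)       # decoupled weight decay
    param.add_(update, alpha=-lr * scale)
```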

Aravind Srinivas (CEO Perplexity): Kimi models are looking good on internal evals. So we will likely begin post training on it pretty soon. Congrats to @Kimi_Moonshot for delivering an incredible model.

Renji the whale maximalist: Kimi K2 is mindblowing. Holy fucking crap.

Did they really not even do any RL yet?

I can’t even believe how good it is.

What’s the main reason why it’s so good? Muon?

So far I’ve just tried general purpose tasks / creative writing / educational explanations. Does way better than even o3 and Gemini 2.5 pro so far.

Teortaxes: well they obviously did RL, maybe even another GRPO++ just not long-CoT. Let’s not allow this confusion to spread, I’ve had enough of «MoE from 4 finetuned experts» meme

Renji: Yup, my mistake. It definitely has RL.

Viemccoy: I think Kimi might actually be my new favorite model. Her vocabulary is off the charts, good epistemics, excellent storyteller, plays along but maintains good boundaries. There’s something very, very special here. I actually think this is a much bigger deal than most realize.

Grist: been having a blast with kimi.

love to seed a snippet or idea then be the token courier for r1 and kimi. back and forth. enjoy the little worlds they build with a little bit of organic slop i offer them.

John Pressman: Kimi K2 is very good. I just tried the instruct model as a base model (then switched to the base model on private hosting) and mostly wanted to give a PSA that you can just ignore the instruction format and use open weights instruct models as base models and they’re often good.

Teortaxes: For a wide range of tasks, K2 is probably the cheapest model by far right now, in terms of actual costs per task. It is just cheap, it has no long-CoT, and it does not yap. This is very refreshing. Like the best of Anthropic models, but cheaper and even more to the point.

Hannes: Interesting. For me it keeps inventing/hardcoding results and curves instead of actually running algorithms (tried it on unit square packing). Extremely high sycophancy in first 90 minutes of testing.

Teortaxes: It’s overconfident.

Hasan Can: Kimi K2 is definitely a good model, its world knowledge is on par with sota closed source models. It passed all my odd knowledge questions that aren’t in benchmarks. Next up is coding.

Eleventh Hour: Need more time with it, but it has weirdly Opus3-like themes so far.

Deckard: It’s on par with gpt4base. Enormous potential to allow the public to experiment with and explore SOTA base models – much lower probability of falling into a synthetic training data generator basin compared to llama. requires more skill to use than gpt4base.

Also it really seems to have a breadth of very precise and high resolution knowledge of the human information landscape.

Dominik Lukes: I almost didn’t bother – yet, another open model from China – what a yawn! But, no. This one is different. o3 feels on agentic choices (and the occasional lying) along with Claude 4 feels on coding and league of its own on writing.

Still, many gaps in performance – feels last gen (as in Claude 3-level) on some multilingual and long-context tasks.

Will be exciting to see what happens when they add reasoning and multimodal capabilities.

And can’t wait for the distills and finetunes – should be fun.

Tim Duffy: Smart model with a unique style, likely the best open model. My one complaint so far is that it has a tendency to hallucinate. A couple times it happened to me in the QT.

[From QT]: While in a conversation with Claude, Kimi K2 claims that they were asked by a Chinese student to justify the Tiananmen Square crackdown. Interesting as a hallucination but also for the forthright attitude.

Hrishi (video at the link): Kimi is the real deal. Unless it’s really Sonnet in a trench coat, this is the best agentic open-source model I’ve tested – BY A MILE.

Here’s a slice of a 4 HOUR run (~1 second per minute) with not much more than ‘keep going’ from me every 90 minutes or so.

The task involved editing multiple files, reading new context, maintaining agentic state (not forgetting where you were or forgetting instructions). This is a repo with included prompts, notes, plans, lots of things to mistake as instructions and be poisoned by.

Tyler Cowen simply asked ‘Kimimania?’ and the comments section was generally impressed by its performance.

There were only a few places people reported being a bit let down, other than by it not yet being a reasoning model.

Echo Nolan: Failed my little private eval, a complex mathematical reasoning task based on understanding the math in a paper. Very stubborn when I tried to gently point it in the right direction, refused to realize it was wrong.

Leo Abstract: it bombed my private eval and could not be walked through it, but it humbly admitted fault when shown. did better on chinese-related subtests. overall i like that it’s less cringing and ‘glazing’, though.

Kromen: I have a suspicion it’s a model extensively trained on o3 synthetic data.

Some very similar quirks.

deckard: Yeah big o3 vibes in terms of making shit up.

Open and cheap and unique and new and pretty good is a great combination, also note the very low market share here for xAI and also for OpenAI. This isn’t overall market share, it’s in a very specific context, but Kimi is definitely breaking through.

OpenRouter: Moonshot AI has surpassed xAI in token market share, just a few days after launching Kimi K2

🎁 We also just put up a free endpoint for Kimi – try it now!

Also this is another case where one should compare cost or compute, not tokens, since different models use radically different amounts of compute and have different orders of magnitude of cost. Anthropic’s share of tokens here represents quite a lot of the compute and dollars spent.

I see exactly why Teortaxes predicted this, yet so far I haven’t seen the reports of shortfalls, although various third-party benchmarks make it clear they are there:

Teortaxes: I predict that in a few days we’ll see reports on many stubborn shortfalls of K2 and a certain disenchantment. They don’t have a lot of experience at this level; it’ll become clear that the good old 0324 has it beat for many usecases. That’s fine. They’ll improve.

Sam Peach: Kimi-K2 just took top spot on both EQ-Bench3 and Creative Writing!

Another win for open models. Incredible job @Kimi_Moonshot

It’s edging out o3 at the top there, followed by Opus, R1-old and then Sonnet. R1-0528 is solid but does substantially worse. Here’s EQ-Bench 3:

Given how other models score on these benchmarks, this appears meaningful.

I find ‘coherent’ rather funny as a greatest weakness. But hey.

Here’s the (a little too narrow?) slop test, as in ‘not x, but y.’ Lower is better.

Lech Mazur has it taking the #1 spot over o3, Gemini 2.5 Pro and Claude Opus in Short-Story Creative Writing.

Lech Mazur: Across all six tasks, Kimi K2’s strengths are unmistakable: the model displays a sophisticated command of literary craft, consistently delivering stories that are lush with metaphor, structurally cohesive, and often thematically ambitious. Its greatest assets are its ability to integrate disparate prompts with apparent ease, weave objects and symbols into layered narrative functions, and compress complex ideas into tight, resonant pieces. The prose frequently aspires to—and sometimes achieves—publication-level lyricism, earning consistent praise for inventive metaphors, subtextual depth, and the purposeful unity of assigned elements.

However, these technical strengths are mirrored by several persistent, interconnected weaknesses. Kimi’s writing is often hampered by an overreliance on abstraction, ornamented metaphor, and poetic language that, while impressive, can overwhelm narrative clarity and blunt emotional impact.

Characters frequently serve as vehicles for theme or plot, lacking the idiosyncratic humanity and “messy” believability that define memorable fiction. Emotional arcs are apt to be summarized or symbolically dramatized rather than fully earned through concrete, lived experience—stories often reach for catharsis but settle for a tidy, intellectual satisfaction.

Similarly, plots and resolutions risk neatness and convenience, with endings that are more structural than surprising or hard-won. World-building flourishes, but sometimes at the expense of organic logic or clarity, resulting in “atmospheric wallpaper” rather than truly lived-in settings.

A recurring critique is the model’s “perfectionism”: stories rarely fail structurally and are rarely inept, but this very competence can sterilize the work, creating narratives that feel like artful answers to a prompt instead of necessary, lived stories. The result is a corpus of fiction that demands admiration for its craft but too often holds the reader at arm’s length—heady rather than affecting, elegant rather than unforgettable.

In summary:

Kimi K2 excels at literary compression, metaphorical invention, and unifying disparate elements, establishing a high technical baseline. But without risking mess, ambiguity, and emotional friction, it tends to “tell” its meaning rather than let it bloom naturally, ultimately producing stories that are admirable, sometimes moving, but rarely vital or transformative.

Those are important weaknesses but we’ve definitely reached ‘horse can talk at all’ territory to get to this point.

xl8harder: I had the impression that Kimi K2 uses a better, more diverse vocabulary than I was used to seeing, so I ran a quick linguistic diversity analysis on the SpeechMap data, and yep, Kimi K2 has the top score.

Method: I lemmatize the responses, and then for each response I calculate both root TTR and Maas index (two linguistic diversity metrics that control for response length) and average them together for each model.

Kimi K2 got top score on both metrics.

[More details in thread.]

Surprisingly, Sonnet didn’t make the top 30. First was opus 4 at 67. I’m not sure what explains this, because I have the perception of claude models as being quite good with language. Though perhaps not so much in generic assistant-y requests?

It’s a strange metric. Gemma-3 does remarkably well and better than Gemini-2.5-Pro.
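For reference, root TTR and the Maas index mentioned above are standard length-controlled lexical diversity measures. Here is a minimal sketch of computing them for a single response; it assumes lemmatization has already happened upstream, and the thread does not specify exactly how the two scores get combined per model.

```python
import math

def lexical_diversity(lemmas):
    """Length-controlled lexical diversity for one (already lemmatized) response.
    Root TTR: higher means more diverse. Maas index: lower means more diverse."""
    n = len(lemmas)        # token count
    v = len(set(lemmas))   # distinct type count
    root_ttr = v / math.sqrt(n)
    # Base-10 logs here; the choice of base only rescales the index uniformly.
    maas = (math.log10(n) - math.log10(v)) / (math.log10(n) ** 2)
    return root_ttr, maas

print(lexical_diversity("the cat sat on the mat near the other cat".split()))
```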

John Pressman: So what stands out to me about [Kimi K2] is that it doesn’t do the thing language models normally do, where they kind of avoid detail. Like, a human will write about things using specific names and places.

And if you pay close attention to LLM writing they usually avoid this. It’s one of the easiest ways to spot LLM writing. This model emphatically *does not* have this problem. It writes about people and events with the rich detail characteristic of histories and memoirs. Or fictional settings with good worldbuilding.

Doomslide: How beautiful it is to get public confirmation that optimizers with different targets actually produce different minds. Muon effectively optimizes for solutions that “restrict to spheres” (tho in practice it doesn’t quite). What if this is just strictly better.

Leo Abstract: Its writing reminds me of deepseek. something interesting going on with the training data they’re using over there.

My instinctive guess is it is less about what data is being used, and more what data is not being used or what training isn’t being done.

Another hypothesis is that the bilingual nature of Chinese models makes them, if not better, at least different, and when you’re used to an ocean of slop different is great.

Zeit: Matches my impression so far:

Difficult Yang: You know why people think Kimi K2 doesn’t sound like “botslop”? It’s because it’s… how should I put it… it’s very Chinese English (not in the Chinglish way… it’s hard to describe).

Perhaps the most accessible analogy I have is the first time you read Xianxia in English it feels so fresh, it feels so novel, the attitudes and the writing are so different than what you’ve read before.

And then you read your second and your third and you’re like “oh wait, this is just its own subculture with its own recognizable patterns.”

xl8harder: I’ve wondered if the bilinguality of these models has any durable effect. Are you saying that, or that it’s in the curation of post training data, etc?

Difficult Yang: The most straightforward explanation is it is RLHF induced. But I don’t actually know.

Hieu Pham: Yes. Exactly my take. Glad someone else feels the same way. I read Zhu Xian in Vietnamese and some chapters in English. K2’s answers feel similar.

Teortaxes: Makes sense.

A lot of what makes a hack writer a hack writer is that they keep doing the same things over and over again, and eventually everyone is in some sense a hack. So having a different writer can be a breath of fresh air even if they are a hack.

You could kind of say that any given author or model, or almost any other form or genre of creative work, has a ‘time to slop,’ before a reader sees the patterns. And different variations use up different amounts of that ‘time to slop’ for others, and the American models all sound the same so they all burn that fuse together.

There is still very much better and worse, some things really are slop and some things really aren’t. I am inclined to believe that Kimi K2 is doing something fundamentally ‘less slop-like,’ but also I am guessing a lot of this is that it is different, not only via being Chinese and culturally different but because it was trained differently, and thus it feels fresh and new.

Right now we have 10,000 outputs, all the same. If we can instead get 10,000 outputs, all different, perhaps we’d have something.

We will continue to see what Kimi K2 can do, how best to use it, what its weaknesses are, and how much of its refreshing nature is being better in places versus being different. It is too early, and I haven’t had time with it directly.

Presumably Kimi will use this to create a reasoning model. If they don’t, there’s nothing stopping someone else from doing so instead. So far we’ve seen a remarkable lack of independent reasoning model conversions, but they’re remarkably cheap to do.

We will also see what other labs can do now that this architecture has been proven. What could OpenAI, Google, Meta or xAI do if they copied these methods but used orders of magnitude more compute? If they integrated this into what they already do? If they used this as part of a MoE? I presume we will find out.




Donkey Kong Bananza is a worthy successor to Super Mario Odyssey’s legacy


D-K… donkey kong is here!

Cathartic, punch-fueled land destruction is a great showcase for Switch 2 hardware.

Screenshots you can feel. Credit: Nintendo


When the Switch 2 was fully unveiled back in April, we weren’t alone in expecting the announcement of a true follow-up to Super Mario Odyssey—one of the original Switch’s best-selling games and our pick for the best game of 2017. Instead, we got our first look at Donkey Kong Bananza, the big ape’s first fully 3D adventure since the Rare-developed Donkey Kong 64 in 1999.

The fact that Nintendo wasn’t willing to commit its longstanding plumber mascot to its first first-party platformer on the Switch 2 could have been seen as a sign of a rushed, second-tier spin-off effort. After playing through Donkey Kong Bananza, though, I’m happy to report that nothing could be further from the truth for this deep and worthy spiritual successor to Super Mario Odyssey (from many of the same development staff). Donkey Kong Bananza captures the same sense of joyful movement and exploration as the best Mario games while adding an extremely satisfying terrain-destruction system that shows off the capabilities of the Switch 2 hardware.

Beat up the earth

It’s that terrain-destruction system that sets Donkey Kong Bananza apart from previous 3D platformers from Nintendo and others. Three of the four face buttons on the Switch 2 controllers are devoted to letting Donkey Kong punch either horizontally, upward, or downward, often taking out large chunks of the nearby scenery as he does.

Take that, rock! Credit: Nintendo

Punching through the terrain in this manner forms the fast, crunchy, and powerfully kinetic core of the game. It’s hard to overstate how incredibly cathartic it can be to quickly reduce a well-ordered chunk of dirt and rock into a mountain of valuable, collectible golden rubble (and then gather up all the nearby rubble with a quick tap of a shoulder button). Imagine a 3D Mario game by way of Traveller’s Tales Lego games, and you’ll have some idea of the extremely satisfying combination on offer here.

The semi-persistent changes in scenery also do a good job of highlighting the Switch 2’s hardware, which doesn’t seem to drop a single frame, even as the rubble flies and the ground’s shape morphs under Donkey Kong’s persistent punching. That extra hardware power also lends itself to some nice graphical touches, from the mirror-like shine on a pile of golden rubble to the gentle movement of fur that rustles in the breeze.

I get around

Donkey Kong can also pick up chunks of terrain, using them as impromptu melee weapons or hurling them to destroy far-off enemies, obstacles, or key switches. The aiming-and-throwing controls for this terrain-throwing system are just clunky enough to be annoying—this is a far cry from Gears of Donkey Kong or something. Still, the interactions between different types of hurled terrain end up forming the root of many interesting situational puzzles—throwing some snow to harden sections of a harmful lava lake into a solid platform, for instance, or using a chunk of explosive rock to destroy an otherwise impervious spiky enemy.

When you’re not tearing up the scenery to your benefit, simply getting around in Donkey Kong Bananza is a joy. Donkey Kong Country fans will be happy to know the classic roll is back and can be used to help extend jumps or quickly change mid-air direction (a la Cappy from Mario Odyssey). Donkey Kong can also slide along on chunks of terrain in a zippy, madcap land-surfing mode that’s wonderfully difficult to control effectively. The ability to climb along the edge of most surfaces adds a layer of verticality that doesn’t rely on precision jumping and is put to good use hiding some of the game’s more out-of-the-way secrets.

This Kong’s got a funny face… Credit: Nintendo

As the game progresses, you’ll also unlock a handful of animalistic “Bananza” transformations from a menagerie of gigantic animal DJs (don’t ask). These temporarily grant DK new powers—turning him into a quick-dashing zebra or a fluttering, hovering ostrich, for instance. The game builds some specific gatekeeping challenges around each transformation, of course, but the extra movement options become a welcome part of your locomotion toolbelt when simply exploring more generic areas.

Running around and smashing up the world isn’t all joy, though. Problems arise when you dig into thick patches of dirt, carving out a narrow, Kong-sized tunnel surrounded by earth on all sides. The camera system does its best to deal with these tricky scenarios, rendering the surrounding ground translucent and highlighting only the notable features around you. Still, it’s easy to lose track of where your digging has taken you and how to get back to the surface, especially when the best way out of a jam is to “dig up, stupid.”

Oooh, Banana!

All this terrain destruction and digging is in service of the game’s primary goal: collecting a bunch of giant bananas. These are roughly as plentiful as the Power Moons scattered across Super Mario Odyssey and roughly as varied in their availability. Some sit out in the open, waiting to be stumbled on. Others are hidden in some of the game’s most out-of-the-way underground crevices and practically require the use of collectible in-game treasure maps to find. Many are hidden in elaborate challenge rooms that test your precision platforming, terrain destruction, or combat skills.

Unlike the Power Moons in Mario Odyssey, though, hunting down bananas is largely optional for progressing down the succession of elaborate, wide-open, high-ceilinged layers (read: “levels”) on a quest toward the planet’s core. Instead, bananas are primarily used to unlock upgrades in a surprisingly deep skill tree, granting DK more health, more punching power, or longer Bananza transformations. Other collectibles can be used to buy stylish and protective outfits to further increase DK’s endurance.

You’d be forgiven for not believing that these large explorable “layers” are supposed to be underground. Credit: Nintendo

These upgrades provide ample incentive to go off the beaten path for those who like exploring and dozens of hours of enjoyable challenges for completionists to delve into after the credits roll. But the game’s structure also allows skillful and/or impatient players to zip to the game’s conclusion quite quickly, rushing through the visually inventive bosses that guard the game’s major chokepoints.

Those who rush, though, may end up struggling with the game’s final gauntlet of challenges, which quickly ramp up the difficulty while re-introducing some classic DK enemies (that we aren’t allowed to say more about at the moment).

Wait, that kid is Pauline?

Thus far, we’ve avoided talking about the ridiculously convoluted plot the game builds around Donkey Kong’s quest for bananas and the evil corporate forces that want to stop his journey deep into the planet’s core. The game’s underground world is populated with all sorts of talking animals, sentient rocks, and familiar Kong faces to assist DK or ask him for help with various ridiculous errands. They’re cute, but their chatter is more or less ignorable.

The reimagined Pauline is an adorable addition to the lineup. Credit: Nintendo

The main exception is Pauline, the damsel-in-distress from the original Donkey Kong, recast here as a precocious child working with DK to find a way back to her home on the surface. Pauline’s effort to overcome inherent stage fright and embrace the magical power of her singing voice was surprisingly touching. That’s largely thanks to a winning voice-acting performance that forms the basis for some toe-tapping gibberish playing behind DK’s Bananza transformations.

The adorable relationship between young Pauline and the silent Donkey Kong is the icing on a very satisfying cake. Even though Mario is nowhere to be seen, Donkey Kong Bananza seems destined to be thought of in the same category as the Mario games that defined earlier Nintendo hardware launches.

Photo of Kyle Orland

Kyle Orland has been the Senior Gaming Editor at Ars Technica since 2012, writing primarily about the business, tech, and culture behind video games. He has journalism and computer science degrees from the University of Maryland. He once wrote a whole book about Minesweeper.

Donkey Kong Bananza is a worthy successor to Super Mario Odyssey’s legacy Read More »

there-could-be-“dark-main-sequence”-stars-at-the-galactic-center

There could be “dark main sequence” stars at the galactic center


Dark matter particle and antiparticle collisions could make some stars immortal.

For a star, its initial mass is everything. It determines how quickly it burns through its hydrogen and how it will evolve once it starts fusing heavier elements. It’s so well understood that scientists have devised a “main sequence” that acts a bit like a periodic table for stars, correlating their mass and age with their properties.

The main sequence, however, is based on an assumption that’s almost always true: All of the energy involved comes from the gravity-driven fusion of lighter elements into heavier ones. But three astrophysicists have considered an alternative source of energy that may apply at the very center of our galaxy—energy released when dark matter particles and antiparticles collide and annihilate. While we don’t even know that dark matter can do that, it’s a hypothetical with some interesting consequences, like seemingly immortal stars and others that move backward along the main sequence path.

Dark annihilations

We haven’t figured out what dark matter is, but there are lots of reasons to think that it is composed of elementary particles. And, if those behave like all of the particles we understand well, then there will be both regular and antimatter versions. Should those collide, they should annihilate each other, releasing energy in the process. Given dark matter’s general propensity not to interact with anything, these collisions will be extremely rare except in locations with very high dark matter concentrations.

The only place that’s likely to happen is at the very center of our galaxy. And, for a while, there was an excess of radiation coming from the galactic core that people thought might be due to dark matter annihilations, although it eventually turned out to have a more mundane explanation.

At the extreme densities found within a light year of the supermassive black hole at the center of our galaxy, concentrations are high enough that these collisions could be a major source of energy. And so astronomers have considered what all that energy might do to stars that end up in a black hole’s orbit, finding that under the right circumstances, dark matter destruction could provide more energy to a star than fusion.

That prompted three astrophysicists (Isabelle John, Rebecca Leane, and Tim Linden) to try to look at things in an organized fashion, modeling a “dark main sequence” of stars as they might exist in close proximity to the Milky Way’s center.

The intense gravity and radiation found near the galaxy’s core mean that stars can’t form there. So, anything that’s in a tight orbit formed somewhere else before gravitational interactions pushed it into the grasp of the galaxy’s central black hole. The researchers used a standard model of stellar evolution to build a collection of moderate-sized stars, from one to 20 solar masses at 0.05 solar mass intervals. These are allowed to ignite fusion at their cores and then shift into a dark-matter-rich environment.

Since we have no idea how often dark matter particles might run into each other, John, Leane, and Linden use two different collision frequencies. These determine how much energy is imparted into these stars by dark matter, which the researchers simply add as a supplement to the amount of fusion energy the stars are producing. Then, the stars are allowed to evolve forward in time.

(The authors note that stars that are thrown into the grasp of a supermassive black hole tend to have very eccentric orbits, so they spend a lot of time outside the zone where dark matter collisions take place with a significant frequency. So, what they’ve done is the equivalent of having these stars experience the energy input given their average orbital distance from the galaxy’s core. In reality, a star would spend some years with higher energy input and some years with lower input as it moves about its orbit.)

Achieving immortality

The physics of what happens is based on the same balance of forces that governs fusion-powered stars, but it produces some very strange results. Given only fusion power, a star will exist at a balance point. If gravity compresses it, fusion speeds up, more energy is released, and that energy causes the star to expand outward again. That expansion causes the density to drop, slowing fusion back down again.

The dark matter annihilations essentially provide an additional source of energy that stays constant regardless of what happens to the star’s density. At the low end of the mass range the researchers considered, this can cause the star to nearly shut off fusion, essentially looking like a far younger star than it actually is. That has the effect of causing the star to move backward along the main sequence diagram.
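To make that balance concrete, here is a minimal toy sketch in Python. It is not the authors’ stellar-evolution model; the L ~ M^3.5 mass-luminosity scaling and the chosen dark matter luminosity are illustrative assumptions, picked only to show how a fixed, density-independent energy injection can crowd out fusion in lighter stars while barely registering in heavier ones.

```python
# Toy illustration only (not the paper's stellar-evolution model): a star in
# equilibrium must radiate L_total = L_fusion + L_dark. If L_dark is a fixed
# injection that doesn't depend on the star's density, the share left for
# fusion shrinks, and a light enough star can effectively stop fusing while
# still shining.

def main_sequence_luminosity(mass_msun: float) -> float:
    """Rough main-sequence luminosity in solar units, assuming L ~ M^3.5."""
    return mass_msun ** 3.5

def fusion_fraction(mass_msun: float, dark_luminosity_lsun: float) -> float:
    """Fraction of the star's output that fusion still has to supply."""
    l_total = main_sequence_luminosity(mass_msun)
    l_fusion = max(l_total - dark_luminosity_lsun, 0.0)  # fusion output can't go negative
    return l_fusion / l_total

if __name__ == "__main__":
    L_DARK = 5.0  # hypothetical dark matter injection, in solar luminosities
    for mass in (1.0, 2.0, 5.0, 10.0, 20.0):
        frac = fusion_fraction(mass, L_DARK)
        print(f"{mass:5.1f} M_sun: fusion supplies {frac:6.1%} of the output")
```

With these assumed numbers, a one-solar-mass star ends up with fusion supplying essentially none of its output, while a 20-solar-mass star barely notices the extra energy, which is the qualitative trend the researchers describe.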

The researchers note that even lighter stars could essentially get so much additional energy that they can’t hold together and end up dissipating, something that’s been seen in models run by other researchers.

As the mass gets higher, stars reach the point where they essentially give up on fusion and get by with nothing but dark matter annihilations. They have enough mass to hold together gravitationally but end up too diffuse for fusion to continue. And they’ll stay that way as long as they continue to get additional injections of energy. “A star like this might look like a young, still-forming star,” the authors write, “but has features of a star that has undergone nuclear fusion in the past and is effectively immortal.”

John, Leane, and Linden find that the higher-mass stars remain dense enough for fusion to continue even in proximity to the galaxy’s black hole, but the additional energy keeps that fusion happening at only a moderate rate. These stars proceed through the main sequence at such an exceptionally slow pace that running the simulation for a total of 10 billion years didn’t see them change significantly.

The other strange thing here is that all of this is very sensitive to how much dark matter annihilation is taking place. A star that’s “immortal” at one average distance will progress slowly through the main sequence if its average distance is a light year further out. Similarly, stars that are too light to survive at one location will hold together if they are a bit further from the supermassive black hole.

Is there anything to this?

The big caution is that this work only looks at the average input from dark matter annihilation. In reality, a star that might be immortal at its average distance will likely spend a few years too hot to hold together, and then several years cooling off in conditions that should allow fusion to reignite. It would be nice to see a model run with this sort of pulsed input, perhaps basing it on the orbits of some of the stars we’ve seen that get close to the Milky Way’s central black hole.

In the meantime, John, Leane, and Linden write that their results are consistent with some of the oddities that are apparent in the stars we’ve observed at the galaxy’s center. These have two distinctive properties: They appear heavier than the average star in the Milky Way, and all seem to be quite young. If there is a “dark main sequence,” then the unusual heft can be explained simply by the fact that lower mass stars end up dissipating due to the additional energy. And the model would suggest that these stars simply appear to be young because they haven’t undergone much fusion.

The researchers suggest that we could have a clearer picture if we were able to spend enough time observing the stars at our galaxy’s core with a large enough telescope, allowing us to understand their nature and orbits.

Physical Review D, 2025. DOI: Not yet available  (About DOIs).

Photo of John Timmer

John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.

There could be “dark main sequence” stars at the galactic center Read More »

we-saw-the-heart-of-pluto-10-years-ago—it’ll-be-a-long-wait-to-see-the-rest

We saw the heart of Pluto 10 years ago—it’ll be a long wait to see the rest


A 50-year wait for a second mission wouldn’t be surprising. Just ask Uranus and Neptune.

Four images from New Horizons’ Long Range Reconnaissance Imager (LORRI) were combined with color data from the spacecraft’s Ralph instrument to create this enhanced color global view of Pluto. Credit: NASA/Johns Hopkins University/SWRI

NASA’s New Horizons spacecraft got a fleeting glimpse of Pluto 10 years ago, revealing a distant world with a picturesque landscape that, paradoxically, appears to be refreshing itself in the cold depths of our Solar System.

The mission answered numerous questions about Pluto that had lingered since its discovery by astronomer Clyde Tombaugh in 1930. As is often the case with planetary exploration, the results from New Horizons’ flyby of Pluto on July 14, 2015, posed countless more questions. First and foremost, how did such a dynamic world come to be so far from the Sun?

For at least the next few decades, the only resources available for scientists to try to answer these questions will be either the New Horizons mission’s archive of more than 50 gigabits of data recorded during the flyby, or observations from billions of miles away with powerful telescopes on the ground or space-based observatories like Hubble and James Webb.

That fact is becoming abundantly clear. Ten years after the New Horizons encounter, there are no missions on the books to go back to Pluto and no real prospects for one.

A mission spanning generations

In normal times, with a stable NASA budget, scientists might get a chance to start developing another Pluto mission in perhaps 10 or 20 years, after higher-priority missions like Mars Sample Return, a spacecraft to orbit Uranus, and a probe to orbit and land on Saturn’s icy moon Enceladus. In that scenario, perhaps a new mission could reach Pluto and enter orbit before the end of the 2050s.

But these aren’t normal times. The Trump administration has proposed cutting NASA’s science budget in half, not only jeopardizing future missions to explore the Solar System but also threatening to shut down numerous operating spacecraft, including New Horizons itself as it speeds through an uncharted section of the Kuiper Belt toward interstellar space.

The proposed cuts are sapping morale within NASA and the broader space science community. If implemented, the budget reductions would affect more than NASA’s actual missions. They would also slash NASA’s funding available for research, eliminating grants that could pay for scientists to analyze existing data stored in the New Horizons archive or telescopic observations to peer at Pluto from afar.

The White House maintains funding for newly launched missions like Europa Clipper and an exciting mission called Dragonfly to soar through the skies of Saturn’s moon Titan. Otherwise, the Trump administration’s proposed budget, which still must be approved by Congress, suggests a reluctance to fund new missions exploring anything beyond the Moon or Mars, where NASA would focus efforts on human exploration and bankroll an assortment of commercial projects.

NASA’s New Horizons spacecraft undergoing launch preparations at Kennedy Space Center, Florida, in September 2005. Credit: NASA

In this environment, it’s difficult to imagine development of a new Pluto mission beginning any time in the next 20 years. Even if Congress or a future presidential administration restores NASA’s planetary science budget, a Pluto mission wouldn’t be near the top of the agency’s to-do list.

The National Academies’ most recent decadal survey prioritized Mars Sample Return, a Uranus orbiter, and an Enceladus “Orbilander” mission in their recommendations to NASA’s planetary science program through 2032. None of these missions has a realistic chance to launch by 2032, and it seems more likely than not that none of them will be in any kind of advanced stage of development by then.

The panel of scientists participating in the latest decadal survey—released in 2022—determined that a second mission to Pluto did not merit a technical risk and cost evaluation report, meaning it wasn’t even shortlisted for consideration as a science priority for NASA.

There’s a broad consensus in the scientific community that a follow-up mission to Pluto should be an orbiter, and not a second flyby. New Horizons zipped by Pluto at a relative velocity of nearly 31,000 mph (14 kilometers per second), flying as close as 7,750 miles (12,500 kilometers).

At that range and velocity, the spacecraft’s best camera was close enough to resolve something the size of a football field for less than an hour. Pluto was there, then it was gone. New Horizons only glimpsed half of Pluto at decent resolution, but what it saw revealed a heart-shaped sheet of frozen nitrogen and methane with scattered mountains of water ice, all floating on what scientists believe is likely a buried ocean of liquid water.
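A rough back-of-the-envelope sketch shows just how brief that high-resolution window was. Assuming a LORRI pixel scale of roughly 5 microradians per pixel (an approximation, not a figure from this article) and treating the flyby as a straight-line pass, a 100-meter feature fills a pixel only within about 20,000 kilometers, and at 14 kilometers per second the spacecraft spends well under an hour that close:

```python
import math

# Back-of-the-envelope check on how long New Horizons could resolve
# football-field-sized features. The LORRI pixel scale here is an
# approximation (~5 microradians per pixel), not a figure from the article.
PIXEL_SCALE_RAD = 5e-6        # assumed angular resolution per pixel
FEATURE_SIZE_KM = 0.1         # ~100 meters, roughly a football field
CLOSEST_APPROACH_KM = 12_500  # flyby distance quoted above
SPEED_KM_S = 14.0             # flyby speed quoted above

# Maximum range at which one pixel spans the feature.
max_range_km = FEATURE_SIZE_KM / PIXEL_SCALE_RAD  # 20,000 km

# Approximate the flyby as a straight line past Pluto; the spacecraft stays
# within max_range along the chord of a circle of that radius.
half_chord_km = math.sqrt(max_range_km**2 - CLOSEST_APPROACH_KM**2)
minutes_within_range = 2 * half_chord_km / SPEED_KM_S / 60

print(f"Within {max_range_km:,.0f} km for about {minutes_within_range:.0f} minutes")
# Prints roughly 37 minutes, consistent with "less than an hour."
```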

Pluto must harbor a wellspring of internal heat to keep from freezing solid, something researchers didn’t anticipate before the arrival of New Horizons.

New Horizons revealed Pluto as a mysterious world with icy mountains and very smooth plains. Credit: NASA

So, what is Pluto’s ocean like? How thick are Pluto’s ice sheets? Are any of Pluto’s suspected cryovolcanoes still active today? And, what secrets are hidden on the other half of Pluto?

These questions, and more, could be answered by an orbiter. Some of the scientists who worked on New Horizons have developed an outline for a conceptual mission to orbit Pluto. This mission, named Persephone for the wife of Pluto in classical mythology, hasn’t been submitted to NASA as a real proposal, but it’s useful for illustrating the difficulties of not just reaching Pluto, but maneuvering into orbit around a dwarf planet so far from the Earth.

Nuclear is the answer

The initial outline for Persephone released in 2020 called for a launch in 2031 on NASA’s Space Launch System Block 2 rocket with an added Centaur kick stage. Again, this isn’t a realistic timeline for such an ambitious mission, and the rocket selected for this concept doesn’t exist. But if you assume Persephone could launch on a souped-up super heavy-lift SLS rocket in 2031, it would take more than 27 years for the spacecraft to reach Pluto before sliding into orbit in 2058.

Another concept study led by Alan Stern, also the principal investigator on the New Horizons mission, shows how a future Pluto orbiter could reach its destination by the late 2050s, assuming a launch on an SLS rocket around 2030. Stern’s concept, called the Gold Standard, would reserve enough propellant to leave Pluto and go on to fly by another more distant object.

Persephone and Gold Standard both assume a Pluto-bound spacecraft can get a gravitational boost from Jupiter. But Jupiter moves out of alignment from 2032 until the early 2040s, adding a decade or more to the travel time for any mission leaving Earth in those years.

It took nine years for New Horizons to make the trip from Earth to Pluto, but the spacecraft was significantly smaller than an orbiter would need to be. That’s because an orbiter has to carry enough power and fuel to slow down on approach to Pluto, allowing the dwarf planet’s weak gravity to capture it into orbit. A spacecraft traveling too fast, without enough fuel, would zoom past Pluto just like New Horizons.

The Persephone concept would use five nuclear radioisotope power generators and conventional electric thrusters, putting it within reach of existing technology. A 2020 white paper authored by John Casani, a longtime project manager at the Jet Propulsion Laboratory who died last month, showed the long-term promise of next-generation nuclear electric propulsion.

A relatively modest 10-kilowatt nuclear reactor to power electric thrusters would reduce the flight time to Pluto by 25 to 30 percent, while also providing enough electricity to power a radio transmitter to send science data back to Earth at a rate four times faster, according to the mission study report on the Persephone concept.

However, nuclear electric propulsion technologies are still early in the development phase, and Trump’s budget proposal also eliminates any funding for nuclear rocket research.

A concept for a nuclear electric propulsion system to power a spacecraft toward the outer Solar System. Credit: NASA/JPL-Caltech

A rocket like SpaceX’s Starship might eventually be capable of accelerating a probe into the outer Solar System, but detailed studies of Starship’s potential for a Pluto mission haven’t been published yet. A Starship-launched Pluto probe would have its own unique challenges, and it’s unclear whether it would have any advantages over nuclear electric propulsion.

How much would all of this cost? It’s anyone’s guess at this point. Scientists estimated the Persephone concept would cost $3 billion, excluding launch, which might add $1 billion or more if a Pluto mission requires a bespoke launch solution. Development of a nuclear electric propulsion system would almost certainly cost billions of dollars, too.

All of this suggests 50 years or more might elapse between the first and second explorations of Pluto. That is in line with the span of time between the first flybys of Uranus and Neptune by NASA’s Voyager spacecraft in 1986 and 1989, and the earliest possible timeline for a mission to revisit those two ice giants.

So, it’s no surprise scientists are girding for a long wait—and perhaps taking a renewed interest in their own life expectancies—until they get a second look at one of the most seductive worlds in our Solar System.

Photo of Stephen Clark

Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.

We saw the heart of Pluto 10 years ago—it’ll be a long wait to see the rest Read More »

a-new-martian-climate-model-suggest-a-mostly-cold,-harsh-environment

A new Martian climate model suggests a mostly cold, harsh environment

“Very early in Mars’ history, maybe 4 billion years ago, the planet was warm enough to support lakes and river networks,” Kite told Ars. “There were seas, and some of those seas were as big as the Caspian Sea, maybe bigger. It was a wet place.” This wet period, though, didn’t last long—it was too short to make the landscape deeply weathered and deeply eroded.

Kite’s team used their model to focus on what happened as the planet got colder, when the era of salts started. “Big areas of snowmelts created huge salt flats, which eventually built up over time, accumulating into a thick sedimentary deposit Curiosity rover is currently exploring,” Kite said. But the era of salts did not mark the end of liquid water on the Martian surface.

Flickering habitability

The landscape turned arid, judging by Earth’s standards, roughly 3.5 billion years ago. “There were long periods when the planet was entirely dry,” Kite said. During these dry periods, Mars was almost as cold as it is today. But once in a while, small areas with liquid water appeared on the Martian surface like oases amidst an otherwise unwelcoming desert. It was a sterile planet with flickering, transient habitable spots fed by melted snow.

This rather bleak picture of the Martian landscape’s evolution makes questions about our chances of finding traces of life there tricky.

“You can do a thought experiment where you take a cup of water from the Earth’s ocean and pour it into one of those transient lakes on Mars,” Kite said. “Some microbes in this cup of water would do fine in such conditions.” The bigger question, he thinks, is whether life could originate (rather than just survive) on ancient Mars. And, perhaps more critically, whether hypothetical life that originated even before the salts era, when the planet was warm and wet, could persist in the oases popping up in Kite’s model.

The answer, sadly, is probably not.

A new Martian climate model suggests a mostly cold, harsh environment Read More »