Features


Ars Technica’s top 20 video games of 2024


A relatively light year still had its fair share of interactive standouts.

Credit: Aurich Lawson | Getty Images

When we introduced last year’s annual list of the best games in this space, we focused on how COVID delays led to a 2023 packed with excellent big-name blockbusters and intriguing indies that seemed to come out of nowhere. The resulting flood of great titles made it difficult to winnow the year’s best down to our traditional list of 20 titles.

In 2024 we had something close to the opposite problem. While there were definitely a few standout titles that were easy to include on this year’s list (Balatro, UFO 50, and Astro Bot likely chief among them), rounding out the list to a full 20 proved more challenging than it has in perhaps any other year during my tenure at Ars Technica (way back in 2012!). The games that ended up on this year’s list are all strong recommendations, for sure. But many of them might have had more trouble making a Top 20 list in a packed year like 2023.

We’ll have to wait to see if the release calendar seesaws back to a quality-packed one in 2025, but the forecast for big games like Civilization 7, Avowed, Doom: The Dark Ages, Grand Theft Auto 6, and many, many more has us thinking that it might. In the meantime, here are our picks for the 20 best games that came out in 2024, in alphabetical order.

Animal Well

Billy Basso; Windows, PS5, Xbox Series X/S, Switch

The Metroidvania genre has started to feel a little played out of late. Go down this corridor, collect that item, go back to the wall that can only be destroyed by said item, explore a new corridor for the next item, etc. Repeat until you’ve seen the entire map or get too bored to continue.

Animal Well eschews this paint-by-numbers design and brings back the sense of mystery inherent to the best games in the genre. This is done in part through some masterful pixel-art graphics, which incorporate some wild 2D lighting effects and a detailed, painterly sprite world. The animations—from subtle movements of the lowliest flower to terrifying, screen-filling actions from the game’s titular giant animals—are handled with equal aplomb.

But Animal Well really shines in its often inscrutable map and item design. Many key items in the game have multiple uses that aren’t fully explained in the game itself, requiring a good deal of guessing, checking, and observation to figure out how to exploit them fully. Uncovering the arcane secrets of the game’s multiple environmental blocks is often far from obvious and rewards players who like to experiment and explore.

Those arcane secrets can sometimes seem too obtuse for their own good—don’t be surprised if you have to consult an outside walkthrough or work with someone to bust past some of the most inscrutable barriers put in your way. If you soldier through, though, you’ll be rewarded with one of the most memorable journeys of its type.

-Kyle Orland

Astro Bot

Team Asobi; PS5

Astro Bot is an unlikely success story. The team that made it, Team Asobi, was for years dedicated to making small-scale projects that were essentially glorified tech demos for Sony’s latest hardware. First there was The Playroom, which was just a collection of small experiences made to show off the features of the PlayStation 4’s camera peripheral. Then there was Astro Bot Rescue Mission, which acted as a showcase for the first PlayStation VR headset.

But momentum really picked up with Astro’s Playroom, the bite-size 2020 3D platformer that was bundled with every PlayStation 5—again to show off the hardware features. When I played it, my main thought was, “I really wish this team would make a full-blown game.”

That’s exactly what Astro Bot is: a 15-hour-long 3D platformer with AAA production values and no goal other than just being an excellent game. Like its predecessors, it fully leverages all the hardware features of the PlayStation 5, and it’s loaded with Easter eggs and fan service for players who’ve been playing PlayStation consoles for three decades.

Like many 3D platformers, it’s a collect-a-thon. In this case, you’re gathering more than 300 little robot friends. All of them are modeled after characters from other games that defined the PlayStation platform, from Resident Evil to Ico to The Last of Us, from the obscure to the well-known.

Between those Easter eggs, the tightly designed gameplay, and the upbeat music, there’s an ever-present air of joy and celebration in Astro Bot—especially for players who get the references. But even if you’ve never played any of the games it draws on, it’s an excellent 3D platformer—perhaps the best released on any platform in the seven years since 2017’s Super Mario Odyssey.

The PlayStation 5 will arguably be best remembered for beefy open-world games, serious narrative titles, and multiplayer shooters. Amid all that, I don’t think anybody expected one of the best games ever released for the console to be a platformer that in some ways would feel more at home on the PlayStation 2—but that’s what happened, and I’m grateful for the time I spent with it.

The only negative thing I have to say about it is that because of how it leverages the specific features of the DualSense controller, it’s hard to imagine it’ll ever be playable for anyone who doesn’t own that device.

Is it worth buying a PS5 just to play Astro Bot? Probably not—as beefy as it is compared to Astro’s Playroom, there’s not enough here to justify that. But if you have one and you haven’t played it yet, get on it, because you’re missing out.

-Samuel Axon

Balatro

LocalThunk; Windows, PS4/5, Xbox One/Series, Switch, macOS, iOS, Android

At first glance, video poker probably seems a bit too random to serve as the basis of yet another rogue-like deck-builder experience. As anyone who’s been to Atlantic City can tell you, video poker’s hold-and-draw hand-building involves only the barest hint of strategy and is designed so the house always wins.

The genius of Balatro, then, is in the layers of strategy it adds to this simple, easy-to-grasp poker hand base. The wide variety of score or hand-modifying jokers that you purchase in between hands can be arranged in literally millions of combinations, each of which can change the way a particular run goes in ways both large and small. Even the most powerful jokers can become nearly useless if you run into the wrong debuffing Boss Blind, forcing you to modify your strategy mid-run to keep the high-scoring poker hands coming.

Then you add in a complex in-game economy, powerful deck-altering arcana cards, dozens of Deck and Stake difficulty options, and a Challenge Mode whose hardest options have continued to thwart me even after well over 100 hours of play. The result is a game that’s instantly compelling and as addictive as a heater at a casino, only without the potential to lose your mortgage payment to a series of bad bets.

-Kyle Orland

The Crimson Diamond

Julia Minamata; Windows, Mac

Would you like to spend some time in a rural vacation town in Ontario, Canada, doing mineralogy fieldwork in the off-season? A better question, then: Would you like to go there in 16-color EGA, wandering through a classic adventure game, text parser and all?

The Crimson Diamond is one of the most intriguing gaming trips I took this year. It’s an achievement in creative constraints: a cozy mystery with an ear-catching soundtrack. It was made by a solo Canadian developer, inside Adventure Game Studio, with some real work put into upgrading the text parser and (optional) mouse experience with reasonable quality-of-life concessions. The charming but mysterious plot gently pulls you along from one wonderfully realized 1980s-era IBM backdrop to the next. It feels like playing a game you forgot to unbox, except this one actually plays without a dozen compatibility tricks.

We are awash in game remasters and light remakes that toy with our memories of old systems and forgotten genres (and of having a lot more time to play games). The Crimson Diamond does something much more interesting, finding just the right new story and distinct style to port backward to a bygone era. It’s worth the clicks for any fan of pointing, clicking, and investigating.

-Kevin Purdy

Elden Ring: Shadow of the Erdtree

From Software; Windows, PS4/5, Xbox One/Series

Credit: Bandai Namco

You can tell downloadable content has come a long way when we deign to put a piece of DLC on our best-games-of-the-year list in 2024.

Released two years after the megaton zeitgeist hit that was the original Elden Ring, Shadow of the Erdtree bucks just as many modern gaming conventions as its base game did. Yes, it’s DLC because it’s digitally distributed add-on content, but while most AAA games get DLC that adds about 5–10 hours of stuff to do, Shadow of the Erdtree’s scope actually fell somewhere between a full sequel and the expansions you’d buy in separate retail boxes for PC games in the 1990s and 2000s. It adds a large new landmass to explore, with multiple additional “legacy dungeons” and tentpole bosses, plus new mechanics and weapons aplenty.

Additionally, the game’s designers cleverly mapped a whole new system on top of the base game’s player levels and equipment, ensuring that you could still get the same satisfying sense of power progression, regardless of how deeply you’d gotten into the Elden Ring base game previously. Shadow of the Erdtree attracted some criticism for its sharp difficulty—even more so than the base game—but the same satisfying progression from hopelessness to triumph through perseverance as we enjoyed in 2022 is in play here.

As such, everything that was great about Elden Ring is also great about Shadow of the Erdtree. You could cynically call it more of the same, but when what we’re getting more of is so delicious, it’s hard to complain—especially given that this expansion includes some of the most compelling and original bosses in From Software history.

It didn’t convert anyone who didn’t dig the original, but fortunately that still left it with an audience of millions of people who were excited for a new challenge. Its impressive polish and scope land it on this list.

-Samuel Axon

Frostpunk 2

11 bit studios; Windows, Mac (ARM-based); PlayStation, Xbox (coming in 2025)

Credit: 11 Bit Studios

The first Frostpunk asked if you could make the terrible decisions necessary for survival in a never-ending blizzard. Frostpunk 2 asks you to manage something different, but no less dire: helping these people who managed to hold on keep their nascent civilization together. You can go deep with the fascists, the mystics, the hard-nosed realists, or the science nerds or try to play them all off one another to keep the furnace going.

It’s stressful, and it’s not at all easy, and the developers may have done too good a job of recreating the insane demands and interplay of human factions. The interface and navigation, sore spots on launch, have received a lot of attention, and the roadmap for the game into 2025 looks intriguing. I’d probably recommend starting with the first game before diving into this, but Frostpunk 2 is an accomplished, confident game in its own right. Human failings amidst an unfeeling snowpocalypse make for some engaging scenes.

-Kevin Purdy

Halls of Torment

Chasing Carrots; Windows, Linux, iOS, Android

Credit: Chasing Carrots

The isometric demon-killing of the old-school Diablo games has endured over the decades, especially among those who remember playing in their youth. But the whole concept has been in need of a bit of an update now that we’re in the age of Vampire Survivors and its “Bullet Heaven” auto-shooter ilk.

Enter Halls of Torment, a game that is probably as close as it comes to aping old-school Diablo’s visual style without being legally actionable. Here, though, all the lore and story and exploration of the Diablo games has been replaced by a lot more enemies, all streaming toward your protagonist at the rate of up to 50,000 per hour. Much like Vampire Survivors, the name of the game here is dodging through those small holes in those swarms of enemies while your character automatically fills the screen with devastating attacks (that slowly level up as you play).

The wide variety of playable classes—each with its own distinct strengths, weaknesses, and unique attack patterns—helps each run feel distinct, even after you’ve smashed through the game’s limited set of six environments. But there’s plenty of cathartic replay value here for anyone who just wants to cause as much on-screen carnage as possible in a very short period of time.

-Kyle Orland

Helldivers 2

Arrowhead Game Studios; Windows, PS5


Credit: PlayStation/Arrowhead

Every so often, a multiplayer game releases to almost universal praise, and for a few months, seemingly everyone is talking about it. This year, that game was Helldivers 2. The game converted the 2015 original’s top-down shoot-em-up gameplay into third-person shooter action, and the switch-up was enough to bring in tons of players. Taking more than a few cues from Starship Troopers, the game asks you to “spread democracy” throughout the galaxy by mowing down hordes of alien bugs or robots during short-ish missions on various planets. Work together with your team, and the rest of the player base, to slowly liberate the galaxy.

I played Helldivers 2 mostly as a “hang out game,” something to do with my hands and eyes as I chatted with friends. You can play the game “seriously,” I guess, but that would be missing the point for me. My favorite part of Helldivers 2 is just blowing stuff up—bugs, buildings, and, yes, even teammates. Friendly fire is a core part of the experience, and whether by accident or on purpose, you will inevitably end up turning your munitions on your friends. My bad!

Some controversial balance patches put the game into a bit of an identity crisis for a while, but things seem to be back on track. I’ll admit the game didn’t have the staying power for me that it seemed to for others, but it was undeniably a highlight of the year.

-Aaron Zimmerman

Indiana Jones and the Great Circle

MachineGames; Windows, PS5, Xbox Series X/S

Credit: Bethesda / MachineGames

A new game based on the Indiana Jones license has a lot to live up to, both in terms of the films and TV shows that inspired it and the well-remembered games that preceded it. The Great Circle manages to live up to those expectations, crafting the most enjoyable Indy adventure in years.

The best part of the game is contained in the main storyline, captured mainly in exquisitely madcap cut scenes featuring plenty of the pun-filled, devil-may-care quippiness that Indy is known for. Voice actor Troy Baker does a great job channeling Harrison Ford (by way of Jeff Goldblum) as Indy, while antagonist Emmerich Voss provides all the scenery-chewing Nazi shenanigans you could want from the ridiculous, magical-realist storyline.

The stealth-action gameplay is a little more pedestrian but still manages to distinguish itself with suitably crunchy melee combat and the enjoyable improvisation of attacking Nazis with everyday items, from wrenches to plungers. And while the puzzles and side-quests can feel a bit basic at times, there are enough hidden trinkets in out-of-the-way corners to encourage completionists to explore every inch of the game’s intricately detailed open-world environments.

It’s just the kind of light-hearted, escapist, exploratory fun we need in these troubled times. Welcome back, Indy! We missed you!

-Kyle Orland

The Legend of Zelda: Echoes of Wisdom

Nintendo; Switch

After decades and decades of Zelda games where you don’t actually play as Zelda, there was a lot of pressure on a game where you finally get to control the princess herself as the protagonist (those CD-i monstrosities from the ’90s are best forgotten). Fortunately, Zelda’s turn in full control of her own series captures the franchise’s old-school, light-hearted adventuring fun with a few modern twists.

The main draw here is the titular “echo” abilities, which let Zelda use a special wand to copy enemies and objects, then summon multiple copies of them. This eventually opens up to allow for a number of inventive ways to solve some intricate puzzles in what feels like a heavily simplified version of Tears of the Kingdom’s more complex crafting tools.

My favorite bit in Echoes of Wisdom, though, might be summoning copies of defeated enemies to fight new enemies in a kind of battle royale. As much as I love Link’s sword-swinging antics (which are partially captured here), just watching these magical minions do my combat for me is more than half of the fun in Echoes of Wisdom.

Even without those twists, though, Echoes of Wisdom provides all the old-school 2D Zelda dungeon exploring you could hope for, and the lighthearted storyline to match. Here’s hoping this isn’t the last time we’ll see Zelda taking a starring role in her own legends.

-Kyle Orland

Llamasoft: The Jeff Minter Story

Digital Eclipse; Windows, PS4/5, Xbox One/Series, Switch

I went into this latest Digital Eclipse playable museum as someone who had only a passing familiarity with Jeff Minter. I knew him mainly as a revered game development elder with a penchant for psychedelic graphics and an association with Tempest 2000. Then I spent hours devouring The Jeff Minter Story on my Steam Deck during a long flight, soaking in the full history of a truly unique character in the annals of gaming history.

The emulated games on this collection are actually pretty hit or miss, from a modern game design perspective. But the collection of interviews and supporting material on offer here is top-notch, putting each game into the context of the time and of Minter’s personal journey through an industry that was still in its infancy. I especially liked the scanned versions of Minter’s newsletter, sent to his early fans, which included plenty of diatribes and petty dramas about his game design peers and the industry as a whole.

There are few other figures in the early history of games who would merit this kind of singular focus—even early games were too often the collaborative product of larger companies with a more corporate focus (as seen in Digital Eclipse’s previous Atari 50). But I reveled in this opportunity to get to know Minter better for his unique and quixotic role in early gaming history.

-Kyle Orland

Lorelei and the Laser Eyes

Simogo; Windows, PS4/5, Switch

You’d be forgiven for finding Lorelei and the Laser Eyes at least a little pretentious. Everything from the black-and-white presentation—laced with only the occasional flash of red for emphasis—to the Twin Peaks-style absurdist writing to the grand pronouncements on the Importance and Beauty of Art makes for a game that feels like it’s trying a little too hard at points.

Push past that surface, though, and you’ll find one of the most intricately designed interactive puzzle boxes ever committed to bits and bytes. Lorelei goes well beyond the simple tile-pushing and lock-picking tasks that are laughably called “puzzles” in most other adventure games. The mind-teasers here require real out-of-the-box thinking, careful observation of the smallest environmental details, and multi-step deciphering of arcane codes.

This is a game that’s not afraid to cut you off from massive chunks of its content if you’re not able to get past a single near-inscrutable locked door puzzle—don’t feel bad if you need to consult a walkthrough at some point to move on. It’s also a game that practically requires pen-and-paper notes to keep track of all the moving pieces—your notebook will look like the scribblings of a madman by the time you’re done.

And you may actually feel a little mad as you try to unravel the meaning of the game’s multiple labyrinthine layers and self-aware, time-bending, magical-realist storyline. When it all comes together at the end, you may just find yourself surprisingly moved not just by the intricate design, but by that oft-pretentious plot as well.

-Kyle Orland

Metal Slug Tactics

Leikir Studio; Windows, PS4/5, Xbox One/Series, Switch

Credit: Dotemu

Good tactics games are all about how the game plays in your mind. How many ways can you overcome this obstacle, maximize this turn, and synergize your squad’s abilities? So it’s a very nice bonus when such a well-made pile of engaging decisions also happens to look absolutely wonderful and capture the incredibly detailed sense of motion of a legendary run-and-gun franchise.

That’s Metal Slug Tactics, and it’s one of the most surprising successes of 2024. It delivers the look and feel of a franchise that isn’t easy to get right, and it translates those games’ feeling of continuous motion into turn-based tactics. The more you move, shoot, and team up each turn, the better you’ll do. You can do a level or two on a subway ride, crank out a rogue-ish run on a lunch break, and keep getting rewarded with new characters, unlocks, and skill tree branches.

If you’re always contemplating a replay of Final Fantasy Tactics, but might like a new challenge, consider giving this unlikely combination of goofy arcade revival and deep strategy a go.

-Kevin Purdy

Parking Garage Rally Circuit

Walaber Entertainment; Windows

The rise of popular, ultra-detailed racing sims like Gran Turismo and Forza Motorsport has coincided with a general decline in the drift-heavy, arcadey side of the genre. Titles like Ridge Racer, Sega Rally, or even Crazy Taxi now belong more in the industry’s nostalgic memories than its present top-sellers.

Parking Garage Rally Circuit is doing its best to bring the arcade racer genre back single-handedly. The game’s perfectly tuned drifting controls make every turn a joyful sliding experience, complete with chainable post-drift boosts for those who want to tune that perfect speedy line. It captures that great feeling of being just on the edge of losing control, while still holding onto the edge of that perfect drift.

It all takes place, as the name implies, winding up, down, over, and through some well-designed parking garages. Each track’s short laps (which take a minute or less) ensure you can practically memorize the best paths after just a little bit of play. But the stopped cars and large traffic dividers provide for some hilarious physics-based crashes when your racing line does go wrong.

The game earns extra nostalgia points for a variety of visual effects and graphics options that accurately mimic Dreamcast-era consoles and/or emulation-era PC hardware. But the game’s extensive online leaderboards and ghost-racers help it feel like a decidedly modern take on a classic genre.

-Kyle Orland

Pepper Grinder

Ahr Ech; Windows, macOS, Linux, PS4/5, Xbox One/Series, Switch

It’s amazing how far a platform game can get on nothing more than a novel control scheme. Pepper Grinder is a case in point here, based around a tiny, blue-haired protagonist who uses an oversized drill to tunnel through soft ground like some kind of human-machine-snake hybrid.

Navigating through the dirt in large, undulating curves is fun enough, but the game really shines when Pepper bursts out through the top layer of dirt in large, arcing jumps. Chaining these together, from dirt clump to dirt clump, is the most instantly compelling new 2D navigation scheme we’ve seen in years and creates some beautiful, almost balletic curves through the levels once you’ve mastered it.

The biggest problem with Pepper Grinder is that the game is over practically before it really gets going. And while some compelling time-attack and item-collection challenges help to extend the experience a bit, we really hope some new DLC or a proper sequel is coming soon to give us a new excuse to wind our way through the dirt.

-Kyle Orland

The Rise of the Golden Idol

Color Gray Games; Windows, macOS, iOS, Android, PS4/5, Xbox One/Series, Switch

In 2022, The Case of the Golden Idol proved that details like a controllable protagonist or elaborate cut scenes are unnecessary for a good murder mystery-solving puzzle adventure. All you need is a series of lightly animated vignettes, a way to uncover the smallest hidden details in those vignettes, and a fill-in-the-blanks interface to let you piece together the disparate clues.

As a follow-up, Rise moves the 18th-century setting of the original into the 20th century, bringing the mysterious, powerful idol to the attention of both academic scientists and mass media executives who seek to exploit its mind-altering powers. From your semi-omniscient perspective, you have to figure out not just the names and motivations of those pulled into the idol’s orbit, but the somewhat inscrutable powers of the idol itself.

Rise adds a few interesting new twists, like the ability to scrub through occasional video animations for visual clues and the ability to track certain scenes throughout multiple times of day. At its core, though, this is an extension of the best-in-class, pure deductive reasoning gameplay we saw in the original game, with a slightly more modern twist. This is yet another 2024 favorite that requires strong attention to detail and logical inference from very small hints.

It all comes together in a satisfying conclusion that leaves the door wide open for a sequel that we can’t wait for.

-Kyle Orland

Satisfactory

Coffee Stain Studios; Windows

Swedish publisher Coffee Stain makes gaming success seem so simple. Put a game with a goofy feel, a corporate dystopia, and complex systems into early access. Iterate, cultivating feedback and a sense of ownership within the community. Take your time, refine, and then release the game without any revenue-grabbing transactions or add-ons beyond fun cosmetics. They did it with Deep Rock Galactic, and they’ve done it again with Satisfactory. By the time the game hit 1.0, the only thing left to add was “Premium plumbing.”

It’s remarkably fun to live on the bad-guy side of The Lorax, exploiting a planet’s natural resources to create a giant factory system producing widgets for corporate wellbeing. The first-person perspective might seem odd for a game with such complex systems, but it heightens your sense of accomplishment. You didn’t just choose to put a conveyor belt between that ore extractor and that fabricator; you personally staked out that deposit, and you ran the track yourself.

The systems are incredibly deep, but it can be quite relaxing to wander around your planet-sized industrial park, thinking up ways that things might run better, faster, with no interruptions. It’s the kind of second job you’re happy to buy into, giving you a deep sense of accomplishment for learning the ins and outs of this system, even as it gently mocks you for engaging with it. Satisfactory has itself worked for years to refine the most efficient gameplay for its bravest fans, and now it’s ready to employ the rest of us.

-Kevin Purdy

Sixty Four

Oleg Danilov; Windows, Mac

I try not to think about, or write about, games in the manner of “dollars to enjoyment ratio.” Games often get cheaper over time, everybody enjoys them differently, and they’re art, too, as well as commerce. But, folks, come on: $6 for Sixty Four? If you play it for one hour and just smile a few times at its oddities and tiny cubes, that was less than a big-city latte or beer.

But you will almost certainly play Sixty Four for more than one hour, and maybe many more hours than that if you enjoy games with systems, building, and resources. You build and place machines to extract resources, use those resources to fund new and better machines, rearrange your machines, and eventually create beautiful workflows that are largely automated. Why do you do this? It’s a fun, dark mystery.

The game looks wonderful in its SimCity 2000/3000-esque style. It can be mentally taxing, but you can’t really lose; you can even leave the game window open in the background while you convince your boss or remote work software that you’re otherwise productive. It’s a fever dream I’d recommend to most anybody, unless they dread a repeat of the many days lost to games like Factorio, Satisfactory, or even Universal Paperclips. Just wishlist it, in that case; what could go wrong?

-Kevin Purdy

Tactical Breach Wizards

Suspicious Developments; Windows

Credit: Suspicious Developments

What can you do to spice up turn-based tactics, a rather mature genre?

Tactical Breach Wizards adds future-seeing, time-bending, hex-placing wizards, for one thing. It refines the heck out of grid combat, for another, adding window-tossing and door-sealing into the mix, and giving enemies a much wider array of attacks than area-of-effect variations. Finally, it wraps this all up in an inventive sci-fi narrative, one with an engaging plot, characters that reveal themselves one quip at a time, and an overall sense of wonderment at a charming, bizarre world of militarized magic.

In other words, you could put some joy into your turn-based combat, while still offering intricate challenges and clever levels.

Tom Francis’ unique sense of humor, seen previously in Gunpoint and Heat Signature, is given space here to shine, and it’s a wonderful wrapper for all the missions and upgrade decisions. It’s pretty ridiculous to be a wizard, wielding a laser-scoped rifle that fires crystal energy, plotting how to hit three guards at once with your next blast. Tactical Breach Wizards knows this, jokes about it, and then celebrates with you when you pull it off. It’s both a hoot, and a very good shoot.

-Kevin Purdy

UFO 50

Mossmouth; Windows

In recent years, modern games have started evoking the blocky polygons and smeary textures of early 3D games to appeal to nostalgic 20- and 30-somethings. UFO 50 has its nostalgic foot placed firmly in an earlier generation of ’80s and ’90s console gaming, with a bit of early ’00s Flash game design thrown in for good measure.

Flipping your way through the extremely wide variety of games on offer here is like an eminently enjoyable trip through random titles in an emulator’s (legally obtained) classic ROMs folder—just set in an alternate universe. There are plenty of shmups and platform games befitting the ostensible gaming era being recreated—but you also get full-fledged strategy, puzzle, arcade, racing, adventure, and RPG titles, on top of a few so unique that I can’t find any real historical genre analog for them. These well-designed titles evoke the classics—everything from Bad Dudes and Bubble Bobble to Super Dodge Ball and Smash TV—without ever feeling like a simple rehash of games you remember from your youth.

While there’s the usual spread of quality you’d expect from such a wide-ranging collection, even the worst-made title in UFO 50 shows a level of care and attention to detail that will delight anyone with even a passing interest in game design and/or history. Not every game in UFO 50 will be one of your all-time favorites, but I’d be willing to wager that any gamer of a certain age will find quite a few that will eat away plenty of pleasant, nostalgic hours.

-Kyle Orland


Kyle Orland has been the Senior Gaming Editor at Ars Technica since 2012, writing primarily about the business, tech, and culture behind video games. He has journalism and computer science degrees from University of Maryland. He once wrote a whole book about Minesweeper.



Why AI language models choke on too much text


Compute costs scale with the square of the input size. That’s not great.

Credit: Aurich Lawson | Getty Images

Large language models represent text using tokens, each of which is a few characters. Short words are represented by a single token (like “the” or “it”), whereas larger words may be represented by several tokens (GPT-4o represents “indivisible” with “ind,” “iv,” and “isible”).
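A toy sketch can make subword tokenization concrete. This is a simple greedy longest-match scheme with an invented vocabulary chosen to reproduce the article's example split; it is not OpenAI's actual tokenizer, which uses a learned byte-pair-encoding vocabulary of roughly 200,000 entries:

```python
# Toy greedy longest-match subword tokenizer. The vocabulary is invented
# for illustration; real models learn theirs from data (e.g. via BPE).
VOCAB = {"the", "it", "ind", "iv", "isible", "token", "s"}

def tokenize(word: str) -> list[str]:
    """Split a word into the longest vocabulary pieces, left to right."""
    tokens, i = [], 0
    while i < len(word):
        # Try the longest remaining substring first.
        for j in range(len(word), i, -1):
            if word[i:j] in VOCAB:
                tokens.append(word[i:j])
                i = j
                break
        else:
            # Unknown character: fall back to emitting it as its own token.
            tokens.append(word[i])
            i += 1
    return tokens

print(tokenize("the"))          # short word -> one token
print(tokenize("indivisible"))  # longer word -> several tokens
```

With this vocabulary, "the" comes back as a single token while "indivisible" splits into "ind", "iv", and "isible", mirroring the split described above.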

When OpenAI released ChatGPT two years ago, it had a memory—known as a context window—of just 8,192 tokens. That works out to roughly 6,000 words of text. This meant that if you fed it more than about 15 pages of text, it would “forget” information from the beginning of its context. This limited the size and complexity of tasks ChatGPT could handle.
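The ratios the article uses (8,192 tokens works out to roughly 6,000 words, or about 15 pages) make for an easy back-of-the-envelope converter. Here is a minimal sketch; the 0.75-words-per-token and 400-words-per-page constants are rough heuristics derived from those figures, not exact values:

```python
# Rough context-window size converter.
# Assumed ratios: ~0.75 words per token, ~400 words per page
# (derived from 8,192 tokens ~ 6,000 words ~ 15 pages).

WORDS_PER_TOKEN = 0.75
WORDS_PER_PAGE = 400

def context_in_words(tokens: int) -> int:
    """Approximate how many English words fit in a context window."""
    return round(tokens * WORDS_PER_TOKEN)

def context_in_pages(tokens: int) -> int:
    """Approximate how many pages of text fit in a context window."""
    return round(tokens * WORDS_PER_TOKEN / WORDS_PER_PAGE)

print(context_in_words(8_192))  # ~6,144 words
print(context_in_pages(8_192))  # ~15 pages
```

By the same heuristic, a 128,000-token window holds on the order of a couple hundred pages, in line with the estimates below.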

Today’s LLMs are far more capable:

  • OpenAI’s GPT-4o can handle 128,000 tokens (about 200 pages of text).
  • Anthropic’s Claude 3.5 Sonnet can accept 200,000 tokens (about 300 pages of text).
  • Google’s Gemini 1.5 Pro allows 2 million tokens (about 2,000 pages of text).

Still, it’s going to take a lot more progress if we want AI systems with human-level cognitive abilities.

Many people envision a future where AI systems are able to do many—perhaps most—of the jobs performed by humans. Yet human workers read and hear hundreds of millions of words during their working years—and they absorb even more information from sights, sounds, and smells in the world around them. To achieve human-level intelligence, AI systems will need the capacity to absorb similar quantities of information.

Right now the most popular way to build an LLM-based system to handle large amounts of information is called retrieval-augmented generation (RAG). These systems try to find documents relevant to a user’s query and then insert the most relevant documents into an LLM’s context window.

This sometimes works better than a conventional search engine, but today’s RAG systems leave a lot to be desired. They only produce good results if the system puts the most relevant documents into the LLM’s context. But the mechanism used to find those documents—often, searching in a vector database—is not very sophisticated. If the user asks a complicated or confusing question, there’s a good chance the RAG system will retrieve the wrong documents and the chatbot will return the wrong answer.

And RAG doesn’t enable an LLM to reason in more sophisticated ways over large numbers of documents:

  • A lawyer might want an AI system to review and summarize hundreds of thousands of emails.
  • An engineer might want an AI system to analyze thousands of hours of camera footage from a factory floor.
  • A medical researcher might want an AI system to identify trends in tens of thousands of patient records.

Each of these tasks could easily require more than 2 million tokens of context. Moreover, we’re not going to want our AI systems to start with a clean slate after doing one of these jobs. We will want them to gain experience over time, just like human workers do.

Superhuman memory and stamina have long been key selling points for computers. We’re not going to want to give them up in the AI age. Yet today’s LLMs are distinctly subhuman in their ability to absorb and understand large quantities of information.

It’s true, of course, that LLMs absorb superhuman quantities of information at training time. The latest AI models have been trained on trillions of tokens—far more than any human will read or hear. But a lot of valuable information is proprietary, time-sensitive, or otherwise not available for training.

So we’re going to want AI models to read and remember far more than 2 million tokens at inference time. And that won’t be easy.

The key innovation behind transformer-based LLMs is attention, a mathematical operation that allows a model to “think about” previous tokens. (Check out our LLM explainer if you want a detailed explanation of how this works.) Before an LLM generates a new token, it performs an attention operation that compares the latest token to every previous token. This means that conventional LLMs get less and less efficient as the context grows.
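The per-token comparison described above can be sketched in a few lines of pure Python. This toy single-head attention scores one new token's query against the key of every previous token, so the work done for each new token grows with the context length; the vectors and dimensions here are made-up toy values, not anything from a real model.

```python
import math

def softmax(xs):
    """Turn raw scores into weights that sum to 1."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attend(query, keys, values):
    """Single-head attention for one query over all previous tokens.

    Cost is O(len(keys)): one dot product per previous token, which is
    why total work over a whole sequence grows quadratically.
    """
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

# Toy context: three previous tokens with 2-d keys and values.
keys = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
values = [[10.0, 0.0], [0.0, 10.0], [5.0, 5.0]]
out = attend([1.0, 0.0], keys, values)
print(out)  # a weighted mix leaning toward the first and third tokens
```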

Lots of people are working on ways to solve this problem—I’ll discuss some of them later in this article. But first I should explain how we ended up with such an unwieldy architecture.

The “brains” of personal computers are central processing units (CPUs). Traditionally, chipmakers made CPUs faster by increasing the frequency of the clock that acts as its heartbeat. But in the early 2000s, overheating forced chipmakers to mostly abandon this technique.

Chipmakers started making CPUs that could execute more than one instruction at a time. But they were held back by a programming paradigm that requires instructions to mostly be executed in order.

A new architecture was needed to take full advantage of Moore’s Law. Enter Nvidia.

In 1999, Nvidia started selling graphics processing units (GPUs) to speed up the rendering of three-dimensional games like Quake III Arena. The job of these PC add-on cards was to rapidly draw thousands of triangles that made up walls, weapons, monsters, and other objects in a game.

This is not a sequential programming task: triangles in different areas of the screen can be drawn in any order. So rather than having a single processor that executed instructions one at a time, Nvidia’s first GPU had a dozen specialized cores—effectively tiny CPUs—that worked in parallel to paint a scene.

Over time, Moore’s Law enabled Nvidia to make GPUs with tens, hundreds, and eventually thousands of computing cores. People started to realize that the massive parallel computing power of GPUs could be used for applications unrelated to video games.

In 2012, three University of Toronto computer scientists—Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton—used a pair of Nvidia GTX 580 GPUs to train a neural network for recognizing images. The massive computing power of those GPUs, which had 512 cores each, allowed them to train a network with a then-impressive 60 million parameters. They entered ImageNet, an academic competition to classify images into one of 1,000 categories, and set a new record for accuracy in image recognition.

Before long, researchers were applying similar techniques to a wide variety of domains, including natural language.

Recurrent neural networks (RNNs) worked fairly well on short sentences, but they struggled with longer ones—to say nothing of paragraphs or longer passages. When reasoning about a long sentence, an RNN would sometimes “forget about” an important word early in the sentence. In 2014, computer scientists Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio discovered they could improve an RNN’s performance by adding an attention mechanism that allowed the network to “look back” at earlier words in a sentence.

In 2017, Google published “Attention Is All You Need,” one of the most important papers in the history of machine learning. Building on the work of Bahdanau and his colleagues, Google researchers dispensed with the RNN and its hidden states. Instead, Google’s model used an attention mechanism to scan previous words for relevant context.

This new architecture, which Google called the transformer, proved hugely consequential because it eliminated a serious bottleneck to scaling language models.

Here’s an animation illustrating why RNNs didn’t scale well:

This hypothetical RNN tries to predict the next word in a sentence, with the prediction shown in the top row of the diagram. This network has three layers, each represented by a rectangle. It is inherently linear: it has to complete its analysis of the first word, “How,” before passing the hidden state back to the bottom layer so the network can start to analyze the second word, “are.”

This constraint wasn’t a big deal when machine learning algorithms ran on CPUs. But when people started leveraging the parallel computing power of GPUs, the linear architecture of RNNs became a serious obstacle.
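The sequential dependence is easy to see in code: each step's hidden state feeds the next step, so step t cannot begin until step t-1 finishes, no matter how many parallel cores are available. This is a generic toy recurrence for illustration, not any particular RNN architecture:

```python
import math

def rnn_step(hidden, token_vec, w_h=0.5, w_x=1.0):
    """One toy RNN step: the new hidden state mixes old state and new input."""
    return [math.tanh(w_h * h + w_x * x) for h, x in zip(hidden, token_vec)]

def run_rnn(tokens, hidden_size=4):
    hidden = [0.0] * hidden_size
    for tok in tokens:  # inherently serial: step t needs step t-1's result
        hidden = rnn_step(hidden, tok)
    return hidden

# Stand-in vectors for a four-word sentence like "How are you doing".
sentence = [[0.1] * 4, [0.5] * 4, [-0.3] * 4, [0.9] * 4]
final_state = run_rnn(sentence)
print(final_state)  # a fixed-size summary of the whole sequence
```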

The transformer removed this bottleneck by allowing the network to “think about” all the words in its input at the same time:

The transformer-based model shown here does roughly as many computations as the RNN in the previous diagram. So it might not run any faster on a (single-core) CPU. But because the model doesn’t need to finish with “How” before starting on “are,” “you,” or “doing,” it can work on all of these words simultaneously. So it can run a lot faster on a GPU with many parallel execution units.

How much faster? The potential speed-up is proportional to the number of input words. My animations depict a four-word input that makes the transformer model about four times faster than the RNN. Real LLMs can have inputs thousands of words long. So, with a sufficiently beefy GPU, transformer-based models can be orders of magnitude faster than otherwise similar RNNs.

In short, the transformer unlocked the full processing power of GPUs and catalyzed rapid increases in the scale of language models. Leading LLMs grew from hundreds of millions of parameters in 2018 to hundreds of billions of parameters by 2020. Classic RNN-based models could not have grown that large because their linear architecture prevented them from being trained efficiently on a GPU.

See all those diagonal arrows between the layers? They represent the operation of the attention mechanism. Before a transformer-based language model generates a new token, it “thinks about” every previous token to find the ones that are most relevant.

Each of these comparisons is cheap, computationally speaking. For small contexts—10, 100, or even 1,000 tokens—they are not a big deal. But the computational cost of attention grows relentlessly with the number of preceding tokens. The longer the context gets, the more attention operations (and therefore computing power) are needed to generate the next token.

This means that the total computing power required for attention grows quadratically with the total number of tokens. Suppose a 10-token prompt requires 414,720 attention operations. Then:

  • Processing a 100-token prompt will require 45.6 million attention operations.
  • Processing a 1,000-token prompt will require 4.6 billion attention operations.
  • Processing a 10,000-token prompt will require 460 billion attention operations.
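These figures are consistent with attention touching every pair of tokens: C(n, 2) pairs times a fixed per-pair cost. Back-solving from the 10-token figure gives 414,720 / C(10, 2) = 9,216 operations per pair; that constant is inferred from the article's numbers for illustration, not an official specification.

```python
from math import comb

# Per-pair cost inferred from the 10-token figure: 414,720 / C(10, 2) = 9,216.
OPS_PER_PAIR = 414_720 // comb(10, 2)

def attention_ops(n_tokens: int) -> int:
    """Total attention operations to process a prompt of n_tokens."""
    return comb(n_tokens, 2) * OPS_PER_PAIR

for n in (10, 100, 1_000, 10_000):
    print(f"{n:>6} tokens -> {attention_ops(n):,} ops")
```

Multiplying the prompt length by 100 multiplies the attention work by roughly 10,000, which is the quadratic growth described above.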

This is probably why Google charges twice as much, per token, for Gemini 1.5 Pro once the context gets longer than 128,000 tokens. Generating token number 128,001 requires comparisons with all 128,000 previous tokens, making it significantly more expensive than producing the first or 10th or 100th token.

A lot of effort has been put into optimizing attention. One line of research has tried to squeeze maximum efficiency out of individual GPUs.

As we saw earlier, a modern GPU contains thousands of execution units. Before a GPU can start doing math, it must move data from slow shared memory (called high-bandwidth memory) to much faster memory inside a particular execution unit (called SRAM). Sometimes GPUs spend more time moving data around than performing calculations.

In a series of papers, Princeton computer scientist Tri Dao and several collaborators have developed FlashAttention, which calculates attention in a way that minimizes the number of these slow memory operations. Work like Dao’s has dramatically improved the performance of transformers on modern GPUs.
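One numerical trick at the core of such kernels is the "online" softmax: scores can be processed block by block while keeping only a running maximum and running sum, so the full score vector never has to sit in slow memory at once. The pure-Python sketch below shows only this numerical idea; real FlashAttention fuses it with tiled matrix multiplies on the GPU.

```python
import math

def softmax_naive(scores):
    """Standard softmax: needs all scores in memory at once."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def softmax_online(score_blocks):
    """Softmax computed one block at a time with a running max and sum."""
    run_max = float("-inf")
    run_sum = 0.0
    outputs = []
    for block in score_blocks:
        new_max = max(run_max, max(block))
        # Rescale everything seen so far to the new running maximum.
        scale = math.exp(run_max - new_max) if run_max != float("-inf") else 0.0
        run_sum *= scale
        outputs = [o * scale for o in outputs]
        for s in block:
            e = math.exp(s - new_max)
            outputs.append(e)
            run_sum += e
        run_max = new_max
    return [o / run_sum for o in outputs]

scores = [2.0, -1.0, 0.5, 3.0, 1.5, -0.5]
blocks = [scores[0:2], scores[2:4], scores[4:6]]
assert all(abs(a - b) < 1e-12
           for a, b in zip(softmax_naive(scores), softmax_online(blocks)))
print("block-wise softmax matches the naive version")
```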

Another line of research has focused on efficiently scaling attention across multiple GPUs. One widely cited paper describes ring attention, which divides input tokens into blocks and assigns each block to a different GPU. It’s called ring attention because GPUs are organized into a conceptual ring, with each GPU passing data to its neighbor.

I once attended a ballroom dancing class where couples stood in a ring around the edge of the room. After each dance, women would stay where they were while men would rotate to the next woman. Over time, every man got a chance to dance with every woman. Ring attention works on the same principle. The “women” are query vectors (describing what each token is “looking for”) and the “men” are key vectors (describing the characteristics each token has). As the key vectors rotate through a sequence of GPUs, they get multiplied by every query vector in turn.

In short, ring attention distributes attention calculations across multiple GPUs, making it possible for LLMs to have larger context windows. But it doesn’t make individual attention calculations any cheaper.
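The rotation itself can be simulated in a few lines: split the keys into blocks, one per hypothetical "device," and pass each block around the ring until every device has scored its local queries against every key. This is a serial toy simulation of the communication pattern only; the shapes are made up, and a real implementation also rotates value vectors and overlaps communication with computation.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def ring_attention_scores(query_blocks, key_blocks):
    """Score every query against every key by rotating key blocks
    around a ring of 'devices'. Returns one score row per query."""
    n_dev = len(query_blocks)
    # scores[d][q] maps global key index -> score for device d's q-th query.
    scores = [[{} for _ in qb] for qb in query_blocks]
    # Tag each key with its global index so results can be reassembled.
    blocks, idx = [], 0
    for kb in key_blocks:
        blocks.append([(idx + i, k) for i, k in enumerate(kb)])
        idx += len(kb)
    for _ in range(n_dev):                  # n_dev rotation steps
        for d in range(n_dev):              # each device works "in parallel"
            for qi, q in enumerate(query_blocks[d]):
                for ki, k in blocks[d]:
                    scores[d][qi][ki] = dot(q, k)
        blocks = blocks[-1:] + blocks[:-1]  # pass each block to the neighbor
    return scores

queries = [[[1.0, 0.0]], [[0.0, 1.0]]]  # two devices, one query each
keys = [[[2.0, 0.0]], [[0.0, 3.0]]]     # two key blocks, one per device
result = ring_attention_scores(queries, keys)
# Device 0's query met both keys even though it started with only one block:
print(result[0][0])  # {0: 2.0, 1: 0.0}
```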

The fixed-size hidden state of an RNN means that it doesn’t have the same scaling problems as a transformer. An RNN requires about the same amount of computing power to produce its first, hundredth, and millionth token. That’s a big advantage over attention-based models.

Although RNNs have fallen out of favor since the invention of the transformer, people have continued trying to develop RNNs suitable for training on modern GPUs.

In April, Google announced a new model called Infini-attention. It’s kind of a hybrid between a transformer and an RNN. Infini-attention handles recent tokens like a normal transformer, remembering them and recalling them using an attention mechanism.

However, Infini-attention doesn’t try to remember every token in a model’s context. Instead, it stores older tokens in a “compressive memory” that works something like the hidden state of an RNN. This data structure can perfectly store and recall a few tokens, but as the number of tokens grows, its recall becomes lossier.
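The flavor of a compressive memory can be illustrated with a classic linear associative memory: store each key/value pair by adding its outer product into one fixed-size matrix, and recall by multiplying a key back through it. With orthogonal keys, recall is exact until the matrix runs out of dimensions; after that, stored items interfere. This is a generic textbook mechanism used for illustration, not Google's actual design.

```python
def store(memory, key, value):
    """Add outer(key, value) into a fixed-size memory matrix."""
    for i, k in enumerate(key):
        for j, v in enumerate(value):
            memory[i][j] += k * v

def recall(memory, key):
    """Read back the value associated with key: key^T @ memory."""
    return [sum(key[i] * memory[i][j] for i in range(len(key)))
            for j in range(len(memory[0]))]

DIM = 3
memory = [[0.0] * 2 for _ in range(DIM)]  # fixed size, however much we store

# Orthogonal keys: recall is perfect while capacity lasts.
store(memory, [1, 0, 0], [5.0, 0.0])
store(memory, [0, 1, 0], [0.0, 7.0])
assert recall(memory, [1, 0, 0]) == [5.0, 0.0]

# A third key that overlaps the first: recall becomes lossy.
store(memory, [1, 0, 1], [2.0, 2.0])
print(recall(memory, [1, 0, 0]))  # contaminated: [7.0, 2.0], not [5.0, 0.0]
```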

Machine learning YouTuber Yannic Kilcher wasn’t too impressed by Google’s approach.

“I’m super open to believing that this actually does work and this is the way to go for infinite attention, but I’m very skeptical,” Kilcher said. “It uses this compressive memory approach where you just store as you go along, you don’t really learn how to store, you just store in a deterministic fashion, which also means you have very little control over what you store and how you store it.”

Perhaps the most notable effort to resurrect RNNs is Mamba, an architecture that was announced in a December 2023 paper. It was developed by computer scientists Tri Dao (who also did the FlashAttention work mentioned earlier) and Albert Gu.

Mamba does not use attention. Like other RNNs, it has a hidden state that acts as the model’s “memory.” Because the hidden state has a fixed size, longer prompts do not increase Mamba’s per-token cost.

When I started writing this article in March, my goal was to explain Mamba’s architecture in some detail. But then in May, the researchers released Mamba-2, which significantly changed the architecture from the original Mamba paper. I’ll be frank: I struggled to understand the original Mamba and have not figured out how Mamba-2 works.

But the key thing to understand is that Mamba has the potential to combine transformer-like performance with the efficiency of conventional RNNs.

In June, Dao and Gu co-authored a paper with Nvidia researchers that evaluated a Mamba model with 8 billion parameters. They found that models like Mamba were competitive with comparably sized transformers in a number of tasks, but they “lag behind Transformer models when it comes to in-context learning and recalling information from the context.”

Transformers are good at information recall because they “remember” every token of their context—this is also why they become less efficient as the context grows. In contrast, Mamba tries to compress the context into a fixed-size state, which necessarily means discarding some information from long contexts.

The Nvidia team found they got the best performance from a hybrid architecture that interleaved 24 Mamba layers with four attention layers. This worked better than either a pure transformer model or a pure Mamba model.

A model needs some attention layers so it can remember important details from early in its context. But a few attention layers seem to be sufficient; the rest of the attention layers can be replaced by cheaper Mamba layers with little impact on the model’s overall performance.

In August, an Israeli startup called AI21 announced its Jamba 1.5 family of models. The largest version had 398 billion parameters, making it comparable in size to Meta’s Llama 405B model. Jamba 1.5 Large has seven times more Mamba layers than attention layers. As a result, Jamba 1.5 Large requires far less memory than comparable models from Meta and others. For example, AI21 estimates that Llama 3.1 70B needs 80GB of memory to keep track of 256,000 tokens of context. Jamba 1.5 Large only needs 9GB, allowing the model to run on much less powerful hardware.
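The memory gap comes largely from the transformer's key-value (KV) cache, which stores two vectors per token for every attention layer. A back-of-the-envelope estimate using an assumed Llama 3.1 70B-style configuration (80 layers, 8 KV heads, head dimension 128, 16-bit values; treat these as illustrative rather than verified) lands close to AI21's 80GB figure:

```python
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, n_tokens, bytes_per_value=2):
    """Memory needed to cache keys and values for n_tokens of context."""
    per_token = 2 * n_layers * n_kv_heads * head_dim * bytes_per_value  # K and V
    return per_token * n_tokens

# Assumed Llama 3.1 70B-style configuration (illustrative only):
total = kv_cache_bytes(n_layers=80, n_kv_heads=8, head_dim=128, n_tokens=256_000)
print(f"{total / 1e9:.1f} GB")  # ~83.9 GB, in the ballpark of the quoted 80GB
```

Replacing most attention layers with Mamba layers shrinks this cache roughly in proportion, which is where Jamba's 9GB figure comes from.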

The Jamba 1.5 Large model gets an MMLU score of 80, significantly below the Llama 3.1 70B’s score of 86. So by this measure, Mamba doesn’t blow transformers out of the water. However, this may not be an apples-to-apples comparison. Frontier labs like Meta have invested heavily in training data and post-training infrastructure to squeeze a few more percentage points of performance out of benchmarks like MMLU. It’s possible that the same kind of intense optimization could close the gap between Jamba and frontier models.

So while the benefits of longer context windows are obvious, the best strategy for getting there is not. In the short term, AI companies may continue using clever efficiency and scaling hacks (like FlashAttention and ring attention) to scale up vanilla LLMs. Longer term, we may see growing interest in Mamba and perhaps other attention-free architectures. Or maybe someone will come up with a totally new architecture that renders transformers obsolete.

But I am pretty confident that scaling up transformer-based frontier models isn’t going to be a solution on its own. If we want models that can handle billions of tokens—and many people do—we’re going to need to think outside the box.

Tim Lee was on staff at Ars from 2017 to 2021. Last year, he launched a newsletter, Understanding AI, that explores how AI works and how it’s changing our world. You can subscribe here.


Timothy is a senior reporter covering tech policy and the future of transportation. He lives in Washington DC.


buying-a-tv-in-2025?-expect-lower-prices,-more-ads,-and-an-os-war.

Buying a TV in 2025? Expect lower prices, more ads, and an OS war.


“I do fear that the pressure to make better TVs will be lost…”

If you’re looking to buy a TV in 2025, you may be disappointed by the types of advancements TV brands will be prioritizing in the new year. While there’s an audience of enthusiasts interested in developments in tech like OLED, QDEL, and Micro LED, plus other features like transparency and improved audio, that doesn’t appear to be what the industry is focused on.

Today’s TV selection has a serious dependency on advertisements and user tracking. In 2025, we expect competition in the TV industry to center around TV operating systems (OSes) and TVs’ ability to deliver more relevant advertisements to viewers.

That yields a complicated question for shoppers: Are you willing to share your data with retail conglomerates and ad giants to save money on a TV?

Vizio is a Walmart brand now

One of the most impactful changes to the TV market next year will be Walmart owning Vizio. For Walmart, the deal, which closed on December 3 for approximately $2.3 billion, is about owning the data collection capabilities of Vizio’s SmartCast OS. For years, Vizio has been shifting its business from hardware sales to Platform+, “which consists largely of its advertising business” and “now accounts for all the company’s gross profit,” as Walmart noted when announcing the acquisition.

Walmart will use data collected from Vizio TVs to fuel its ad business, which sells ads on the OSes of its TVs (including Vizio and Onn brand TVs) and point-of-sale machines in Walmart stores. In a December 3 statement, Walmart confirmed its intentions with Vizio:

The acquisition… allows Walmart to serve its customers in new ways to enhance their shopping journeys. It will also bring to market new and differentiated ways for advertisers to meaningfully connect with customers at scale and boost product discovery, helping brands achieve greater impact from their advertising investments with Walmart Connect—the company’s retail media business in the US.

In 2025, buying a Vizio TV won’t just mean buying a TV from a company that’s essentially an ad business. It will mean fueling Walmart’s ad business. With Walmart also owning Onn and Amazon owning Fire TVs, that means there’s one less TV brand that isn’t a cog in a retail giant’s ever-expanding ad machine. With a history that includes complaints around working conditions and questionable products, including some that are straight scams, some people (including numerous Ars commenters) try to avoid commerce giants like Walmart and Amazon. In 2025, that will be harder for people looking for a new TV, especially an inexpensive one.

“Roku is at grave risk”

Further, Walmart has expressed a goal of becoming one of the 10 biggest ad companies, with the ad business notably having higher margins than groceries. It could use Vizio, via more plentiful and/or intrusive ads, to fuel those goals.

And Walmart’s TV market share is set to grow in the new year. Paul Gray, research director of consumer electronics and devices at Omdia, told Ars Technica he expects that “the new combined sales (Vizio plus Walmart’s white label) will be bigger than the current market leader Samsung.”

There are also potential implications related to how Walmart decides to distribute TVs post-acquisition. As Patrick Horner, practice leader of consumer electronics at Omdia, told Ars:

One of the possibilities is that Walmart could make use of the Vizio operating system a condition for placement in stores. This could change not only the Onn/Vizio TVs but may also include the Chinese brands. The [Korean] and Japanese brands may resist, as they have premium brand positioning, but the Chinese brands would be vulnerable. Roku is at grave risk.

Roku acquisition?

With Walmart set to challenge Roku, some analysts anticipate that Roku will be acquired in 2025. In December, Guggenheim analysts predicted that ad tech firm The Trade Desk, which is launching its own TV OS, will look to buy Roku to scale its OS business.

Needham & Company’s Laura Martin also thinks an acquisition—by The Trade Desk or possibly one of Walmart’s retail competitors—could be on the horizon.

“Walmart has told you by buying Vizio that these large retailers need a connected television advertising platform to tie purchases to,” Martin told Bloomberg. “That means Target and other large retailers have that reason to buy Roku to tie Roku’s connected television ad units to their sales in their retail stores. And by the way, Roku has much higher margins than any retailer.”

She also pointed to Amazon as a potential buyer, noting that it might be able to use Roku’s user data to feed large language models.

Roku was already emboldened enough in 2024 to introduce home screen video ads to its TVs and streaming devices and has even explored technology for showing ads over anything plugged into a Roku set. Imagine how using Roku devices might further evolve if owned by a company like The Trade Desk or Amazon with deep interests in ads and tracking.

TV owners accustomed to being tracked

TV brands have become so dependent on ads that some are selling TVs at a loss to push ads. How did we get to the point where TV brands view their hardware as a way to track and sell to viewers? Part of the reason TV OSes are pushing the limits on ads is that many viewers seem willing to accept them, especially in the name of saving money.

Per the North American Q2 2024 TiVo Video Trends Report, 64.3 percent of subscription video-on-demand users subscribe to an ad-supported tier (compared to 48 percent in Q2 2023). And users are showing more tolerance to ads, with 77.8 percent saying they are “tolerant” or “in favor of” ads, up from 74 percent in Q2 2023. This is compared to 22.2 percent of respondents saying they’re “averse” to ads. TiVo surveyed 4,490 people in the US and Canada ages 18 and up for the report.

“Based on streaming services, many consumers see advertising as a small price to pay for lower cash costs,” Horner said.

The analyst added:

While some consumers will be sensitive to privacy issues or intrusive advertising, at the same time, most people have shown themselves entirely comfortable with being tracked by (for example) social media.

Alan Wolk, co-founder and lead analyst at the TVREV TV and streaming analyst group, agreed that platforms like Instagram have proven people’s willingness to accept ads and tracking, particularly if it leads to them seeing more relevant advertisements or giving shows or movies better ratings. According to the analyst, customers seem to think, “Google is tracking my finances, my porn habits, my everything. Why do I care if NBC knows that I watch football and The Tonight Show?”

While Ars readers may be more guarded about Google having an insider look at their data, many web users have a more accepting attitude. This has opened the door for TVs to test users’ max tolerance for ads and tracking to deliver more relevant ads.

That said, there’s a fine line.

“Companies have to be careful of… finding that line between taking in advertising, especially display ads on the home screen or whatnot, and it becoming overwhelming [for viewers],” Wolk said.

One of the fastest-growing ad vehicles for TVs currently and into 2025 is free, ad-supported streaming television (FAST) channels that come preloaded and make money from targeted ads. TCL is already experimenting with what viewers will accept here. It recently premiered movies made with generative AI that it hopes will fuel its FAST business while saving money. TCL believes that passive viewers will accept a lot of free content, even AI-generated movies and shows. But some viewers are extremely put off by such media, and there’s a risk of souring the reputation of some FAST services.

OS wars

We can expect more competition from TV OS operators in 2025, including from companies that traditionally have had no place in consumer hardware, like ad tech giant The Trade Desk. These firms face steep competition, though. Ultimately, the battle of TV OSes could end up driving improvements around usability, content recommendations, and, for better or worse, ad targeting.

Following heightened competition among TV OSes, Omdia’s Gray expects winners to start emerging, followed by consolidation.

“I expect that the final state will be a big winner, a couple of sizeable players, and some niche offerings,” he said.

Companies without backgrounds in consumer tech will have difficulty getting a foot into an already crowded market, which means we may not have to worry much about companies like The Trade Desk taking over our TVs.

“I have yet to meet a single person who hasn’t looked at me quizzically and said, ‘Wait, what are they thinking?’ Because the US market for the operating system is very tight,” Wolk said. “… So for American consumers, I don’t think we’ll see too many new entrants.”

You can also expect Comcast and Charter to push deeper into TV software as they deal with plummeting cable businesses. In November, they made a deal to put their joint venture’s TV OS, Xumo OS, in Hisense TVs that will be sold in Target. Xumo TVs are already available in almost 8,000 locations, Comcast and Charter said in November. The companies claimed that the retailers selling Xumo TVs “represent nearly 75 percent of all smart TV sales in the US.”

Meanwhile, Xperi Corp. said in November that it expected its TiVo OS to be in 2 million TVs by the end of 2024 and 7 million TVs by the end of 2025. At the heart of TiVo OS is TiVo One, which TiVo describes as a “cross-screen ad platform for new inventory combined with audience targeting and monetization” that is available in TVs and car displays. Announcing TiVo One in May, Xperi declared that the “advertising market is projected to reach [$36] billion” by 2026, meaning that “advertising on smart TVs has never been more imperative.”

But as competition intensifies and pushes the market into selecting a few “sizeable players,” as Gray put it, there’s more pressure for companies to make their OSes stand out to TV owners. This is due to advertising interests, but it also means more focus on making TVs easier to use and better able to help people find something to watch.

Not a lot of options

At the start of this article, we asked if you’d be willing to share your data with retail conglomerates and ad giants to save money on a TV. But the truth is there aren’t many alternative options beyond disconnecting your TV from the Internet or paying for an Apple TV streaming device in addition to your TV. Indeed, amid a war among OSes, many Ars readers will opt not to use ad-filled software at all. That points to a disconnect between TV makers and a core audience, and it suggests limits on what new TV experiences will offer next year.

Still, analysts agree that even among more expensive TV brands, there has been a shift toward building out ad businesses and OSes over improving hardware features like audio.

“This is a low-margin business, and even in the premium segment, the revenues from ads and data are significant. Also, the sort of consumer who buys a premium TV is likely to be especially interesting to advertisers,” Gray said.

Some worry about what this means for TV innovation. With software being at the center of TV businesses, there seems to be less incentive to drive hardware-related advancements. Gray echoed this sentiment while acknowledging that the current state of TVs is at least driving down TV prices.

“I do fear that the pressure to make better TVs will be lost and that matters such as… durability and performance risk being de-prioritized,” he said.

Vendors are largely leaving shoppers to drive improvements themselves, such as by buying additional gadgets like soundbars, Wolk noted.

In 2025, TVs will continue focusing innovation around software, which has immediate returns via ad sales compared to new hardware, which can take years to develop and catch on with shoppers. For some, this is creating a strong demand for dumb TVs, but unfortunately, there are no immediate signs of that becoming a trend.

As Horner put it, “This is an advertising/e-commerce-driven market, not a consumer-driven market. TV content is just the bait in the trap.”


Scharon is Ars Technica’s Senior Product Reviewer writing news, reviews, and analysis on consumer technology, including laptops, mechanical keyboards, and monitors. She’s based in Brooklyn.


the-us-military-is-now-talking-openly-about-going-on-the-attack-in-space

The US military is now talking openly about going on the attack in space

Mastalir said China is “copying the US playbook” with the way it integrates satellites into more conventional military operations on land, in the air, and at sea. “Their specific goals are to be able to track and target US high-value assets at the time and place of their choosing,” Mastalir said.

China’s strategy, known as Anti-Access/Area Denial, or A2AD, is centered on preventing US forces from accessing international waters extending hundreds or thousands of miles from mainland China. Some of the islands occupied by China within the last 15 years are closer to the Philippines, another treaty ally, than to China itself.

The A2AD strategy first “extended to the first island chain (bounded by the Philippines), and now the second island chain (extending to the US territory of Guam), and eventually all the way to the West Coast of California,” Mastalir said.

US officials say China has based anti-ship, anti-air, and anti-ballistic weapons in the region, and many of these systems rely on satellite tracking and targeting. Mastalir said his priority at Indo-Pacific Command, headquartered in Hawaii, is to defend US and allied satellites, or “blue assets,” and challenge “red assets” to break the Chinese military’s “long-range kill chains and protect the joint force from space-enabled attack.”

What this means is the Space Force wants to have the ability to disable or destroy the satellites China would use to provide communication, command, tracking, navigation, or surveillance support during an attack against the US or its allies.

Buildings and structures are seen on October 25, 2022, on an artificial island built by China on Subi Reef in the Spratly Islands of the South China Sea. China has progressively asserted its claim of ownership over disputed islands in the region. Credit: Ezra Acayan/Getty Images

Mastalir said he believes China’s space-based capabilities are “sufficient” to achieve the country’s military ambitions, whatever they are. “The sophistication of their sensors is certainly continuing to increase—the interconnectedness, the interoperability. They’re a pacing challenge for a reason,” he said.

“We’re seeing all signs point to being able to target US aircraft carriers… high-value assets in the air like tankers, AWACS (Airborne Warning And Control System),” Mastalir said. “This is a strategy to keep the US from intervening, and that’s what their space architecture is.”

That’s not acceptable to Pentagon officials, so Space Force personnel are now training for orbital warfare. Just don’t expect to know the specifics of any of these weapons systems any time soon.

“The details of that? No, you’re not going to get that from any war-fighting organization—’let me tell you precisely how I intend to attack an adversary so that they can respond and counter that’—those aren’t discussions we’re going to have,” Saltzman said. “We’re still going to protect some of those (details), but broadly, from an operational concept, we are going to be ready to contest space.”

A new administration

The Space Force will likely receive new policy directives after President-elect Donald Trump takes office in January. The Trump transition team hasn’t identified any changes coming for the Space Force, but a list of policy proposals known as Project 2025 may offer some clues.

Published by the Heritage Foundation, a conservative think tank, Project 2025 calls for the Pentagon to pivot the Space Force from a mostly defensive posture toward offensive weapons systems. Christopher Miller, who served as acting secretary of defense in the first Trump administration, authored the military section of Project 2025.

Miller wrote that the Space Force should “reestablish offensive capabilities to guarantee a favorable balance of forces, efficiently manage the full deterrence spectrum, and seriously complicate enemy calculations of a successful first strike against US space assets.”

Trump disavowed Project 2025 during the campaign, but since the election, he has nominated several of the policy agenda’s authors and contributors to key administration posts.

Saltzman met with Trump last month while attending a launch of SpaceX’s Starship rocket in Texas, but he said the encounter was incidental. Saltzman was already there for discussions with SpaceX officials, and Trump’s travel plans only became known the day before the launch.

The conversation with Trump at the Starship launch didn’t touch on any policy details, according to Saltzman. He added that the Space Force hasn’t yet had any formal discussions with the Trump transition team.

Regardless of the direction Trump takes with the Space Force, Saltzman said the service is already thinking about what to do to maintain what the Pentagon now calls “space superiority”—a twist on the term air superiority, which might have seemed equally fanciful at the dawn of military aviation more than a century ago.

“That’s the reason we’re the Space Force,” Saltzman said. “So administration to administration, that’s still going to be true. Now, it’s just about resourcing and the discussions about what we want to do and when we want to do it, and we’re ready to have those discussions.”


2025 Lamborghini Urus SE first drive: The total taurean package


A 789-horsepower Goldilocks moment

Adding electric power and a battery turns the Urus from hit-or-miss to just right.

The original Urus was an SUV that nobody particularly wanted, even if the market was demanding it. With luxury manufacturers tripping over themselves to capitalize on a seemingly limitless demand for taller all-around machines, Lamborghini was a little late to the party.

The resulting SUV has done its job, boosting Lamborghini’s sales and making up more than half of the company’s volume last year. Even so, the first attempt was just a bit tame. That most aggressive of supercar manufacturers produced an SUV with the air of the company’s lower, more outrageous performance machines, but it didn’t quite deliver the level of prestige that its price demanded.

The Urus Performante changed that, adding enough visual and driving personality to make itself a legitimately exciting machine to drive or to look at. Along the way, though, it lost a bit of the most crucial aspect of an SUV: everyday livability. On paper, the Urus SE is just a plug-in version of the Urus, with a big battery adding some emissions-free range. In reality, it’s an SUV with more performance and more flexibility, too. This is the Urus’ Goldilocks moment.

If you’re looking for something subtle, you shouldn’t be looking at an Urus. Credit: Tim Stevens

The what

The Urus SE starts with the same basic platform as the other models in the line, including a 4.0 L turbocharged V8 driving all four wheels through an eight-speed automatic and an all-wheel-drive system.

All that has received a strong dose of electrification, starting with a 25.9 kWh battery pack sitting far out back that helps to offset the otherwise nose-heavy SUV while also adding a playful bit of inertia to its tail. More on that in a moment.

That battery powers a 189 hp (141 kW) permanent-magnet synchronous electric motor fitted between the V8 and its transmission. The positioning means it has full access to all eight speeds and can drive the car at up to 81 mph (130 km/h). That, plus a Lamborghini-estimated 37 miles (60 km) of range, means this is a large SUV that could feasibly cover a lot of people’s commutes emissions-free.

The V8 lives here. Credit: Tim Stevens

But when that electric motor’s power is paired with the 4.0 L V8, the result is 789 hp (588 kW) of total system power delivered to all four wheels. And with the electric torque coming on strong and early, it adds not only shove but throttle response, too.

Other updates

At a glance, the Urus SE looks more or less the same as the earlier renditions of the same SUV. Look closer, though, and you’ll spot several subtle changes, including a hood that eases more gently into the front fenders and a new spoiler out back that Lamborghini says boosts rear downforce by 35 percent over the Urus S.

Far and away the most striking part of the car, though, is the set of 22-inch wheels wrapped around carbon-ceramic brakes. They give this thing the look of a rolling caricature of a sport SUV in the best way possible. On the body of the machine itself, you’ll want to choose a properly eye-catching color, like the Arancio Egon you see here. I’ve been lucky to drive some pretty special SUVs over the years, and none have turned heads like this one did when cruising silently through a series of small Italian towns.

Things are far more same-y on the inside. At first blush, nothing has changed inside the Urus SE, and that’s OK. You have a few new hues of Technicolor hides to choose from—the car you see here is outfitted in a similarly pungent orange to its exterior color, making it a citrus dream through and through. The sports seats aren’t overly aggressive, offering more comfort than squeeze, but I’d say that’s just perfect.

Buttons and touchscreens vie with less conventional controls inside the Urus. Tim Stevens

But that’s all much the same as prior Urus versions. The central infotainment screen is slightly larger at 12.3 inches, and the software is lightly refreshed, but it’s the same Audi-based system as before. A light skinning full of hexagons makes it look and feel a little more at home in a car with a golden bull on the nose.

Unfortunately, while the car is quicker than the original model, the software isn’t. The overall experience is somewhat sluggish, especially when moving through the navigation system. Even the regen meter on the digital gauge cluster doesn’t change until a good half-second after you’ve pressed the brake pedal, an unfortunate place for lag.

The Urus SE offers six drive modes: Strada (street), Sport, Corsa (track), Sabbia (sand), Terra (dirt), and Neve (snow). There’s also a seventh, customizable Ego mode. As on earlier Urus models, these modes must be selected in that sequence. So if you want to go from Sport back to Strada, you need to cycle the mode selector knob five times—or go digging two submenus deep on the touchscreen.
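Assuming the selector only advances forward through the six named modes (with Ego reached separately), the number of pulls needed between any two modes is simple modular arithmetic. A hypothetical sketch of that behavior, not anything resembling Lamborghini's actual firmware:

```python
# Hypothetical model of a forward-only drive-mode selector (illustration only).
MODES = ["Strada", "Sport", "Corsa", "Sabbia", "Terra", "Neve"]

def clicks_needed(current: str, target: str) -> int:
    """Forward pulls of the selector required to go from `current` to `target`."""
    i, j = MODES.index(current), MODES.index(target)
    return (j - i) % len(MODES)

print(clicks_needed("Strada", "Sport"))   # 1 pull forward
print(clicks_needed("Sport", "Strada"))   # 5 pulls, wrapping through the rest
```

The wrap-around is exactly why dropping from Sport back to Strada takes five cycles of the knob.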

Those can be further customized via a few buttons added beneath the secondary drive mode lever on the right. The top button enables standard Hybrid mode, where the gasoline and electric powertrains work together as harmoniously as possible for normal driving. The second button enters Recharge mode, which instructs the car to prioritize battery charge. The third and lowest button enters Performance mode, which gives you maximum performance from the hybrid system at the expense of charge.

Finally, a quick tug on the mode selector on the right drops the Urus into EV Drive.

Silent running

I started my time in the Urus SE driving into the middle of town, which was full of narrow streets, pedestrian-friendly speed limits, and aggressively piloted Fiats. Slow and steady is the safest way in these situations, so I was happy to sample the Urus’ all-electric mode.

To put it simply, it delivers. There’s virtually no noise from the drivetrain, a near-silent experience at lower speeds that helps assuage the stress such situations can cause. The experience was somewhat spoiled by some tire noise, but I’ll blame that on the Pirelli Scorpion Winter 2 tires fitted here. I can’t, however, blame the tires for a few annoying creaks and rattles, which aren’t exactly what I’d expect from an SUV at this price point.

Though there isn’t much power at your disposal in this mode, the Urus can still scoot away from lights and stop signs quickly and easily, even ducking through small gaps in tiny roundabouts.

It might not be subtle, but it can be practical. Credit: Tim Stevens

Dip more than three-quarters of the way into the throttle, though, and that V8 fires up and quickly joins the fun. The hand-off here can be a little less than subtle as power output surges quickly, but in a moment, the car goes from a wheezy EV to a roaring Lamborghini. And unlike a lot of plug-ins that stubbornly refuse to shut their engines off again when this happens, another quick pull of the EV lever silences the thing.

When I finally got out of town, I shifted over to Strada mode, the default mode for the Urus. I found this mode a little too lazy for my tastes, as it was reluctant to shift down unless I dipped far into the throttle, resulting in a bucking bull of acceleration when the eight-speed automatic finally complied.

The car only really came alive when I put it into Sport mode and above.

Shifting to Sport

Any hesitation or reluctance to shift is quickly obliterated as soon as you tug the drive mode lever into Sport. The SUV immediately forgets all about trying to be efficient, dropping a gear or two and making sure you’re never far from the power band, keeping the turbo lag from the V8 to a minimum.

The tachometer gets some red highlights in this mode, but you won’t need to look at it. There’s plenty of sound from the exhaust, augmented by some digital engine notes I found to be more distracting and unnecessary than anything. Most importantly, the overall feel of the car changes dramatically. It leaps forward with the slightest provocation of the right pedal, really challenging the grip of the tires.

In my first proper sampling of the full travel of that throttle pedal, I was surprised at how quickly this latest Urus got frisky, kicking its tail out with an eager wag on a slight bend to the right. It wasn’t scary, but it was just lively enough to make me smile and feel like I was something more than a passenger in a hyper-advanced, half-electric SUV.

Credit: Tim Stevens

In other words, it felt like a Lamborghini, an impression only reinforced as I dropped the SUV down to Corsa mode and really let it fly. The transmission is incredibly eager to drop gears on the slightest bit of deceleration, enough so that I rarely felt the need to reach for the column-mounted shift paddles.

But despite the eagerness, the suspension remained compliant and everyday-livable in every mode. I could certainly feel the (many) imperfections in the rural Italian roads more when the standard air suspension was dialed over to its stiffest, but even then, it was never punishing. And in the softest setting, the SUV was perfectly comfortable despite those 22-inch wheels and tires.

I didn’t get a chance to sample the SUV’s off-road prowess, but the SE carries a torque-vectoring rear differential like the Performante, which should mean it will be as eager to turn and drift on loose surfaces as that other, racier Urus.

Both the Urus Performante and the SE start at a bit over $260,000, which means choosing between the two isn’t a decision to be made on price alone. Personally, I’d much prefer the SE. It offers plenty of the charm and excitement of the Performante mixed with even better everyday capability than the Urus S. This one’s just right.


Indiana Jones and the Great Circle is pitch-perfect archaeological adventuring


Review: Amazing open-world environs round out a tight, fun-filled adventure story.

No need to put Harrison Ford through the de-aging filter here! Credit: Bethesda / MachineGames

Historically, games based on popular film or TV franchises have generally been seen as cheap cash-ins, slapping familiar characters and settings on a shovelware clone of a popular genre and counting on the license to sell enough copies to devoted fans. Indiana Jones and the Great Circle clearly has grander ambitions than that, putting a AAA budget behind a unique open-world exploration game built around stealth, melee combat, and puzzle solving.

Building such a game on top of such well-loved source material comes with plenty of challenges. The developers at MachineGames need to pay homage to the source material without resorting to the kind of slavish devotion that amounts to a mere retread of a familiar story. At the same time, any new Indy adventure carries with it the weight not just of the character’s many film and TV appearances but also well-remembered games like Indiana Jones and the Fate of Atlantis. Then there are game franchises like Tomb Raider and Uncharted, which have already put their own significant stamps on the Indiana Jones formula of action-packed, devil-may-care treasure-hunting.

No, this is not a scene from a new Uncharted game. Credit: Bethesda / MachineGames

Surprisingly, Indiana Jones and the Great Circle bears all this pressure pretty well. While the stealth-exploration gameplay and simplistic puzzles can feel a bit trite at points, the game’s excellent presentation, top-notch world-building, and fun-filled, campy storyline drive one of Indy’s most memorable adventures since the original movie trilogy.

A fun-filled adventure

The year is 1937, and Indiana Jones has already Raided a Lost Ark but has yet to investigate the Last Crusade. After a short introductory flashback that retells an interactive version of Raiders of the Lost Ark‘s famous golden idol extraction, Professor Jones gets unexpectedly drawn away from preparations for midterms when a giant of a man breaks into Marshall College’s antiquities wing and steals a lone mummified cat.

Investigating that theft takes Jones on a globetrotting tour of locations along “The Great Circle,” a ring of archaeologically significant sites around the world that house ancient artifacts rumored to hold great and mysterious power. Those rumors have attracted the attention of the Nazis (who else would you expect?), dragging Indy into a race to secure the artifacts before they threaten to alter the course of an impending world war.

You see a whip, I see a grappling hook. Credit: Bethesda / MachineGames

The game’s overarching narrative—told mainly through lengthy cut scenes that serve as the most captivating reward for in-game achievements—does a pitch-perfect job of replicating the campy, madcap, fun-filled, adventurous tone Indy is known for. The writing is full of all the pithy one-liners and cheesy puns you could hope for, as well as countless overt and subtle references to Indy movie moments that will be familiar to even casual fans.

Indy here is his usual mix of archaeological superhero and bumbling everyman. One moment, he’s using his whip and some hard-to-believe upper body strength to jump around some quickly crumbling ruins. The next, he’s avoiding death during a madcap fight scene through a combination of sheer dumb luck and overconfident opposition. The next, he’s solving ancient riddles with reams of historical techno-babble and showing a downright supernatural ability to decipher long-dead languages in an instant when the plot demands it.

You have to admit it, this circle is pretty great! Credit: Bethesda / MachineGames

It all works in large part thanks to Troy Baker’s excellent vocal performance as Jones, which he somehow pulls off as a compelling cross between Harrison Ford and Jeff Goldblum. The music does some heavy lifting in setting the tone, too; it’s full of downright cinematic stirring horns and tension-packed strings that fade in and out perfectly in sync with the on-screen action. The game even shows some great restraint in its sparing use of the famous Indiana Jones theme, which I ended up humming to myself as I played more often than I actually heard it referenced in the game’s score.

Indy quips well off of Gina, a roving reporter searching for her missing sister who serves as the obligatory love interest/globetrotting exploration partner. But the game’s best scenes all involve Emmerich Voss, the Nazi archaeologist antagonist who makes an absolute meal out of his scenery chewing. From his obsession with cranial shapes to his preening diatribes about the inferiority of American culture, Voss makes the perfect foil for Indy’s no-nonsense, homespun apple pie forthrightness.

Voss steals literally every scene he’s in. Credit: Bethesda / MachineGames

By the time the plot descends into an inevitable mess of pseudo-religious magical mysticism, it’s clear that this is a story that doesn’t take itself too seriously. You may cringe a bit at how over the top it all gets, but you’ll probably be having too much fun to care.

Take a look around

In between the cut scenes—which together could form the basis for a strong Indiana Jones-themed episodic streaming miniseries—there’s an actual interactive game to play here as well. That game primarily plays out across three decently sized maps—one urban, one desert, and one water-logged marsh—that you can explore relatively freely, broken up by shorter, more linear interludes in between.

Following the main story quests in each of these locales generally has you zigzagging across the map through a series of glorified fetch quests. Go to location A to collect some mystical doodad, then return it to unlock some fun exposition and a reason to go to location B. Repeat as necessary.

I say “location A” there, but it’s usually more accurate to say the game points you toward “circle A” on the map. Once you get there, you often have to do a bit of unguided exploring to find the hidden trinket or secret entry point you need.

Am I going in the right direction? Credit: Bethesda / MachineGames

At their best, these exploration bits made me feel more like an archaeological detective than the usual in-game tourist blindly following a waypoint from location to location. At their worst, I spent 15 minutes searching through one of these map circles before finding my in-game partner Gina standing right next to the target I was probably intended to find immediately. So it goes.

Traipsing across the map in this way slowly reveals the sizable scale of the game’s environments, which often extend beyond what’s first apparent on the map to multi-floor buildings and gigantic subterranean caverns. Unlocking and/or figuring out all of the best paths through these labyrinthine locales—which can involve climbing across rooftops or crawling through enemy barracks—is often half the fun.

As you crisscross the map, you also invariably stumble on a seemingly endless array of optional sidequests, mysteries, and “fieldwork,” which you keep track of in a dynamically updated journal. While there’s an attempt at a plot justification for each of these optional fetch quests, the ones I tried ended up being much less compelling than the main plot, which seems to have taken most of the writers’ attention.

Indiana Jones, famous Vatican tourist. Credit: Bethesda / MachineGames

As you explore, a tiny icon in the corner of the screen will also alert you to photo opportunities, which can unlock important bits of lore or context for puzzles. I thoroughly enjoyed these quick excuses to appreciate the game’s well-designed architecture and environments, even as it made Indy feel a bit more like a random tourist than a badass archaeologist hero.

Quick, hide!

Unfortunately, your ability to freely explore The Great Circle‘s environments is often hampered by large groups of roaming Nazi and/or fascist soldiers. Sometimes, you can put on a disguise to walk among them unseen, but even then, certain enemies can pick you out of the crowd, something that was not clear to me until I had already been plucked out of obscurity more than a few times.

When undisguised, you’ll spend a lot of time kneeling and sneaking silently just outside the soldiers’ vision cones or patiently waiting for them to move so you can sneak through a newly safe path. Remaining unseen also lets you silently take out enemies from behind, which includes pushing unsuspecting enemy sentries off of ledges in a hilarious move that never, ever gets old.

They’ll never find me up here. Credit: Bethesda / MachineGames

When your sneaking skills fail you amid a large group of enemies, the best and easiest thing to do is immediately run and hide. For the most part, the enemies are incredibly inept in their inevitable pursuit; dodge around a couple of corners and hide in a dark alley and they’ll usually quickly lose track of you. While I appreciated that being spotted wasn’t an instant death sentence, the ease with which I could outsmart these soldiers made the sneaking a lot less tense.

If you get spotted by a group of just one or two enemy soldiers, though, it’s time for some first-person melee combat, which draws heavy inspiration from the developers’ previous work on the early ’00s Chronicles of Riddick games. These fights usually play out like the world’s most overdesigned game of Punch-Out!!—you stand there waiting for a heavily telegraphed punch to come in, at which point you throw up a quick block or dodge and then counter with a series of rapid, crunchy punches of your own. Repeat until the enemy goes down.

You can spice things up a bit here by disarming and/or unbalancing your foes with your whip or by grabbing a wide variety of nearby objects to use as improvised melee weapons. After a while, though, all the fistfights start to feel pretty rote and unmemorable. The first time you hit a Nazi upside the head with a plunger is hilarious. The fifth time is a bit tiresome.

It’s always a good time to punch a Nazi. Credit: Bethesda / MachineGames

While you can also pull out a trusty revolver to simply shoot your foes, the racket the shots make usually leads to so much unwelcome enemy attention that it’s rarely worth the trouble. Aside from a handful of obligatory sections where the game practically forces you into a shooting gallery situation, I found little need to engage in the serviceable but unexciting gun combat.

And while The Great Circle is far from a horror game, there are a few combat moments of genuine terror with foes more formidable than the average grunt. I don’t want to give away too much, but those with fear of underwater creatures, the dark, or confined spaces will find some parts of the game incredibly tense.

Not so puzzling

My favorite gameplay moments in The Great Circle were the extended sections where I didn’t have to worry about stealth or combat and could just focus on exploring massive underground ruins. These feature some of the game’s most interesting traversal challenges, where looking around and figuring out just how to make it to the next objective is engaging on its own terms. There’s little of the Uncharted-style gameplay of practically highlighting every handhold and jump with a flashing red sign.

When giant mechanical gears need placing, you know who to call! Credit: Bethesda / MachineGames

These exploratory bits are broken up by some obligatory puzzles, usually involving Indiana Jones’ trademark of unbelievably intricate ancient stone machinery. Arrange the giant stone gears so the door opens, put the right relic in the right spot, shine a light on some emblems with a few mirrors, and so on. You know the drill if you’ve played any number of similar action-adventure games, and you probably won’t find them all that challenging if you can manage some basic logic and exploration (though snapping pictures with the in-game camera offers hints for those who get unexpectedly stuck).

But even during the least engaging puzzles or humdrum fights in The Great Circle, I was compelled forward by the promise of some intricate ruin or pithy cut scene quip to come. Like the best Indiana Jones movies, there’s a propulsive force to the game’s most exciting scenes that helps you push past any brief feelings of tedium in between. Here’s hoping we see a lot more of this version of Indiana Jones in the future.

A note on performance

Indiana Jones and the Great Circle has received some recent negative attention for having relatively beefy system requirements, including calling for GPUs that have some form of real-time ray-tracing acceleration. We tested the game on a system with an Nvidia RTX 2080 Ti and an Intel i7-8700K CPU with 32 GB of RAM, which puts it roughly between the “minimum” and “recommended” specs suggested by the publisher.

Trace those rays. Credit: Bethesda / MachineGames

Despite this, we were able to run the game at 1440p resolution and “High” graphical settings at a steady 60 fps throughout. The game did occasionally suffer some heavy frame stuttering when loading new scenes, and far-off background elements had a tendency to noticeably “pop in” when running, but otherwise, we had few complaints about the graphical performance.

Kyle Orland has been the Senior Gaming Editor at Ars Technica since 2012, writing primarily about the business, tech, and culture behind video games. He has journalism and computer science degrees from the University of Maryland. He once wrote a whole book about Minesweeper.


How did the CEO of an online payments firm become the nominee to lead NASA?


Expect significant changes for America’s space agency.

Jared Isaacman at SpaceX Headquarters in Hawthorne, California. Credit: SpaceX

President-elect Donald Trump announced Wednesday his intent to nominate entrepreneur and commercial astronaut Jared Isaacman as the next administrator of NASA.

For those unfamiliar with Isaacman, who at just 16 years old founded a payment processing company in his parents’ basement that ultimately became a major player in online payments, it may seem an odd choice. However, those inside the space community welcomed the news, with figures across the political spectrum hailing Isaacman’s nomination variously as “terrific,” “ideal,” and “inspiring.”

This statement from Isaac Arthur, president of the National Space Society, is characteristic of the response: “Jared is a remarkable individual and a perfect pick for NASA Administrator. He brings a wealth of experience in entrepreneurial enterprise as well as unique knowledge in working with both NASA and SpaceX, a perfect combination as we enter a new era of increased cooperation between NASA and commercial spaceflight.”

So who is Jared Isaacman? Why is his nomination being welcomed in most quarters of the spaceflight community? And how might he shake up NASA? Read on.

Meet Jared

Isaacman is now 41 years old, about half the age of current NASA Administrator Bill Nelson. He has founded a couple of companies, including the publicly traded Shift4 (look at the number 4 on a keyboard to understand the meaning of the name), as well as Draken International, a company that trained pilots for the US Air Force.

Throughout his career, Isaacman has shown a passion for flying and adventure. About five years ago, he decided he wanted to fly into space and bought the first commercial mission on a SpaceX Dragon spacecraft. But this was no joy ride. Some of his friends assumed Isaacman would invite them along. Instead, he brought a cancer survivor, a science educator, and a raffle winner. As part of the flight, this Inspiration4 mission raised hundreds of millions of dollars for research into childhood cancer.

After this mission, Isaacman set about a more ambitious project he named Polaris. The nominal plan was to fly two additional missions on Dragon and then become the first person to fly on SpaceX’s Starship. He flew the first of these missions, Polaris Dawn, in September. He brought along a pilot, Scott “Kidd” Poteet, and two SpaceX engineers, Anna Menon and Sarah Gillis. They were the first SpaceX employees to ever fly into orbit.

The mission was characteristic of Isaacman’s goal to expand the horizon of what is possible for humans in space. Polaris Dawn flew to an altitude of 1,408.1 km on the first day, the highest Earth-orbit mission ever flown and the farthest humans have traveled from our planet since Apollo. On the third day of the flight, the four crew members donned spacesuits designed and developed by SpaceX within the last two years. After venting the cabin’s atmosphere into space, first Isaacman and then Gillis spent several minutes extending their bodies out of the Dragon spacecraft.

This was the first private spacewalk in history, and it underscored Isaacman’s commitment to accelerating spaceflight’s transition from something rare and government-driven to something more publicly accessible.

Why does the space community welcome him?

In the last five years, Isaacman has impressed most of those within the spaceflight community he has interacted with. He has taken his responsibilities seriously, training hard for his Dragon missions and using NASA facilities such as a pressure chamber at NASA’s Johnson Space Center when appropriate.

Through these interactions—based upon my interviews with many people—Isaacman has demonstrated that he is not a billionaire seeking a joyride but someone who wants to change spaceflight for the better. In his spaceflights, he has also demonstrated himself to be a thoughtful and careful leader.

Two examples illustrate this. The ride to space aboard a Crew Dragon vehicle is dynamic, with the passengers pulling in excess of 3 Gs during the initial ascent, the abrupt cutoff of the main Falcon 9 rocket’s engines, stage separation, and then the grinding thrust of the upper stage engines just behind the capsule. In interviews, each of the Polaris Dawn crew members remarked about how Isaacman calmly called out these milestones in advance, with a few words about what to expect. It had a calming, reassuring effect and demonstrated that his crew’s health and safety were foremost among his concerns.

Another way in which Isaacman shows care for his crew and families is through an annual event called “Fighter Jet Training.” Cognizant of the time crew members spend away from their families training, he invites them and SpaceX employees who have supported his flights to an airstrip in Montana. Over the course of two days, family members get to ride in jets, go on a zero-gravity flight, and participate in other fun activities to get a taste of what flying on the edge is like. Isaacman underwrites all of this as a way of thanking all who are helping him.

The bottom line is that Isaacman, through his actions and words, appears to be a caring person who wants the US spaceflight enterprise to advance to greater heights.

Why would Isaacman want the job?

So why would a billionaire who has been to space twice (and plans to go at least two more times) want to run a federal agency? I have not asked Isaacman this question directly, but in interviews over the years, he has made it clear that he is passionate about spaceflight and views his role as a facilitator desiring to move things forward.

Most likely, he has accepted the job because he wants to modernize NASA and put the space agency in the best position to succeed in the future. NASA is no longer the youthful agency that took the United States to the Moon during the Apollo program. That was more than half a century ago, and while NASA is still capable of great things, it is living with one foot in the past and beholden to large, traditional contractors.

The space agency has a budget of about $25 billion, and no one could credibly argue that all of those dollars are spent efficiently. Several major programs at NASA were created by Congress with the intent of ensuring maximum dollars flowed to certain states and districts. It seems likely that Isaacman and the Trump administration will take a whack at some of these sacred cows.

High on the list is the Space Launch System rocket, which Congress created more than a dozen years ago. The rocket and its ground systems have been a testament to the waste inherent in large government programs funded by cost-plus contracts. NASA’s current administrator, Bill Nelson, had a hand in creating the SLS rocket. Even he has decried the effect of this type of contracting as a “plague” on the space agency.

Currently, NASA plans to use the SLS rocket as the means of launching four astronauts inside the Orion spacecraft to lunar orbit. There, they will rendezvous with SpaceX’s Starship vehicle, go down to the Moon for a few days, and then come back to Orion. The spacecraft will then return to Earth.

So long, SLS?

Multiple sources have told Ars that the SLS rocket—which has long had staunch backing from Congress—is now on the chopping block. No final decisions have been made, but a tentative deal is in place with lawmakers to end the rocket in exchange for moving US Space Command to Huntsville, Alabama.

So how would NASA astronauts get to the Moon without the SLS rocket? Nothing is final, and the trade space is open. One possible scenario being discussed for future Artemis missions is to launch the Orion spacecraft on a New Glenn rocket into low-Earth orbit. There, it could dock with a Centaur upper stage that would launch on a Vulcan rocket. This Centaur stage would then boost Orion toward lunar orbit.

NASA’s Space Launch System rocket is seen on the launch pad at Kennedy Space Center in April 2022.

Credit: Trevor Mahlmann


Such a scenario is elegant because it uses rockets that would cost a fraction of the SLS and also includes all key contractors currently involved in the Artemis program, with the exception of Boeing, which would lose out financially. (Northrop Grumman will still make solid rocket boosters for Vulcan, and Aerojet Rocketdyne will make the RL-10 upper-stage engines for that rocket.)

As part of the Artemis program, NASA is competing with China to not only launch astronauts to the south pole of the Moon but also to develop a sustainable base of operations there. While there is considerable interest in Mars, sources told Ars that the focus of the space agency is likely to remain on a program that goes to the Moon first and then develops plans for Mars.

This competition is not one between Elon Musk, who founded SpaceX, and Jeff Bezos, who founded Blue Origin. Rather, both are seen as players on the US team. The Trump administration seems to view entrepreneurial spirit as the key advantage the United States holds over China. This op-ed in Space News offers a good overview of that sentiment.

So whither NASA? Under the Trump administration, NASA’s role is likely to focus on stimulating the efforts of commercial space entrepreneurs. Isaacman’s marching orders for NASA will almost certainly be two words: results and speed. The administration believes NASA should return to its roots in the National Advisory Committee for Aeronautics, which undertook, promoted, and institutionalized aeronautical research—doing the same now for space.

It is not easy to turn a big bureaucracy, and there will undoubtedly be friction and pain points. But the opportunity here is enticing: NASA should not be competing with things that private industry is already doing better, such as launching big rockets. Rather, it should find difficult research and development projects at the edge of the possible. This will certainly be Isaacman’s most challenging mission yet.

Photo of Eric Berger

Eric Berger is the senior space editor at Ars Technica, covering everything from astronomy to private space to NASA policy, and author of two books: Liftoff, about the rise of SpaceX; and Reentry, on the development of the Falcon 9 rocket and Dragon. A certified meteorologist, Eric lives in Houston.

How did the CEO of an online payments firm become the nominee to lead NASA?

flour,-water,-salt,-github:-the-bread-code-is-a-sourdough-baking-framework

Flour, water, salt, GitHub: The Bread Code is a sourdough baking framework

One year ago, I didn’t know how to bake bread. I just knew how to follow a recipe.

If everything went perfectly, I could turn out something plain but palatable. But should anything change—temperature, timing, flour, Mercury being in Scorpio—I’d turn out a partly poofy pancake. I presented my partly poofy pancakes to people, and they were polite, but those platters were not particularly palatable.

During a group vacation last year, a friend made fresh sourdough loaves every day, and we devoured them. He gladly shared his knowledge, his starter, and his go-to recipe. I took it home, tried it out, and made a naturally leavened, artisanal pancake.

I took my confusion to YouTube, where I found Hendrik Kleinwächter’s “The Bread Code” channel and his video promising a course on “Your First Sourdough Bread.” I watched and learned a lot, but I couldn’t quite translate 30 minutes of intensive couch time to hours of mixing, raising, slicing, and baking. Pancakes, part three.

It felt like there had to be more to this. And there was—a whole GitHub repository more.

The Bread Code gave Kleinwächter a gratifying second career, and it’s given me bread I’m eager to serve people. This week alone, I’m making sourdough Parker House rolls, a rosemary olive loaf for Friendsgiving, and then a za’atar flatbread and standard wheat loaf for actual Thanksgiving. And each of us has learned more about perhaps the most important aspect of coding, bread, teaching, and lots of other things: patience.

Hendrik Kleinwächter on his Bread Code channel, explaining his book.

Resources, not recipes

The Bread Code is centered around a book, The Sourdough Framework. It’s an open source project whose LaTeX source compiles into new editions of the book, and it’s free to read online. It has one real bread loaf recipe, if you can call a 68-page middle-section journey a recipe. It has 17 flowcharts, 15 tables, and dozens of timelines, process illustrations, and photos of sourdough going both well and terribly. Like any cookbook, there’s a bit about Kleinwächter’s history with this food, and some sourdough bread history. Then the reader is dropped straight into “How Sourdough Works,” which is in no way a summary.

“To understand the many enzymatic reactions that take place when flour and water are mixed, we must first understand seeds and their role in the lifecycle of wheat and other grains,” Kleinwächter writes. From there, we follow a seed through hibernation, germination, photosynthesis, and, through humans’ grinding of these seeds, exposure to amylase and protease enzymes.

I had arrived at this book with these specific loaf problems to address. But first, it asks me to consider, “What is wheat?” This sparked vivid memories of Computer Science 114, in which a professor, asked to troubleshoot misbehaving code, would instead tell students to “Think like a compiler,” or “Consider the recursive way to do it.”

And yet, “What is wheat” did help. Having a sense of what was happening inside my starter, and my dough (which is really just a big, slow starter), helped me diagnose what was going right or wrong with my breads. Extra-sticky dough and tightly arrayed holes in the bread meant I had let the bacteria win out over the yeast. I learned when to be rough with the dough to form gluten and when to gently guide it into shape to preserve its gas-filled form.

I could eat a slice of each loaf and get a sense of how things had gone. The inputs, outputs, and errors could be ascertained and analyzed more easily than in my prior stance, which was, roughly, “This starter is cursed and so am I.” Using hydration percentages, measurements relative to protein content, a few tests, and troubleshooting steps, I could move closer to fresh, delicious bread. Framework: accomplished.
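Those baker’s-percentage calculations are simple enough to sketch in code. Here’s a minimal illustration of the idea; the ingredient weights and the 100 percent-hydration starter are hypothetical, not taken from Kleinwächter’s book:

```python
def bakers_percentages(flour_g, water_g, salt_g, starter_g, starter_hydration=1.0):
    """Express a sourdough formula as baker's percentages.

    Baker's percentages state every ingredient relative to total flour
    weight (flour = 100%). A 100%-hydration starter is half flour and
    half water by weight, so its contents get folded into the totals.
    """
    starter_flour = starter_g / (1 + starter_hydration)
    starter_water = starter_g - starter_flour
    total_flour = flour_g + starter_flour
    total_water = water_g + starter_water
    return {
        "hydration": round(100 * total_water / total_flour, 1),
        "salt": round(100 * salt_g / total_flour, 1),
        "inoculation": round(100 * starter_g / total_flour, 1),
    }

# Illustrative formula: 500 g flour, 350 g water, 10 g salt, 100 g starter
print(bakers_percentages(500, 350, 10, 100))
# → {'hydration': 72.7, 'salt': 1.8, 'inoculation': 18.2}
```

Framing a loaf this way is what makes formulas scalable and comparable: two recipes with wildly different weights but the same percentages should behave similarly.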

I have found myself very grateful lately that Kleinwächter did not find success with 30-minute YouTube tutorials. Strangely, so has he.

Sometimes weird scoring looks pretty neat. Credit: Kevin Purdy

The slow bread of childhood dreams

“I have had some successful startups; I have also had disastrous startups,” Kleinwächter said in an interview. “I have made some money, then I’ve been poor again. I’ve done so many things.”

Most of those things involve software. Kleinwächter is a German full-stack engineer, and he has founded firms and worked at companies related to blogging, e-commerce, food ordering, travel, and health. He tried to escape the boom-bust startup cycle by starting his own digital agency before one of his products was acquired by hotel booking firm Trivago. After that, he needed a break—and he could afford to take one.

“I went to Naples, worked there in a pizzeria for a week, and just figured out, ‘What do I want to do with my life?’ And I found my passion. My passion is to teach people how to make amazing bread and pizza at home,” Kleinwächter said.

Kleinwächter’s formative bread experiences—weekend loaves baked by his mother, awe-inspiring pizza from Italian ski towns, discovering all the extra ingredients in a supermarket’s version of the dark Schwarzbrot—made him want to bake his own. Like me, he started with recipes, and he wasted a lot of time and flour turning out stuff that produced both failures and a drive for knowledge. He dug in, learned as much as he could, and once he had his head around the how and why, he worked on a way to guide others along the path.

Bugs and syntax errors in baking

When using recipes, there’s a strong, societally reinforced idea that there is one best, tested, and timed way to arrive at a finished food. That’s why we have America’s Test Kitchen, The Food Lab, and all manner of blogs and videos promoting food “hacks.” I should know; I wrote up a whole bunch of them as a young Lifehacker writer. I’m still a fan of such things, from the standpoint of simply getting food done.

As such, the ultimate “hack” for making bread is to use commercial yeast, i.e., dried “active” or “instant” yeast. A manufacturer has done the work of selecting and isolating yeast at its prime state and preserving it for you. Get your liquids and dough to a yeast-friendly temperature and you’ve removed most of the variables; your success should be repeatable. If you just want bread, you can make the iconic no-knead bread with prepared yeast and very little intervention, and you’ll probably get bread that’s better than you can get at the grocery store.

Baking sourdough—or “naturally leavened,” or with “levain”—means a lot of intervention. You are cultivating and maintaining a small ecosystem of yeast and bacteria, unleashing them onto flour, water, and salt, and stepping in after they’ve produced enough flavor and lift—but before they eat all the stretchy gluten bonds. What that looks like depends on many things: your water, your flours, what you fed your starter, how active it was when you added it, the air in your home, and other variables. Most important is your ability to notice things over long periods of time.

When things go wrong, debugging can be tricky. I was able to personally ask Kleinwächter what was up with my bread, because I was interviewing him for this article. There were many potential answers, including:

  • I should recognize, first off, that I was trying to bake the hardest kind of bread: freestanding wheat-based sourdough.
  • You have to watch—and smell—your starter to make sure it has the right mix of yeast to bacteria before you use it.
  • Using less starter (a lower “inoculation”) would make it easier not to over-ferment.
  • Eyeballing my dough rise in a bowl was hard; try measuring a sample in something like an aliquot tube.
  • Winter and summer call for very different dough timings, even with modern indoor climate control.
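The aliquot suggestion is easy to see in numbers: instead of eyeballing the whole dough, you pour a small sample into a marked tube and track its percent rise. A toy sketch, with made-up readings and a made-up 50 percent target (the right target depends on your flour, temperature, and starter):

```python
def rise_percent(initial_mm, current_mm):
    """Percent rise of a small dough sample in a marked (aliquot) tube."""
    return 100 * (current_mm - initial_mm) / initial_mm

# Hypothetical readings over bulk fermentation, starting at 30 mm.
TARGET = 50  # percent; illustrative, not a universal rule
for mm in [30, 34, 39, 45, 47]:
    pct = rise_percent(30, mm)
    marker = " <- time to shape" if pct >= TARGET else ""
    print(f"{mm} mm: {pct:.0f}% risen{marker}")
```

The point of the tube is that a narrow column of dough gives a readable number, where a bowl gives only a vague impression.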

But I kept at it. I was particularly susceptible to wanting things to go quicker and demanding to see a huge rise in my dough before baking. Ironically, this leads to the flattest results, as the bacteria eat through all the gluten bonds. When I slowed down, changed just one thing at a time, and looked deeper into my results, I got better.


The Bread Code YouTube page and the ways in which one must cater to algorithms.

Credit: The Bread Code


YouTube faces and TikTok sausage

Emailing and trading video responses with Kleinwächter, I got the sense that he, too, has learned to go the slow, steady route with his Bread Code project.

For a while, he was turning out YouTube videos, and he wanted them to work. “I’m very data-driven and very analytical. I always read the video metrics, and I try to optimize my videos,” Kleinwächter said. “Which means I have to use a clickbait title, and I have to use a clickbait-y thumbnail, plus I need to make sure that I catch people in the first 30 seconds of the video.” This, however, is “not good for us as humans because it leads to more and more extreme content.”

Kleinwächter also dabbled in TikTok, making videos in which, leaning into his German heritage, “the idea was to turn everything into a sausage.” The metrics and imperatives on TikTok were similar to those on YouTube but hyperscaled. He could put hours or days into a video, only for 1 percent of his 200,000 YouTube subscribers to see it unless he caught the algorithm wind.

The frustrations inspired him to slow down and focus on his site and his book. With his community’s help, The Bread Code has just finished its second Kickstarter-backed printing run of 2,000 copies. There’s a Discord full of bread heads eager to diagnose and correct each other’s loaves and occasional pull requests from inspired readers. Kleinwächter has seen people go from buying what he calls “Turbo bread” at the store to making their own, and that’s what keeps him going. He’s not gambling on an attention-getting hit, but he’s in better control of how his knowledge and message get out.

“I think homemade bread is something that’s super, super undervalued, and I see a lot of benefits to making it yourself,” Kleinwächter said. “Good bread just contains flour, water, and salt—nothing else.”

Loaf that is split across the middle-top, with flecks of olives showing.

A test loaf of rosemary olive sourdough bread. An uneven amount of olive bits ended up on the top and bottom, because there is always more to learn.

Credit: Kevin Purdy


You gotta keep doing it—that’s the hard part

I can’t say it has been entirely smooth sailing ever since I self-certified with The Bread Code framework. I know what level of fermentation I’m aiming for, but I sometimes get home from an outing later than planned, arriving at dough that’s trying to escape its bucket. My starter can be very temperamental when my house gets dry and chilly in the winter. And my dough slicing (scoring), being the very last step before baking, can be rushed, resulting in some loaves with weird “ears,” not quite ready for the bakery window.

But that’s all part of it. Your sourdough starter is a collection of organisms that are best suited to what you’ve fed them, developed over time, shaped by their environment. There are some modern hacks that can help make good bread, like using a pH meter. But the big hack is just doing it, learning from it, and getting better at figuring out what’s going on. I’m thankful that folks like Kleinwächter are out there encouraging folks like me to slow down, hack less, and learn more.


are-any-of-apple’s-official-magsafe-accessories-worth-buying?

Are any of Apple’s official MagSafe accessories worth buying?


When MagSafe was introduced, it promised an accessories revolution. Meh.

Apple’s current lineup of MagSafe accessories. Credit: Samuel Axon

When Apple introduced what it currently calls MagSafe in 2020, its marketing messaging suggested that the magnetic attachment standard for the iPhone would produce a boom in innovation in accessories, making things possible that simply weren’t before.

Four years later, that hasn’t really happened—either from third-party accessory makers or Apple’s own lineup of branded MagSafe products.

Instead, we have a lineup of accessories that matches pretty much what was available at launch in 2020: chargers, cases, and just a couple more unusual applications.

With the launch of the iPhone 16 just behind us and the holidays just in front of us, a bunch of people are moving to phones that support MagSafe for the first time. Apple loves an upsell, so it offers some first-party MagSafe accessories—some useful, some not worth the cash, given the premiums it sometimes charges.

Given all that, it’s a good time to check in and quickly point out which (if any) of these first-party MagSafe accessories might be worth grabbing alongside that new iPhone and which ones you should skip in favor of third-party offerings.

Cases with MagSafe

Look, we could write thousands of words about the variety of iPhone cases available, or even just about those that support MagSafe to some degree or another—and we still wouldn’t really scratch the surface. (Unless that surface was made with Apple’s leather-replacement FineWoven material—hey-o!)

It’s safe to say there’s a third-party case for every need and every type of person out there. If you want one that meets your exact needs, you’ll be able to find it. Just know that cases that are labeled as MagSafe-ready will allow charge through and will let the magnets align correctly between a MagSafe charger and an iPhone—that’s really the whole point of the “MagSafe” name.

But if you prefer to stick with Apple’s own cases, there are currently two options: the clear cases and the silicone cases.

A clear iPhone case on a table

The clear case is definitely the superior of Apple’s two first-party MagSafe cases. Credit: Samuel Axon

The clear cases actually have a circle where the edges of the MagSafe magnets are, which is pretty nice for getting the magnets to snap without any futzing—though it’s really not necessary, since, well, magnets attract. They have a firm plastic shell that is likely to do a good job of protecting your phone when you drop it.

The silicone case is… fine. Frankly, it’s ludicrously priced for what it is. It offers no advantages over a plethora of third-party cases that cost exactly half as much.

Recommendation: The clear case has its advantages, but the silicone case is awfully expensive for what it is. Generally, third party is the way to go. There are lots of third-party cases from manufacturers who got licensed by Apple, and you can generally trust those will work with wireless charging just fine. That was the whole point of the MagSafe branding, after all.

The MagSafe charger

At $39 or $49 (depending on length, one meter or two), these charging cables are pretty pricey. But they’re also highly durable, relatively efficient, and super easy to use. Still, in most cases, you might as well just use any old USB-C cable.

There are some situations where you might prefer this option, though—for example, if you prop your iPhone up against your bedside lamp like a nightstand clock, or if you (like me) listen to audiobooks on wired earbuds while you fall asleep via the USB-C port, but you want to make sure the phone is still charging.

A charger with cable sits on a table

The MagSafe charger for the iPhone. Credit: Samuel Axon

So the answer on Apple’s MagSafe charger is that it’s pretty specialized, but it’s arguably the best option for those who have some specific reason not to just use USB-C.

Recommendation: Just use a USB-C cable, unless you have a specific reason to go this route—shoutout to my fellow individuals who listen to audiobooks while falling asleep but need headphones so as not to keep their spouse awake but prefer wired earbuds that use the USB-C port over AirPods to avoid losing AirPods in the bed covers. I’m sure there are dozens of us! If you do go this route, Apple’s own cable is the safest pick.

Apple’s FineWoven Wallet with MagSafe

While I’d long known people with dense wallet cases for their iPhones, I was excited about Apple’s leather (and later FineWoven) wallet with MagSafe when it was announced. I felt the wallet cases I’d seen were way too bulky, making the phone less pleasant to use.

Unfortunately, Apple’s FineWoven Wallet with MagSafe might be the worst official MagSafe product.

The problem is that the “durable microtwill” material that Apple went with instead of leather is prone to scratching, as many owners have complained. That’s a bit frustrating for something that costs nearly $60.

Apple's MagSafe wallet on a table

The MagSafe wallet has too many limitations to be worthwhile for most people. Credit: Samuel Axon

The wallet also only holds a few cards, and putting cards here means you probably can’t or at least shouldn’t try to use wireless charging, because the cards would be between the charger and the phone. Apple itself warns against doing this.

For those reasons, skip the FineWoven Wallet. There are lots of better-designed iPhone wallet cases out there, even though they might not be so minimalistic.

Recommendation: Skip this one. It’s a great idea in theory, but in practice and execution, it just doesn’t deliver. There are zillions of great wallet cases out there if you don’t mind a bit of bulk—just know you’ll have some wireless charging issues with many cases.

Other categories offered by third parties

Frankly, a lot of the more interesting applications of MagSafe for the iPhone are only available through third parties.

There are monitor mounts for using the iPhone as a webcam with Macs; bedside table stands for charging the phone while it acts as a smart display; magnetic phone stands for car dashboards that let you use GPS while you drive; magnetic power banks and portable batteries that snap onto the phone; and, of course, multi-device chargers similar to the infamously canceled AirPower charging pad Apple had planned to release at one point. (I have the Belkin Boost Charge Pro 3-in-1 on my desk, and it works great.)

It’s not the revolution of new applications that some imagined when MagSafe was launched, but that’s not really a surprise. Still, there are some quality products out there. It’s both strange and a pity that Apple hasn’t made most of them itself.

No revolution here

Truthfully, MagSafe never seemed like it would be a huge smash. iPhones already supported Qi wireless charging before it came along, so the idea of magnets keeping the device aligned with the charger was always the main appeal—its existence potentially saved some users from ending up with chargers that didn’t quite work right with their phones, provided those users bought officially licensed MagSafe accessories.

Apple’s MagSafe accessories are often overpriced compared to alternatives from Belkin and other frequent partners. MagSafe seemed to do a better job bringing some standards to certain third-party products than it did bringing life to Apple’s offerings, and it certainly did not bring about a revolution of new accessory categories to the iPhone.

Still, it’s hard to blame anyone for choosing to go with Apple’s versions; the world of third-party accessories can be messy, and going the first-party route is generally a surefire way to know you’re not going to have many problems, even if the sticker’s a bit steep.

You could shop for third-party options, but sometimes you want a sure thing. With the possible exception of the FineWoven Wallet, all of these Apple-made MagSafe products are sure things.

Photo of Samuel Axon

Samuel Axon is a senior editor at Ars Technica. He covers Apple, software development, gaming, AI, entertainment, and mixed reality. He has been writing about gaming and technology for nearly two decades at Engadget, PC World, Mashable, Vice, Polygon, Wired, and others. He previously ran a marketing and PR agency in the gaming industry, led editorial for the TV network CBS, and worked on social media marketing strategy for Samsung Mobile at the creative agency SPCSHP. He also is an independent software and game developer for iOS, Windows, and other platforms, and he is a graduate of DePaul University, where he studied interactive media and software development.


how-physics-moves-from-wild-ideas-to-actual-experiments

How physics moves from wild ideas to actual experiments


Science often accommodates audacious proposals.

Instead of using antennas, could we wire up trees in a forest to detect neutrinos? Credit: Claire Gillo/PhotoPlus Magazine/Future via Getty Images

Neutrinos are some of nature’s most elusive particles. One hundred trillion fly through your body every second, but each one has only a tiny chance of jostling one of your atoms, a consequence of the incredible weakness of the weak nuclear force that governs neutrino interactions. That tiny chance means that reliably detecting neutrinos takes many more atoms than are in your body. To spot neutrinos colliding with atoms in the atmosphere, experiments have buried 1,000 tons of heavy water, woven cameras through a cubic kilometer of Antarctic ice, and planned to deploy 200,000 antennas.

In a field full of ambitious plans, a recent proposal by Steven Prohira, an assistant professor at the University of Kansas, is especially strange. Prohira suggests that instead of using antennas, we could detect the tell-tale signs of atmospheric neutrinos by wiring up a forest of trees. His suggestion may turn out to be impossible, but it could also be an important breakthrough. To find out which it is, he’ll need to walk a long path, refining prototypes and demonstrating his idea’s merits.

Prohira’s goal is to detect so-called ultra-high-energy neutrinos. Each one of these tiny particles carries more than fifty million times the energy released by a uranium nucleus during nuclear fission. Their origins are not fully understood, but they are expected to be produced by some of the most powerful events in the Universe, from collapsing stars and pulsars to the volatile environments around the massive black holes at the centers of galaxies. If we could detect these particles more reliably, we could learn more about these extreme astronomical events.

Other experiments, like a project called GRAND, plan to build antennas to detect these neutrinos, watching for radio signals that come from their reactions with our atmosphere. However, finding sites for these antennas can be a challenge. Motivated by this experiment, Prohira dug up old studies by the US Army that suggested an alternative: instead of antennas, use trees. By wrapping a wire around each tree, army researchers found that the trees were sensitive to radio waves, which they hoped to use to receive radio signals in the jungle. Prohira argues that the same trick could be useful for neutrino detection.

Crackpot or legit science?

People suggest wacky ideas every day. Should we trust this one?

At first, you might be a bit suspicious. Prohira’s paper is cautious on the science but extremely optimistic in other ways. He describes the proposal as a way to help conserve the Earth’s forests and even suggests that “a forest detector could also motivate the large-scale reforesting of land, to grow a neutrino detector for future generations.”

Prohira is not a crackpot, though. He has a track record of research in detecting neutrinos via radio waves in more conventional experiments, and he even received an $800,000 MacArthur genius grant a few years ago to support his work.

More generally, studying particles from outer space often demands audacious proposals, especially ones that make use of the natural world. Professor Albrecht Karle works on the IceCube experiment, an array of cameras that detect neutrinos whizzing through a cubic kilometer of Antarctic ice.

“In astroparticle physics, where we often cannot build the entire experiment in a laboratory, we have to resort to nature to help us, to provide an environment that can be used to build a detector. For example, in many parts of astroparticle physics, we are using the atmosphere as a medium, or the ocean, or the ice, or we go deep underground because we need a shield because we cannot construct an artificial shield. There are even ideas to go into space for extremely energetic neutrinos, to build detectors on Jupiter’s moon Europa.”

Such uses of nature are common in the field. India’s GRAPES experiments were designed to measure muons, but they have to filter out anything that’s not a muon to do so. As Professor Sunil Gupta of the Tata Institute explained, the best way to do that was with dirt from a nearby hill.

“The only way we know you can make a muon detector work is by filtering out other radiation […] so what we decided is that we’ll make a civil structure, and we’ll dump three meters of soil on top of that, so those three meters of soil could act as a filter,” he said.

The long road to an experiment

While Prohira’s idea isn’t ridiculous, it’s still just an idea (and one among many). Prohira’s paper describing the idea was uploaded to arXiv.org, a pre-print server, in January. Physicists use pre-print servers to give access to their work before it’s submitted to a scientific journal. That gives other physicists time to comment on the work and suggest revisions. In the meantime, the journal will send the work out to a few selected reviewers, who are asked to judge both whether the paper is likely to be correct and whether it is of sufficient interest to the community.

At this stage, reviewers may find problems with Prohira’s idea. These may take the form of actual mistakes, such as if he made an error in his estimates of the sensitivity of the detector. But reviewers can also ask for more detail. For example, they could request a more extensive analysis of possible errors in measurements caused by the different shapes and sizes of the trees.

If Prohira’s idea makes it through to publication, the next step toward building an actual forest detector would be convincing the larger community. This kind of legwork often takes place at conferences. The International Cosmic Ray Conference is the biggest stage for the astroparticle community, with conferences every two years—the next is scheduled for 2025 in Geneva. Other more specialized conferences, like ARENA, focus specifically on attempts to detect radio waves from high-energy neutrinos. These conferences can offer an opportunity to get other scientists on board and start building a team.

That team will be crucial for the next step: testing prototypes. No matter how good an idea sounds in theory, some problems only arise during a real experiment.

An early version of the GRAPES experiment detected muons by the light they emit passing through tanks of water. To find out how much water was needed, the researchers ran tests: they placed one detector on top of a tank and another at the bottom, then tracked how often both triggered on muons arriving randomly from the atmosphere, repeating the measurement at different water heights. After finding that the tanks would have to be too tall to fit in their underground facility, they had to find wavelength-shifting chemicals that would let them use shorter tanks, along with novel ways of dissolving those chemicals without eroding the tanks' aluminum walls.

“When you try to do something, you run into all kinds of funny challenges,” said Gupta.

The IceCube experiment has a long history of prototypes going back to early concepts that were only distantly related to the final project. The earliest, like the proposed DUMAND project in Hawaii, planned to put detectors in the ocean rather than ice. BDUNT was an intermediate stage, a project that used the depths of Lake Baikal to detect atmospheric neutrinos. While the detectors were still in liquid water, the ability to drive on the lake’s frozen surface made BDUNT’s construction easier.

In a 1988 conference, Robert March, Francis Halzen, and John G. Learned envisioned a kind of “solid state DUMAND” that would use ice instead of water to detect neutrinos. While the idea was attractive, the researchers cautioned that it would require a fair bit of luck. “In summary, this is a detector that requires a number of happy accidents to make it feasible. But if these should come to pass, it may provide the least expensive route to a truly large neutrino telescope,” they said.

In the case of the AMANDA experiment, early tests in Greenland and later tests at the South Pole began to provide these happy accidents. “It was discovered that the ice was even more exceptionally clear and has no radioactivities—absolutely quiet, so it is the darkest and quietest and purest place on Earth,” said Karle.

AMANDA was much smaller than the IceCube experiment, and theorists had already argued that to see cosmic neutrinos, the experiment would need to cover a cubic kilometer of ice. Still, the original AMANDA experiment wasn’t just a prototype; if neutrinos arrived at a sufficient rate, it would spot some. In this sense, it was like the original LIGO experiment, which ran for many years in the early 2000s with only a minimal chance of detecting gravitational waves, but it provided the information needed to perform an upgrade in the 2010s that led to repeated detections. Similarly, the hope of pioneers like Halzen was that AMANDA would be able to detect cosmic neutrinos despite its prototype status.

“There was the chance that, with the knowledge at the time, one might get lucky. He certainly tried,” said Karle.

Prototype experiments often follow this pattern. They’re set up in the hope that they could discover something new about the Universe, but they’re built to at least discover any unexpected challenges that would stop a larger experiment.

Major facilities and the National Science Foundation

For experiments that don’t need huge amounts of funding, these prototypes can lead to the real thing, with scientists ratcheting up their ambition at each stage. But for the biggest experiments, the governments that provide the funding tend to want a clearer plan.

Since Prohira is based in the US, let’s consider the US government. The National Science Foundation has a procedure for its biggest projects, called the Major Research Equipment and Facilities Construction program. Since 2009, it has had a “no cost overrun” policy. In the past, if a project ended up costing more than expected, the NSF could try to find additional funding. Now, projects are supposed to estimate beforehand how the cost could increase and budget extra for the risk. If the budget goes too high anyway, projects should compensate by reducing scope, shrinking the experiment until it falls under costs again.

To make sure they can actually do this, the NSF has a thorough review process.

First, the NSF expects that the scientists proposing a project have done their homework and have already put time and money into prototyping the experiment. The general expectation is that about 20 percent of the experiment’s total budget should have been spent testing out the idea before the NSF even starts reviewing it.

With the prototypes tested and a team assembled, the scientists will get together to agree on a plan. This often means writing a report to hash out what they have in mind. The IceCube team is in the process of proposing a second generation of their experiment, an expansion that would cover more ice with detectors and achieve further scientific goals. The team recently finished the third part of a Technical Design Report, which details the technical case for the experiment.

After that, experiments go into the NSF’s official experiment design process. This has three phases: conceptual design, preliminary design, and final design. Each phase ends with a review document summarizing the current state of the plans as they firm up, going from a general scientific case to a specific plan for an experiment at a specific site. Risks are cataloged in detail, with estimates of how likely each one is and how much it would cost, a process that sometimes involves computer simulations. By the end of the process, the project has a fully detailed plan, and construction can begin.

Over the next few years, Prohira will test out his proposal. He may get lucky, like the researchers who dug into Antarctic ice, and find a surprisingly clear signal. He may be unlucky instead and find that the complexities of trees, with different spacings and scatterings of leaves, make the signals they generate unfit for neutrino science. He, and we, cannot know in advance which will happen.

That’s what science is for, after all.


the-key-moment-came-38-minutes-after-starship-roared-off-the-launch-pad

The key moment came 38 minutes after Starship roared off the launch pad


SpaceX wasn’t able to catch the Super Heavy booster, but Starship is on the cusp of orbital flight.

The sixth flight of Starship lifts off from SpaceX’s Starbase launch site at Boca Chica Beach, Texas. Credit: SpaceX

SpaceX launched its sixth Starship rocket Tuesday, proving for the first time that the stainless steel ship can maneuver in space and paving the way for an even larger, upgraded vehicle slated to debut on the next test flight.

The only hiccup was an abortive attempt to catch the rocket’s Super Heavy booster back at the launch site in South Texas, something SpaceX achieved on the previous flight on October 13. The Starship upper stage flew halfway around the world, reaching an altitude of 118 miles (190 kilometers) before plunging through the atmosphere for a pinpoint slow-speed splashdown in the Indian Ocean.

The sixth flight of the world’s largest launcher—standing 398 feet (121.3 meters) tall—began with a lumbering liftoff from SpaceX’s Starbase facility near the US-Mexico border at 4 pm CST (22:00 UTC) Tuesday. The rocket headed east over the Gulf of Mexico, propelled by 33 Raptor engines clustered on the bottom of its Super Heavy first stage.

A few miles away, President-elect Donald Trump joined SpaceX founder Elon Musk to witness the launch. The SpaceX boss became one of Trump’s closest allies in this year’s presidential election, giving the world’s richest man extraordinary influence in US space policy. Sen. Ted Cruz (R-Texas) was there, too, among other lawmakers. Gen. Chance Saltzman, the top commander in the US Space Force, stood nearby, chatting with Trump and other VIPs.

Elon Musk, SpaceX’s CEO, President-elect Donald Trump, and Gen. Chance Saltzman of the US Space Force watch the sixth launch of Starship Tuesday. Credit: Brandon Bell/Getty Images

From their viewing platform, they watched Starship climb into a clear autumn sky. At full power, the 33 Raptors chugged more than 40,000 pounds of super-cold liquid methane and liquid oxygen per second. The engines generated 16.7 million pounds of thrust, 60 percent more than the Soviet N1, the second-largest rocket in history.

Eight minutes later, the rocket’s upper stage, itself also known as Starship, was in space, completing the program’s fourth straight near-flawless launch. The first two test flights faltered before reaching their planned trajectory.

A brief but crucial demo

As exciting as it was, we’ve seen all that before. One of the most important new things engineers wanted to test on this flight occurred about 38 minutes after liftoff.

That’s when Starship reignited one of its six Raptor engines for a brief burn to make a slight adjustment to its flight path. The burn lasted only a few seconds, and the impulse was small—just a 48 mph (77 km/hour) change in velocity, or delta-V—but it demonstrated that the ship can safely deorbit itself on future missions.

With this achievement, Starship will likely soon be cleared to travel into orbit around Earth and deploy Starlink Internet satellites or conduct in-space refueling experiments, two of the near-term objectives on SpaceX’s Starship development roadmap.

Launching Starlinks aboard Starship will allow SpaceX to expand the capacity and reach of its commercial consumer broadband network, which, in turn, provides revenue for Musk to reinvest into Starship. Orbital refueling enables Starship voyages beyond low-Earth orbit, fulfilling SpaceX’s multibillion-dollar contract with NASA to provide a human-rated Moon lander for the agency’s Artemis program. Likewise, transferring cryogenic propellants in orbit is a prerequisite for sending Starships to Mars, making real Musk’s dream of creating a settlement on the red planet.

Artist’s illustration of Starship on the surface of the Moon. Credit: SpaceX

Until now, SpaceX has intentionally launched Starships to speeds just shy of the blistering velocities needed to maintain orbit. Engineers wanted to test the Raptor’s ability to reignite in space on the third Starship test flight in March, but the ship lost control of its orientation, and SpaceX canceled the engine firing.

Before going for a full orbital flight, officials needed to confirm that Starship could steer itself back into the atmosphere for reentry, ensuring it wouldn’t present any risk to the public with an unguided descent over a populated area. After Tuesday, SpaceX can check this off its to-do list.

“Congrats to SpaceX on Starship’s sixth test flight,” NASA Administrator Bill Nelson posted on X. “Exciting to see the Raptor engine restart in space—major progress towards orbital flight. Starship’s success is Artemis’ success. Together, we will return humanity to the Moon & set our sights on Mars.”

While it lacks the pizzazz of a fiery launch or landing, the engine relight unlocks a new phase of Starship development. SpaceX has now proven that the rocket is capable of reaching space with a fair measure of reliability. Next, engineers will fine-tune how to reliably recover the booster and the ship and learn how to use them.

Acid test

SpaceX appears well on its way to doing this. While SpaceX didn’t catch the Super Heavy booster with the launch tower’s mechanical arms Tuesday, engineers have shown they can do it. The challenge of catching Starship itself back at the launch pad is more daunting. The ship starts its reentry thousands of miles from Starbase, traveling approximately 17,000 mph (27,000 km/hour), and must thread the gap between the tower’s catch arms within a matter of inches.

The good news is that SpaceX has now twice proven it can bring Starship back to a precision splashdown in the Indian Ocean. In October, the ship settled into the sea in darkness. SpaceX moved the launch time for Tuesday’s flight to the late afternoon, setting up for splashdown shortly after sunrise northwest of Australia.

The shift in time paid off with some stunning new visuals. Cameras mounted on the outside of Starship beamed dazzling live views back to SpaceX through the Starlink network, showing a now-familiar glow of plasma encasing the spacecraft as it plowed deeper into the atmosphere. But this time, daylight revealed the ship’s flaps moving to control its belly-first descent toward the ocean. After passing through a deck of low clouds, Starship reignited its Raptor engines and tilted from horizontal to vertical, making contact with the water tail-first within view of a floating buoy and a nearby aircraft in position to observe the moment.

Here’s a replay of the spacecraft’s splashdown around 65 minutes after launch.

Splashdown confirmed! Congratulations to the entire SpaceX team on an exciting sixth flight test of Starship! pic.twitter.com/bf98Va9qmL

— SpaceX (@SpaceX) November 19, 2024

The ship made it through reentry despite flying with a substandard heat shield. Starship’s thermal protection system is made up of thousands of ceramic tiles to protect the ship from temperatures as high as 2,600° Fahrenheit (1,430° Celsius).

Kate Tice, a SpaceX engineer hosting the company’s live broadcast of the mission, said teams at Starbase removed 2,100 heat shield tiles from Starship ahead of Tuesday’s launch. Their removal exposed wider swaths of the ship’s stainless steel skin to super-heated plasma, and SpaceX teams were eager to see how well the spacecraft held up during reentry. In the language of flight testing, this approach is called exploring the corners of the envelope, where engineers evaluate how a new airplane or rocket performs in extreme conditions.

“Don’t be surprised if we see some wackadoodle stuff happen here,” Tice said. There was nothing of the sort. One of the ship’s flaps appeared to suffer some heating damage, but it remained intact and functional, and the harm looked to be less substantial than damage seen on previous flights.

Many of the removed tiles came from the sides of Starship where SpaceX plans to place catch fittings on future vehicles. These are the hardware protuberances that will catch on the top side of the launch tower’s mechanical arms, similar to fittings used on the Super Heavy booster.

“The next flight, we want to better understand where we can install catch hardware, not necessarily to actually do the catch but to see how that hardware holds up in those spots,” Tice said. “Today’s flight will help inform ‘does the stainless steel hold up like we think it may, based on experiments that we conducted on Flight 5?'”

Musk wrote on his social media platform X that SpaceX could try to bring Starship back to Starbase for a catch on the eighth test flight, which is likely to occur in the first half of 2025.

“We will do one more ocean landing of the ship,” Musk said. “If that goes well, then SpaceX will attempt to catch the ship with the tower.”

The heat shield, Musk added, is a focal point of SpaceX’s attention. The delicate heat-absorbing tiles used on the belly of the space shuttle proved vexing to NASA technicians. Early in the shuttle’s development, NASA had trouble keeping tiles adhered to the shuttle’s aluminum skin. Each of the shuttle tiles was custom-machined to fit on a specific location on the orbiter, complicating refurbishment between flights. Starship’s tiles are all hexagonal in shape and agnostic to where technicians place them on the vehicle.

“The biggest technology challenge remaining for Starship is a fully & immediately reusable heat shield,” Musk wrote on X. “Being able to land the ship, refill propellant & launch right away with no refurbishment or laborious inspection. That is the acid test.”

This photo of the Starship vehicle for Flight 6, numbered Ship 31, shows exposed portions of the vehicle’s stainless steel skin after tile removal. Credit: SpaceX

There were no details available Tuesday night on what caused the Super Heavy booster to divert from its planned catch on the launch tower. After detaching from the Starship upper stage less than three minutes into the flight, the booster reversed course to begin the journey back to Starbase.

Then SpaceX’s flight director announced the rocket would fly itself into the Gulf rather than back to the launch site: “Booster offshore divert.”

The booster finished its descent with a seemingly perfect landing burn using a subset of its Raptor engines. As expected after the water landing, the booster—itself 233 feet (71 meters) tall—toppled and broke apart in a dramatic fireball visible to onshore spectators.

In an update posted to its website after the launch, SpaceX said automated health checks of hardware on the launch and catch tower triggered the aborted catch attempt. The company did not say what system failed the health check. As a safety measure, SpaceX must send a manual command for the booster to come back to land in order to prevent a malfunction from endangering people or property.

Turning it up to 11

There will be plenty more opportunities for more booster catches in the coming months as SpaceX ramps up its launch cadence at Starbase. Gwynne Shotwell, SpaceX’s president and chief operating officer, hinted at the scale of the company’s ambitions last week.

“We just passed 400 launches on Falcon, and I would not be surprised if we fly 400 Starship launches in the next four years,” she said at the Baron Investment Conference.

The next batch of test flights will use an improved version of Starship designated Block 2, or V2. Starship Block 2 comes with larger propellant tanks, redesigned forward flaps, and a better heat shield.

The new-generation Starship will hold more than 11 million pounds of fuel and oxidizer, about a million pounds more than the capacity of Starship Block 1. The booster and ship will produce more thrust, and Block 2 will measure 408 feet (124.4 meters) tall, stretching the height of the full stack by a little more than 10 feet.

Put together, these modifications should give Starship the ability to heave a payload of up to 220,000 pounds (100 metric tons) into low-Earth orbit, about twice the carrying capacity of the first-generation ship. Further down the line, SpaceX plans to introduce Starship Block 3 to again double the ship’s payload capacity.

Just as importantly, these changes are designed to make it easier for SpaceX to recover and reuse the Super Heavy booster and Starship upper stage. SpaceX’s goal of fielding a fully reusable launcher builds on the partial reuse SpaceX pioneered with its Falcon 9 rocket. This should dramatically bring down launch costs, according to SpaceX’s vision.

With Tuesday’s flight, it’s clear Starship works. Now it’s time to see what it can do.

Updated with additional details, quotes, and images.

Photo of Stephen Clark

Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.


i,-too,-installed-an-open-source-garage-door-opener,-and-i’m-loving-it

I, too, installed an open source garage door opener, and I’m loving it


Open source closed garage

OpenGarage restored my home automations and gave me a whole bunch of new ideas.

Hark! The top portion of a garage door has entered my view, and I shall alert my owner to it. Credit: Kevin Purdy

Like Ars Senior Technology Editor Lee Hutchinson, I have a garage. The door on that garage is opened and closed by a device made by a company that, as with Lee’s, offers you a way to open and close it with a smartphone app. But that app doesn’t work with my preferred home automation system, Home Assistant, and also looks and works like an app made by a garage door company.

I had looked into the ratgdo Lee installed, and raved about, but hooking it up to my particular Genie/Aladdin system would have required installing limit switches. So I instead installed an OpenGarage unit ($50 plus shipping). My garage opener now works with Home Assistant (and thereby pretty much anything else), it’s not subject to the whims of API access, and I’ve got a few ideas how to make it even better. Allow me to walk you through what I did, why I did it, and what I might do next.

Thanks, I’ll take it from here, Genie

Genie, maker of my Wi-Fi-capable garage door opener (sold as an “Aladdin Connect” system), is not in the same boat as the Chamberlain/myQ setup that inspired Lee’s project. There was a working Aladdin Connect integration in Home Assistant until the company changed its API in January 2024. Genie said it would release its own official Home Assistant integration in June, and it did, but then it was quickly pulled back, seemingly for licensing issues. Since then, there have been no updates on the matter. (I have emailed Genie for comment and will update this post if I receive a reply.)

This is not egregious behavior, at least on the scale of garage door opener firms. And Aladdin’s app works with Google Home and Amazon Alexa, but not with Home Assistant or my secondary/lazy option, HomeKit/Apple Home. It also logs me out “for security” more often than I’d like and tells me this only after an iPhone shortcut refuses to fire. It has some decent features, but without deeper integrations, I can’t do things like have the brighter ceiling lights turn on when the door opens or flash indoor lights if the garage door stays open too long. At least not without Google or Amazon.

I’ve seen OpenGarage passed around the Home Assistant forums and subreddits over the years. It is, as the name implies, fully open source: hardware design, firmware, app code, API, everything. It is a tiny ESP board with an ultrasonic distance sensor and a relay attached. You can control and monitor it from a web browser (mobile or desktop) or through IFTTT and MQTT, and with the latest firmware, it can send you email alerts. I decided to pull out the 6-foot ladder and give it a go.

Prototypes of the OpenGarage unit. To me, they look like little USB-powered owls, just with very stubby wings. Credit: OpenGarage

Installing the little watching owl

You generally mount the OpenGarage unit to the roof of your garage, so the distance sensor can detect if your garage door has rolled up in front of it. There are options for mounting with magnetic contact sensors or a side view of a roll-up door, or you can figure out some other way in which two different sensor depth distances would indicate an open or closed door. If you’ve got a Security+ 2.0 door (the kind with the yellow antenna, generally), you’ll need an adapter, too.
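The open-or-closed decision boils down to comparing one distance reading against a threshold. Here's a minimal sketch of that logic in Python; the threshold values and the "car roof" second threshold are hypothetical (the real firmware lets you configure thresholds through its web interface), but the idea is the same.

```python
# Sketch of distance-based state detection, as an overhead-mounted
# ultrasonic sensor would see it. Threshold values are made up for
# illustration; the actual unit's thresholds are user-configurable.

DOOR_THRESHOLD_CM = 50   # nearer than this: the rolled-up door is in view
CAR_THRESHOLD_CM = 150   # between the two: a car roof is in view

def classify(distance_cm: int) -> tuple[str, bool]:
    """Map one ultrasonic reading to (door state, car present)."""
    if distance_cm < DOOR_THRESHOLD_CM:
        return ("open", False)    # door blocks the sensor; no car visible
    if distance_cm < CAR_THRESHOLD_CM:
        return ("closed", True)   # sensor sees the car roof
    return ("closed", False)      # sensor sees the garage floor

print(classify(30))   # door rolled up in front of the sensor
print(classify(100))  # door closed, car parked below
print(classify(220))  # door closed, empty garage
```

This is also why mounting position matters so much: the sensor needs the open door, the car roof, and the bare floor to each land in a distinct distance band.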

The toughest part of an overhead install is finding a spot that gives the unit a view of your garage door—not too close to rails or other obstructing objects, but still close enough for the contact wires and micro-USB cable to reach. Ideally, it also has a view of your car when the door is closed and the car is inside, so it can report the car's presence. I’ve yet to find the right thing to do with the “car is inside or not” data, but the seed is planted.

OpenGarage’s introduction and explanation video.

My garage setup, like most of them, is pretty simple. There’s a big red glowing button on the wall near the door, and there are two very thin wires running from it to the opener. On the opener, there are four ports that you can open up with a screwdriver press. Most of the wires are headed to the safety sensor at the door bottom, while two come in from the opener button. After stripping a bit of wire to expose more cable, I pressed the contact wires from the OpenGarage into those same opener ports.


The wire terminal on my Genie garage opener. The green and pink wires lead to the OpenGarage unit. Credit: Kevin Purdy

After that, I connected the wires to the OpenGarage unit’s screw terminals, then did some pencil work on the garage ceiling to figure out how far I could run the contact and micro-USB power cable, getting the proper door view while maintaining some right-angle sense of order up there. When I had reached a decent compromise between cable tension and placement, I screwed the sensor into an overhead stud and used a staple gun to secure the wires. It doesn’t look like a pro installed it, but it’s not half bad.


Where I ended up installing my OpenGarage unit. Key points: Above the garage door when open, view of the car below, not too close to rails, able to reach power and opener contact. Credit: Kevin Purdy

A very versatile board

If you’ve got everything placed and wired up correctly, opening the OpenGarage access point or IP address should give you an interface that shows you the status of your garage, your car (optional), and its Wi-Fi and external connections.


The landing screen for the OpenGarage. You can only open the door or change settings if you know the device key (which you should change immediately). Credit: Kevin Purdy

It’s a handy webpage and a basic opener (provided you know the secret device key you set), but OpenGarage is more powerful in how it uses that data. OpenGarage’s device can keep a cloud connection open to Blynk or the maker’s own OpenThings.io cloud server. You can hook it up to MQTT or an IFTTT channel. It can send you alerts when your garage has been open a certain amount of time or if it’s open after a certain time of day.
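Because the device exposes its state over plain HTTP JSON, polling it from a script is trivial. The sketch below uses only the Python standard library; the `/jc` endpoint and field names follow OpenGarage's published API, but the hostname and the exact fields your firmware returns are assumptions worth checking against the docs for your firmware version.

```python
import json
from urllib.request import urlopen

# Minimal status poll against OpenGarage's JSON API. The /jc endpoint
# returns the controller variables (door state, vehicle presence,
# distance reading, and more); verify the field set against your
# firmware's API documentation.

def summarize(jc: dict) -> str:
    """Turn the controller variables into a one-line status."""
    door = "open" if jc.get("door") == 1 else "closed"
    car = "present" if jc.get("vehicle") == 1 else "absent"
    return f"door {door}, car {car}, distance {jc.get('dist')} cm"

def read_status(host: str) -> str:
    """Fetch and summarize live state from a unit on the local network."""
    with urlopen(f"http://{host}/jc", timeout=5) as resp:
        return summarize(json.load(resp))

if __name__ == "__main__":
    # Hypothetical values a unit might report, for illustration:
    print(summarize({"door": 1, "vehicle": 0, "dist": 32}))
```

Anything that can make an HTTP request—a cron job, a dashboard, a home automation platform—can build on this, which is exactly why the open API matters.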

Screenshot showing 5 sensors: garage, distance, restart, vehicle, and signal strength.

You’re telling me you can just… see the state of these things, at all times, on your own network? Credit: Kevin Purdy

You really don’t need a corporate garage coder

For me, the greatest benefit is in hooking OpenGarage up to Home Assistant. I’ve added an opener button to my standard dashboard (one that requires a long-press or two actions to open). I’ve restored the automation that turns on the overhead bulbs for five minutes when the garage door opens. And I can dig in if I want, like alerting me that it’s Monday night at 10 pm and I’ve yet to open the garage door, indicating I forgot to put the trash out. Or maybe some kind of NFC tag to allow for easy opening while on a bike, if that’s not a security nightmare (it might be).
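The trash-night idea above is just a time-and-state check. In practice it would live in a Home Assistant automation, but the decision is easy to see in plain Python; the Monday-at-10-pm schedule matches the example in the text, and everything else here is an assumed sketch.

```python
from datetime import datetime

# A sketch of the "did I forget the trash?" check: nag only when it's
# trash night, it's late, and the garage door never opened today.
# The schedule is from the example above; the rest is hypothetical.

TRASH_WEEKDAY = 0   # Monday
ALERT_HOUR = 22     # 10 pm

def should_nag(now: datetime, door_opened_today: bool) -> bool:
    """True if it's time to send the forgot-the-trash alert."""
    return (now.weekday() == TRASH_WEEKDAY
            and now.hour >= ALERT_HOUR
            and not door_opened_today)

print(should_nag(datetime(2024, 11, 18, 22, 30), False))  # Monday night, never opened
print(should_nag(datetime(2024, 11, 18, 22, 30), True))   # already took it out
```

Feed it the door state from the integration and the rest is a notification action.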

Not for nothing, but OpenGarage is also a deeply likable bit of indie kit. It’s a two-person operation, with Ray Wang building on his work with the open and handy OpenSprinkler project, trading Arduino for ESP8266, and doing some 3D printing to fit the sensors and switches, and Samer Albahra providing mobile app, documentation, and other help. Their enthusiasm for DIY home control has likely brought out the same in others and certainly in me.

Photo of Kevin Purdy

Kevin is a senior technology reporter at Ars Technica, covering open-source software, PC gaming, home automation, repairability, e-bikes, and tech history. He has previously worked at Lifehacker, Wirecutter, iFixit, and Carbon Switch.
