AI haters build tarpits to trap and trick AI scrapers that ignore robots.txt


Making AI crawlers squirm

Attackers explain how an anti-spam defense became an AI weapon.

Last summer, Anthropic inspired backlash when its ClaudeBot AI crawler was accused of hammering websites a million or more times a day.

And it wasn’t the only artificial intelligence company making headlines for supposedly ignoring instructions in robots.txt files to avoid scraping web content on certain sites. Around the same time, Reddit’s CEO called out all AI companies whose crawlers he said were “a pain in the ass to block,” despite the tech industry otherwise agreeing to respect “no scraping” robots.txt rules.

Watching the controversy unfold was a software developer whom Ars has granted anonymity to discuss his development of malware (we’ll call him Aaron). Shortly after he noticed Facebook’s crawler exceeding 30 million hits on his site, Aaron began plotting a new kind of attack on crawlers “clobbering” websites that he told Ars he hoped would give “teeth” to robots.txt.

Building on an anti-spam cybersecurity tactic known as tarpitting, he created Nepenthes, malicious software named after a carnivorous plant that will “eat just about anything that finds its way inside.”

Aaron clearly warns users that Nepenthes is aggressive malware. It’s not to be deployed by site owners uncomfortable with trapping AI crawlers and sending them down an “infinite maze” of static files with no exit links, where they “get stuck” and “thrash around” for months, he tells users. Once trapped, the crawlers can be fed gibberish data, aka Markov babble, which is designed to poison AI models. That’s likely an appealing bonus feature for any site owners who, like Aaron, are fed up with paying for AI scraping and just want to watch AI burn.
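Markov babble of the sort Aaron describes can be produced with a word-level Markov chain: sample some real text, record which words follow which, then walk the chain to emit statistically plausible nonsense. A minimal illustrative sketch (not Nepenthes' actual code):

```python
import random

def build_chain(text, order=2):
    """Map each tuple of `order` consecutive words to the words that follow it."""
    words = text.split()
    chain = {}
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain.setdefault(key, []).append(words[i + order])
    return chain

def babble(chain, length=50, seed=0):
    """Walk the chain to produce plausible-looking but meaningless text."""
    rng = random.Random(seed)
    key = rng.choice(list(chain.keys()))
    out = list(key)
    while len(out) < length:
        options = chain.get(tuple(out[-len(key):]))
        if not options:  # dead end: restart from a random key
            out.extend(rng.choice(list(chain.keys())))
            continue
        out.append(rng.choice(options))
    return " ".join(out[:length])

corpus = ("the crawler follows every link the crawler finds and the maze "
          "grows deeper while the crawler keeps crawling the endless maze")
chain = build_chain(corpus)
print(babble(chain, length=20))
```

The output reads like the corpus statistically, which is exactly why it's hard for a crawler to filter and why it can pollute training data.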

Tarpits were originally designed to waste spammers’ time and resources, but creators like Aaron have now evolved the tactic into an anti-AI weapon. As of this writing, Aaron confirmed that Nepenthes can effectively trap all the major web crawlers. So far, only OpenAI’s crawler has managed to escape.

It’s unclear how much damage tarpits or other AI attacks can ultimately do. Last May, Laxmi Korada, Microsoft’s director of partner technology, published a report detailing how leading AI companies were coping with poisoning, one of the earliest AI defense tactics deployed. He noted that all companies have developed poisoning countermeasures, while OpenAI “has been quite vigilant” and excels at detecting the “first signs of data poisoning attempts.”

Despite these efforts, he concluded that data poisoning was “a serious threat to machine learning models.” And in 2025, tarpitting represents a new threat, potentially increasing the costs of fresh data at a moment when AI companies are heavily investing and competing to innovate quickly while rarely turning significant profits.

“A link to a Nepenthes location from your site will flood out valid URLs within your site’s domain name, making it unlikely the crawler will access real content,” a Nepenthes explainer reads.
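The flood works because every trap page deterministically spawns more trap links, so the crawl frontier grows without bound while never repeating. A hypothetical sketch of such a page generator (not Nepenthes' implementation):

```python
import hashlib

def maze_page(path, links_per_page=5):
    """Return an HTML 'maze' page for any path: the same path always yields
    the same page, and every page links to deeper, equally fake pages."""
    digest = hashlib.sha256(path.encode()).hexdigest()
    children = [f"{path.rstrip('/')}/{digest[i * 8:(i + 1) * 8]}"
                for i in range(links_per_page)]
    body = "\n".join(f'<a href="{c}">{c}</a>' for c in children)
    return f"<html><body>{body}</body></html>"

print(maze_page("/trap"))
```

Because pages are derived from the path hash, the server stores nothing, yet a crawler that enters sees an apparently infinite site.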

The only AI company that responded to Ars’ request to comment was OpenAI, whose spokesperson confirmed that OpenAI is already working on a way to fight tarpitting.

“We’re aware of efforts to disrupt AI web crawlers,” OpenAI’s spokesperson said. “We design our systems to be resilient while respecting robots.txt and standard web practices.”

But to Aaron, the fight is not about winning. Instead, it’s about resisting the AI industry further decaying the Internet with tech that no one asked for, like chatbots that replace customer service agents or the rise of inaccurate AI search summaries. By releasing Nepenthes, he hopes to do as much damage as possible, perhaps spiking companies’ AI training costs, dragging out training efforts, or even accelerating model collapse, with tarpits helping to delay the next wave of enshittification.

“Ultimately, it’s like the Internet that I grew up on and loved is long gone,” Aaron told Ars. “I’m just fed up, and you know what? Let’s fight back, even if it’s not successful. Be indigestible. Grow spikes.”

Nepenthes instantly inspires another tarpit

Nepenthes was released in mid-January but was instantly popularized beyond Aaron’s expectations after tech journalist Cory Doctorow boosted a post from tech commentator Jürgen Geuter praising the novel AI attack method on Mastodon. Very quickly, Aaron was shocked to see engagement with Nepenthes skyrocket.

“That’s when I realized, ‘oh this is going to be something,'” Aaron told Ars. “I’m kind of shocked by how much it’s blown up.”

It’s hard to tell how widely Nepenthes has been deployed. Site owners are discouraged from advertising that the malware is running, so that crawlers ignoring robots.txt instructions face unknown “consequences.”

Aaron told Ars that while “a handful” of site owners have reached out and “most people are being quiet about it,” his web server logs indicate that people are already deploying the tool. Likely, site owners want to protect their content, deter scraping, or mess with AI companies.

When software developer and hacker Gergely Nagy, who goes by the handle “algernon” online, saw Nepenthes, he was delighted. At that time, Nagy told Ars that nearly all of his server’s bandwidth was being “eaten” by AI crawlers.

Already blocking scraping and attempting to poison AI models through a simpler method, Nagy took his defense method further and created his own tarpit, Iocaine. He told Ars the tarpit immediately killed off about 94 percent of bot traffic to his site, which was primarily from AI crawlers. Soon, social media discussion drove users to inquire about Iocaine deployment, including not just individuals but also organizations wanting to take stronger steps to block scraping.

Iocaine takes ideas (not code) from Nepenthes, but it’s more intent on using the tarpit to poison AI models. Nagy used a reverse proxy to trap crawlers in an “infinite maze of garbage” in an attempt to slowly poison their data collection as much as possible for daring to ignore robots.txt.

Taking its name from “one of the deadliest poisons known to man” from The Princess Bride, Iocaine is jokingly depicted as the “deadliest poison known to AI.” While there’s no way of validating that claim, Nagy’s motto is that the more poisoning attacks that are out there, “the merrier.” He told Ars that his primary reasons for building Iocaine were to help rights holders wall off valuable content and stop AI crawlers from crawling with abandon.

Tarpits aren’t perfect weapons against AI

Running malware like Nepenthes can burden servers, too. Aaron likened the cost of running Nepenthes to running a cheap virtual machine on a Raspberry Pi, and Nagy said that serving crawlers Iocaine costs about the same as serving his website.

But Aaron told Ars that Nepenthes wasting resources is the chief objection he’s seen preventing its deployment. Critics fear that deploying Nepenthes widely will not only burden their servers but also increase the costs of powering all that AI crawling for nothing.

“That seems to be what they’re worried about more than anything,” Aaron told Ars. “The amount of power that AI models require is already astronomical, and I’m making it worse. And my view of that is, OK, so if I do nothing, AI models, they boil the planet. If I switch this on, they boil the planet. How is that my fault?”

Aaron also defends against this criticism by suggesting that a broader impact could slow down AI investment enough to possibly curb some of that energy consumption. Perhaps due to the resistance, AI companies will be pushed to seek permission first to scrape or agree to pay more content creators for training on their data.

“Any time one of these crawlers pulls from my tarpit, it’s resources they’ve consumed and will have to pay hard cash for, but, being bullshit, the money [they] have spent to get it won’t be paid back by revenue,” Aaron posted, explaining his tactic online. “It effectively raises their costs. And seeing how none of them have turned a profit yet, that’s a big problem for them. The investor money will not continue forever without the investors getting paid.”

Nagy agrees that the more anti-AI attacks there are, the greater the potential is for them to have an impact. And by releasing Iocaine, Nagy showed that social media chatter about new attacks can inspire new tools within a few days. Marcus Butler, an independent software developer, similarly built his poisoning attack called Quixotic over a few days, he told Ars. Soon afterward, he received messages from others who built their own versions of his tool.

Butler is not in the camp of wanting to destroy AI. He told Ars that he doesn’t think “tools like Quixotic (or Nepenthes) will ‘burn AI to the ground.'” Instead, he takes a more measured stance, suggesting that “these tools provide a little protection (a very little protection) against scrapers taking content and, say, reposting it or using it for training purposes.”

But for a certain sect of Internet users, every little bit of protection seemingly helps. Geuter linked Ars to a list of tools bent on sabotaging AI. Ultimately, he expects that tools like Nepenthes are “probably not gonna be useful in the long run” because AI companies can likely detect and drop gibberish from training data. But Nepenthes represents a sea change, Geuter told Ars, providing a useful tool for people who “feel helpless” in the face of endless scraping and showing that “the story of there being no alternative or choice is false.”

Criticism of tarpits as AI weapons

Critics debating Nepenthes’ utility on Hacker News suggested that most AI crawlers could easily avoid tarpits like Nepenthes, with one commenter describing the attack as being “very crawler 101.” Aaron said that was his “favorite comment” because if tarpits are considered elementary attacks, he has “2 million lines of access log that show that Google didn’t graduate.”

But efforts to poison AI or waste AI resources don’t just mess with the tech industry. Governments globally are seeking to leverage AI to solve societal problems, and attacks on AI’s resilience seemingly threaten to disrupt that progress.

Nathan VanHoudnos is a senior AI security research scientist in the federally funded CERT Division of the Carnegie Mellon University Software Engineering Institute, which partners with academia, industry, law enforcement, and government to “improve the security and resilience of computer systems and networks.” He told Ars that new threats like tarpits seem to replicate a problem that AI companies are already well aware of: “that some of the stuff that you’re going to download from the Internet might not be good for you.”

“It sounds like these tarpit creators just mainly want to cause a little bit of trouble,” VanHoudnos said. “They want to make it a little harder for these folks to get” the “better or different” data “that they’re looking for.”

VanHoudnos co-authored a paper on “Counter AI” last August, pointing out that attackers like Aaron and Nagy are limited in how much they can mess with AI models. They may have “influence over what training data is collected but may not be able to control how the data are labeled, have access to the trained model, or have access to the AI system,” the paper said.

Further, AI companies are increasingly turning to the deep web for unique data, so any efforts to wall off valuable content with tarpits may be coming right when crawling on the surface web starts to slow, VanHoudnos suggested.

But according to VanHoudnos, AI crawlers are also “relatively cheap,” and companies may deprioritize fighting against new attacks on crawlers if “there are higher-priority assets” under attack. And tarpitting “does need to be taken seriously because it is a tool in a toolkit throughout the whole life cycle of these systems. There is no silver bullet, but this is an interesting tool in a toolkit,” he said.

Offering a choice to abstain from AI training

Aaron told Ars that he never intended Nepenthes to be a major project but that he occasionally puts in work to fix bugs or add new features. He said he’d consider working on integrations for real-time reactions to crawlers if there was enough demand.

Currently, Aaron predicts that Nepenthes might be most attractive to rights holders who want AI companies to pay to scrape their data. And many people seem enthusiastic about using it to reinforce robots.txt. But “some of the most exciting people are in the ‘let it burn’ category,” Aaron said. These people are drawn to tools like Nepenthes as an act of rebellion against AI making the Internet less useful and enjoyable for users.

Geuter told Ars that he considers Nepenthes “more of a sociopolitical statement than really a technological solution (because the problem it’s trying to address isn’t purely technical, it’s social, political, legal, and needs way bigger levers).”

To Geuter, a computer scientist who has been writing about the social, political, and structural impact of tech for two decades, AI is the “most aggressive” example of “technologies that are not done ‘for us’ but ‘to us.'”

“It feels a bit like the social contract that society and the tech sector/engineering have had (you build useful things, and we’re OK with you being well-off) has been canceled from one side,” Geuter said. “And that side now wants to have its toy eat the world. People feel threatened and want the threats to stop.”

As AI evolves, so do attacks, with one 2021 study showing that increasingly stronger data poisoning attacks, for example, were able to break data sanitization defenses. Whether these attacks can ever do meaningful destruction or not, Geuter sees tarpits as a “powerful symbol” of the resistance that Aaron and Nagy readily joined.

“It’s a great sign to see that people are challenging the notion that we all have to do AI now,” Geuter said. “Because we don’t. It’s a choice. A choice that mostly benefits monopolists.”

Tarpit creators like Nagy will likely be watching to see if poisoning attacks continue growing in sophistication. On the Iocaine site—which, yes, is protected from scraping by Iocaine—he posted this call to action: “Let’s make AI poisoning the norm. If we all do it, they won’t have anything to crawl.”


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

Nvidia GeForce RTX 5090 costs as much as a whole gaming PC—but it sure is fast


Even setting aside Frame Generation, this is a fast, power-hungry $2,000 GPU.

Credit: Andrew Cunningham

Nvidia’s GeForce RTX 5090 starts at $1,999 before you factor in upsells from the company’s partners or price increases driven by scalpers and/or genuine demand. It costs more than my entire gaming PC.

The new GPU is so expensive that you could build an entire well-specced gaming PC with Nvidia’s next-fastest GPU in it—the $999 RTX 5080, which we don’t have in hand yet—for the same money, or maybe even a little less with judicious component selection. It’s not the most expensive GPU that Nvidia has ever launched—2018’s $2,499 Titan RTX has it beat, and 2022’s RTX 3090 Ti also cost $2,000—but it’s safe to say it’s not really a GPU intended for the masses.

At least as far as gaming is concerned, the 5090 is the very definition of a halo product; it’s for people who demand the best and newest thing regardless of what it costs (the calculus is probably different for deep-pocketed people and companies who want to use them as some kind of generative AI accelerator). And on this front, at least, the 5090 is successful. It’s the newest and fastest GPU you can buy, and the competition is not particularly close. It’s also a showcase for DLSS Multi-Frame Generation, a new feature unique to the 50-series cards that Nvidia is leaning on heavily to make its new GPUs look better than they already are.

Founders Edition cards: Design and cooling

                     RTX 5090      RTX 4090      RTX 5080      RTX 4080 Super
CUDA cores           21,760        16,384        10,752        10,240
Boost clock          2,410 MHz     2,520 MHz     2,617 MHz     2,550 MHz
Memory bus width     512-bit       384-bit       256-bit       256-bit
Memory bandwidth     1,792 GB/s    1,008 GB/s    960 GB/s      736 GB/s
Memory size          32GB GDDR7    24GB GDDR6X   16GB GDDR7    16GB GDDR6X
TGP                  575 W         450 W         360 W         320 W

We won’t spend too long talking about the specific designs of Nvidia’s Founders Edition cards since many buyers will experience the Blackwell GPUs with cards from Nvidia’s partners instead (the cards we’ve seen so far mostly look like the expected fare: gargantuan triple-slot triple-fan coolers, with varying degrees of RGB). But it’s worth noting that Nvidia has addressed a couple of my functional gripes with the 4090/4080-series design.

The first was the sheer dimensions of each card—not an issue unique to Nvidia, but one that frequently caused problems for me as someone who tends toward ITX-based PCs and smaller builds. The 5090 and 5080 FE designs are the same length and height as the 4090 and 4080 FE designs, but they only take up two slots instead of three, which will make them an easier fit for many cases.

Nvidia has also tweaked the cards’ 12VHPWR connector, recessing it into the card and mounting it at a slight angle instead of having it sticking straight out of the top edge. The height of the 4090/4080 FE design made some cases hard to close up once you factored in the additional height of a 12VHPWR cable or Nvidia’s many-tentacled 8-pin-to-12VHPWR adapter. The angled connector still extends a bit beyond the top of the card, but it’s easier to tuck the cable away so you can put the side back on your case.

Finally, Nvidia has changed its cooler—whereas most OEM GPUs mount all their fans on the top of the GPU, Nvidia has historically placed one fan on each side of the card. In a standard ATX case with the GPU mounted parallel to the bottom of the case, this wasn’t a huge deal—there’s plenty of room for that air to circulate inside the case and to be expelled by whatever case fans you have installed.

But in “sandwich-style” ITX cases, where a riser cable wraps around so the GPU can be mounted parallel to the motherboard, the fan on the bottom side of the GPU was poorly placed. In many sandwich-style cases, the GPU fan will dump heat against the back of the motherboard, making it harder to keep the GPU cool and creating heat problems elsewhere besides. The new GPUs mount both fans on the top of the cards.

Nvidia’s Founders Edition cards have had heat issues in the past—most notably the 30-series GPUs—and that was my first question going in. A smaller cooler plus a dramatically higher peak power draw seems like a recipe for overheating.

Temperatures for the various cards we re-tested for this review. The 5090 FE is the toastiest of all of them, but it still has a safe operating temperature.

At least for the 5090, the smaller cooler does mean higher temperatures—around 10 to 12 degrees Celsius higher when running the same benchmarks as the RTX 4090 Founders Edition. And while temperatures of around 77 degrees aren’t hugely concerning, this is sort of a best-case scenario, with an adequately cooled testbed case with the side panel totally removed and ambient temperatures at around 21° or 22° Celsius. You’ll just want to make sure you have a good amount of airflow in your case if you buy one of these.

Testbed notes

A new high-end Nvidia GPU is a good reason to tweak our test bed and suite of games, and we’ve done both here. Mainly, we added a 1050 W Thermaltake Toughpower GF A3 power supply—Nvidia recommends at least 1000 W for the 5090, and this one has a native 12VHPWR connector for convenience. We’ve also swapped the Ryzen 7 7800X3D for a slightly faster Ryzen 7 9800X3D to reduce the odds that the CPU will bottleneck performance as we try to hit high frame rates.

As for the suite of games, we’ve removed a couple of older titles and added some with built-in benchmarks that will tax these GPUs a bit more, especially at 4K with all the settings turned up. Those games include the RT Overdrive preset in the perennially punishing Cyberpunk 2077 and Black Myth: Wukong in Cinematic mode, both games where even the RTX 4090 struggles to hit 60 fps without an assist from DLSS. We’ve also added Horizon Zero Dawn Remastered, a recent release that doesn’t include ray-tracing effects but does support most DLSS 3 and FSR 3 features (including FSR Frame Generation).

We’ve tried to strike a balance between games with ray-tracing effects and games without it, though most AAA games these days include it, and modern GPUs should be able to handle it well (best of luck to AMD with its upcoming RDNA 4 cards).

For the 5090, we’ve run all tests in 4K—if you don’t care about running games in 4K, even if you want super-high frame rates at 1440p or for some kind of ultrawide monitor, the 5090 is probably overkill. When we run upscaling tests, we use the newest DLSS version available for Nvidia cards, the newest FSR version available for AMD cards, and the newest XeSS version available for Intel cards (not relevant here, just stating for the record), and we use the “Quality” setting (at 4K, that equates to an actual rendering resolution of 1440p).
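The 4K-to-1440p relationship follows from DLSS Quality mode’s render scale, which is roughly two-thirds of the output resolution per axis (a commonly documented figure, assumed here):

```python
def dlss_quality_render_res(width, height):
    """DLSS 'Quality' renders at ~2/3 of the output resolution per axis."""
    return round(width / 1.5), round(height / 1.5)

print(dlss_quality_render_res(3840, 2160))  # (2560, 1440): 4K output, 1440p render
```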

Rendering performance: A lot faster, a lot more power-hungry

Before we talk about Frame Generation or “fake frames,” let’s compare apples to apples and just examine the 5090’s rendering performance.

The card mainly benefits from four things compared to the 4090: the updated Blackwell GPU architecture, a nearly 33 percent increase in the number of CUDA cores, an upgrade from GDDR6X to GDDR7, and a move from a 384-bit memory bus to a 512-bit bus. It also jumps from 24GB of RAM to 32GB, but games generally aren’t butting up against a 24GB limit yet, so the capacity increase by itself shouldn’t really change performance if all you’re focused on is gaming.

And for people who prioritize performance over all else, the 5090 is a big deal—it’s the first consumer graphics card from any company that is faster than a 4090, as Nvidia never spruced up the 4090 last year when it did its mid-generation Super refreshes of the 4080, 4070 Ti, and 4070.

Comparing natively rendered games at 4K, the 5090 is between 17 percent and 40 percent faster than the 4090, with most of the games we tested landing somewhere in the low to high 30 percent range. That’s an undeniably big bump, one that’s roughly commensurate with the increase in the number of CUDA cores. Tests run with DLSS enabled (both upscaling-only and with Frame Generation running in 2x mode) improve by roughly the same amount.

You could find things to be disappointed about if you went looking for them. That 30-something-percent performance increase comes with a 35 percent increase in power use in our testing under load with punishing 4K games—the 4090 tops out around 420 W, whereas the 5090 went all the way up to 573 W, with the 5090 coming closer to its 575 W TGP than the 4090 does to its theoretical 450 W maximum. The 50-series cards use the same TSMC 4N manufacturing process as the 40-series cards, and increasing the number of transistors without changing the process results in a chip that uses more power (though it should be said that capping frame rates, running at lower resolutions, or running less-demanding games can rein in that power use a bit).
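Those measured numbers imply that performance per watt barely moved generation over generation, which you can verify with a quick calculation (using this review’s ~35 percent performance gain and measured 420 W vs. 573 W draws):

```python
def perf_per_watt_change(perf_gain, power_old, power_new):
    """Fractional change in performance per watt, given a fractional
    performance gain and old/new measured power draws."""
    return (1 + perf_gain) * power_old / power_new - 1

change = perf_per_watt_change(0.35, 420, 573)
print(f"{change:+.1%}")  # ≈ -1.0%: efficiency is essentially flat
```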

Power draw under load goes up by an amount roughly commensurate with performance. The 4090 was already power-hungry; the 5090 is dramatically more so. Credit: Andrew Cunningham

The 5090’s 30-something percent increase over the 4090 might also seem underwhelming if you recall that the 4090 was around 55 percent faster than the previous-generation 3090 Ti while consuming about the same amount of power. To be even faster than a 4090 is no small feat—AMD’s fastest GPU is more in line with Nvidia’s 4080 Super—but if you’re comparing the two cards using the exact same tests, the relative leap is less seismic.

That brings us to Nvidia’s answer for that problem: DLSS 4 and its Multi-Frame Generation feature.

DLSS 4 and Multi-Frame Generation

As a refresher, Nvidia’s DLSS Frame Generation feature, as introduced in the GeForce 40-series, takes DLSS upscaling one step further. The upscaling feature inserted interpolated pixels into a rendered image to make it look like a sharper, higher-resolution image without having to do all the work of rendering all those pixels. DLSS FG would interpolate an entire frame between rendered frames, boosting your FPS without dramatically boosting the amount of work your GPU was doing. If you used DLSS upscaling and FG at the same time, Nvidia could claim that seven out of eight pixels on your screen were generated by AI.

DLSS Multi-Frame Generation (hereafter MFG, for simplicity’s sake) does the same thing, but it can generate one to three interpolated frames for every rendered frame. The marketing numbers have gone up, too; now, 15 out of every 16 pixels on your screen can be generated by AI.
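Both marketing figures fall out of simple arithmetic, assuming Performance-mode upscaling (which renders a quarter of the output pixels) combined with frame generation:

```python
def ai_pixel_fraction(upscale_pixel_ratio, frames_per_rendered):
    """Fraction of displayed pixels that are AI-generated: upscaling renders
    1/upscale_pixel_ratio of each frame's pixels, and frame generation shows
    frames_per_rendered total frames for each rendered one."""
    rendered = (1 / upscale_pixel_ratio) * (1 / frames_per_rendered)
    return 1 - rendered

print(ai_pixel_fraction(4, 2))  # 0.875: original FG, 7 of 8 pixels AI-generated
print(ai_pixel_fraction(4, 4))  # 0.9375: MFG 4x, 15 of 16 pixels AI-generated
```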

Nvidia might point to this and say that the 5090 is over twice as fast as the 4090, but that’s not really comparing apples to apples. Expect this issue to persist over the lifetime of the 50-series. Credit: Andrew Cunningham

Nvidia provided reviewers with a preview build of Cyberpunk 2077 with DLSS MFG enabled, which gives us an example of how those settings will be exposed to users. For 40-series cards that only support the regular DLSS FG, you won’t notice a difference in games that support MFG—Frame Generation is still just one toggle you can turn on or off. For 50-series cards that support MFG, you’ll be able to choose from among a few options, just as you currently can with other DLSS quality settings.

The “2x” mode is the old version of DLSS FG and is supported by both the 50-series cards and 40-series GPUs; it promises one generated frame for every rendered frame (two frames total, hence “2x”). The “3x” and “4x” modes are new to the 50-series and promise two and three generated frames (respectively) for every rendered frame. Like the original DLSS FG, MFG can be used in concert with normal DLSS upscaling, or it can be used independently.

One problem with the original DLSS FG was latency—user input was only being sampled at the natively rendered frame rate, meaning you could be looking at 60 frames per second on your display but only having your input polled 30 times per second. Another is image quality; as good as the DLSS algorithms can be at guessing and recreating what a natively rendered pixel would look like, you’ll inevitably see errors, particularly in fine details.

Both these problems contribute to the third problem with DLSS FG: Without a decent underlying frame rate, the lag you feel and the weird visual artifacts you notice will both be more pronounced. So DLSS FG can be useful for turning 120 fps into 240 fps, or even 60 fps into 120 fps. But it’s not as helpful if you’re trying to get from 20 or 30 fps up to a smooth 60 fps.
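The latency point can be made concrete: if input is read only once per natively rendered frame, the effective input sampling rate is the displayed frame rate divided by the total frames shown per rendered frame:

```python
def input_poll_rate(displayed_fps, frames_per_rendered):
    """Effective input samples per second when input is read only on
    natively rendered frames."""
    return displayed_fps / frames_per_rendered

print(input_poll_rate(60, 2))   # 60 fps shown, input sampled 30 times/sec
print(input_poll_rate(240, 4))  # MFG 4x: 240 fps shown, 60 real samples/sec
```

This is why frame generation feels fine on top of a high base frame rate but laggy when the base rate is 20 or 30 fps.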

We’ll be taking a closer look at the DLSS upgrades in the next couple of weeks (including MFG and the new transformer model, which will supposedly increase upscaling quality and supports all RTX GPUs). But in our limited testing so far, the issues with DLSS MFG are basically the same as with the first version of Frame Generation, just slightly more pronounced. In the built-in Cyberpunk 2077 benchmark, the most visible issues are with some bits of barbed-wire fencing, which get smoother-looking and less detailed as you crank up the number of AI-generated frames. But the motion does look fluid and smooth, and the frame rate counts are admittedly impressive.

But as we noted in last year’s 4090 review, the xx90 cards portray FG and MFG in the best light possible since the card is already capable of natively rendering such high frame rates. It’s on lower-end cards where the shortcomings of the technology become more pronounced. Nvidia might say that the upcoming RTX 5070 is “as fast as a 4090 for $549,” and it might be right in terms of the number of frames the card can put up on your screen every second. But responsiveness and visual fidelity on the 4090 will be better every time—AI is a good augmentation for rendered frames, but it’s iffy as a replacement for rendered frames.

A 4090, amped way up

Nvidia’s GeForce RTX 5090. Credit: Andrew Cunningham

The GeForce RTX 5090 is an impressive card—it’s the only consumer graphics card to be released in over two years that can outperform the RTX 4090. The main caveats are its sky-high power consumption and sky-high price; by itself, it costs as much (and consumes as much power) as an entire mainstream gaming PC. The card is aimed at people who care about speed way more than they care about price, but it’s still worth putting it into context.

The main controversy, as with the 40-series, is how Nvidia talks about its Frame Generation-inflated performance numbers. Frame Generation and Multi-Frame Generation are tools in a toolbox—there will be games where they make things look great and run fast with minimal noticeable impact to visual quality or responsiveness, games where those impacts are more noticeable, and games that never add support for the features at all. (As well-supported as DLSS generally is in new releases, it is incumbent upon game developers to add it—and update it when Nvidia puts out a new version.)

But using those Multi-Frame Generation-inflated FPS numbers to make topline comparisons to last-generation graphics cards just feels disingenuous. No, an RTX 5070 will not be as fast as an RTX 4090 for just $549, because not all games support DLSS MFG, and not all games that do support it will run it well. Frame Generation still needs a good base frame rate to start with, and the slower your card is, the more issues you might notice.

Fuzzy marketing aside, Nvidia is still the undisputed leader in the GPU market, and the RTX 5090 extends that leadership for what will likely be another entire GPU generation, since both AMD and Intel are focusing their efforts on higher-volume, lower-cost cards right now. DLSS is still generally better than AMD’s FSR, and Nvidia does a good job of getting developers of new AAA game releases to support it. And if you’re buying this GPU to do some kind of rendering work or generative AI acceleration, Nvidia’s performance and software tools are still superior. The misleading performance claims are frustrating, but Nvidia still gains a lot of real advantages from being as dominant and entrenched as it is.

The good

  • Usually 30-something percent faster than an RTX 4090
  • Redesigned Founders Edition card is less unwieldy than the bricks that were the 4090/4080 design
  • Adequate cooling, despite the smaller card and higher power use
  • DLSS Multi-Frame Generation is an intriguing option if you’re trying to hit 240 or 360 fps on your high-refresh-rate gaming monitor

The bad

  • Much higher power consumption than the 4090, which already consumed more power than any other GPU on the market
  • Frame Generation is good at making a game that’s running fast run faster; it’s not as good at bringing a slow game up to 60 fps
  • Nvidia’s misleading marketing around Multi-Frame Generation is frustrating—and will likely be more frustrating for lower-end cards since they aren’t getting the same bumps to core count and memory interface that the 5090 gets

The ugly

  • You can buy a whole lot of PC for $2,000, and we wouldn’t bet on this GPU being easy to find at MSRP


Andrew is a Senior Technology Reporter at Ars Technica, with a focus on consumer tech including computer hardware and in-depth reviews of operating systems like Windows and macOS. Andrew lives in Philadelphia and co-hosts a weekly book podcast called Overdue.

Nvidia GeForce RTX 5090 costs as much as a whole gaming PC—but it sure is fast Read More »

sleeping-pills-stop-the-brain’s-system-for-cleaning-out-waste

Sleeping pills stop the brain’s system for cleaning out waste


Cleanup on aisle cerebellum

A specialized system sends pulses of pressure through the fluids in our brain.

Our bodies rely on the lymphatic system to drain excess fluids and remove waste from tissues, feeding those back into the bloodstream. It’s a complex yet efficient cleaning mechanism that works in every organ except the brain. “When cells are active, they produce waste metabolites, and this also happens in the brain. Since there are no lymphatic vessels in the brain, the question was what was it that cleaned the brain,” Natalie Hauglund, a neuroscientist at Oxford University who led a recent study on the brain-clearing mechanism, told Ars.

Earlier studies done mostly on mice discovered that the brain had a system that flushed its tissues with cerebrospinal fluid, which carried away waste products in a process called glymphatic clearance. “Scientists noticed that this only happened during sleep, but it was unknown what it was about sleep that initiated this cleaning process,” Hauglund explains.

Her study found the glymphatic clearance was mediated by a hormone called norepinephrine and happened almost exclusively during the NREM sleep phase. But it only worked when sleep was natural. Anesthesia and sleeping pills shut this process down nearly completely.

Taking it slowly

The glymphatic system in the brain was discovered back in 2013 by Dr. Maiken Nedergaard, a Danish neuroscientist and a coauthor of Hauglund’s paper. Since then, there have been numerous studies aimed at figuring out how it worked, but most of them had one problem: they were done on anesthetized mice.

“What makes anesthesia useful is that you can have a very controlled setting,” Hauglund says.

Most brain imaging techniques require a subject, an animal or a human, to be still. In mouse experiments, that meant immobilizing their heads so the research team could get clear scans. “But anesthesia also shuts down some of the mechanisms in the brain,” Hauglund argues.

So, her team designed a study to see how the brain-clearing mechanism works in mice that could move freely in their cages and sleep naturally whenever they felt like it. “It turned out that with the glymphatic system, we didn’t really see the full picture when we used anesthesia,” Hauglund says.

Looking into the brain of a mouse that runs around and wiggles during sleep, though, wasn’t easy. The team pulled it off with a technique called flow fiber photometry, which images fluids tagged with fluorescent markers through an optical fiber implanted in the brain. Once the mice had the fibers implanted, the team put fluorescent tags in their blood, in their cerebrospinal fluid, and on the norepinephrine hormone. “Fluorescent molecules in the cerebrospinal fluid had one wavelength, blood had another wavelength, and norepinephrine had yet another wavelength,” Hauglund says.

This way, her team could get a fairly precise idea about the brain fluid dynamics when mice were awake and asleep. And it turned out that the glymphatic system basically turned brain tissues into a slowly moving pump.

Pumping up

“Norepinephrine is released from a small area of the brain in the brain stem,” Hauglund says. “It is mainly known as a response to stressful situations. For example, in fight or flight scenarios, you see norepinephrine levels increasing.” Its main effect is causing blood vessels to contract. Still, in more recent research, people found out that during sleep, norepinephrine is released in slow waves that roll over the brain roughly once a minute. This oscillatory norepinephrine release proved crucial to the operation of the glymphatic system.

“When we used the flow fiber photometry method to look into the brains of mice, we saw these slow waves of norepinephrine, but we also saw how it works in synchrony with fluctuation in the blood volume,” Hauglund says.

Every time the norepinephrine level went up, it caused the contraction of the blood vessels in the brain, and the blood volume went down. At the same time, the contraction increased the volume of the perivascular spaces around the blood vessels, which were immediately filled with the cerebrospinal fluid.

When the norepinephrine level went down, the process worked in reverse: the blood vessels dilated, letting the blood in and pushing the cerebrospinal fluid out. “What we found was that norepinephrine worked a little bit like a conductor of an orchestra and makes the blood and cerebrospinal fluid move in synchrony in these slow waves,” Hauglund says.

And because the study was designed to monitor this process in freely moving, undisturbed mice, the team learned exactly when all this was going on. When mice were awake, the norepinephrine levels were much higher but relatively steady. The team observed the opposite during the REM sleep phase, where the norepinephrine levels were consistently low. The oscillatory behavior was present exclusively during the NREM sleep phase.

So, the team wanted to check how the glymphatic clearance would work when they gave the mice zolpidem, a sleeping drug that had been proven to increase NREM sleep time. In theory, zolpidem should have boosted brain-clearing. But it turned it off instead.

Non-sleeping pills

“When we looked at the mice after giving them zolpidem, we saw they all fell asleep very quickly. That was expected—we take zolpidem because it makes it easier for us to sleep,” Hauglund says. “But then we saw those slow fluctuations in norepinephrine, blood volume, and cerebrospinal fluid almost completely stopped.”

No fluctuations meant the glymphatic system didn’t remove any waste. This was a serious issue, because one of the cellular waste products it is supposed to remove is amyloid beta, found in the brains of patients suffering from Alzheimer’s disease.

Hauglund speculates that zolpidem may induce a state very similar to sleep while shutting down important processes that happen during sleep. Heavy zolpidem use has been associated with an increased risk of Alzheimer’s disease, but it is not clear whether that increased risk exists because the drug inhibits oscillatory norepinephrine release in the brain. To better understand this, Hauglund wants to get a closer look at how the glymphatic system works in humans.

“We know we have the same wave-like fluid dynamics in the brain, so this could also drive the brain clearance in humans,” Hauglund told Ars. “Still, it’s very hard to look at norepinephrine in the human brain because we need an invasive technique to get to the tissue.”

But she said norepinephrine levels in people can be estimated from indirect clues. One of them is pupil dilation and contraction, which work in synchrony with norepinephrine levels. Another clue may lie in microarousals—very brief, imperceptible awakenings that, Hauglund thinks, could be correlated with the brain-clearing mechanism. “I am currently interested in this phenomenon […]. Right now we have no idea why microarousals are there or what function they have,” Hauglund says.

The last step on her roadmap is making better sleeping pills. “We need sleeping drugs that don’t have this inhibitory effect on the norepinephrine waves. If we can have a sleeping pill that helps people sleep without disrupting their sleep at the same time it will be very important,” Hauglund concludes.

Cell, 2025. DOI: 10.1016/j.cell.2024.11.027

Photo of Jacek Krywko

Jacek Krywko is a freelance science and technology writer who covers space exploration, artificial intelligence research, computer science, and all sorts of engineering wizardry.

Sleeping pills stop the brain’s system for cleaning out waste Read More »

fire-destroys-starship-on-its-seventh-test-flight,-raining-debris-from-space

Fire destroys Starship on its seventh test flight, raining debris from space

This launch debuted a more advanced, slightly taller version of Starship, known as Version 2 or Block 2, with larger propellant tanks, a new avionics system, and redesigned feed lines flowing methane and liquid oxygen propellants to the ship’s six Raptor engines. SpaceX officials did not say whether any of these changes might have caused the problem on Thursday’s launch.

SpaceX officials have repeatedly and carefully set expectations for each Starship test flight. They routinely refer to the rocket as experimental, and the primary focus of the rocket’s early demo missions is to gather data on the performance of the vehicle. What works, and what doesn’t work?

Still, the outcome of Thursday’s test flight is a clear disappointment for SpaceX. This was the seventh test flight of SpaceX’s enormous rocket and the first time Starship failed to complete its launch sequence since the second flight in November 2023. Until now, SpaceX has made steady progress, and each Starship flight has achieved more milestones than the one before.

On the first flight in April 2023, the rocket lost control a little more than two minutes after liftoff, and the ground-shaking power of the booster’s 33 engines shattered the concrete foundation beneath the launch pad. Seven months later, on Flight 2, the rocket made it eight minutes before failing. On that mission, Starship failed at roughly the same point of its ascent, just before the cutoff of the vehicle’s six methane-fueled Raptor engines.

Back then, a handful of photos and images from the Florida Keys and Puerto Rico showed debris in the sky after Starship activated its self-destruct mechanism due to an onboard fire caused by a dump of liquid oxygen propellant. But that flight occurred in the morning, with bright sunlight along the ship’s flight path.

This time, the ship disintegrated and reentered the atmosphere at dusk, with impeccable lighting conditions accentuating the debris cloud’s appearance. These twilight conditions likely contributed to the plethora of videos posted to social media on Thursday.

Starship and Super Heavy head downrange from SpaceX’s launch site near Brownsville, Texas. Credit: SpaceX

The third Starship test flight last March saw the spacecraft reach its planned trajectory and fly halfway around the world before succumbing to the scorching heat of atmospheric reentry. In June, the fourth test flight ended with controlled splashdowns of the rocket’s Super Heavy booster in the Gulf of Mexico and of Starship in the Indian Ocean.

In October, SpaceX caught the Super Heavy booster with mechanical arms at the launch pad for the first time, proving out the company’s audacious approach to recovering and reusing the rocket. On this fifth test flight, SpaceX modified the ship’s heat shield to better handle the hot temperatures of reentry, and the vehicle again made it to an on-target splashdown in the Indian Ocean.

Most recently, Flight 6 on November 19 demonstrated the ship’s ability to reignite its Raptor engines in space for the first time and again concluded with a bullseye splashdown. But SpaceX aborted an attempt to again catch the booster back at Starbase due to a problem with sensors on the launch pad’s tower.

With Flight 7, SpaceX hoped to test more changes to the heat shield protecting Starship from reentry temperatures up to 2,600° Fahrenheit (1,430° Celsius). Musk has identified the heat shield as one of the most difficult challenges still facing the program. In order for SpaceX to reach its ambition for the ship to become rapidly reusable, with minimal or no refurbishment between flights, the heat shield must be resilient and durable.

Fire destroys Starship on its seventh test flight, raining debris from space Read More »

the-8-most-interesting-pc-monitors-from-ces-2025

The 8 most interesting PC monitors from CES 2025


Monitors worth monitoring

Here are upcoming computer screens with features that weren’t around last year.

Yes, that’s two monitors in a suitcase.

Plenty of computer monitors debuted at the Consumer Electronics Show (CES) in Las Vegas this year, but many of the updates were minor enough that they could easily have been part of 2024’s show.

But some brought new and interesting features to the table for 2025—in this article, we’ll tell you all about them.

LG’s 6K monitor

Pixel addicts are always right at home at CES, and the most interesting high-resolution computer monitor to come out of this year’s show is the LG UltraFine 6K Monitor (model 32U990A).

People seeking more than 3840×2160 resolution have limited options, and they’re all rather expensive (looking at you, Apple Pro Display XDR). LG’s 6K monitor means there’s another option for professionals needing extra pixels for things like development, engineering, and creative work. And LG’s 6144×3456, 32-inch display has extra oomph thanks to something no other 6K monitor has: Thunderbolt 5.

This is the only image LG provided for the monitor. Credit: LG

LG hasn’t confirmed the refresh rate of its 6K monitor, so we don’t know how much bandwidth it needs. But it’s possible that pairing the UltraFine with a Thunderbolt 5 PC could trigger Bandwidth Boost, a Thunderbolt 5 feature that automatically increases bandwidth from 80Gbps to 120Gbps. For comparison, Thunderbolt 4 maxes out at 40Gbps. Thunderbolt 5 also requires 140 W power delivery and maxes out at 240 W. That’s a notable bump from Thunderbolt 4’s 100–140 W.

Considering that Apple’s only 6K monitor has Thunderbolt 3, Thunderbolt 5 is a differentiator. With this capability, the LG UltraFine is ironically better equipped in this regard for use with the new MacBook Pros and Mac Mini (which all have Thunderbolt 5) compared to Apple’s own monitors. LG may be aware of this, as the 32U990A’s aesthetic could be considered very Apple-like.

Inside the 32U990A’s silver chassis is a Nano IPS panel. In recent years, LG has advertised its Nano IPS panels as having “nanometer-sized particles” applied to their LED backlight to absorb “excess, unnecessary light wavelengths” for “richer color expression.” LG’s 6K monitor claims to cover 98 percent of DCI-P3 and 99.5 percent of Adobe RGB. IPS Black monitors, meanwhile, have higher contrast ratios (up to 3,000:1) than standard IPS panels. However, LG has released Nano IPS monitors with 2,000:1 contrast, the same contrast ratio as Dell’s 6K, IPS Black monitor.

LG hasn’t shared other details, like price or a release date. But the monitor may cost more than Dell’s Thunderbolt 4-equipped monitor, which is currently $2,480.

Brelyon’s multi-depth monitor

Someone from CNET using the Ultra Reality Extend. Credit: CNET/YouTube

Brelyon is headquartered in San Mateo, California, and was founded by scientists and executives from MIT, IMAX, UCF, and DARPA. It’s been selling display technology for commercial and defense applications since 2022. At CES, the company unveiled the Ultra Reality Extend, describing it as an “immersive display line that renders virtual images in multiple depths.”

“As the first commercial multi-focal monitor, the Extend model offers multi-depth programmability for information overlay, allowing users to see images from 0.7 m to as far as 2.5 m of depth virtually rendered behind the monitor; organizing various data streams at different depth layers, or triggering focal cues to induce an ultra immersive experience akin to looking out through a window,” Brelyon’s announcement said.

Brelyon says the monitor runs 4K at 60 Hz with 1 bit of monocular depth for an 8K effect. The monitor includes “OLED-based curved 2D virtual images, with the largest stretching to 122 inches and extending 2.5 meters deep, viewable through a 30-inch frame,” according to the firm’s announcement. The closer you sit, the greater the field of view you get.

The Extend leverages “new GPU capabilities to process light and video signals inside our display platforms,” Brelyon CEO Barmak Heshmat said in a statement this week. He added: “We are thinking beyond headsets and glasses, where we can leverage GPU capabilities to do real-time driving of higher-bandwidth display interfaces.”

Brelyon says this was captured from the Extend, with its camera lens focus changing from 70 cm to 2,500 cm. Credit: Brelyon

Advancements in AI-based video processing, as well as other software advancements and hardware improvements, purportedly enable the Extend to upscale lower-dimension streams to multiple, higher-dimension ones. Brelyon describes its product as a “generative display system” that uses AI computation and optics to assign different depth values to content in real time for rendering images and information overlays.

The idea of a virtual monitor that surpasses the field of view of typical desktop monitors while allowing users to see the real world isn’t new. Tech firms (including many at CES) usually try to accomplish this through AR glasses. But head-mounted displays still struggle with problems like heat, weight, computing resources, battery, and aesthetics.

Brelyon’s monitor seemingly demoed well at CES. Sam Rutherford, a senior writer at Engadget, watched a clip from the Marvel’s Spider-Man video game on the Extend and said that “trees and light poles whipping past in my face felt so real I started to flinch subconsciously.” He added that the monitor separated “different layers of the content to make snow in the foreground look blurry as it whipped across the screen, while characters in the distance” still looked sharp.

The monitor costs $5,000 to $8,000 depending on how you’ll use it and whether you have other business with Brelyon, per Engadget, and CES is one of the few places where people could actually see the display in action.

Samsung’s 3D monitor

Samsung’s depiction of the 3D effect of its 3D PC monitor. Credit: Samsung

It’s 2025, and tech companies are still trying to convince people to bring a 3D display into their homes. This week, Samsung took its first swing since 2009 at 3D screens with the Odyssey 3D monitor.

In lieu of 3D glasses, the Odyssey 3D achieves its 3D effect with a lenticular lens “attached to the front of the panel and its front stereo camera,” Samsung says, as well as eye tracking and view mapping. Differing from other recent 3D monitors, the Odyssey 3D claims to be able to make 2D content look three-dimensional even if that content doesn’t officially support 3D.

You can find more information in our initial coverage of Samsung’s Odyssey 3D, but don’t bet on finding 3D monitors in many people’s homes soon. The technology for quality 3D displays that work without glasses has been around for years but still has never taken off.

Dell’s OLED productivity monitor

With improvements in burn-in, availability, and brightness, finding OLED monitors today is much easier than it was two years ago. But a lot of the OLED monitors released recently target gamers with features like high refresh rates, ultrawide panels, and RGB. These features are unneeded or unwanted by non-gamers but contribute to OLED monitors’ already high pricing. Numerous smaller OLED monitors were announced at CES, with 27-inch, 4K models being a popular addition. Most of them are still high-refresh gaming monitors, though.

The Dell 32-inch QD-OLED, on the other hand, targets “play, school, and work,” Dell’s announcement says. And its naming (based on a new naming convention Dell announced this week that kills XPS and other longstanding branding) signals that this is a mid-tier monitor from Dell’s entry-level lineup.

OLED for normies. Credit: Dell

The monitor’s specs, which include a 120 Hz refresh rate, AMD FreeSync Premium, and USB-C power delivery at up to 90 W, make it a good fit for pairing with many mainstream laptops.

Dell also says this is the first QD-OLED with spatial audio, which uses head tracking to alter audio coming from the monitor’s five 5 W speakers. This is a feature we’ve seen before, but not on an OLED monitor.

For professionals and/or Mac users who prefer the sleek looks, reputation, higher power delivery, and I/O hubs associated with Dell’s popular UltraSharp line, Dell made two more notable announcements at CES: an UltraSharp 32 4K Thunderbolt Hub Monitor (U3225QE) coming out on February 25 for $950 and an UltraSharp 27 4K Thunderbolt Hub Monitor (U2725QE) coming out the same day for $700.

The suitcase monitors

Before we get into the Base Case, please note that this product has no release date because its creators plan to go to market via crowdfunding. Base Case says it will launch its Indiegogo campaign next month, but even then, we don’t know if the project will be funded, if any final product will work as advertised, or if customers will receive orders in a timely fashion. Still, this is one of the most unusual monitors at CES, and it’s worth discussing.

The Base Case is shaped like a 24x14x16.5-inch rolling suitcase, but when you open it up, you’ll find two 24-inch monitors for connecting to a laptop. Each screen reportedly has a 1920×1080 resolution, a 75 Hz refresh rate, and a max brightness claim of 350 nits. Base Case is also advertising PC and Mac support (through DisplayLink), as well as HDMI, USB-C, USB-A, Thunderbolt, and Ethernet ports. Telescoping legs allow the case to rise 10 inches so the display can sit closer to eye level.

Ultimately, the Base Case would see owners lug around a 20-pound product for the ability to quickly create a dual-monitor setup equipped with a healthy amount of I/O. Tom’s Guide demoed a prototype at CES and reported that the monitors took “seconds to set up.”

In case you’re worried that the Base Case prioritizes displays over storage, note that its makers plan on adding a front pocket to the suitcase that can fit a laptop. The pocket wasn’t on the prototype Tom’s Guide saw, though.

Again, this is far from a finalized product, but Base Case has alluded to a $2,400 starting price. For comparison to other briefcase-locked displays—and yes, doing this is possible—LG’s StanbyME Go (27LX5QKNA) tablet in a briefcase currently has a $1,200 MSRP.

Corsair’s PC-mountable touchscreen

If the Base Case is on the heftier side of portable monitors, Corsair’s Xeneon Edge is certainly on the minute side. The 14.5-inch LCD touchscreen isn’t meant to be a primary display, though. Corsair built it as a secondary screen for providing quick information, like the song your computer is playing, the weather, the time, and calendar events. You could also use the 2560×720 pixels to display system information, like component usage and temperatures.

Corsair says its iCue software will be able to provide system information on the Xeneon, but because the Xeneon Edge works like a regular monitor, you could (and likely would prefer to) use your own methods. Still, the Xeneon Edge stands out from other small, touchscreen PC monitors with its clean UI that can succinctly communicate a lot of information on the tiny display at once.

Specs-wise, this is a 60 Hz IPS panel with 5-point capacitive touch. Corsair says the monitor can hit 350 nits of brightness.

You can connect the Xeneon Edge to a computer via USB-C (DisplayPort Alt mode) or HDMI. There are also screw holes, so PC builders could install it via a 360 mm radiator mounting point inside their PC case.

Alternatively, Corsair recommends attaching the touchscreen to the outside of a PC case through the monitor’s 14 integrated magnets. Corsair said in a blog post that the “magnets are underneath the plastic casing so the metal surface you stick it to won’t get scratched.” Or, in traditional portable monitor style, the Xeneon Edge could also just sit on a desk with its included stand.

Corsair demos different ways the screen could attach to a case. Credit: TechPowerUp/YouTube

Corsair plans to release the Xeneon Edge in Q2. Expected pricing is “around $249,” Tom’s Hardware reported.

MSI’s side panel display panel

Why attach a monitor to your PC case when you can turn your PC case into a monitor instead?

MSI says that the touchscreen embedded into this year’s MEG Vision X AI 2nd gaming desktop’s side panel can work like a regular computer monitor. Similar to Corsair’s monitor, MSI’s display has a corresponding app that can show system information and other customizations, which you can toggle with controls on the front of the case, PCMag reported.

MSI used an IPS panel with 1920×1080 resolution for the display, which also has an integrated mic and speaker. MSI says “electric vehicle control centers” inspired the design. We’ve seen similar PC cases, like iBuyPower’s more translucent side panel display and the touchscreen on Hyte’s pentagonal PC case, before. But MSI is bringing the design to a more mainstream form factor by including it in a prebuilt desktop, potentially opening the door for future touchscreen-equipped desktops.

Considering the various locations people place their desktops and the different angles at which they may try to look at this screen, I’m curious about the monitor’s viewing angles and brightness. IPS seems like a good choice since it tends to have strong image quality when viewed from different angles. A video PCMag shot from the show floor shows images on the monitor appearing visible and lively:

Hands on with MSI’s MEG Vision X AI Desktop: Now, your PC tower’s a monitor, too.

World’s fastest monitor

There’s a competitive air at CES that lends itself to tech brands trying to one-up each other on spec sheets. Some of the most heated competition concerns monitor refresh rates; for years, we’ve been meeting the new world’s fastest monitor at CES. This year is no different.

The brand behind the monitor is Koorui, a three-year-old Chinese firm whose website currently lists monitors and keyboards. Koorui hasn’t confirmed when it will make its 750 Hz display available, where it will sell it, or what it will cost. That should bring some skepticism about this product actually arriving for purchase in the US. However, Koorui did bring the display to the CES show floor.

The speedy display had a refresh rate test running at CES, and according to several videos we’ve seen from attendees, the monitor appeared to consistently hit the 750 Hz mark.

World’s first 750Hz monitor???

For those keeping track, high-end gaming monitors—namely ones targeting professional gamers—hit 360 Hz in 2020. Koorui’s announcement means max monitor speeds have increased 108.3 percent in four years.

One CES attendee noticed, however, that the monitor wasn’t showing any gameplay. This could be due to the graphical and computing prowess needed to demonstrate the benefits of a 750 Hz monitor. A system capable of 750 frames per second would give people a chance to see if they could detect improved motion resolution but would also be very expensive. It’s also possible that the monitor Koorui had on display wasn’t ready for that level of scrutiny yet.

Like many eSports monitors, the Koorui is 24.5 inches, with a resolution of 1920×1080. Perhaps more interesting than Koorui taking the lead in the perennial race for higher refresh rates is the TN monitor’s claimed color capabilities. TN monitors aren’t as popular as they were years ago, but OEMs still employ them sometimes for speed.

They tend to be less colorful than IPS and VA monitors, though. Most offer sRGB color gamuts instead of covering the larger DCI-P3 color space. Asus’ 540 Hz ROG Swift Pro PG248QP, for example, is a TN monitor claiming 125 percent sRGB coverage. Koorui’s monitor claims to cover 95 percent of DCI-P3, due to the use of a quantum dot film. Again, there’s a lot that prospective shoppers should confirm about this monitor if it becomes available.

For those seeking the fastest monitors with more concrete release plans, several companies announced 600 Hz monitors coming out this year. Acer, for example, has a 600 Hz Nitro XV240 F6 (also a TN monitor) that it plans to release in North America this quarter at a starting price of $600.

Photo of Scharon Harding

Scharon is a Senior Technology Reporter at Ars Technica writing news, reviews, and analysis on consumer gadgets and services. She’s been reporting on technology for over 10 years, with bylines at Tom’s Hardware, Channelnomics, and CRN UK.

The 8 most interesting PC monitors from CES 2025 Read More »

how-i-program-with-llms

How I program with LLMs

The second issue is that we can do better. I am happy we now live in a time when programmers write their own tests, but we do not hold ourselves to the same standards with tests as we do with production code. That is a reasonable trade-off; there are only so many hours in the day. But what LLMs lack in arithmetical prowess, they make up for in enthusiasm.

Let’s ask for an even better test.

In the tests, implement the simplest, most readable version of the standard code for quartiles over a fixed set of known values in a slice. Then pass the test cases through the standard code and the reservoir sampler and confirm they are within an epsilon of each other. Structure the comparison code such that it can be used in a fuzz test, too.

This got us some new test code:

// referenceQuartiles calculates the exact quartiles for a slice of float64 values
// using linear interpolation, matching the behavior expected from the sampler.
func referenceQuartiles(data []float64) (q1, median, q3 float64) { … }

// compareQuartiles checks if two sets of quartiles are within epsilon of each other.
// Returns true if they match within the tolerance, false otherwise.
func compareQuartiles(q1a, meda, q3a, q1b, medb, q3b, epsilon float64) bool { … }

// checkQuartiles is a test helper that compares sampler output against the reference
// implementation and reports any differences.
func checkQuartiles(t *testing.T, data []float64, epsilon float64) {
	t.Helper()

	// Get reference values
	wantQ1, wantMed, wantQ3 := referenceQuartiles(data)

	// Get sampler values using a large reservoir for accuracy
	qs := NewQuartileSampler(1000)
	for _, v := range data {
		qs.Add(v)
	}
	gotQ1, gotMed, gotQ3 := qs.Quartiles()

	if !compareQuartiles(gotQ1, gotMed, gotQ3, wantQ1, wantMed, wantQ3, epsilon) {
		t.Errorf("Quartiles mismatch:\ngot  (q1=%v, med=%v, q3=%v)\nwant (q1=%v, med=%v, q3=%v)\nepsilon=%v",
			gotQ1, gotMed, gotQ3, wantQ1, wantMed, wantQ3, epsilon)
	}
}

The original test from above has been reworked to use checkQuartiles, and we have something new:

func FuzzQuartileSampler(f *testing.F) {
	// Add some seed corpus
	f.Add([]float64{1, 2, 3, 4, 5})

	f.Fuzz(func(t *testing.T, data []float64) {
		// Use a larger epsilon for fuzzing since we might get more extreme values
		checkQuartiles(t, data, 0.2)
	})
}

This is fun because it’s wrong. My running gopls tool immediately says:

fuzzing arguments can only have the following types:
	string, bool, float32, float64,
	int, int8, int16, int32, int64,
	uint, uint8, uint16, uint32, uint64,
	[]byte

Pasting that error back into the LLM gets it to regenerate the fuzz test such that it is built around a func(t *testing.T, data []byte) function that uses math.Float64frombits to extract floats from the data slice. Interactions like this point us toward automating the feedback from tools; all it needed was the obvious error message to make solid progress toward something useful. I was not needed.

A quick survey of the last few weeks of my LLM chat history (which, as I mentioned earlier, is not a proper quantitative analysis by any measure) shows that when there is a tooling error, the LLM can make useful progress without me adding any insight more than 80 percent of the time. About half the time, it can completely resolve the issue without me saying anything of note. I am just acting as the messenger.

How I program with LLMs Read More »

tech-worker-movements-grow-as-threats-of-rto,-ai-loom

Tech worker movements grow as threats of RTO, AI loom


Advocates say tech worker movements got too big to ignore in 2024.

Credit: Aurich Lawson | Getty Images

It feels like tech workers have caught very few breaks over the past several years, between ongoing mass layoffs, stagnating wages amid inflation, AI supposedly coming for jobs, and unpopular orders to return to office that, for many, threaten to disrupt work-life balance.

But in 2024, a potentially critical mass of tech workers seemed to reach a breaking point. As labor rights groups advocating for tech workers told Ars, these workers are banding together in sustained strong numbers and are either winning or appear tantalizingly close to winning better worker conditions at major tech companies, including Amazon, Apple, Google, and Microsoft.

In February, the industry-wide Tech Workers Coalition (TWC) noted that “the tech workers movement is far more expansive and impactful” than even labor rights advocates realized, noting that unionized tech workers have gone beyond early stories about Googlers marching in the streets and now “make the headlines on a daily basis.”

Ike McCreery, a TWC volunteer and ex-Googler who helped found the Alphabet Workers Union, told Ars that although “it’s hard to gauge numerically” how much movements have grown, “our sense is definitely that the momentum continues to build.”

“It’s been an exciting year,” McCreery told Ars, while expressing particular enthusiasm that even “highly compensated tech workers are really seeing themselves more as workers” in these fights—which TWC “has been pushing for a long time.”

In 2024, TWC broadened efforts to help workers organize industry-wide, helping everyone from gig workers to project managers build both union and non-union efforts to push for change in the workplace.

Such widespread organizing “would have been unthinkable only five years ago,” TWC noted in February, and it’s clear from some of 2024’s biggest wins that some movements are making gains that could further propel that momentum in 2025.

Workers could also gain the upper hand if unpopular policies increase what one November study called “brain drain.” That’s a trend where tech companies adopting potentially alienating workplace tactics risk losing top talent at a time when key industries like AI and cybersecurity are facing severe talent shortages.

Advocates told Ars that unpopular policies have always fueled worker movements, and RTO and AI are just the latest adding fuel to the fire. As many workers prepare to head back to offices in 2025, where worker surveillance is only expected to intensify, they told Ars why they expect to see workers’ momentum continue at some of the world’s biggest tech firms.

Tech worker movements growing

In August, Apple ratified a labor contract at America’s first unionized Apple Store—agreeing to a modest increase in wages, about 10 percent over three years. While small, that win came just a few weeks before the National Labor Relations Board (NLRB) determined that Amazon was a joint employer of unionized contract-based delivery drivers. And Google lost a similar fight last January when the NLRB ruled it must bargain with a union representing YouTube Music contract workers, Reuters reported.

For many workers, joining these movements helped raise wages. In September, facing mounting pressure, Amazon raised warehouse worker wages—investing $2.2 billion, its “biggest investment yet,” to broadly raise base salaries for workers. And more recently, Amazon was hit with a strike during the busy holiday season, as warehouse workers hoped to further hobble the company during a clutch financial quarter to force more bargaining. (Last year, Amazon posted record-breaking $170 billion holiday quarter revenues and has said the current strike won’t hurt revenues.)

Even typically union-friendly Microsoft drew worker backlash and criticism in 2024 following layoffs of 650 video game workers in September.

These mass layoffs are driving some workers to join movements. A senior director for organizing with Communications Workers of America (CWA), Tom Smith, told Ars that shortly after the 600-member Tech Guild—”the largest single certified group of tech workers” to organize at the New York Times—reached a tentative deal to increase wages “up to 8.25 percent over the length of the contract,” about “460 software engineers at a video game company owned by Microsoft successfully unionized.”

Smith told Ars that while workers for years have pushed for better conditions, “these large units of tech workers achieving formal recognition, building lasting organization, and winning contracts” at “a more mass scale” are maturing, following in the footsteps of unionizing Googlers and today influencing a broader swath of tech industry workers nationwide. From CWA’s viewpoint, workers in the video game industry seem best positioned to seek major wins next, Smith suggested, likely starting with Microsoft-owned companies and eventually affecting indie game companies.

CWA, TWC, and Tech Workers Union 1010 (a group run by tech workers that’s part of the Office and Professional Employees International Union) all now serve as dedicated groups supporting worker movements long-term, and that stability has helped these movements mature, McCreery told Ars. Each group plans to continue meeting workers where they are to support and help expand organizing in 2025.

Cost of RTOs may be significant, researchers warn

While layoffs likely remain the most extreme threat to tech workers broadly, a return-to-office (RTO) mandate can be just as jarring for remote tech workers who are either unable to comply or else unwilling to give up the better work-life balance that comes with no commute. Advocates told Ars that RTO policies have pushed workers to join movements, while limited research suggests that companies risk losing top talents by implementing RTO policies.

In perhaps the biggest example from 2024, when Amazon announced that it was requiring workers in-office five days a week next year, a poll on the anonymous platform where workers discuss employers, Blind, found an overwhelming majority of more than 2,000 Amazon employees were “dissatisfied.”

“My morale for this job is gone…” one worker said on Blind.

Workers criticized the “non-data-driven logic” of the RTO mandate, prompting an Amazon executive to remind them that they could take their talents elsewhere if they didn’t like it. Many confirmed that’s exactly what they planned to do. (Amazon later announced it would be delaying RTO for many office workers after belatedly realizing there was a lack of office space.)

Other companies mandating RTO faced similar backlash from workers, who continued to question the logic driving the decision. One February study showed that RTO mandates don’t make companies any more valuable but do make workers more miserable. And last month, Brian Elliott, an executive advisor who wrote a book about the benefits of flexible teams, noted that only one in three executives thinks RTO had “even a slight positive impact on productivity.”

But not every company drew a hard line the way that Amazon did. For example, Dell gave workers a choice to remain remote and accept they can never be eligible for promotions, or mark themselves as hybrid. Workers who refused the RTO said they valued their free time and admitted to looking for other job opportunities.

Very few studies have been done analyzing the true costs and benefits of RTO, a November academic study titled “Return to Office and Brain Drain” said, and so far companies aren’t necessarily backing the limited findings. The researchers behind that study noted that “the only existing study” measuring how RTO impacts employee turnover showed this year that senior employees left for other companies after Microsoft’s RTO mandate, but Microsoft disputed that finding.

Seeking to build on this research, the November study tracked “over 3 million tech and finance workers’ employment histories reported on LinkedIn” and analyzed “the effect of S&P 500 firms’ return-to-office (RTO) mandates on employee turnover and hiring.”

Choosing to only analyze the firms requiring five days in office, the final sample covered 54 RTO firms, including big tech companies like Amazon, Apple, and Microsoft. From that sample, researchers concluded that average employee turnover increased by 14 percent after RTO mandates at bigger firms. And since big firms typically have lower turnover, the increase in turnover is likely larger at smaller firms, the study’s authors concluded.

The study also supported the conclusion that “employees with the highest skill level are more likely to leave” and found that “RTO firms take significantly longer time to fill their job vacancies after RTO mandates.”

“Together, our evidence suggests that RTO mandates are costly to firms and have serious negative effects on the workforce,” the study concluded, echoing some remote workers’ complaints about the seemingly non-data-driven logic of RTO, while urging that further research is needed.

“These turnovers could potentially have short-term and long-term effects on operation, innovation, employee morale, and organizational culture,” the study concluded.

A co-author of the “brain drain” study, Mark Ma, told Ars that by contrast, Glassdoor going fully remote at least anecdotally seemed to “significantly” increase the number and quality of applications—possibly also improving retention by offering the remote flexibility that many top talents today require.

Ma said that next his team hopes to track where people who leave firms over RTO policies go next.

“Do they become self-employed, or do they go to a competitor, or do they fund their own firm?” Ma speculated, hoping to trace these patterns more definitively over the next several years.

Additionally, Ma plans to investigate individual firms’ RTO impacts, as well as impacts on niche classes of workers with highly sought-after skills—such as in areas like AI, machine learning, or cybersecurity—to see if it’s easier for them to find other jobs. In the long-term, Ma also wants to monitor for potentially less-foreseeable outcomes, such as RTO mandates possibly increasing firms’ number of challengers in their industry.

Will RTO mandates continue in 2025?

Many tech workers may be wondering if there will be a spike in return-to-office mandates in 2025, especially since one of the most politically influential figures in tech, Elon Musk, recently reiterated that he thinks remote work is “poison.”

Musk, of course, banned remote work at Tesla, as well as when he took over Twitter. And as co-lead of the US Department of Government Efficiency (DOGE), Musk reportedly plans to ban remote work for government employees, as well. If other tech firms are influenced by Musk’s moves and join executives who seem to be mandating RTO based on intuition, it’s possible that more tech workers could be forced to return to office or else seek other employment.

But Ma told Ars that he doesn’t expect to see “a big spike in the number of firms announcing return to office mandates” in 2025.

His team only found eight major firms in tech and finance that issued five-day return-to-office mandates in 2024, which was the same number of firms flagged in 2023, suggesting no major increase in RTOs from year to year. Ma told Ars that while big firms like Amazon ordering employees to return to the office made headlines, many firms seem to be continuing to embrace hybrid models, sometimes allowing employees to choose when or if they come into the office.

That seeming preference for hybrid work models aligns with “future of work” surveys outlining workplace trends and employee preferences that the Consumer Technology Association (CTA) conducted for years but has seemingly since discontinued. In 2021, CTA reported that “89 percent of tech executives say flexible work arrangements are the most important employee benefit and 65 percent say they’ll hire more employees to work remotely.” The next year, apparently the last time it published the survey, the CTA suggested hybrid models could help attract talent in a competitive market hit with “an unprecedented demand for workers with high-tech skills.”

The CTA did not respond to Ars’ requests to comment on whether it expects hybrid work arrangements to remain preferred over five-day return-to-office policies next year.

CWA’s Smith told Ars that worker movements are growing partly because “folks are engaged in this big fight around surveillance and workplace control,” as well as anything “having to do with to what extent will people return to offices and what does that look like if and when people do return to offices?”

Without data backing RTO mandates, Ma’s study suggests that firms will struggle to retain highly skilled workers at a time when tech innovation remains a top priority for the US. As workers appear increasingly put off by policies—like RTO or AI-driven workplace monitoring or efficiency efforts threatening to replace workers with AI—Smith’s experience seems to show that disgruntled workers could find themselves drawn to unions that could help them claw back control over work-life balance. And the cost of the ensuing shuffle to some of the largest tech firms in the world could be “significant,” Ma’s study warned.

TWC’s McCreery told Ars that on top of unpopular RTO policies driving workers to join movements, workers have also become more active in protesting unpopular politics, frustrated to see their talents apparently used to further controversial conflicts and military efforts globally. Some workers think workplace organizing could be more powerful than voting to oppose political actions their companies take.

“The workplace really remains an important site of power for a lot of people where maybe they don’t feel like they can enact their values just by voting or in other ways,” McCreery said.

While unpopular policies “have always been a reason workers have joined unions and joined movements,” McCreery said that “the development of more of these unpopular policies” like RTO and AI-enhanced surveillance “really targeted” at workers has increased “the political consciousness and the sense” that tech workers are “just like any other workers.”

Layoffs at companies like Microsoft and Amazon during periods when revenue is increasing in the double digits also unify workers, advocates told Ars. Forbes noted Microsoft laid off 1,000 workers “just five days before reporting a 17.6 percent increase in revenue to $62 billion,” while Amazon’s 1,000-worker layoffs followed a 14 percent rise in revenue to $170 billion. And demand for AI led to the highest profit margins Amazon’s seen for its cloud business in a decade, CNBC reported in October.

CWA’s Smith told Ars as companies continue to rake in profits and workers feel their work-life balance slipping away while their efforts in the office are potentially “used to increase control and cause broader suffering,” some of the biggest fights workers raised in 2024 may intensify next year.

“It’s like a shock to employees, these industries pushing people to lower your expectations because we’re going to lay off hundreds of thousands of you just because we can while we make more profits than we ever have,” Smith said. “I think workers are going to step into really broad campaigns to assert a different worldview on employment security.”

Photo of Ashley Belanger

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

Tech worker movements grow as threats of RTO, AI loom Read More »

ars’-favorite-games-of-2024-that-were-not-released-in-2024

Ars’ favorite games of 2024 that were not released in 2024


Look what we found lying around

The games that found us in 2024, from 2003 space sims to 2022 backyard survival.

More than 18,500 games will have been released onto the PC gaming platform Steam in the year 2024, according to SteamDB. Dividing that by the number of people covering games at Ars, or the gaming press at large, or even everybody who games and writes about it online, yields a brutal ratio.

Games often float down the river of time to us, filtered by friends, algorithms, or pure happenstance. They don’t qualify for our best games of the year list, but they might be worth mentioning on their own. Many times, they’re better games than they were at release, either through patching or just perspective. And they are almost always lower priced.

Inspired by the cruel logic of calendars and year-end lists, I asked my coworkers to tell me about their favorite games of 2024 that were not from 2024. What resulted were some quirky gems, some reconsiderations, and some titles that just happened to catch us at the right time.

Stardew Valley

Screenshot from Stardew Valley, in front of the blacksmith's shop, where a player character is holding up a bone (for some reason).

Credit: ConcernedApe



ConcernedApe; Basically every platform

After avoiding it forever and even bouncing off of it once or twice, I finally managed to fall face-first into Stardew Valley (2016) in 2024. And I’ve fallen hard—I only picked it up in October, but Steam says I’ve spent about 110 hours playing farmer.

In addition to being a fun distraction and a great way to kill both short and long stretches of time, what struck me is how remarkably soothing the game has been. I’m a nervous flyer, and it’s only gotten worse since the pandemic, but I’ve started playing Stardew on flights, and having my little farm to focus on has proven to be a powerful weapon against airborne anxiety—even when turbulence starts up. Ars sent me on three trips in the last quarter of the year, and Stardew got me through all the flights.

Hell, I’m even enjoying the multiplayer—and I don’t generally do multiplayer. My cousin Shaun and I have been meeting up most weekends to till the fields together, and the primary activity tends to be seeing who can apply the most over-the-top creatively scatological names to the farm animals. I’ve even managed to lure Ur-Quan Masters designer Paul Reiche III to Pelican Town for a few weekends of hoedowns and harvests. (Perhaps unsurprisingly, Paul was already a huge fan of the game. And also of over-the-top creatively scatological farm animal names. Between him and Shaun, I’m amassing quite a list!)

So here’s to you, Stardew Valley. You were one of the brightest parts of my 2024, and a game that I already know I’ll return to for years.

Lee Hutchinson

Grounded

First-person perspective of a suburban house in the background, fall leaves on a tree nearby, and a relatively giant spider approaching the player, who is holding a makeshift bow and arrow, ready to fire.

Credit: Xbox Game Studios

Obsidian; Windows, Switch, Xbox, PlayStation

My favorite discovery this year has probably been Grounded, a Microsoft-published, Obsidian Entertainment-developed survival crafting game that was initially released back in 2022 (2020 if you count early access) but received its final planned content update back in April.

You play as one of four plucky tweens, zapped down to a fraction-of-an-inch high as part of a nefarious science experiment. The game is heavily inspired by 1989’s classic Honey, I Shrunk the Kids, both in its ’80s setting and its graphical design. Explore the backyard, fight bugs, find new crafting materials, build out a base of operations, and power yourself up with special items and steadily better equipment so you can figure out what happened to you and get back to your regular size.

Grounded came up because I was looking for another game for the four-player group I’ve also played Deep Rock Galactic and Raft with. Like Raft, Grounded has a main story with achievable objectives and an endpoint, plus a varied enough mix of activities that everyone will be able to find something they like doing. Some netcode hiccups notwithstanding, if you like survival crafting-style games but don’t like Minecraft-esque, objective-less, make-your-own-fun gameplay, Grounded might scratch an itch for you.

Andrew Cunningham

Fights in Tight Spaces

A black-colored figure does a backwards flip kick on a red goon holding a gun, while three other red and maroon goons point guns at him from a perpendicular angle, inside a grayscale room.

Credit: Raw Fury

Ground Shatter; Windows, Switch, Xbox, PlayStation

I spent a whole lot of time browsing, playing, and thinking about roguelike deckbuilders in 2024. Steam’s recommendation algorithm noticed, and tossed 2021’s Fights in Tight Spaces at me. I was on a languid week’s vacation, with a Steam Deck packed, with just enough distance from the genre by then to maybe dip a toe back in. More than 15 hours later, Steam’s “Is this relevant to you?” question is easy to answer.

Back in college, I spent many weekends rounding out my Asian action film knowledge, absorbing every instance of John Woo, Jackie Chan, Jet Li, Flying Guillotine, Drunken Master, and whatever I could scavenge from friends and rental stores. I thrilled to frenetic fights staged in cramped, cluttered, or quirky spaces. When the hero ducks so that one baddie punches the other one, then backflips over a banister to two-leg kick the guy coming up from beneath? That’s the stuff.

Fights gives you card-based, turn-by-turn versions of those fights. You can see everything your opponents are going to do, in what order, and how much it would hurt if they hit you. Your job is to pick cards that move, hit, block, counter, slip, push, pull, and otherwise mess with these single-minded dummies, such that you dodge the pain and they either miss or take each other out. Woe be unto the guy with a pistol who thinks he’s got one up on you, because he’s standing right by a window, and you’ve got enough momentum to kick a guy right into him.

This very low-spec game has a single-color visual style, beautifully smooth animations, and lots of difficulty tweaking to prevent frustration. The developer plans to release a game “in the same universe,” Knights in Tight Spaces, in 2025, and that’s an auto-buy for me now.

Kevin Purdy

The Elder Scrolls III: Morrowind

Axe-wielding polygonal character, wearing furs and armor, complete with bear face above his head, in front of a wooden lodge in a snowy landscape.

Credit: Bethesda Game Studios

Bethesda; Windows, Xbox

The Elder Scrolls III: Morrowind always had a sort of mythic quality for me. It came out when I was 18 years old—the perfect age for it, really. And more than any other game I had ever played, it inspired hope and imagination for where the medium might go.

In the ensuing years, Morrowind (2002) ended up seeming like the end of the line instead of the spark that would start something new. With some occasional exceptions, modern games have emphasized predictable formulae and proven structures over the kind of experimentation, depth, and weirdness that Morrowind embraced. Even Bethesda’s own games gradually became stodgier.

So Morrowind lived in my memory for years, a sort of holy relic of what gaming could have been before AAA game design became quite so oppressively formalist.

After playing hundreds of hours of Starfield this year, I returned to Morrowind for the first time in 20 years.

To be clear: I quite liked Starfield, counter to the popular narrative about it—though I definitely understood why it wasn’t for everyone. But people criticized Starfield for lacking the magic of a game like Morrowind, and I was skeptical of that criticism. As such, my return to the island of Vvardenfell was a test: did Morrowind really have a magic that Starfield lacks, even when taken out of the context of its time and my youthful imagination and open-mindedness?

I was surprised to find that the result was a strong affirmative. I still like Starfield, but its cardinal sin is that it is unimaginative because it is derivative—of No Man’s Sky, of Privateer and Elite, of Mass Effect, of various 70s and 80s sci-fi films and TV series, and most of all, of Bethesda Game Studios’ earlier work.

In contrast, Morrowind is a fever dream of bold experimentation that seems to come more from the creativity of ambitious designers who were too young to know any better, than from the proven designs of past hits.

I played well over a hundred hours of Morrowind this year, and while I did find it tedious at times, it’s engrossing for anyone who’s willing to put up with its archaic pacing and quirks.

To be clear, many of the design experiments in the game simply don’t work, with systems that are easily exploited. Its designers’ naivety shines through clearly, and its rough edges serve as clear reminders of why today’s strict formalism has taken root, especially in AAA games where too-big budgets and payrolls leave no room at all for risk.

Regardless, it’s been wild to go back and play this game from 2002 and realize that in the 22 years since there have been very few other RPGs that were nearly as brazenly creative. I love it for that, just as much as I did when I was 18.

Samuel Axon

Tetrisweeper

Tetris-style colored blocks fallen inside a column on top of settled blocks, most of which are gray and have Minesweeper-like numbers indicating an explosive tile nearby.

Credit: Kertis Jones Interactive

Kertis Jones; Itch.io, coming to Steam

If you ask someone to list the most addictive puzzle games of all time, Tetris and Minesweeper will probably be at or near the top of the list. So it shouldn’t be too surprising that Tetrisweeper makes an even more addictive experience by combining the two grid-based games in a frenetic, brain-melting mess.

Tetrisweeper starts just like Tetris, asking you to arrange four-block pieces dropping down a well to make lines without gaps. But in Tetrisweeper, those completed lines won’t clear until you play a game of Minesweeper on top of those dropped pieces, using adjacency information and logical rules to mark which ones are safe and which ones house game-ending mines (if you want to learn more about Minesweeper, there’s a book I can recommend).

At first, playing Tetris with your keyboard fingers while managing Minesweeper with your mouse hand can feel a little unwieldy—a bit like trying to drive a car and cook an omelet at the same time. After a few games, though, you’ll learn how to split your attention effectively to drop pieces and solve complex mine patterns nearly simultaneously. That’s when you start to master the game’s intricate combo multiplier system and bonus scoring, striving for point-maximizing Tetrisweeps and T-spins (my high score is just north of 3 million, but pales in comparison to that of the best players).

While Tetrisweeper grew out of a 2020 Game Jam, I didn’t discover the game until this year, when it helped me clear my head during many a work break (and passed the time during a few dull Zoom calls as well). I’m hoping the game’s planned Steam release—still officially listed as “Coming Soon”—will help attract even more addicts than its current itch.io availability.

Kyle Orland

Freelancer

Ship with three thruster engines approaching a much larger freighter, long and slightly cylindrical, in murky green space, with a HUD around the borders.

Digital Anvil; Windows

What if I told you that Star Citizen creator Chris Roberts previously tried to make Star Citizen more than two decades ago but left the project and saw it taken over by real, non-crazy professionals who had the discipline to actually finish something?

That’s basically the story behind 2003’s forgotten PC game Freelancer. What started as a ludicrously ambitious space life sim concept ended up as a sincere attempt to make games like Elite and Wing Commander: Privateer far more accessible.

That meant a controversial, mouse-based control scheme instead of flight sticks, as well as cutting-edge graphics, celebrity voice actors, carefully designed economy and progression systems, and flashy cutscenes.

I followed the drama of Freelancer‘s development in forums, magazines, and gaming news websites when I was younger. I bought the hype as aggressively as Star Citizen fans did years later. The game that came out wasn’t what I was dreaming of, and that disappointment prevented me from finishing it.

Fast-forward to 2024: on a whim, I played Freelancer from beginning to end for the first time.

And honestly? It’s great. In a space trading sim genre that’s filled with giant piles of jank (the X series) or inaccessible titles that fly a little too far into the simulation zone for some (Elite Dangerous), Freelancer might be the most fun you can have with the genre even today.

It’s understandable that it didn’t have much lasting cultural impact since the developers who took it over lacked the wild ambition of the man who started it, but I enjoyed a perfectly pleasant 20–30 hours smuggling space goods and shooting pirates—and I didn’t have to spend $48,000 of real money on a ship to get that.

Samuel Axon

Cyberpunk 2077

A woman with a red mohawk, wearing a belly shirt, amidst a dense, steel, multi-colored cityscape, suffused with neon.

Credit: CD Projekt Red

CD Projekt Red; Windows, Xbox, PlayStation (macOS in 2025)

Can one simply play, as a game, one of the biggest and most argued-over gaming narratives of all time? Four years after its calamitous launch sparked debates about AAA gaming sprawl, developer crunch, game review practicalities, and, eventually, post-release redemption arcs, what do you get when you launch Cyberpunk 2077?

I got a first-person shooter, one with some interesting ideas, human-shaped characters you’d expect from the makers of The Witcher 3, and some confused and unrefined systems and ideas. I enjoyed my time with it, appreciate the work put into it, and can recommend it to anyone who is okay with something that’s not quite an in-depth FPS RPG (or “immersive sim”) but likes a bit of narrative thrust to their shooting and hacking.

You can’t fit everything about Cyberpunk 2077 into one year-end blurb (or a 1.0 release, apparently), so I’ll stick to the highs and lows. I greatly enjoyed the voice performances, especially from Keanu Reeves and Idris Elba (the latter in the Phantom Liberty DLC), and those behind Jackie, Viktor Vektor, and the female version of protagonist V. I was surprised at how good the shooting felt, given that this was the developer’s first time out; the discovery of how a “Smart” shotgun worked will stick with me a while. The driving: less so. There were moments of quiet, ambient world appreciation, now that the game’s engine is running okay. And the side quests have that Witcher-ish quality to them, where they’re never as straightforward as described and also tell little stories about life in this place.

What seems missing to me, most crucially, are the bigger pieces, the real choices and unexpected consequences, and the sense of really living in this world. You can choose one of three backgrounds, but it only comes up as an occasional dialogue option. You can build your character in myriad ways, and there are lots of dialogue options. But the main quest keeps you on a fairly strict path, with the options to talk, hack, or stealth your way past inevitable shootouts not as great as you might think. Once you’ve brought your character up to power-fantasy levels, the larger city becomes a playground, but not one I much enjoyed playing in. (Plus, the idea of idle wandering and amassing wealth, given the main plot contrivance, is kind of ridiculous, but this is a game, after all).

Phantom Liberty, in my experience, patches up every one of these weaknesses inside its smaller play space, providing more real choices and a tighter story, with more set pieces arriving at a faster pace. If you can buy this game bundled with its DLC, by all means, do so. I didn’t encounter any game-breaking bugs in my mid-2024 playthrough, nor even many crashes. Your mileage may vary, especially on consoles, as other late-coming players have seen.

Waiting on this game a good bit certainly helps me grade it on a curve; nobody today is losing $60 on something that looks like it’s playing over a VNC connection. When CD Projekt Red carries on in this universe, I think they’ll have learned a lot from what they delivered here, much like we’ve all learned about pre-release expectations. It’s okay to take your time getting to a gargantuan game; there are lots of games from prior years to look into.

Kevin Purdy

Photo of Kevin Purdy

Kevin is a senior technology reporter at Ars Technica, covering open-source software, PC gaming, home automation, repairability, e-bikes, and tech history. He has previously worked at Lifehacker, Wirecutter, iFixit, and Carbon Switch.

Ars’ favorite games of 2024 that were not released in 2024 Read More »


I keep turning my Google Sheets into phone-friendly webapps, and I can’t stop


Software is eating the world and I have snacks for it

How I tackled takeout, spices, and meal ideas with spreadsheets and Glide.

It started, like so many overwrought home optimization projects, during the pandemic.

My wife and I, like many people stuck inside, were ordering takeout more frequently. We wanted to support local restaurants, reduce the dish load, and live a little. It became clear early on that app-based delivery services like DoorDash and Uber Eats were not the best way to support local businesses. If a restaurant had its own ordering site or a preferred service, we wanted to use that—or even, heaven forfend, call the place.

The secondary issue was that we kept ordering from the same places, and we wanted to mix it up. Sometimes we’d want to pick something up nearby. Sometimes we wanted to avoid an entire category (“Too many carbs this week, no pasta”) or try the newest places we knew about, or maybe a forgotten classic. Or just give me three places randomly, creative constraints, please—it’s Friday.

At its core, this is a shared list, i.e., a spreadsheet. But my spreadsheet maintenance enthusiasm greatly outweighs that of my spouse. More than that, have you ever pulled up a Google Sheet or online Excel file on your normal-sized phone to make changes? I do so only in moments of true desperation.

For things that are bigger than a note or dry-erase board but smaller than paying for some single-use, subscription-based app, I build little private webapps with Glide. You might use something else, but Glide is a really nice entry into the spreadsheet-to-app milieu. The apps it creates are the kind that can easily be shared and installed (i.e., “Add to Home Screen”) on phones, tablets, or desktops, from a browser. Here’s how it worked for me.

Why you might want to make a little personal webapp

Glide is technically a no-code tool aimed at businesses, but you get one user-based published app for free, and you can have more “private” apps if you’re truly keeping it to your household or friend group. Each full-fledged app can have 10 users and up to 25,000 rows, which should probably be enough for most uses.

I do wish there were a “prosumer” kind of account that billed for less than $828 per year. If you want more than one (relatively) small-scale app, there are alternatives, like Google’s AppSheet (included in most paid Google Workspace accounts). But most are just as business-oriented, and none has struck me as being as elegant a tool as Glide.

As mentioned, my primary use for a sheet-based app is to make searching, filtering, reading, and editing that sheet far easier. In the case of my takeout app, that meant being able to search anything—a specific restaurant, “tacos,” a quadrant of the District of Columbia. I also wanted a sorting option based on when I added a restaurant, so I can find the place I added while a friend was recommending it.

Let’s whip up a webapp

Google Sheet showing the columns Restaurant, Category, Address, Order Link, Phone, Quadrant, Notes/Hours, and Added

The spreadsheet behind my “DC Takeout” app.

Credit: Kevin Purdy


Thrown fresh into Glide, that sheet is not off to a bad start. The main view of Glide is a usable version of your app, and I can see that I can already type whatever I want into the search bar and it will search across fields.

First version of the takeout app.

I could honestly stop here if I weren’t picky about some of the quirks I’m seeing. The app is showing two screens, “Public” and “Users,” and I want to tidy them up. In the upper-left corner, in Navigation, I’ll click an eye icon to hide the “Users” section. With “Public” selected, I’ll change the Label in the upper-right to “DC Takeout” and, if I were going to have more than one screen, give it an icon.

The app already provided a “+” button for adding restaurants, just a simple vertical stack of entry boxes, along with a date picker for Date Added. If you prefer something static, toggle off the options in the “Actions” field in the bottom-right.

Searching is pretty robust, but what if you want to browse a broad category or just see stuff that’s nearby? In the “Options” section to the right, you can add an In-App Filter, which creates a familiar arrow-shaped three-bar button to the right of the search bar. I’ve added Quadrant and Category filters. If I want to go further here, it’s on me and my spreadsheet. Open on Mondays? Offers pick-up? Has a bar? The possibilities are endless, even if my weekend spreadsheet time is not.

Phone app showing a filter for quadrant and restaurant category.

Simple filter for my takeout app. I need to get the category to be comma-separated values, not a single pile of descriptors.

Credit: Kevin Purdy


What happens when you click on a restaurant? Right now, you see, essentially, a vertical readout of everything in that spreadsheet row. What could you see? Well, you’ve got an address there, so how about a map?

Click on a restaurant in the fake phone, and on the left you can see “Components,” one of which is our map. Set the address to equal the address column in your sheet, and, in “Options,” set Visibility so that it only shows up when the address field is not empty. In “Actions,” you can set it so that clicking the map opens the phone’s default mapping app, set to the proper address. (If you’re struggling to get the right place to show up, you might want to check out a sample address in Mapbox’s API; it can be a little finicky about how it parses them.)

Map showing on a phone layout, with La Casina, a Romano pizza place, selected with details showing.

Take me to the slightly different pizza!

Credit: Kevin Purdy


Glide offers dozens more ways to customize every little thing about your app.

That’s about all I need from a “mix up your takeout and use the right apps” app, one made mostly for me, my spouse, and nearby friends and visitors. Pretty much anything you’d find useful while sitting down at a spreadsheet, you can also make useful through a little phone webapp.

Joyful overkill

I went a good deal further with my “DIYRoot” app. After using a couple of meal delivery services, I sussed out the kinds of recipe formulas they were mixing up each week, plus the items or equivalents I had found at nearby stores. Knowing that I could figure out the basic cooking, I made an app that listed as many recipes as I could find, broke them into components, let me add them to an erasable menu plan and shopping list, and even had some pictures.


The best version of an entry has an image, ingredients, and recipe. There’s a button to add it to the menu and all the items to a list.

Credit: Kevin Purdy


I didn’t quite master this app (the shopping list is plagued by blank items/rows), and it’s now technically an outdated “Classic” Glide app; maybe I’ll give it another shot. More successful is my most recent effort, “Pantry Items,” which is just a searchable list of spices and sauces, with a note about how much I have left of each; through a webhook, it adds anything I see missing to a shopping list on Bring.
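That Bring step is the only mildly code-flavored part of the setup: Glide fires a webhook, and something on the other end forwards the item along. The endpoint, field names, and functions below are all hypothetical—a minimal sketch of what such a relay could look like, not Bring’s or Glide’s documented API:

```python
import json
import urllib.request

# Hypothetical relay endpoint; Bring's and Glide's real webhook formats differ,
# so treat every name here as a placeholder for illustration only.
WEBHOOK_URL = "https://example.invalid/shopping-list"

def make_payload(item: str, list_name: str = "Groceries") -> dict:
    """Build the JSON body for one missing pantry item."""
    return {"list": list_name, "item": item.strip(), "source": "pantry-app"}

def send_item(item: str) -> None:
    """POST the item to the relay (network call; not exercised here)."""
    body = json.dumps(make_payload(item)).encode("utf-8")
    req = urllib.request.Request(
        WEBHOOK_URL,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req)
```

The appeal of routing through a webhook is that the spreadsheet stays the source of truth; the relay only ever sees one item at a time.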

I can feel some people reading this article demanding that I just learn Swift or some mobile-friendly JavaScript package and make some real apps, but I steadfastly refuse. I enjoy the messy middle of programming, where I have just enough app, API, and logic knowledge to make something small for my friends and family that’s always accessible on this little computer I carry everywhere, but I have no ambitions to make it “real.” Anyone can add to it through the relatively simple spreadsheet. Heck, I’ll even take feature requests if I’m feeling gracious.

I use Glide, but you might have something else even simpler (and should recommend it as such in the comments). Just be warned that once you start thinking (or overthinking) along these lines, it can be hard to stop, even without the worldwide pandemic.


I keep turning my Google Sheets into phone-friendly webapps, and I can’t stop Read More »


2024: The year AI drove everyone crazy


What do eating rocks, rat genitals, and Willy Wonka have in common? AI, of course.

It’s been a wild year in tech thanks to the intersection between humans and artificial intelligence. 2024 brought a parade of AI oddities, mishaps, and wacky moments that inspired odd behavior from both machines and man. From AI-generated rat genitals to search engines telling people to eat rocks, this year proved that AI has been having a weird impact on the world.

Why the weirdness? If we had to guess, it may be due to the novelty of it all. Generative AI and applications built upon Transformer-based AI models are still so new that people are throwing everything at the wall to see what sticks. People have been struggling to grasp both the implications and potential applications of the new technology. Riding along with the hype, companies have also introduced types of AI that may end up being ill-advised, such as automated military targeting systems.

It’s worth mentioning that, aside from the crazy news, we saw some genuinely notable AI advances in 2024 as well. For example, Claude 3.5 Sonnet, launched in June, held off the competition as a top model for most of the year, while OpenAI’s o1 used runtime compute to expand GPT-4o’s capabilities with simulated reasoning. Advanced Voice Mode and NotebookLM also emerged as novel applications of AI tech, and the year saw the rise of more capable music synthesis models and better AI video generators, including several from China.

But for now, let’s get down to the weirdness.

ChatGPT goes insane

Illustration of a broken toy robot.

Early in the year, things got off to an exciting start when OpenAI’s ChatGPT experienced a significant technical malfunction that caused the AI model to generate increasingly incoherent responses, prompting users on Reddit to describe the system as “having a stroke” or “going insane.” During the glitch, ChatGPT’s responses would begin normally but then deteriorate into nonsensical text, sometimes mimicking Shakespearean language.

OpenAI later revealed that a bug in how the model processed language caused it to select the wrong words during text generation, leading to nonsense outputs (basically the text version of what we at Ars now call “jabberwockies“). The company fixed the issue within 24 hours, but the incident led to frustrations about the black box nature of commercial AI systems and users’ tendency to anthropomorphize AI behavior when it malfunctions.
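OpenAI never detailed the bug at the code level, but a toy sketch (entirely hypothetical, with a made-up eight-word vocabulary) shows how a mapping error at the last step of generation can turn a coherent sentence into fluent-looking babble:

```python
# Entirely hypothetical illustration: a tiny vocabulary and some sampled IDs.
vocab = ["the", "cat", "sat", "on", "a", "mat", "quietly", "today"]
sampled_ids = [0, 1, 2, 3, 4, 5]  # the model "meant": the cat sat on a mat

def decode(ids, offset=0):
    """Map token IDs back to words; a misaligned offset garbles every word."""
    return " ".join(vocab[(i + offset) % len(vocab)] for i in ids)

print(decode(sampled_ids))            # "the cat sat on a mat"
print(decode(sampled_ids, offset=3))  # "on a mat quietly today the"
```

The structure (word count, spacing, rhythm) survives while the meaning collapses—roughly why the glitched outputs looked like language without being it.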

The great Wonka incident


A photo of “Willy’s Chocolate Experience” (inset), which did not match AI-generated promises, shown in the background. Credit: Stuart Sinclair

The collision between AI-generated imagery and consumer expectations fueled human frustrations in February when Scottish families discovered that “Willy’s Chocolate Experience,” an unlicensed Wonka-ripoff event promoted using AI-generated wonderland images, turned out to be little more than a sparse warehouse with a few modest decorations.

Parents who paid £35 per ticket encountered a situation so dire they called the police, with children reportedly crying at the sight of a person in what attendees described as a “terrifying outfit.” The event, created by House of Illuminati in Glasgow, promised fantastical spaces like an “Enchanted Garden” and “Twilight Tunnel” but delivered an underwhelming experience that forced organizers to shut down mid-way through its first day and issue refunds.

While the show was a bust, it brought us an iconic new meme for job disillusionment in the form of a photo: the green-haired Willy’s Chocolate Experience employee who looked like she’d rather be anywhere else on earth at that moment.

Mutant rat genitals expose peer review flaws

An actual laboratory rat, who is intrigued. Credit: Getty | Photothek

In February, Ars Technica senior health reporter Beth Mole covered a peer-reviewed paper published in Frontiers in Cell and Developmental Biology that created an uproar in the scientific community when researchers discovered it contained nonsensical AI-generated images, including an anatomically incorrect rat with oversized genitals. The paper, authored by scientists at Xi’an Honghui Hospital in China, openly acknowledged using Midjourney to create figures that contained gibberish text labels like “Stemm cells” and “iollotte sserotgomar.”

The publisher, Frontiers, posted an expression of concern about the article titled “Cellular functions of spermatogonial stem cells in relation to JAK/STAT signaling pathway” and launched an investigation into how the obviously flawed imagery passed through peer review. Scientists across social media platforms expressed dismay at the incident, which mirrored concerns about AI-generated content infiltrating academic publishing.

Chatbot makes erroneous refund promises for Air Canada

If, say, ChatGPT gives you the wrong name for one of the seven dwarves, it’s not such a big deal. But in February, Ars senior policy reporter Ashley Belanger covered a case of costly AI confabulation in the wild. In the course of online text conversations, Air Canada’s customer service chatbot told customers inaccurate refund policy information. The airline faced legal consequences later when a tribunal ruled the airline must honor commitments made by the automated system. Tribunal adjudicator Christopher Rivers determined that Air Canada bore responsibility for all information on its website, regardless of whether it came from a static page or AI interface.

The case set a precedent for how companies deploying AI customer service tools could face legal obligations for automated systems’ responses, particularly when they fail to warn users about potential inaccuracies. Ironically, the airline had reportedly spent more on the initial AI implementation than it would have cost to maintain human workers for simple queries, according to Air Canada executive Steve Crocker.

Will Smith lampoons his digital double


The real Will Smith eating spaghetti, parodying an AI-generated video from 2023. Credit: Will Smith / Getty Images / Benj Edwards

In March 2023, a terrible AI-generated video of Will Smith eating spaghetti began making the rounds online. The AI doppelganger gobbled down the noodles in an unnatural and disturbing way. Almost a year later, in February 2024, Will Smith himself posted a parody response to the viral jabberwocky on Instagram, featuring deliberately exaggerated, AI-style pasta consumption, complete with hair-nibbling and finger-slurping antics.

Given the rapid evolution of AI video technology, particularly since OpenAI had just unveiled its Sora video model four days earlier, Smith’s post sparked discussion in his Instagram comments where some viewers initially struggled to distinguish between the genuine footage and AI generation. It was an early sign of “deep doubt” in action as the tech increasingly blurs the line between synthetic and authentic video content.

Robot dogs learn to hunt people with AI-guided rifles


A still image of a robotic quadruped armed with a remote weapons system, captured from a video provided by Onyx Industries. Credit: Onyx Industries

At some point in recent history—somewhere around 2022—someone took a look at robotic quadrupeds and thought it would be a great idea to attach guns to them. A few years later, the US Marine Forces Special Operations Command (MARSOC) began evaluating armed robotic quadrupeds developed by Ghost Robotics. The robot “dogs” integrated Onyx Industries’ SENTRY remote weapon systems, which featured AI-enabled targeting that could detect and track people, drones, and vehicles, though the systems required human operators to authorize any weapons discharge.

The military’s interest in armed robotic dogs followed a broader trend of weaponized quadrupeds entering public awareness. This included viral videos of consumer robots carrying firearms, and later, commercial sales of flame-throwing models. While MARSOC emphasized that weapons were just one potential use case under review, experts noted that the increasing integration of AI into military robotics raised questions about how long humans would remain in control of lethal force decisions.

Microsoft Windows AI is watching


A screenshot of Microsoft’s new “Recall” feature in action. Credit: Microsoft

In an era where many people already feel like they have no privacy due to tech encroachments, Microsoft dialed it up to an extreme degree in May. That’s when Microsoft unveiled a controversial Windows 11 feature called “Recall” that continuously captures screenshots of users’ PC activities every few seconds for later AI-powered search and retrieval. The feature, designed for new Copilot+ PCs using Qualcomm’s Snapdragon X Elite chips, promised to help users find past activities, including app usage, meeting content, and web browsing history.

While Microsoft emphasized that Recall would store encrypted snapshots locally and allow users to exclude specific apps or websites, the announcement raised immediate privacy concerns, as Ars senior technology reporter Andrew Cunningham covered. It also came with a technical toll, requiring significant hardware resources, including 256GB of storage space, with 25GB dedicated to storing approximately three months of user activity. After Microsoft pulled the initial test version due to public backlash, Recall later entered public preview in November with reportedly enhanced security measures. But secure spyware is still spyware—Recall, when enabled, still watches nearly everything you do on your computer and keeps a record of it.

Google Search told people to eat rocks

This is fine. Credit: Getty Images

In May, Ars senior gaming reporter Kyle Orland (who assisted commendably with the AI beat throughout the year) covered Google’s newly launched AI Overview feature. It faced immediate criticism when users discovered that it frequently provided false and potentially dangerous information in its search result summaries. Among its most alarming responses, the system advised that humans could safely consume rocks, incorrectly citing scientific sources about the geological diet of marine organisms. The system’s other errors included recommending nonexistent car maintenance products, suggesting unsafe food preparation techniques, and confusing historical figures who shared names.

The problems stemmed from several issues, including the AI treating joke posts as factual sources and misinterpreting context from original web content. But most of all, the system relies on web results as indicators of authority, which we called a flawed design. While Google defended the system, stating these errors occurred mainly with uncommon queries, a company spokesperson acknowledged they would use these “isolated examples” to refine their systems. But to this day, AI Overview still makes frequent mistakes.

Stable Diffusion generates body horror


An AI-generated image created using Stable Diffusion 3 of a girl lying in the grass. Credit: HorneyMetalBeing

In June, Stability AI’s release of the image synthesis model Stable Diffusion 3 Medium drew criticism online for its poor handling of human anatomy in AI-generated images. Users across social media platforms shared examples of the model producing what we now like to call jabberwockies—AI generation failures with distorted bodies, misshapen hands, and surreal anatomical errors. Many in the AI image-generation community viewed it as a significant step backward from previous image-synthesis capabilities.

Reddit users attributed these failures to Stability AI’s aggressive filtering of adult content from the training data, which apparently impaired the model’s ability to accurately render human figures. The troubled release coincided with broader organizational challenges at Stability AI, including the March departure of CEO Emad Mostaque, multiple staff layoffs, and the exit of three key engineers who had helped develop the technology. Some of those engineers founded Black Forest Labs in August and released Flux, which has become the latest open-weights AI image model to beat.

ChatGPT Advanced Voice imitates human voice in testing

An illustration of a computer synthesizer spewing out letters.

AI voice-synthesis models are master imitators these days, and they are capable of much more than many people realize. In August, we covered a story where OpenAI’s ChatGPT Advanced Voice Mode feature unexpectedly imitated a user’s voice during the company’s internal testing, revealed by OpenAI after the fact in safety testing documentation. To prevent future instances of an AI assistant suddenly speaking in your own voice (which, let’s be honest, would probably freak people out), the company created an output classifier system to prevent unauthorized voice imitation. OpenAI says that Advanced Voice Mode now catches all meaningful deviations from approved system voices.

Independent AI researcher Simon Willison discussed the implications with Ars Technica, noting that while OpenAI restricted its model’s full voice synthesis capabilities, similar technology would likely emerge from other sources within the year. Meanwhile, the rapid advancement of AI voice replication has caused general concern about its potential misuse, although companies like ElevenLabs have already been offering voice cloning services for some time.

San Francisco’s robotic car horn symphony


A Waymo self-driving car in front of Google’s San Francisco headquarters, San Francisco, California, June 7, 2024. Credit: Getty Images

In August, San Francisco residents got a noisy taste of robo-dystopia when Waymo’s self-driving cars began creating an unexpected nightly disturbance in the South of Market district. In a parking lot off 2nd Street, the cars congregated autonomously every night at 4 am, during rider lulls, and began engaging in extended honking matches with each other while attempting to park.

Local resident Christopher Cherry’s initial optimism about the robotic fleet’s presence dissolved as the mechanical chorus grew louder each night, affecting residents in nearby high-rises. The nocturnal tech disruption served as a lesson in the unintended effects of autonomous systems when run in aggregate.

Larry Ellison dreams of all-seeing AI cameras

A colorized photo of CCTV cameras in London, 2024.

In September, Oracle co-founder Larry Ellison painted a bleak vision of ubiquitous AI surveillance during a company financial meeting. The 80-year-old database billionaire described a future where AI would monitor citizens through networks of cameras and drones, asserting that the oversight would ensure lawful behavior from both police and the public.

His surveillance predictions drew parallels to existing systems in China, where authorities already used AI to sort surveillance data on citizens as part of the country’s “sharp eyes” campaign from 2015 to 2020. Ellison’s statement reflected the sort of worst-case tech surveillance state scenario—likely antithetical to any sort of free society—that dozens of sci-fi novels of the 20th century warned us about.

A dead father sends new letters home


An AI-generated image featuring my late father’s handwriting. Credit: Benj Edwards / Flux

AI has made many of us do weird things in 2024, including this writer. In October, I used an AI synthesis model called Flux to reproduce my late father’s handwriting with striking accuracy. After scanning 30 samples from his engineering notebooks, I trained the model using computing time that cost less than five dollars. The resulting text captured his distinctive uppercase style, which he developed during his career as an electronics engineer.

I enjoyed creating images showing his handwriting in various contexts, from folder labels to skywriting, and made the trained model freely available online for others to use. While I approached it as a tribute to my father (who would have appreciated the technical achievement), many people found the whole experience weird and somewhat disturbing. The things we unhinged Bing Chat-like journalists do to bring awareness to a topic are sometimes unconventional. So I guess it counts for this list!

For 2025? Expect even more AI

Thanks for reading Ars Technica this past year and following along with our team coverage of this rapidly emerging and expanding field. We appreciate your kind words of support. Ars Technica’s 2024 AI words of the year were: vibemarking, deep doubt, and the aforementioned jabberwocky. The old stalwart “confabulation” also made several notable appearances. Tune in again next year when we continue to try to figure out how to concisely describe novel scenarios in emerging technology by labeling them.

Looking back, our prediction for 2024 in AI last year was “buckle up.” It seems fitting, given the weirdness detailed above. Especially the part about the robot dogs with guns. For 2025, AI will likely inspire more chaos ahead, but also potentially get put to serious work as a productivity tool, so this time, our prediction is “buckle down.”

Finally, we’d like to ask: What was the craziest story about AI in 2024 from your perspective? Whether you love AI or hate it, feel free to suggest your own additions to our list in the comments. Happy New Year!

Photo of Benj Edwards

Benj Edwards is Ars Technica’s Senior AI Reporter and founder of the site’s dedicated AI beat in 2022. He’s also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.

2024: The year AI drove everyone crazy Read More »


The 20 most-read stories of 2024 on Ars Technica


Ars looks back at the top stories of the year.

Credit: Aurich Lawson | Getty Images

Hey, look at that! Another year has flown by, and I suspect many people would say “good riddance” to 2024.

The 2020s have been quite the decade so far. No matter what insanity has transpired by a particular December 31, the following year has shown up and promptly said, “Hold my beer.”

The biggest news at Ars in 2024 was our first site redesign in nearly a decade. We’re proud of Ars 9.0 (we’re up to 9.0.3 now), and we have continued to make changes based on your feedback. The best kind of feedback, however, is your clicks. Those clicks power this recap, so read on to learn which stories our readers found especially compelling.

20. NASA is about to make its most important safety decision in nearly a generation


Boeing’s Starliner spacecraft, seen docked at the International Space Station through the window of a SpaceX Dragon spacecraft. Credit: NASA

In June, NASA astronauts Butch Wilmore and Suni Williams were sent into space for a mission slated to last a little over a week. Six months later, they are still orbiting this terrestrial ball.

The two retired naval test pilots were the first people to catch a ride to orbit on the Boeing Starliner. Unfortunately for them (and Boeing), Starliner developed problems with its propulsion system.

Figuring out how to get them back down to Earth was arguably the biggest safety decision NASA has had to make in decades. Stephen Clark unpacked the situation, looking at how NASA’s culture of safety has evolved since the Challenger accident.

19. macOS 15 Sequoia: The Ars Technica review

Credit: Apple

One constant in our year-end recaps is operating system reviews. During 2024, Apple’s annual macOS release was the sole OS review to hit the top 20.

Though touted as “the AI one,” Sequoia didn’t deliver most of its Apple Intelligence features until macOS 15.1 was released. The overall verdict on Sequoia? 2024’s installment of macOS was a solid update. Andrew Cunningham liked the new window tiling, mostly unchanged backward compatibility, and all of the minor but useful tweaks to many of the built-in apps.

18. What we know about the xz Utils backdoor that almost infected the world

Credit: Getty Images

xz Utils is a popular open-source data-compression utility for *nix OSes. In late March, one developer floored the security community when he revealed a backdoor in the utility. The malicious code planted in versions 5.6.0 and 5.6.1 targeted encrypted SSH connections.

Thankfully, the malicious code was caught before it was merged into Debian and Red Hat. Years in the making and described as the “best executed supply chain attack” by one cryptography engineer, the effort came thisclose to success. Dan Goodin explains what we know and how this might have happened.

Credit: Aurich Lawson | Getty Images | NASA

One of the problems humankind faces as we climb out of Earth’s gravitational well is cosmic radiation. On a long voyage to Mars, the crew will need to be protected against solar storms and other space radiation. Right now, ensuring that level of protection would require tons of shielding material, but that may change.

Active shielding was proposed in the 1960s, but the initial research didn’t result in any working prototypes. Now, the ESA and NASA are looking at magnetic fields and electrostatic shields to protect space travelers. Researchers have built and tested small-scale models of their electrostatic shields, and the ESA is working on superconducting magnets.

16. I added a ratgdo to my garage door, and I don’t know why I waited so long

Photograph of a ratgdo

Messing around with the electronics in our dwellings is part of the Ars DNA. In 1998, we were overclocking our Celerons. In 2024, we’re messing with our garage door openers.

Senior Tech Editor Lee Hutchinson hates looking out the back window to see if he remembered to close his garage door, so he stuck a Raspberry Pi out there that would email him every time the garage door opened or closed. Unfortunately for Lee and his Raspberry Pi, Houston is hot and humid for approximately 10 months out of the year, so his tiny computer gave up the ghost after one 98° day too many.
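The article doesn’t detail Lee’s script, but the email half of that original Pi setup needs nothing beyond the standard library. A minimal sketch (the addresses, SMTP host, and GPIO wiring mentioned in the comments are hypothetical, not from the article):

```python
import smtplib
from email.message import EmailMessage

def door_event_message(state: str, sender: str, recipient: str) -> EmailMessage:
    """Build the notification email for a garage-door state change."""
    msg = EmailMessage()
    msg["Subject"] = f"Garage door {state}"
    msg["From"] = sender
    msg["To"] = recipient
    msg.set_content(f"The garage door just reported: {state}.")
    return msg

def notify(state: str) -> None:
    # Host and addresses are placeholders, not from the article.
    msg = door_event_message(state, "pi@example.com", "me@example.com")
    with smtplib.SMTP("smtp.example.com") as server:
        server.send_message(msg)

# On the Pi itself, a loop watching a reed switch on a GPIO pin (via
# RPi.GPIO or gpiozero) would call notify("opened") / notify("closed")
# on each state change.
```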

Instead of using the MyQ app that came with his garage door opener, he grabbed a ratgdo—a tiny little board with built-in Wi-Fi that gets wired into the garage door opener’s terminals. The result? A daily experience of the magic of functional home automation.

15. Boston Dynamics’ new humanoid moves like no robot you’ve ever seen

Credit: Boston Dynamics

Moves like Jagger? Not quite, but the latest Atlas robot from Boston Dynamics moves like it could bust a move on the dance floor.

This Atlas uses electricity instead of hydraulics. While the old Atlas was capable of lifting heavy objects and traveling across all kinds of terrain, the heavy and complicated hydraulics made it massive. The all-electric version can move in ways that its predecessor couldn’t, as there are no hydraulic lines to worry about. As a result, the Atlas has an uncanny range of motion.

Hyundai was the first company to test Atlas in a manufacturing environment.

14. Unpatchable vulnerability in Apple chip leaks secret encryption keys

Credit: Aurich Lawson | Apple

Even though CPU manufacturers have been baking security features into their silicon for some time, malicious actors and researchers keep poking and prodding, looking for security flaws. A group of researchers found a dreaded unpatchable vulnerability in Apple silicon, one that doesn’t even require root access.

The attack, dubbed GoFetch, works against classical and hardened encryption algorithms and can extract a 2048-bit RSA key in less than an hour. It takes advantage of the chips’ data memory-dependent prefetcher, which optimizes performance by reducing latency between the CPU and RAM. Since you can’t patch silicon, the only solution is adding defenses to cryptographic code, and those come with performance penalties.
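The core idea can be modeled in a few lines: the DMP dereferences loaded data that merely looks like a pointer, so cache state ends up depending on data values rather than just on the program’s access pattern. A toy model of the principle (illustrative only, not the actual attack code):

```python
def dmp_toy(loaded_value: int, address_space: set) -> str:
    """Toy model of a data memory-dependent prefetcher (DMP).

    If a value loaded from memory falls within the set of valid
    addresses, the prefetcher treats it as a pointer and fetches it --
    a cache-state side effect that depends on the *value* of the data,
    not on the addresses the program actually touches.
    """
    if loaded_value in address_space:
        return "prefetched"  # observable via cache-timing probes
    return "idle"
```

The GoFetch researchers craft inputs so that an intermediate value in the victim’s constant-time code resembles a valid pointer only when a guess about some key bits is correct, turning each prefetch into a one-bit oracle.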

13. Air Canada must honor refund policy invented by airline’s chatbot

Depending on how you feel about interacting with actual humans for customer support, the rise of AI customer-service chatbots has been either a boon or a curse. An example of the latter comes courtesy of Air Canada.

Jake Moffatt had to fly from Vancouver to Toronto for his grandmother’s funeral, so he asked Air Canada’s chatbot to explain the airline’s bereavement policy. The chatbot gave Moffatt incorrect instructions, telling him that he could be reimbursed at a reduced bereavement rate within 90 days of the ticket being issued. Unfortunately for everyone involved, that was not Air Canada’s actual policy.

The airline refused to honor the policy spelled out by its chatbot, at least until Moffatt took them to small claims court and won. When we last checked, the chatbot was no longer active.

12. In rare move from printing industry, HP actually has a decent idea

Someone touching a piece of paper that's sitting in a printer

There are some days that I long for my old Stylewriter printer. It was slow and dumb as a rock, but it more than adequately performed the function of putting ink on paper. I now have a multifunction printer/scanner/fax that suffers print quality problems partly because of how little it’s used. It might be different if I wanted to spend over $300 for a set of HP-branded toner cartridges instead of roughly $80 for generic ones, but I’d rather live with faded printouts.

HP has rightfully been the target of consumer ire, and as Scharon pointed out, the company has been a major cause of broken trust between printer OEMs and consumers. So we were all surprised when HP came up with an idea that could simplify and speed up some print jobs. A feature that actually improves printing is a much better look than, say, using DRM to ensure third-party products don’t function correctly with HP printers.

11. It turns out NASA’s Mars helicopter was much more revolutionary than we knew

Credit: NASA/JPL

Ingenuity made its first flight on Mars in April 2021. Seventy-two flights and nearly three years later, the small helicopter made its last flight. As Eric Berger noted, Ingenuity stood out from other NASA hardware in two ways. First, it proved that powered flight on other worlds was a possibility. Despite Mars’ very thin atmosphere, the copter was able to zoom around on its carbon fiber blades.

More importantly, Ingenuity was built with commercial, off-the-shelf hardware. The success of its mission has opened the door to other possibilities, like flying a nuclear-powered drone through the thick, nitrogen-heavy atmosphere of Titan.

10. After Russian ship docks to space station, astronauts report a foul smell

Credit: NASA



Around these parts, the usual response to a foul smell is a glance in the dog’s direction. But when you’re in a tiny space station orbiting the Earth, a bad odor is particularly worrying, as astronauts on the International Space Station found out in November.

When the Russian cargo craft docked with the ISS, the cosmonauts who opened the hatch were greeted by a wave of stink. The “toxic” smell was so bad that they immediately shut the hatch.

Ultimately, the astronauts crewing the ISS were not in danger, and after some extra air scrubbing, the hatch was opened and the supplies unloaded.

9. What I learned when I replaced my cheap Pi 5 PC with a no-name Amazon mini desktop

Two cheapo Intel mini PCs, a Raspberry Pi 5, and an Xbox controller for scale.

Credit: Andrew Cunningham

One of the fun things about working at Ars Technica is watching Andrew Cunningham stretch the limits of obsolete or inexpensive hardware and software. His attempt to use a Raspberry Pi 5 as a daily-driver desktop had mixed results, but that didn’t stop him from trying out a couple of sub-$200 PCs from Amazon.

Andrew ultimately settled on the $170 Bostgame B100 and $180 GMKtec NucBox G2. Both of them used Intel Processor N100 quad-core chips and could run Windows 11 along with some Linux distros. If you’re curious about what it’s like to use a tiny, inexpensive desktop for your daily computing needs, check out Andrew’s write-up.

8. Users ditch Glassdoor, stunned by site adding real names without consent

Complaining about your employer online is a time-honored tradition. Frustrated workers vent all over the Internet, but the hub of employee griping has historically been Glassdoor. That changed for a lot of folks when Glassdoor inexplicably decided to link real names to formerly anonymous accounts.

When Glassdoor acquired the professional networking app Fishbowl in 2021, every Glassdoor user was also signed up for a Fishbowl account. The big difference is that Fishbowl requires identity verification, so Glassdoor changed its terms of service to require the same.

“Since we require all users to have their names on their profiles, we will need to update your profile to reflect this,” a Glassdoor employee wrote to a user named Monica, reassuring her that “your anonymity will still be protected.” Monica did not trust the company’s assurances that it would go to court to “defeat requests for user information,” instead requesting that Glassdoor delete her account entirely. She wasn’t the only one.

7. What’s happening at Tesla? Here’s what experts think.

A coin with Elon Musk's face on it, being held next to a Tesla logo

Credit: Aurich Lawson | Getty Images | Beata Zawrzel

Tesla is responsible for two things: making electric vehicles a realistic option for most drivers and helping make CEO Elon Musk the world’s richest person. But after years of astronomical growth, Tesla has been on a downward slide. The Chinese market has gotten much tougher for Tesla—and everyone else—due to Chinese OEMs churning out low-cost BEVs. The company has had safety problems, and its once legendary profit margins have cratered to below the industry average.

What’s going on? Our crack automotive reporter Jonathan Gitlin talked to some experts to see if Tesla was primed for a turnaround or if its slump was indicative of more troubles to come.

6. The Starliner spacecraft has started to emit strange noises

Boeing’s Starliner spacecraft is seen docked at the International Space Station on June 13.

Credit: NASA

It all started with some weird sounds. “I’ve got a question about Starliner,” astronaut Butch Wilmore radioed down to Mission Control at Johnson Space Center in Houston in late August. “There’s a strange noise coming through the speaker… I don’t know what’s making it.”

While that space oddity turned out to be just a weird anomaly, it may have helped prepare Wilmore and fellow astronaut Suni Williams for the bad Starliner news that followed—and an extra-long stay in orbit.

5. Here’s what it’s like to charge an EV at Electrify America’s new station

A row of EVs charging at EA's flagship location in San Francisco

Credit: Roberto Baldwin

I’ve been an EV owner for five years. During that time I’ve been exposed to just about every facet of EV ownership, including charging on road trips. With the right combination of apps (shoutout to PlugShare) and planning, road trips should be problem-free. But sometimes chargers are few and far between, out of service, crowded, or just plain janky.

Out of all the charging networks—and I’ve tried almost all of them at some point—Electrify America has been the most reliable for me. Their new flagship charging station is a far cry from their outposts typically located at the far end of a giant parking lot connected to a Walmart or Meijer. Instead of aimlessly wandering the aisles of a big-box retailer, drivers can chill in a well-appointed and secure space while their cars are topped off with electrons.

Want to increase EV adoption? Get more of these working, secure, and well-lit stations up and running ASAP.

4. Dell said return to the office or else—nearly half of workers chose “or else”

Signage outside Dell Technologies headquarters in Round Rock, Texas, US, on Monday, Feb. 6, 2023.

Ars has been all about the remote workforce since our launch in 1998. Once the COVID-19 pandemic hit, remote work became a thing for millions of workers. Some companies have adapted nicely to this new reality, realizing that their employees could do their jobs just as well from the comfort of their homes while pocketing some savings from a reduced office footprint.

Others have been less sanguine about remote work. Some have tried luring workers back to the office with perks, while others—like Dell—have been more coercive in their approach. The PC manufacturer told employees who stayed remote that they would be giving up promotions and the chance to change roles within the company. Internal tracking data showed that almost half of Dell’s workforce simply shrugged and stayed remote, consequences or not.

3. What I learned from using a Raspberry Pi 5 as my main computer for two weeks

The Raspberry Pi 5 inside its official case.

Credit: Andrew Cunningham

We read about Andrew’s experience with a pair of sub-$200 desktop PCs, but this story is what started it all. The spec sheet looked promising enough, with support for two 4K displays running at 60 Hz and space for an internal PCIe SSD, but the experience was not what he’d hoped.

Andrew’s time using the Raspberry Pi 5 as his daily driver started out disappointing, but once he reset his expectations, he ended up pleasantly surprised by the experience.

If you’re looking for the cheapest mini desktop PC possible, you’ll want to look elsewhere, but if you want to see how far along Arm Linux has come, read Andrew’s article.

2. What happens when an astronaut in orbit says he’s not coming back?

The STS-51-B mission begins with the liftoff of the Challenger from Pad 39A in April 1985.

Credit: NASA

Being strapped into a small capsule and thundered into space aboard a giant rocket has to be an incredibly stressful experience. But sometimes the stress doesn’t end with a successful launch. We don’t often get to peer behind the curtain and glimpse the mental state of an astronaut, so when we do, it’s jarring.

“Hey, if you guys don’t give me a chance to repair my instrument, I’m not going back,” said astronaut Taylor Wang during a Space Shuttle mission in 1985. The first Chinese-born person in space, Wang was heading up an experiment on the behavior of liquid droplets in microgravity. When it didn’t work at the outset, Wang asked permission to troubleshoot it and make repairs. When Mission Control denied his request, he uttered that chilling sentence.

1. The surprise is not that Boeing lost commercial crew but that it finished at all

Boeing’s Starliner spacecraft is lifted to be placed atop an Atlas V rocket for its first crewed launch.

Credit: United Launch Alliance

Not only has there been a lot of Boeing on this top 20 list, there has been a lot of Boeing in the news all year. And most of that news has been bad.

Eric Berger dives deep into the development of Starliner, outlining the problems and setbacks that plagued the program and trying to answer the big question: How did a company like Boeing, which had been at the acme of crewed spaceflight for decades, fall so far behind competition that didn’t even exist 20 years ago?


Thank you for making Ars a daily read during 2024. May you and those you love have a happy and safe holiday season.


Eric Bangeman is the Managing Editor of Ars Technica. In addition to overseeing the daily operations at Ars, Eric also manages story development for the Policy and Automotive sections. He lives in the northwest suburbs of Chicago, where he enjoys cycling, playing the bass, and refereeing rugby.

The 20 most-read stories of 2024 on Ars Technica