Google Pixel 10a review: The sidegrade


Meet the new boss, same as the old boss.

Pixel 10a in hand, back side

The camera now sits flush with the back panel. Credit: Ryan Whitwam

Google’s budget Pixels have long been a top recommendation for anyone who needs a phone with a good camera and doesn’t want to pay flagship prices. This year, Google’s A-series Pixel doesn’t see many changes, and the formula certainly isn’t different. The Pixel 10a isn’t so much a downgraded version of the Pixel 10 as it is a refresh of the Pixel 9a. In fact, it’s hardly deserving of a new name. The new Pixel gets a couple of minor screen upgrades, a flat camera bump, and boosted charging. But the hardware hasn’t evolved beyond that—there’s no PixelSnap and no camera upgrade, and it runs last year’s Tensor processor.

Even so, it’s still a pretty good phone. Anything with storage and RAM is getting more expensive in 2026, but Google has managed to keep the Pixel 10a at $500, the same price as the last few phones. It’s probably still the best $500 you can spend on an Android phone, but if you can pick up a Pixel 9a for even a few bucks cheaper, you should do that instead.

If it ain’t broke…

The phone’s silhouette doesn’t shake things up. It’s a glass slab with a flat metal frame. The display and the plastic back both sit inside the aluminum surround to give the phone good rigidity. The buttons, which are positioned on the right edge of the frame, are large, flat, and sturdy. On the opposite side is the SIM card slot—Google has thankfully kept this feature after dropping it on the flagship Pixel 10 family, but it has moved from the bottom edge. The bottom looks a bit cleaner now, with matching cut-outs housing the speaker and microphone.

Pixel 10a in hand

The Pixel 10a is what passes for a small phone now. Credit: Ryan Whitwam

Traditionally, Google’s A-series Pixels have used the same Tensor chip as the matching flagship generation. So last year’s Pixel 9a had the Tensor G4, just like the Pixel 9 and 9 Pro. The Pixel 10a breaks with tradition by staying on the G4 while the flagship Pixels advanced to the Tensor G5.

Specs at a glance: Google Pixel 9a vs. Pixel 10a
Phone | Pixel 9a | Pixel 10a
SoC | Google Tensor G4 | Google Tensor G4
Memory | 8GB | 8GB
Storage | 128GB, 256GB | 128GB, 256GB
Display | 1080×2424 6.3″ pOLED, 60–120 Hz, Gorilla Glass 3, 2,700 nits (peak) | 1080×2424 6.3″ pOLED, 60–120 Hz, Gorilla Glass 7i, 3,000 nits (peak)
Cameras | 48 MP primary, f/1.7, OIS; 13 MP ultrawide, f/2.2; 13 MP selfie, f/2.2 | 48 MP primary, f/1.7, OIS; 13 MP ultrawide, f/2.2; 13 MP selfie, f/2.2
Software | Android 15 (at launch), 7 years of OS updates | Android 16, 7 years of OS updates
Battery | 5,100 mAh, 23 W wired charging, 7.5 W wireless charging | 5,100 mAh, 30 W wired charging, 10 W wireless charging
Connectivity | Wi-Fi 6e, NFC, Bluetooth 5.3, sub-6 GHz 5G, USB-C 3.2 | Wi-Fi 6e, NFC, Bluetooth 6.0, sub-6 GHz 5G, USB-C 3.2
Measurements | 154.7×73.3×8.9 mm; 185 g | 153.9×73×9 mm; 183 g

Google’s custom Arm chips aren’t the fastest you can get, and the improvement from G4 to G5 wasn’t dramatic. The latest version is marginally faster and more efficient in CPU and GPU compute, but the NPU saw a big boost in AI throughput. So the upgrade to Tensor G5 is not a must-have (unless you love mobile AI), but the Pixel 10a doesn’t offer the same value proposition that the 9a did. Most of the other specs remain the same for 2026 as well. The base storage and RAM are still 128GB and 8GB, respectively, and it’s IP68 rated for water and dust exposure.

Camera bump comparison

The Pixel 10a (left) has a flat camera module, but the Pixel 9a camera sticks out a bit. Credit: Ryan Whitwam

This is what passes for a small phone these days. The device fits snugly in one hand, and its generously rounded corners make it pretty cozy. You can reach a large swath of the screen with one hand, and the device isn’t too heavy at 183 grams. The Pixel 10 is about the same size, but it’s much heavier at 204 g.

At 6.3 inches, the OLED screen offers the same viewable area as the 9a. However, Google says the bezels are a fraction of a millimeter slimmer. More importantly, the display has moved from the aging Gorilla Glass 3 to Gorilla Glass 7i. That’s a welcome upgrade that could help this piece of hardware live up to its lengthy software support. Google also boosted peak brightness by 11 percent to 3,000 nits. That’s the same as in the Pixel 10, but the difference won’t be obvious unless you’re looking at the 9a and 10a side by side under strong sunlight.

Pixel 10a and keyboard glamor shot

Google isn’t rocking the boat with the Pixel 10a. Credit: Ryan Whitwam

There’s an optical fingerprint scanner under the screen, which will illuminate a dark room more than you would expect. The premium Pixels have ultrasonic sensors these days, which are generally faster and more accurate. The sensor on the 10a is certainly good enough given the price tag, and with Google increasingly looking to separate the A-series from the flagships, we wouldn’t expect anything more.

The new camera module is the only major visual alteration this cycle. The sensors inside haven’t changed, but Google did manage to fully eliminate the bump. The rear cameras on this phone are now flush with the surface, a welcome departure from virtually every other smartphone. The Pixel 10a sits flat on a table and won’t rock side to side if you tap the screen. The cameras on the 9a didn’t stick out much, but shaving a few millimeters off is still an accomplishment, and the generous battery capacity has been preserved.

The Tensor tension

Google will be the first to tell you that it doesn’t tune Tensor chips to kill benchmarks. That said, the Tensor G5 did demonstrate modest double-digit improvements in our testing. You don’t get that with the Pixel 10a and its year-old Tensor G4, but the performance isn’t bad at all for a $500 phone.

Pixel phones, including this one, are generally very pleasant to use. Animations are smooth and not overly elaborate, and apps open quickly. Benchmarks can still help you understand where a device falls in the grand scheme of things, so here are some comparisons.

Google builds phones with the intention of supporting them for the long haul, but how will that work when the hardware is leveling off? Tensor might not be as fast as Qualcomm’s Snapdragon chips, but the architecture is much more capable than what you’d find in your average budget phone, and Google’s control of the chipset ensures it can push updates as long as it wants.

Meanwhile, 8 gigabytes of RAM might be a little skimpy in seven years, but you’re not going to see generous RAM allotments in budget phones this year—not while AI data centers are gobbling up every scrap of memory. Right now, though, the Pixel 10a keeps apps in memory well enough, and it’s not running as many AI models in the background as the flagship Pixels.

The one place you may feel the Pixel 10a lagging is in games. None of the Tensor chips are particularly good at rendering complex in-game worlds, but that’s more galling for phones that cost $1,000. A $500 Pixel 10a that’s mediocre at gaming doesn’t sting as much, and it’s really not that bad unless you insist on playing titles like Call of Duty Mobile or Genshin Impact.

You don’t buy a Pixel because it will blow the doors off every game and benchmark app—you buy it because it’s fast enough that you don’t have to think about the system-on-a-chip inside. That’s the Pixel 10a with Tensor G4.

Pixel 10a from edge in hand

The Pixel 10a is fairly thin, but it has a respectable 5,100 mAh battery inside. Credit: Ryan Whitwam

The new Pixel A phone again has a respectable 5,100 mAh battery. That’s larger than every other Pixel, save for the 10 Pro XL (5,200 mAh). It’s possible to get two solid days of usage from this phone between charges, and it’s a bit speedier when you do have to plug in. Google upgraded the wired charging from 23 W in the 9a to 30 W for the 10a. Wireless charging has been increased from 7.5 W to 10 W with a compatible Qi charger. However, there are no PixelSnap magnets inside the phone, which seems a bit arbitrary—this could be another way to make the $800 Pixel 10 look like a better upgrade. We’re just annoyed that Google’s new magnetic charger doesn’t work very well with the 10a.

Some AI, lots of updates

Phones these days come with a lot of bloatware—partner apps, free-to-play games, sports tie-ins, and more. You don’t have to deal with any of that on a Pixel. There’s only one kind of bloat out of the box, and that’s Google’s. If you plan to use Google apps and services on the Google phone, you don’t have to do much customization to make the Pixel 10a tolerable. It’s a clean, completely Googley experience.

Naturally, Google’s take on Android has the most robust implementation of Material 3 Expressive, which uses wallpaper colors to theme system elements and supported apps. It looks nice and modern, and we prefer it over Apple’s Liquid Glass. The recent addition of AI-assisted icon theming also means your Pixel home screen will finally be thematically consistent.

Pixel 10a on leather background

Material 3 Expressive looks nice on Google’s phones. Credit: Ryan Whitwam

There’s much more AI on board, but it’s not the full suite of Google generative tools. As with last year’s budget Pixel, you’re missing things like Pixel Screenshots, weather summaries, and Pixel Studio—Google reserves those for the flagship phones with their more powerful Gemini Nano models. You will get Google’s AI-powered anti-spam tools, plenty of Gemini integrations, and most of the phone features, like Call Screen. If you’re not keen on Google AI, this may actually be a selling point.

One of the main reasons to buy a Pixel is the support. Pixels are guaranteed a lengthy seven years of update support, covering both monthly security patches and OS updates. You can expect the Pixel 10a to get updates through 2033.

Samsung is the only other Android device maker that offers seven years of support, but it tends to be slower in updating phones after their first year. Pixel phones get immediate updates to new security patches and even new versions of Android. If you buy anything else that isn’t an iPhone, you’ll be looking at much less support and much more waiting.

Google also consistently delivers new features via the quarterly Pixel Drops, and while a lot of that is AI, there are some useful tools and security features, too. Google doesn’t promise all phones will get the same attention in Pixel Drops, but you should see new additions for at least a few years.

Pixel camera on a budget

Google isn’t pushing the envelope with the Pixel 10a, and in some ways, the camera experience is why it can get away with that. There’s no other $500 phone with a comparable camera experience, and that’s not because the Pixel 10a is light-years ahead in hardware. The phone has fairly modest sensors in that new, flatter module, but Google’s image processing is just that good.

Pixel 10a camera

The Pixel camera experience is a big selling point. Credit: Ryan Whitwam

In 2026, Google’s budget Pixel still sports a 48 MP primary wide-angle camera, paired with a 13 MP ultrawide. There is no telephoto lens on the back, and the front-facing selfie shooter is also 13 MP. Of these cameras, only the primary lens has optical stabilization. Photos taken with all the cameras are sharp, with bright colors and consistent lighting.

Google’s image processing does a superb job of bringing out details in bright and dim areas of a frame, and Night Sight is great for situations where there just isn’t enough light for other phones to take a good photo. In middling light, the Pixel 10a maintains fast enough shutter speeds to capture movement, something both Samsung and Apple often struggle with.

Outdoor overcast. Ryan Whitwam

Pixel phones don’t have as many camera settings as a Samsung or OnePlus phone does—in fact, the 10a doesn’t even get as many manual controls as the flagship Pixels—but they’re great at quick snapshots. Within a couple of seconds, you can pop open the Pixel camera and shoot a photo that’s detailed and well-exposed without waiting on autofocus or fiddling with settings. So you’ll capture more moments with a Pixel than with other phones, which might not nail the focus or lighting even if you take a whole batch of photos with different settings.

Without a telephoto lens option, you won’t be able to push the Pixel 10a with extreme zoom levels like the more expensive Pixel 10 phones. You’re limited to 8x zoom, and things get quite blurry beyond 3-4x. Google’s image processing should be able to clean up a 2x crop well enough, but the image will look a bit artificial and over-sharpened if you look closely.

Video can be a weak point for Google. Samsung and Apple phones offer more options, and the quality of Google’s video isn’t strong enough to make up for it. The footage looks fine, but the stabilization isn’t perfect, and 4K60 can sometimes hiccup. It’s more what we’d expect from a $500 phone, whereas the 10a punches above its weight in still photography.

Running unopposed

It’s easy to be disappointed in the Pixel 10a when you look at the spec sheet. The hardware has barely evolved beyond last year’s phone, and it even has the same processor inside. This is a departure for Google, but it’s also expected given the state of the smartphone market. These are mature products, and support has gotten strong enough that you can use them for years without an upgrade. Smartphones are really becoming more like appliances than gadgets.

Pixel 10a vs. Pixel 10

The Pixel 10 has a much larger camera module to accommodate a third sensor. Credit: Ryan Whitwam

Google’s Pixel line has finally started to gain traction as smaller OEMs continue to drop out and scale back their plans in North America. Google is not alone in the mid-range—Samsung and Motorola still make a variety of Android phones in this price range, but they tend to make more compromises than the Pixel does.

The latest Google Pixel is only marginally better than the last model, featuring the same Tensor G4 processor, 8GB of RAM, and dual-camera setup. The body has modest upgrades, including a flat camera module and a slightly brighter, stronger display. We’d all like more exciting phone releases, but Google has realized it doesn’t need to be flashy to dominate the mid-range.

Pixel 10a, Pixel 10, and Pixel 10 Pro XL

The Pixel 10a (left), Pixel 10 (middle), and Pixel 10 Pro XL (right). Credit: Ryan Whitwam

Even with a less-than-impressive 2026 upgrade, Google’s A-series Pixel remains a good value, just like its predecessor. The Pixel 9a was already much better than the competition, and the 10a is slightly better than that. With no real competition to speak of, Google’s new Pixel is still worth buying.

Of course, the very similar Pixel 9a remains a good purchase, too. Google continues to sell that phone at the same price. In fact, that’s true of the Pixel 8a in Google’s store, too. So you can have your choice of the new phone, the old phone, or an even older phone for the same $500. Google is clearly not concerned with clearing old stock. We expect to see at least occasional deals on last year’s Pixel. If you can get that phone even a little cheaper than the 10a, that’s a good idea. Otherwise, get used to spending $500 on Google’s mid-range appliance.

The good

  • Great camera experience
  • Long battery life
  • Good version of Android with generous update guarantee
  • Lighter and more compact than flagship phones

The bad

  • Barely an upgrade from Pixel 9a
  • Gaming performance is iffy

Photo of Ryan Whitwam

Ryan Whitwam is a senior technology reporter at Ars Technica, covering the ways Google, AI, and mobile technology continue to change the world. Over his 20-year career, he’s written for Android Police, ExtremeTech, Wirecutter, NY Times, and more. He has reviewed more phones than most people will ever own. You can follow him on Bluesky, where you will see photos of his dozens of mechanical keyboards.

Trump FCC’s equal-time crackdown doesn’t apply equally—or at all—to talk radio


FCC Chairman Brendan Carr’s unequal enforcement of the equal-time rule.

James Talarico and Stephen Colbert on the set of The Late Show with Stephen Colbert. Credit: Getty Images

In the Trump FCC’s latest series of attacks on TV broadcasters, Federal Communications Commission Chairman Brendan Carr has been threatening to enforce the equal-time rule on daytime and late-night talk shows. The interview portions of talk shows have historically been exempt from equal-time regulations, but Carr has a habit of interpreting FCC rules in novel ways to target networks disfavored by President Trump.

Critics of Carr point out that his threats of equal-time enforcement apply unequally since he hasn’t directed them at talk radio, which is predominantly conservative. Given the similarities between interviews on TV and radio shows, Carr has been asked to explain why he issued an equal-time enforcement warning to TV but not radio broadcasters.

Carr’s responses to the talk radio questions have been vague, even as he tangled with Late Show host Stephen Colbert and launched an investigation into ABC’s The View over its interview with Texas Democratic Senate candidate James Talarico. In a press conference after the FCC’s February 18 meeting, Deadline reporter Ted Johnson asked Carr why he has not expressed “the same concern about broadcast talk radio as broadcast TV talk shows.”

The Deadline reporter pointed out that “Sean Hannity’s show featured Ken Paxton in December.” Paxton, the Texas attorney general, is running for a US Senate seat in this year’s election. Carr claimed in response that TV broadcasters have been “misreading” FCC precedents while talk radio shows have not been.

“It appeared that programmers were either overreading or misreading some of the case law on the equal-time rule as it applies to broadcast TV,” Carr replied. “We haven’t seen the same issues on the radio side, but the equal-time rule is going to apply to broadcast across the board, and we’ll take a look at anything that arises at the end of the day.”

Carr’s radio claim “a bunch of nonsense”

Carr didn’t provide any specifics to support his claim that radio programmers have interpreted precedents correctly while TV programmers have not. The most obvious explanation for the disparate treatment is that Carr isn’t targeting conservative talk radio because he’s primarily interested in stifling critics of Trump. Carr has consistently used his authority to fight Trump’s battles against the media, particularly TV broadcasters, and backed Trump’s declaration that historically independent agencies like the FCC are no longer independent from the White House.

Carr’s claim that TV but not radio broadcasters have misread FCC precedents is “a bunch of nonsense,” said Gigi Sohn, a longtime lawyer and consumer advocate who served as counselor to then-FCC Chairman Tom Wheeler during the Obama era. Carr “was responding to criticism from people like Sean Hannity that the guidance would apply to conservative talk radio just as much as it would to so-called ‘liberal’ TV,” Sohn told Ars. “It doesn’t matter whether a broadcaster is a radio broadcaster or a TV broadcaster, the Equal Opportunities law and however the FCC implements it must apply to both equally.”

Sean Hannity during a Fox News Channel program on October 30, 2025. Credit: Getty Images | Bloomberg

Hannity, who hosts a Fox News show and a nationally syndicated radio show, pushed back against content regulation shortly after Carr’s FCC issued the equal-time warning to TV broadcasters in January. “Talk radio is successful because people are smart and understand we are the antidote to corrupt and abusively biased left wing legacy media,” Hannity said in a statement to the Los Angeles Times. “We need less government regulation and more freedom. Let the American people decide where to get their information from without any government interference.”

Carr’s claim of misreadings relates to the bona fide news exceptions to the equal-time rule, which is codified under US law as the Equal Opportunities Requirement. The rule requires that when a station gives airtime to one political candidate, it must provide comparable time and placement to opposing candidates who request it.

But when a political candidate appears on a bona fide newscast or bona fide news interview, a broadcaster does not have to make equal time available to opposing candidates. The exception also applies to news documentaries and on-the-spot coverage of news events.

Equal time didn’t apply to Jay Leno or Howard Stern

In the decades before Trump appointed Carr to the FCC chairmanship, the commission consistently applied bona fide exemptions to talk shows that interview political candidates. Phil Donahue’s show won a notable exemption in 1984, and over the ensuing 22 years, the FCC exempted shows hosted by Sally Jessy Raphael, Jerry Springer, Bill Maher, and Jay Leno. On the radio side, Howard Stern won a bona fide news exemption in 2003.

Despite the seemingly well-settled precedents, the FCC’s Media Bureau said in a January 21 public notice that the agency’s previous decisions do not “mean that the interview portion of all arguably similar entertainment programs—whether late night or daytime—are exempted from the section 315 equal opportunities requirement under a bona fide news exemption… these decisions are fact-specific and the exemptions are limited to the program that was the subject of the request.”

The Carr FCC warned that a program “motivated by partisan purposes… would not be entitled to an exemption under longstanding FCC precedent.” But if late-night show hosts are “motivated by partisan purposes,” what about conservative talk radio hosts? Back in 2017, Hannity described himself as “an advocacy journalist.” In previous years, he said he’s not a journalist at all.

“Remember when Sean Hannity used to claim he wasn’t a journalist, then claimed to be an ‘advocacy journalist’?” Harold Feld, a longtime telecom lawyer and senior VP of advocacy group Public Knowledge, told Ars. “Given that the Media Bureau guidance leans heavily into the question of whether the motivation is ‘for partisan purposes’ or ‘designed for the specific advantage of a candidate,’ it would seem that conservative talk radio is rather explicitly a problem under this guidance.”

“To put it bluntly, Carr’s explanation that shows that Trump has expressly disliked are ‘misreading’ the law, while conservative radio shows are not, strains credulity,” Feld said.

Conservative radio boomed after FCC ditched Fairness Doctrine

Conservative talk radio benefited from the FCC’s long-term shift away from regulating TV and radio content. A major change came in 1987 when the FCC decided to stop enforcing the Fairness Doctrine, a decision that helped fuel the late Rush Limbaugh’s success.

FCC regulation of broadcast content through the Fairness Doctrine had been upheld in 1969 by the Supreme Court in the Red Lion Broadcasting decision, which said broadcasters had special obligations because of the scarcity of radio frequencies. But the Reagan-era FCC decided 18 years later that the scarcity rationale “no longer justifies a different standard of First Amendment review for the electronic press” in “the vastly transformed, diverse market that exists today.” The FCC made that decision after an appeals court ruled that the FCC acted arbitrarily and capriciously in its enforcement of the doctrine against a TV station.

Even where the FCC didn’t eliminate content-based rules, it reduced enforcement. But after decades of the FCC scaling back enforcement of content-based regulations, Donald Trump was elected president.

Trump’s first FCC chair, Ajit Pai, rejected Trump’s demands to revoke station licenses over content that Trump claimed was biased against him. Pai and his successor, Biden-era FCC Chairwoman Jessica Rosenworcel, agreed that the First Amendment prohibits the FCC from revoking station licenses simply because the president doesn’t like a network’s news content.

After winning a second term, Trump promoted Carr to the chairmanship. Carr, an unabashed admirer of Trump, has said in interviews that “President Trump is fundamentally reshaping the media landscape” and that “President Trump ran directly at the legacy mainstream media, and he smashed a facade that they’re the gatekeepers of truth.” Carr describes Trump as “the political colossus of modern times.”

President-elect Donald Trump speaks to Brendan Carr, his intended pick for Chairman of the Federal Communications Commission, as he attends a SpaceX Starship rocket launch on November 19, 2024 in Brownsville, Texas. Credit: Getty Images | Brandon Bell

Carr has led the charge in Trump’s war against the media by repeatedly threatening to revoke licenses under the FCC’s rarely enforced news distortion policy. Carr’s aggressive stance, particularly in his attacks on ABC’s Jimmy Kimmel, even alarmed prominent Republicans such as Sens. Rand Paul (R-Ky.) and Ted Cruz (R-Texas). Cruz said that trying to dictate what the media can say during Trump’s presidency will come back to haunt Republicans in future Democratic administrations.

With both the news distortion policy and equal-time rule, Carr hasn’t formally imposed any punishment. But his threats have an effect. Kimmel was temporarily suspended, CBS owner Paramount agreed to install what Carr called a “bias monitor” in exchange for a merger approval, and Texas-based ABC affiliates have filed equal-time notices with the FCC as a result of Carr’s threats against The View.

Colbert said on his show that CBS forbade him from interviewing Talarico because of Carr’s equal-time threats. CBS denied prohibiting the interview but acknowledged giving Colbert “legal guidance,” and Carr claimed that Colbert lied about the incident.

Colbert did not put his interview with Talarico on his broadcast show but released it on YouTube, where it racked up nearly 9 million views. “Only a handful of people would’ve seen it if it had run live,” Christopher Terry, a professor of media law and ethics at the University of Minnesota, told Ars. “But what is it up to, 8 million views on YouTube now? It’s like the biggest thing, everybody in the world’s talking about it now. CBS gave Talarico the best press they ever could have by not letting him on the air… Oldest lesson in the First Amendment handbook, the more you try to suppress speech, the more powerful you make it.”

FCC misread its own rules, Feld says

Feld said the Carr FCC’s public notice “misreads the law and ignores inconvenient precedent.” The notice describes the equal-time rule as a public-interest obligation for broadcasters that have licenses to use spectrum, and Carr has repeatedly said the rule is only for licensed broadcasters. But Feld said the rule also applies to cable channels, which are referred to as community antenna television systems in the Equal Opportunities law as written by Congress.

Moreover, Feld said the FCC guidance “conflates two separate statutory exemptions,” the bona fide newscast exemption and the bona fide news interview exemption. FCC precedents didn’t find that Howard Stern and Jerry Springer were doing newscasts but that their interviews “met the criteria for a bona fide news interview,” Feld said. Despite that, the Carr FCC’s “guidance appears to require that Late Night Shows must be news shows, not merely host an interview segment,” he said.

The FCC guidance describes the Jay Leno decision as an outlier that was “contrary” to a 1960 decision involving Jack Paar and “the first time that such a finding had been applied to a late night talk show, which is primarily an entertainment offering.”

Feld pointed out that Politically Incorrect with Bill Maher was the first late-night show to receive the exemption in 1999, seven years before Leno. Maher’s show was on ABC at the time. The FCC guidance also “fails to explain any meaningful difference” between late-night shows and afternoon shows like Jerry Springer’s, Feld said.

Carr may label TV hosts as “partisan political actors”

At the February 18 press conference, Johnson asked Carr to explain how the FCC is “assessing whether a candidate appearance on a talk show is motivated by partisan purposes.” The reporter asked if there were specific criteria, like a talk show host giving money to a political candidate or hosting a fundraiser.

“Yeah it’s possible, all of that could be relevant,” Carr said. Whether a program is “animated by a partisan political motivation” can be determined “through discovery,” and “people can come forward with their own showings in a petition for a declaratory ruling, but this is something that will be explored,” Carr said. “It’s part of the FCC’s case law, and the idea is that if you’re a partisan political actor under the case law, then you’re likely not going to qualify under the bona fide news exception. That’s OK, it just means you have to either provide equal airtime to the different candidates or there’s different ways you can get your message out through streaming services and other means for which the equal-time rule doesn’t apply.”

In a follow-up question, Johnson asked, “A partisan political actor would mean a talk show host or someone whose show it is?” Carr replied, “It could be that, yeah, it could be that.”

Carr confirmed reports that the FCC is investigating The View over the show’s interview with Talarico. “Yes, the FCC has an enforcement action underway on that and we’re taking a look at it,” Carr said at the press conference.

We contacted Carr’s office to ask for specifics about how TV programmers have allegedly misread the FCC’s equal-time precedents. We also asked whether the FCC is concerned that talk radio shows may be misreading the Howard Stern precedent or other rulings related to radio. We have not received a response.

Carr targeted SNL on Trump’s behalf

Carr hasn’t been truthful in his statements about the equal-time rule, Terry said. “Carr is just an obnoxious figure who needs attention, and remember he absolutely lied about the NBC/Kamala Harris equal-time thing,” Terry said. Terry was referring to Carr’s November 2024 allegation that when NBC put Kamala Harris on Saturday Night Live before the election, it was “a clear and blatant effort to evade the FCC’s Equal Time rule.”

In fact, NBC gave Trump free airtime during a NASCAR telecast and an NFL post-game show and filed an equal-time notice with the FCC to comply with the rule. Terry filed a Freedom of Information Act request for emails that showed Carr discussing NBC’s equal-time notice on November 3, 2024, but Carr reiterated his allegation over a month later despite being aware of the steps NBC took to comply with the rule.

Terry said Carr has taken a similarly dishonest approach with his claim that talk shows don’t qualify for the equal-time exception. “I think it’s like a lot of things Carr says. Just because he says it doesn’t mean it’s true, right? It’s nonsense,” Terry told Ars. “Every precedent suggests that a show like The View or one of the talk shows at night is an interview-based talk show, and that’s what the bona fide news exception was designed to cover.”

Terry said applying Carr’s “partisan purposes” test would likely require “a complete rulemaking proceeding” and would be difficult now that the Supreme Court has limited the authority of federal agencies to interpret ambiguities in US law. But it’s up to broadcasters to stand up to Carr, he said.

“If one broadcaster was like, ‘Oh yeah? Make us,’ he’d lose in court. He would. The precedent is absolutely against this,” Terry said.

Because the bona fide exemptions apply so broadly to TV and radio programs, the equal-time rule has applied primarily to advertising access for the past few decades, Terry said. If a station sells advertising to one candidate, “you have to make equal opportunities available to their opponents at the same price that reaches the same functional amount of audience,” he said.

Terry said he thinks NBC could make a good argument that Saturday Night Live is exempt, but the network has decided that it’s “easier just to provide time” to opposing candidates. Terry, a former radio producer, said, “I worked in talk radio for over 20 years. We never once even thought about equal time outside of advertising.”

Howard Stern precedent ignored

Howard Stern talking in a studio and gesturing with his hands during his radio show.

Howard Stern debuts his show on Sirius Satellite Radio on January 9, 2006, at the network’s studios at Rockefeller Center in New York City. Credit: Getty Images

Feld said the Carr FCC’s guidance “says the exact opposite” of what the FCC’s 2003 ruling on Howard Stern stated “with regard to how this process is supposed to work. The Howard Stern decision expressly states that licensees don’t need to seek permission first.”

The 2003 FCC’s Stern ruling said, “Although we take this action in response to [broadcaster] Infinity’s request, we emphasize that licensees airing programs that meet the statutory news exemption, as clarified in our case law, need not seek formal declaration from the Commission that such programs qualify as news exempt programming under Section 315(a).”

By contrast, the Carr FCC encouraged TV programs and stations “to promptly file a petition for declaratory ruling” if they want “formal assurance” that they are exempt from the equal-time rule. “Importantly, the FCC has not been presented with any evidence that the interview portion of any late night or daytime television talk show program on air presently would qualify for the bona fide news exemption,” the notice said.

The Lerman Senter law firm said that before the Carr FCC issued its public notice, broadcasters that met the criteria for the bona fide news interview exemption generally did not seek an FCC ruling. Because of the public notice, “stations can no longer rely on FCC precedent as to applicability of the bona fide news interview exemption,” the law firm said. “Only by obtaining a declaratory ruling, in advance, from the FCC can a station be assured that it will not face regulatory action for interviewing a candidate without providing equal opportunities to opposing candidates.”

This is “quite a switch,” Feld said. If this is the new standard, “then conservative talk radio hosts should also be required to affirmatively seek declaratory rulings,” he said.

FCC is “licensing speech”

Berin Szóka, president of think tank TechFreedom, told Ars that “the FCC is effectively creating a system of prior restraints, that is, licensing speech. This is the greatest of all First Amendment problems. What’s worse, the FCC is doing this selectively, discriminating on the basis of speakers.”

TechFreedom has argued that the FCC should repeal the news distortion policy that Carr has embraced, and Szóka is firmly against Carr on equal-time enforcement as well. As Szóka noted, the Supreme Court has made clear that “laws favoring some speakers over others demand strict scrutiny when the legislature’s speaker preference reflects a content preference.”

“That’s exactly what’s happening here,” Szóka said. “Carr is imposing a de facto requirement that TV broadcasters, but not radio broadcasters, must file for prior assessment as to their ‘news’ bona fides.” Ultimately, it means that TV broadcasters “can no longer have political candidates on their shows without offering equal time to all candidates in that race unless they seek prior pre-clearance from the FCC as to whether they qualify as providing bona fide news,” he said.

Carr’s enforcement push was applauded by Daniel Suhr, president of the Center for American Rights, a group that has supported Trump’s claims of media bias. The group filed bias complaints against CBS, ABC, and NBC stations that were dismissed during the Biden era, but those complaints were revived by Carr in January 2025.

“This major announcement from the FCC should stop one-sided left-wing entertainment shows masquerading as ‘bona fide news,’” Suhr wrote on January 21. “The abuse of the airwaves by ABC & NBC as DNC-TV must end. FCC is restoring respect for the equal time rules enacted by Congress.”

Suhr later argued in the Yale Journal on Regulation that Carr’s approach is consistent with FCC rulings from 1960 to 1980, before the commission started exempting the interview portions of talk shows.

“From 1984 to 2006, conversely, the Commission took a broader view that included less traditional shows,” Suhr wrote. “The Commission suggested a more traditional view in 2008, and again in 2015, each time qualifying a show because it ‘reports news of some area of current events, in a manner similar to more traditional newscasts.’”

But both decisions mentioned by Suhr granted bona fide exemptions and did not upend the precedents that broadcasters continued to rely on until Carr’s public notice. Suhr also argued that the Carr approach is supported by the Supreme Court’s 1969 decision upholding the Fairness Doctrine, although the Reagan-era FCC decided that the court’s 1969 rationale about scarcity of the airwaves could no longer be justified in the modern media market.

Don’t like a show? Change the channel

With the FCC having a 2-1 Republican majority, Democratic Commissioner Anna Gomez has been the only member pushing back against Carr. Gomez has also urged big media companies to assert their rights under the First Amendment and reject Carr’s threats.

When asked about Carr threatening TV broadcasters but not radio ones, Gomez told Ars in a statement that “the FCC’s equal-time rules apply equally to television and radio broadcasters. The Communications Act does not vary by platform, and it does not vary by politics. Our responsibility is to apply the law consistently, grounded in statute and precedent, not based on who supports or challenges those in power.”

FCC enforcement in the Trump administration has been “driven by politics rather than principle,” with decisions “shaped by whether a broadcaster is perceived as a critic of this administration,” Gomez said. “That is not how an independent agency operates. The FCC is not in the business of policing media bias, and it is wholly inappropriate to wield its authority selectively for political ends. When enforcement is targeted in this way, it damages the commission’s credibility, undermines confidence that the law is being applied fairly and impartially, and violates the First Amendment.”

Gomez addressed the disparity in enforcement during her press conference after the recent FCC meeting, saying the rules should be applied equally to TV and radio. She also pointed out that viewers and listeners can easily find different programs if one doesn’t suit their tastes.

“There’s plenty of content on radio I’m not particularly fond of, but that’s why I don’t listen to it,” Gomez said. “I have plenty of other outlets I can go to.”

Photo of Jon Brodkin

Jon is a Senior IT Reporter for Ars Technica. He covers the telecom industry, Federal Communications Commission rulemakings, broadband consumer affairs, court cases, and government regulation of the tech industry.

New AirSnitch attack breaks Wi-Fi encryption in homes, offices, and enterprises


CLOWNS TO THE LEFT, JOKERS TO THE RIGHT

That guest network you set up for your neighbors may not be as secure as you think.

Illustration of a symbol representing radio waves for Wi-Fi networks

Credit: Getty Images | BlackJack3D

It’s hard to overstate the role that Wi-Fi plays in virtually every facet of life. The organization that shepherds the wireless protocol says that more than 48 billion Wi-Fi-enabled devices have shipped since it debuted in the late 1990s. One estimate pegs the number of individual users at 6 billion, roughly 70 percent of the world’s population.

Despite the dependence and the immeasurable amount of sensitive data flowing through Wi-Fi transmissions, the history of the protocol has been littered with security landmines stemming both from the inherited confidentiality weaknesses of its networking predecessor, Ethernet (it was once possible for anyone on a network to read and modify the traffic sent to anyone else), and the ability for anyone nearby to receive the radio signals Wi-Fi relies on.

Ghost in the machine

In the early days, public Wi-Fi networks often resembled the Wild West, where ARP spoofing attacks that allowed renegade users to read other users’ traffic were common. The solution was to build cryptographic protections that prevented nearby parties—whether an authorized user on the network or someone near the AP (access point)—from reading or tampering with the traffic of any other user.

New research shows that behaviors occurring at the very lowest levels of the network stack make encryption—in any form, not just the forms that have been broken in the past—incapable of providing client isolation, an encryption-enabled protection promised by all router makers that is intended to block direct communication between two or more connected clients.

The isolation can effectively be nullified through AirSnitch, the name the researchers gave to a series of attacks that capitalize on the newly discovered weaknesses. Various forms of AirSnitch work across a broad range of routers, including those from Netgear, D-Link, Ubiquiti, Cisco, and those running DD-WRT and OpenWrt.

AirSnitch “breaks worldwide Wi-Fi encryption, and it might have the potential to enable advanced cyberattacks,” Xin’an Zhou, the lead author of the research paper, said in an interview. “Advanced attacks can build on our primitives to [perform] cookie stealing, DNS and cache poisoning. Our research physically wiretaps the wire altogether so these sophisticated attacks will work. It’s really a threat to worldwide network security.” Zhou presented his research on Wednesday at the 2026 Network and Distributed System Security Symposium.

Paper co-author Mathy Vanhoef said a few hours after this post went live that the attack may be better described as a Wi-Fi encryption “bypass,” “in the sense that we can bypass client isolation. We don’t break Wi-Fi authentication or encryption. Crypto is often bypassed instead of broken. And we bypass it ;)” People who don’t rely on client or network isolation, he added, are safe.

Previous Wi-Fi attacks that overnight broke existing protections such as WEP and WPA worked by exploiting vulnerabilities in the underlying encryption they used. AirSnitch, by contrast, targets a previously overlooked attack surface—the lowest levels of the networking stack, a hierarchy of architecture and protocols based on their functions and behaviors.

The lowest level, Layer-1, encompasses physical devices such as cabling, connected nodes, and all the things that allow them to communicate. The highest level, Layer-7, is where applications such as browsers, email clients, and other Internet software run. Levels 2 through 6 are known as the Data Link, Network, Transport, Session, and Presentation layers, respectively.
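As a quick reference for the layer numbering used throughout this article, here is a simple lookup table; the mapping is standard OSI terminology rather than anything specific to AirSnitch, and the annotations are our own shorthand.

```python
# Standard OSI layer numbering referenced in this article.
OSI_LAYERS = {
    1: "Physical",      # radios, cabling, ports -- one of the layers AirSnitch targets
    2: "Data Link",     # MAC addresses, switching, Wi-Fi frames -- the other target layer
    3: "Network",       # IP
    4: "Transport",     # TCP/UDP
    5: "Session",
    6: "Presentation",
    7: "Application",   # browsers, email clients, other Internet software
}
```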

Identity crisis

Unlike previous Wi-Fi attacks, AirSnitch exploits core features in Layers 1 and 2 and the failure to bind and synchronize a client across these and higher layers, other nodes, and other network names such as SSIDs (Service Set Identifiers). This cross-layer identity desynchronization is the key driver of AirSnitch attacks.

The most powerful such attack is a full, bidirectional machine-in-the-middle (MitM) attack, meaning the attacker can view and modify data before it makes its way to the intended recipient. The attacker can be on the same SSID, a separate one, or even a separate network segment tied to the same AP. It works against small Wi-Fi networks in both homes and offices and large networks in enterprises.

With the ability to intercept all link-layer traffic (that is, the traffic as it passes between Layers 1 and 2), an attacker can perform other attacks on higher layers. The most dire consequence occurs when an Internet connection isn’t encrypted—something Google recently estimated is the case for as much as 6 percent of pages loaded on Windows and 20 percent of pages loaded on Linux. In these cases, the attacker can view and modify all traffic in the clear and steal authentication cookies, passwords, payment card details, and any other sensitive data. Since many company intranet pages are served in plaintext, traffic from them can also be intercepted.

Even when HTTPS is in place, an attacker can still intercept domain look-up traffic and use DNS cache poisoning to corrupt tables stored by the target’s operating system. The AirSnitch MitM also puts the attacker in the position to wage attacks against vulnerabilities that may not be patched. Attackers can also see the external IP addresses hosting webpages being visited and often correlate them with the precise URL.

Given the range of possibilities it affords, AirSnitch gives attackers capabilities that haven’t been possible with other Wi-Fi attacks, including KRACK from 2017 and 2019 and more recent Wi-Fi attacks that, like AirSnitch, inject data (known as frames) into remote GRE tunnels and bypass network access control lists.

“This work is impressive because unlike other frame injection methods, the attacker controls a bidirectional flow,” said HD Moore, a security expert and the founder and CEO of runZero.

He continued:

This research shows that a wireless-connected attacker can subvert client isolation and implement full relay attacks against other clients, similar to old-school ARP spoofing. In a lot of ways, this restores the attack surface that was present before client isolation became common. For folks who lived through the chaos of early wireless guest networking rollouts (planes, hotels, coffee shops) this stuff should be familiar, but client isolation has become so common, these kinds of attacks may have fallen off people’s radar.

Stuck in the middle with you

The MitM targets Layers 1 and 2 and the interaction between them. It starts with port stealing, one of the earliest attack classes against Ethernet, adapted here to work against Wi-Fi. An attacker carries it out by modifying the Layer-1 mapping that associates a network port with a victim’s MAC—a unique address that identifies each connected device. By connecting to the BSSID that bridges the AP to a radio band the target isn’t using (usually 2.4 GHz or 5 GHz) and completing a Wi-Fi four-way handshake, the attacker gets the target’s MAC bound to a port of their own.

The attacker spoofs the victim’s MAC address on a different NIC, causing the internal switch to mistakenly associate the victim’s address with the attacker’s port/BSSID. As a result, frames intended for the victim are forwarded to the attacker and encrypted using the attacker’s PTK. Credit: Zhou et al.

In other words, the attacker connects to the Wi-Fi network using the target’s MAC and then receives the target’s traffic. With this, an attacker obtains all downlink traffic (data sent from the router) intended for the target. Once the switch at Layer-2 sees the response, it updates its MAC address table to preserve the new mapping for as long as the attacker needs.

This completes the first half of the MitM, allowing all data to flow to the attacker. That alone would result in little more than a denial of service for the target. To prevent the target from noticing—and more importantly, to gain the bidirectional MitM capability needed to perform more advanced attacks—the attacker needs a way to restore the original mapping (the one assigning the victim’s MAC to the Layer-1 port). An attacker performs this restoration by sending an ICMP ping from a random MAC. The ping, which must be encrypted with the Group Temporal Key shared among all clients, triggers replies that cause the Layer-1 mapping (i.e., port states) to revert to the original one.

“In a normal Layer-2 switch, the switch learns the MAC of the client by seeing it respond with its source address,” Moore explained. “This attack confuses the AP into thinking that the client reconnected elsewhere, allowing an attacker to redirect Layer-2 traffic. Unlike Ethernet switches, wireless APs can’t tie a physical port on the device to a single client; clients are mobile by design.”
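To make the mechanics concrete, here is a minimal Python sketch—our illustration, not the researchers’ tooling—of how a learning switch’s MAC-to-port table gets flipped during port stealing and then restored. The class, addresses, and port names are hypothetical.

```python
# Minimal sketch of learning-switch behavior during port stealing.
# All names and addresses are illustrative, not from the AirSnitch paper.

class LearningSwitch:
    """Maps each MAC address to the port/BSSID it was last seen transmitting on."""

    def __init__(self):
        self.mac_table = {}  # MAC -> port identifier

    def frame_received(self, src_mac, port):
        # A learning switch trusts the source address of every frame it sees;
        # it does not verify that the sender actually owns that MAC.
        self.mac_table[src_mac] = port

    def port_for(self, dst_mac):
        # Unknown destinations are flooded to all ports.
        return self.mac_table.get(dst_mac, "flood")


switch = LearningSwitch()
victim = "aa:aa:aa:aa:aa:aa"  # hypothetical victim MAC

# 1. The victim associates normally: downlink frames go to its BSSID/port.
switch.frame_received(victim, port="bssid-5ghz-victim")
assert switch.port_for(victim) == "bssid-5ghz-victim"

# 2. Port stealing: the attacker, associated on another band/BSSID, sends a
#    frame spoofing the victim's MAC, so the table now points at the attacker.
switch.frame_received(victim, port="bssid-2.4ghz-attacker")
assert switch.port_for(victim) == "bssid-2.4ghz-attacker"

# 3. Restoration: the victim's reply to the group-addressed ping makes the
#    switch see its MAC on the original port again, flipping the mapping back.
switch.frame_received(victim, port="bssid-5ghz-victim")
assert switch.port_for(victim) == "bssid-5ghz-victim"
```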

The back-and-forth flipping of the MAC between the attacker and the target can continue for as long as the attacker wants. With that, the bidirectional MitM has been achieved. Attackers can then perform a host of other attacks, both those that are part of AirSnitch and others, such as the cache poisoning discussed earlier. Depending on the router the target is using, the attack can work even when the attacker and target are connected to separate SSIDs served by the same AP. In some cases, Zhou said, the attacker can even mount it from the Internet.

“Even when the guest SSID has a different name and password, it may still share parts of the same internal network infrastructure as your main Wi-Fi,” the researcher explained. “In some setups, that shared infrastructure can allow unexpected connectivity between guest devices and trusted devices.”

No, enterprise defenses won’t protect you

Variations of the attack defeat the client isolation promised by makers of enterprise routers, which typically use credentials and a master encryption key that are unique to each client. One such attack works across multiple APs when they share a wired distribution system, as is common in enterprise and campus networks.

In their paper, AirSnitch: Demystifying and Breaking Client Isolation in Wi-Fi Networks, the researchers wrote:

Although port stealing was originally devised for hosts on the same switch, we show that attackers can hijack MAC-to-port mappings at a higher layer, i.e., at the level of the distribution switch—to intercept traffic to victims associated with different APs. This escalates the attack beyond its traditional limits, breaking the assumption that separate APs provide effective isolation.

This discovery exposes a blind spot in client isolation: even physically separated APs, broadcasting different SSIDs, offer ineffective isolation if connected to a common distribution system. By redirecting traffic at the distribution switch, attackers can intercept and manipulate victim traffic across AP boundaries, expanding the threat model for modern Wi-Fi networks.

The researchers demonstrated that their attacks can enable the breakage of RADIUS, a centralized authentication protocol for enhanced security in enterprise networks. “By spoofing a gateway MAC and connecting to an AP,” the researchers wrote, “an attacker can steal uplink RADIUS packets.” The attacker can go on to crack a message authenticator that’s used for integrity protection and, from there, learn a shared passphrase. “This allows the attacker to set up a rogue RADIUS server and associated rogue WPA2/3 access point, which allows any legitimate client to connect, thereby intercepting their traffic and credentials.”

The researchers tested the following 11 devices:

  • Netgear Nighthawk x6 R8000
  • Tenda RX2 Pro
  • D-LINK DIR-3040
  • TP-LINK Archer AXE75
  • ASUS RT-AX57
  • DD-WRT v3.0-r44715
  • OpenWrt 24.10
  • Ubiquiti AmpliFi Alien Router
  • Ubiquiti AmpliFi Router HD
  • LANCOM LX-6500
  • Cisco Catalyst 9130

As noted earlier, every tested router was vulnerable to at least one attack. Zhou said that some router makers have already released updates that mitigate some of the attacks, and more updates are expected in the future. But he also said some manufacturers have told him that some of the systemic weaknesses can only be addressed through changes in the underlying chips they buy from silicon makers.

The hardware manufacturers face yet another challenge: The client isolation mechanisms vary from maker to maker. With no industry-wide standard, these one-off solutions are splintered and may not receive the concerted security attention that formal protocols are given.

So how bad is AirSnitch, really?

With a basic understanding of AirSnitch, the next step is to put it into historical context and assess how big a threat it poses in the real world. In some respects, it resembles the 2007 PTW attack (named for its creators Andrei Pyshkin, Erik Tews, and Ralf-Philipp Weinmann) that completely and immediately broke WEP, leaving Wi-Fi users everywhere with no means to protect themselves against nearby adversaries. For now, client isolation is similarly defeated—almost completely and overnight—with no immediate remedy available.

At the same time, the bar for waging WEP attacks was significantly lower, since it was available to anyone within range of an AP. AirSnitch, by contrast, requires that the attacker already have some sort of access to the Wi-Fi network. For many people, that may mean steering clear of public Wi-Fi networks altogether.

If the network is properly secured—meaning it’s protected by a strong password that’s known only to authorized users—AirSnitch may not be of much value to an attacker. The nuance here is that even if an attacker doesn’t have access to a specific SSID, they may still use AirSnitch if they have access to other SSIDs or BSSIDs that use the same AP or other connecting infrastructure.

Yet another difference from the PTW attack—and the later attacks that broke WPA, WPA2, and WPA3 protections—is that those attacks were limited to adversaries within range of the terrestrial radio signals, a much more limited theater than the one AirSnitch can operate in. Ultimately, the AirSnitch attacks are broader but less severe.

Also unlike with those previous attacks, mitigating AirSnitch with firewalls may be problematic.

“We expand the threat model showing an attacker can be on another channel or port, or can be from the Internet,” Zhou said. “Firewalls are also networking devices. We often say a firewall is a Layer-3 device because it works at the IP layer. But fundamentally, it’s connected by wire to different network elements. That wire is not secure.”

Some of the threat can be mitigated by using VPNs, but this remedy has all the usual drawbacks that come with them. For one, VPNs are notorious for leaking metadata, DNS queries, and other traffic that can be useful to attackers, making the protection limited. And for another, finding a reputable and trustworthy VPN provider has historically proven to be vexingly difficult, though things have improved more recently. Ultimately, a VPN shouldn’t be regarded as much more than a bandage.

Another potential mitigation is using wireless VLANs to isolate one SSID from another. Zhou said such options aren’t universally available and are also “super easy to be configured wrong.” Specifically, he said VLANs can often be implemented in ways that allow “hopping vulnerabilities.” Further, Moore has argued that “VLANs are not a practical barrier” against all AirSnitch attacks.

The most effective remedy may be to adopt a security stance known as zero trust, which treats each node inside a network as a potential adversary until it provides proof it can be trusted. This model is challenging for even well-funded enterprise organizations to adopt, although it’s becoming easier. It’s not clear if it will ever be feasible for more casual Wi-Fi users in homes and smaller businesses.

Probably the most reasonable response is to exercise measured caution for all Wi-Fi networks managed by people you don’t know. When feasible, use a trustworthy VPN on public APs or, better yet, tether a connection from a cell phone.

Wi-Fi has always been a risky proposition, and AirSnitch only expands the potential for malice. Then again, the new capabilities may mean little in the real world, where evil twin attacks accomplish many of the same objectives with much less hassle.

Moore said the attacks possible before client isolation were often as simple as running ettercap or similar tools as soon as a normal Wi-Fi connection was completed. AirSnitch attacks require considerably more work, at least until someone writes an easy-to-use script that automates it.

“It will be interesting to see if the wireless vendors care enough to resolve these issues completely and if attackers care enough to put all of this together when there might be easier things to do (like run a fake AP instead),” Moore said. “At the least it should make pentesters’ lives more interesting since it re-opens a lot of exposure that many folks may not have any experience with.”

Photo of Dan Goodin

Dan Goodin is Senior Security Editor at Ars Technica, where he oversees coverage of malware, computer espionage, botnets, hardware hacking, encryption, and passwords. In his spare time, he enjoys gardening, cooking, and following the independent music scene. Dan is based in San Francisco. Follow him here on Mastodon and here on Bluesky. Contact him on Signal at DanArs.82.

New AirSnitch attack breaks Wi-Fi encryption in homes, offices, and enterprises Read More »

inside-the-quixotic-team-trying-to-build-an-entire-world-in-a-20-year-old-game

Inside the quixotic team trying to build an entire world in a 20-year-old game


Stories and lessons learned from an impossibly large community modding project.

The city of Anvil, rendered in The Elder Scrolls III: Morrowind. Credit: Daniel Larlham Jr.

Despite being regarded as one of the greatest role-playing games of all time, The Elder Scrolls III: Morrowind disappointed some fans upon its release in 2002 because it didn’t match the colossal scope of its predecessor, The Elder Scrolls II: Daggerfall. Almost immediately, fans began modding the remaining parts of the series’ fictional continent, Tamriel, into the game.

Over 20 years later, thousands of volunteers have collaborated on the mod projects Tamriel Rebuilt and Project Tamriel, building a space comparable in size to a small country. Such projects often sputter out, but these have endured, thanks in part to a steady stream of small, manageable updates instead of larger, less frequent ones.

A tale of (at least two) mods

It’s true that Daggerfall included an entire continent’s worth of content, but it was mostly composed of procedurally generated liminal space. By contrast, Morrowind contained just a single island—not even the entire province after which the game was named. The difference was that it was handcrafted.

Still, a player called “Ender,” stewing in disappointment over Morrowind’s perceived scope, took to an Elder Scrolls forum to propose a collaborative effort to mod the rest of Tamriel into the game. Tamriel Rebuilt was born.

After realizing that re-creating the entire continent was too lofty a goal, the group decided to instead focus on the rest of the Morrowind province alone—but that didn’t last long.

There had been others working toward similar goals. The makers of the fan project “Skyrim: Home of the Nords” were working on putting the province of Skyrim into Morrowind well before that location was officially made the setting of the 2011 sequel The Elder Scrolls V: Skyrim.

A Khajiit attacks inside a fort in Skyrim

A screenshot from Skyrim: Home of the Nords.

Credit: Daniel Larlham Jr.

A screenshot from Skyrim: Home of the Nords. Credit: Daniel Larlham Jr.

Other modders were working on “Project Cyrodiil,” an attempt to put The Elder Scrolls IV: Oblivion’s province into Morrowind. In 2015, those two projects combined to form Project Tamriel, reigniting the goal of adding the remaining provinces of Tamriel.

Tamriel Rebuilt and Project Tamriel first became connected when the modders decided to combine their asset repositories into Tamriel_Data, but they have since grown closer through their shared developers, training protocols, and tools.

“The entirety of Tamriel is, in our scale, roughly the size of the real-life country of Malta, which is small in real life, but quite big from a human perspective,” said Tiny Plesiosaur, a senior developer who has done mapping and planning for both projects but who spends most of her time on Project Tamriel these days.

Both projects aim to create a cohesive, lore-accurate representation of these realms as they would have looked during the fictional historical period in which Morrowind takes place. So far, they’ve made substantial progress.

One thing in their favor, said Mort, a 13-year veteran quest-designer of Tamriel Rebuilt, is that Morrowind’s design makes it especially amenable to large-scale modding.

“I’d say the thing that makes Morrowind most conducive to these kinds of projects is no voiced dialogue,” Mort said. “The reason that you see so many quest mods for Morrowind as opposed to Oblivion and Skyrim and even Fallout is that the barrier to make a quest is essentially nothing.”

Frequent, contained public releases also work to their advantage. “I know for a lot of projects, they want to [do a] ‘we’ll release it when it’s done’ kind of thing,” said Mort. “We’ve found that releasing content builds hype, it gives players what they want, and perhaps most importantly, it serves as a proof of life and a fantastic recruitment tool.”

Every time Tamriel Rebuilt pushes a release, he said, the team picks up at least a dozen devs almost immediately. So far, Tamriel Rebuilt has seen nine releases; the most recent is titled “Grasping Fortune.” The next release, “Poison Song,” is expected sometime in 2026 and will include a never-before-seen faction. The most optimistic estimate for when the project will be fully finished is 2035.

A map of the province of Morrowind for the Tamriel Rebuilt project. Note that the original game includes only the large island in the bay in the top half of the image.

Credit: Tamriel Rebuilt

A map of the province of Morrowind for the Tamriel Rebuilt project. Note that the original game includes only the large island in the bay in the top half of the image. Credit: Tamriel Rebuilt

Project Tamriel has made most of its progress in Skyrim and Cyrodiil. The release of “Abecean Shores,” the coastal section of Cyrodiil, came in late 2024. Together, the projects have added hundreds of hours of hand-crafted quests, dungeons, and landscapes to a game that was already robust.

Lus said the current timeline for Project Tamriel is a new release for Skyrim and then Cyrodiil, followed by either High Rock—a comparatively smaller, peninsula province west of Skyrim—or the desert province of Hammerfell.

For many developers, the point isn’t to see these massive projects in a finished state but to complete the next task and hopefully bring the team closer to the next release.

A brief history of Tamriel Rebuilt

Sultan of Rum, a kind of historian for Tamriel Rebuilt, joked that the project was aptly named because of how many times it has been rebuilt—partly because the tools the modders use to build the project have gotten better over time, rendering work done before those advances obsolete.

Beyond that, Tamriel Rebuilt was more of a Wild West in its infancy: a ragtag bunch of video game enthusiasts working mostly independently and without much oversight. As the project has become more unified, that shift has meant a lot of turnover and a fair share of setbacks.

“If you took a satellite picture of the game world in 2005, you’d have essentially a complete province already,” Sultan of Rum said. “But the trouble was that the quality wasn’t good; there was no coherence. The 5 percent of the work to just create a landmass was done, but the management wasn’t there.”

Much of the project’s history has been lost to time as Internet forums disappeared, but Sultan of Rum has been able to piece together some of the growing pains Tamriel Rebuilt has endured. A struggle between the need to centralize and the desire of some modders to remain independent is a recurring theme.

One period is considered a dark age for Tamriel Rebuilt. In the first couple of years, a significant group of modders had been working on a piece of content for the project called “Silgrad Tower,” while the project simultaneously began consolidating to build continuity.

Concept art for the project. Credit: ThomasRuz

There was debate among the modders about where Silgrad Tower should be located and which faction would have controlled it. This eventually led to an acrimonious split between the two groups. “The Silgrad Tower team was eventually put to the choice of either having to delete their work and restart it or, you know, leave the project. So they left the project,” said Sultan of Rum.

He said that much of the conflict has since been scrubbed from the forum archives, and the ordeal led to the deletion of the Tamriel Rebuilt forums, which were hosted by the Silgrad Tower team. This was probably the most drama the project has seen, he said.

There was also a period when the project moved to The Elder Scrolls IV: Oblivion’s construction set. “Maybe even a majority of the project jumped onto the [Oblivion] engine to start building out Hammerfell,” said Sultan of Rum. “So for a long time—four years—the sort of focus point of Tamriel Rebuilt was on Oblivion and on the province of Hammerfell, not on the Morrowind part, which of course was the successful one.”

Another event is solemnly referred to as “The Great Self-Decapitation.” Sultan of Rum explained that around 2015, some of the older guard—developers and administrators alike—left the project all at once. The exodus was due to the second scrapping of a large city in development.

“People were hoping that by 2013 it would come out. Literally thousands of hours of human labor were spent creating it in the construction set,” recalled Sultan of Rum. “It just turned out that it was non-viable as a playable space. It wasn’t thought out well enough, it didn’t coalesce into a compelling, playable world. The modders were faced with the prospect of having to throw out just a huge chunk of work.”

That decision sapped a lot of energy from the project, and others on the team began to move away from it as their personal lives became busier. Sultan of Rum said all this has made the project better in the long run. Project leaders soon instituted better planning and management systems that centralized information and preserved institutional knowledge in case longtime developers decide to leave.

Over the years, they’ve also refined their training practices, which has ultimately led to more developers joining both projects.

“If your goal is to get development done, providing as much detail and tutorializing and onboarding processes, making that as simple as possible is going to get you your best results,” said Mort. “Because, again, if you aren’t gaining devs, you’re losing devs.”

The parameters for onboarding new developers are now clearly defined, with a low barrier to entry focused on competence with the tools. These tests are called showcases.

Once the showcase is accepted, developers can begin working on both Tamriel Rebuilt and Project Tamriel, where much of the overlap between the two lies.

Mort added that the gap between a potential developer expressing interest and actively contributing can be as little as a week. This also allows movement between roles—for example, an interior designer training in exterior designing or someone starting in quest design moving elsewhere if it’s not a good fit.

Even more importantly, newer tech has improved the development process. The open source 3D modeling and animation tool Blender has become much friendlier to Morrowind modders, enabling teams to create custom assets more easily.

While this has required retouching some areas of Tamriel Rebuilt, it has also meant quicker turnaround times for custom assets. For Project Tamriel developers, the impact has been greater, as they can now reliably and routinely create assets to better represent Tamriel’s diverse cultures.

The old informs the new

The developers are well aware that both projects may still be unfinished 10 years from now, but most are just working toward the next release.

Discussing the project with just a few of the developers, it’s immediately clear how current work will inform future efforts.

For example, LogansGun is an exterior developer who did much of the work on the promotional videos for Tamriel Rebuilt’s last few releases. He joined the project because he wanted to leave his mark on this historical effort and ended up staying much longer than he thought he would.

Between work and raising a family, LogansGun often found himself working on Tamriel Rebuilt instead of playing video games, partly because of a childhood love for Morrowind and a desire to make the game more than it was.

“I remember playing it a lot, and it really stuck with me,” LogansGun said. “And it might have been like 5th or 6th grade that I had a friend and we all sat in like a four-student pod, and he would bring the map inside the plastic Xbox disc case. When we had some free time in class, he’d lay it out, and we’d all be looking all over the map of Vvardenfell and all the things that we had explored or wanted to explore.”

A city spire against the sky

Another environment from the game, Old Ebonheart.

Credit: Daniel Larlham Jr.

Another environment from the game, Old Ebonheart. Credit: Daniel Larlham Jr.

Meadhainnigh, a college-aged chemical engineering student, first learned about Tamriel Rebuilt through the promotional video for Grasping Fortune, the project’s most recent update. The roughly three-minute video showcases some of the landscapes, cityscapes, and interiors, the culmination of thousands of hours of work. LogansGun is credited as the creator of that video, which has been used to inspire the next wave of contributors.

“I was thinking, well, this seems like a really cool project, and I just wanted to contribute and feel part of something bigger, and the rest is history, really,” said Meadhainnigh, who is now an asset dev for Project Tamriel. “But I joined the Discord server. I kind of learned the process of the project, and once I felt like I knew what was going on, I tossed my hat in the ring.”

Meadhainnigh knew very little about development before he joined the project, and he said it’s the first online community he has been a part of. What keeps him going is that community—and to see his and others’ work become a part of a whole.

“We have some really wonderful people who are the old guard that feel like they are the comfortable welders, and they’re all very wise,” he said. “But even in the newest editions, we’re not here because we think that it’s all going to be done within our lifetimes. We like to joke about 2090 and about raising our children to work on the project. We just like to look at the next release, and that tends to be exciting enough to get us going.”

Inside the quixotic team trying to build an entire world in a 20-year-old game Read More »

the-first-cars-bold-enough-to-drive-themselves

The first cars bold enough to drive themselves


Quevedo’s Telekino of 1904 was the first step on the road to autonomous Waymos.

Credit: Aurich Lawson | Getty Images

No one knows exactly when the vehicles we drive will finally wrest the steering wheel from us. But the age of the autonomous automobile isn’t some sudden Big Bang. It’s more of a slow crawl, one that started during the Roosevelt administration. And that’s Theodore, not Franklin. And not in America, but in Spain, by someone you’ve probably never heard of.

His name was Leonardo Torres Quevedo, a Spanish engineer born in Santa Cruz, Spain, in 1852. Smart? In 1914, he developed a mechanical chess machine that autonomously played against humans. But more than a decade earlier, he pioneered the development of remote-control systems. What he wrought was brilliant, if crude—and certainly ahead of its time.

The first wireless control

It was called the Telekino, a name drawn from the Greek “tele,” meaning at a distance, and “kino,” meaning movement. Patented in Spain, France, and the United States, it was conceived as a way to prevent airship accidents. The Telekino transmitted wireless signals to a small receiver known as a coherer, which detected electromagnetic waves and transformed them into an electrical current. This current was amplified and sent on to electromagnets that slowly rotated a switch controlling the proper servomotor. Quevedo could issue 19 distinct commands to the systems of an airship without ever touching a control cable.

By 1904, he was using the Telekino to direct a small, three-wheeled vehicle from nearly 100 feet away. It was the earliest recorded instance of a vehicle being controlled by radio. After that, Quevedo demonstrated the system’s usefulness aboard boats and even torpedoes, but here the story slows. The Spanish Crown, cautious and reluctant to invest, withheld its support. Without funding, Quevedo couldn’t build and sell the Telekino.

But he had shown that a machine could be guided by signals. It would be more than a century before that notion would reach fruition. But that doesn’t mean others didn’t try.

Leave it to Ohio

Dayton, Ohio, August 5, 1921. The country was in the thick of the automotive age, and Dayton stood as one of its industrious nerve centers. General Motors had established a strong presence there with its Frigidaire Division, promising a future of electrified domestic bliss. Meanwhile, across town, engineers at Delco, the Dayton Engineering Laboratories Company, were refining the very heart of the automobile. This was a place where invention was not merely encouraged, but expected.

But on this particular summer afternoon, the most remarkable innovation did not come from the factory floor or the corporate drafting room. It came instead from the US Army, an outfit not usually known for whimsical experimentation. It sent a small, three-wheeled vehicle, scarcely eight feet long and fitted with radio equipment, rolling through the city’s business district. The vehicle moved without a driver. Some 50 feet behind it, Captain R. E. Vaughn of nearby McCook Field guided its movement by radio signal.

1926: A woman smiles and waves from the driver's seat of a Chandler convertible parked on a gravel road near a coastline. She wears an overcoat and a cloche hat

A 1926 Chandler. Obviously, this one is human-driven—you can tell by the human waving from the driver’s seat.

Credit: American Stock/Getty Images

A 1926 Chandler. Obviously, this one is human-driven—you can tell by the human waving from the driver’s seat. Credit: American Stock/Getty Images

Four years later, the spectacle reappeared. This time it was on the streets of New York City, where a crowd along Broadway watched as a 1926 Chandler, sitting quietly at the curb, came to life. The engine turned, the gears engaged, and it pulled smoothly into the stream of traffic before making its way up Fifth Avenue without a driver. Dubbed the “American Wonder” by its creator, Francis P. Houdina, the car responded to radio commands transmitted from a chase car. Signals were received by antennas atop the Chandler, where they triggered circuit breakers and small electric motors that operated the steering, throttle, brakes, and horn.

The idea proved too tantalizing to fade. In Cincinnati, a Toledo inventor named Maurice J. Francill took up the cause in 1928. Francill, who styled himself “America’s Radio Wizard,” demonstrated how radio control could move Ford automobiles without a driver. In a series of stage-like performances, he also milked cows, baked bread, and operated a laundry, all through radio command. By 1936, newspapers from Ohio to California were still reporting his feats.

“Francill claims that he can accomplish anything the human hand can do by radio,” the Orange County News observed. “Eight pounds [3.6 kg] of delicate brain-like radio apparatus was employed to control the lights, ignition system, horn and start the motor running. Five pounds [2.3 kg] of radio apparatus is required to guide the car.”

These vehicles may seem like novelties today, but they were early proof that the automobile could be guided by something other than a human.

Detroit buys into the dream

The dream of a self-driving automobile did not vanish when these moments passed. It lingered, an idea returned to again and again, particularly in the years when America believed that anything was possible.

At the 1939 New York World’s Fair, General Motors offered a glimpse of that future with its enormous Futurama exhibit. Seated above a raised platform, fairgoers saw a miniature city where tiny electric cars moved serenely along highways without drivers. The cars, they were told, would one day be guided by radio signals and electric currents running through cables and circuits beneath the pavement, creating an electromagnetic field that could both power the vehicles and guide their course. It was a bold, imaginative vision—and characteristic of a time when modern engineering was forecast to remake the world.

After the war, engineers did not let the idea fade, continuing to work on communication between road and machine. At General Motors’ Motorama, a traveling showcase of the company’s newest vehicles and latest ideas, one display in 1956 captured the imagination of audiences across the country. GM unveiled a sleek, gas turbine–powered automobile, sheathed in titanium and brimming with the promise of autonomous driving.

GM's Firebird II concept from 1956

The Firebird II concept from 1956 could drive itself on special roads.

Credit: General Motors

The Firebird II concept from 1956 could drive itself on special roads. Credit: General Motors

Beneath certain stretches of highway, GM proposed laying an electronic strip. When the car traveled over it, sensors would lock onto the signal, guiding the vehicle automatically along its lane. The driver would simply lean back, hands free from the wheel, and watch the miles roll by. Onboard amenities inexplicably included an orange juice dispenser.

Proof of concept

By 1958, the idea became a reality. On a plain stretch of highway outside Lincoln, Nebraska, it was put to the test. The state’s Department of Roads embedded a 400-foot (121 m) length of the roadway with electric circuits, while engineers from RCA and General Motors brought specially fitted Chevrolets to test it. Observers watched as the driverless cars steered themselves, responding to the buried signal beneath the pavement.

A few years later, across the Atlantic, the United Kingdom’s Transport and Road Research Laboratory undertook its own experiments. Using a Citroën DS, they laid magnetic cables beneath a test track and sent the car down it at speeds of up to 80 mph (129 km/h). Wind and weather made no difference; the DS held its line faithfully.

Autonomy emerges in the modern age

Fast forward to 1986, and German scientist Ernst Dickmanns, as part of his position with the German armed forces, began testing an autonomously driving Mercedes-Benz using computers, cameras, and sensors, not unlike modern-day cars. Within a year, it was traveling down the Autobahn at nearly 55 mph (89 km/h). That was enough to capture the attention of Daimler-Benz, which helped fund further research.

Several years later, in October 1994, Dickmanns gathered his research team at Charles de Gaulle Airport outside Paris, where they met a delegation of high-ranking officials. Parked at the curb were two sedans. They appeared ordinary but were fitted with cameras, sensors, and onboard computers. The guests climbed in, and the cars made their way toward the nearby thoroughfare. Then, with the traffic flowing steadily around them, the engineers switched the vehicles into self-driving mode and took their hands off the wheel. The cars held their lanes, adjusted their speed, and followed the road’s gentle curves without driver intervention.

An illustration of a 1994 driverless car

The experimental driverless car VaMP (Versuchsfahrzeug für autonome Mobilität und Rechnersehen), which was developed during the European research project PROMETHEUS: (top left) components for autonomous driving; (right) VaMP and view into passenger cabin (lower right); (lower left) bifocal camera arrangement (front) on yaw platform.

Credit: CC BY-SA 3.0

The experimental driverless car VaMP (Versuchsfahrzeug für autonome Mobilität und Rechnersehen), which was developed during the European research project PROMETHEUS: (top left) components for autonomous driving; (right) VaMP and view into passenger cabin (lower right); (lower left) bifocal camera arrangement (front) on yaw platform. Credit: CC BY-SA 3.0

A year later, Dickmanns would travel from Bavaria to Denmark, a trip of more than 1,056 miles (1,700 km), reaching speeds of nearly 110 mph (177 km/h). Unfortunately, Daimler lost interest and cut funding for the effort. Dickmanns’ project came to a halt, but the technology was now in place to set the stage for what came next.

The military sparks innovation–again

By the turn of the century, the Pentagon’s research arm, the Defense Advanced Research Projects Agency, or DARPA, had taken up the cause. Its mission was ambitious: to develop technologies that could protect American soldiers on the battlefield. Among its goals was the creation of vehicles that could drive themselves, sparing troops the dangers of roadside ambushes and explosive traps.

To accelerate progress, DARPA announced a competition to build a driverless vehicle capable of traveling 142 miles (229 km) across the Mojave Desert. The prize was $1 million, though the real prize was the knowledge gained along the way.

When race day arrived, the results were humbling. One by one, every vehicle failed to finish. But in the sun and dust of the Mojave, a community emerged, one of engineers, programmers, and dreamers who believed that the autonomous vehicle was not a fantasy but a problem to be solved. Twenty years later, their work has brought the idea closer to everyday reality than ever before.

By themselves, these efforts did not yet give the world the self-driving car. But they demonstrated that the fantasy could, in time, become reality. They’re also a reminder that while the tech industry likes to position itself as a disruptor bringing self-driving cars to market, Detroit was dreaming about and demonstrating autonomous transportation long before Silicon Valley existed.

The first cars bold enough to drive themselves Read More »

wikipedia-blacklists-archive.today,-starts-removing-695,000-archive-links

Wikipedia blacklists Archive.today, starts removing 695,000 archive links

The English-language edition of Wikipedia is blacklisting Archive.today after the controversial archive site was used to direct a distributed denial of service (DDoS) attack against a blog.

In the course of discussing whether Archive.today should be deprecated because of the DDoS, Wikipedia editors discovered that the archive site altered snapshots of webpages to insert the name of the blogger who was targeted by the DDoS. The alterations were apparently fueled by a grudge against the blogger over a post that described how the Archive.today maintainer hid their identity behind several aliases.

“There is consensus to immediately deprecate archive.today, and, as soon as practicable, add it to the spam blacklist (or create an edit filter that blocks adding new links), and remove all links to it,” stated an update today on Wikipedia’s Archive.today discussion. “There is a strong consensus that Wikipedia should not direct its readers towards a website that hijacks users’ computers to run a DDoS attack (see WP:ELNO#3). Additionally, evidence has been presented that archive.today’s operators have altered the content of archived pages, rendering it unreliable.”

More than 695,000 links to Archive.today are distributed across 400,000 or so Wikipedia pages. The archive site is commonly used to bypass news paywalls, and the FBI has sought information on the site operator’s identity with a subpoena to domain registrar Tucows.

“Those in favor of maintaining the status quo rested their arguments primarily on the utility of archive.today for verifiability,” said today’s Wikipedia update. “However, an analysis of existing links has shown that most of its uses can be replaced. Several editors started to work out implementation details during this RfC [request for comment] and the community should figure out how to efficiently remove links to archive.today.”

Editors urged to remove links

Guidance published as a result of the decision asked editors to help remove and replace links to the following domain names used by the archive site: archive.today, archive.is, archive.ph, archive.fo, archive.li, archive.md, and archive.vn. The guidance says editors can remove Archive.today links when the original source is still online and has identical content; replace the archive link so it points to a different archive site, like the Internet Archive, Ghostarchive, or Megalodon; or “change the original source to something that doesn’t need an archive (e.g., a source that was printed on paper), or for which a link to an archive is only a matter of convenience.”
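For illustration only, a short script along the following lines could flag links to those domains in a page’s wikitext; it is a hypothetical sketch, not one of the tools Wikipedia editors are actually using.

```python
# Hedged sketch: find links to the blacklisted archive domains listed above.
import re

ARCHIVE_DOMAINS = ("archive.today", "archive.is", "archive.ph", "archive.fo",
                   "archive.li", "archive.md", "archive.vn")
pattern = re.compile(
    r"https?://(?:www\.)?(?:%s)/\S+" % "|".join(re.escape(d) for d in ARCHIVE_DOMAINS),
    re.IGNORECASE)

wikitext = "See [https://archive.ph/abc12 archived copy] and [https://example.com live]."
print(pattern.findall(wikitext))  # -> ['https://archive.ph/abc12']
```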

Wikipedia blacklists Archive.today, starts removing 695,000 archive links Read More »

zero-grip,-maximum-fun:-a-practical-guide-to-getting-into-amateur-ice-racing

Zero grip, maximum fun: A practical guide to getting into amateur ice racing


Where we’re racing, we don’t need roads.

A studded winter tire on a blue Subaru WRX

To drive on ice, you just need the right tires. Credit: Tim Stevens

To drive on ice, you just need the right tires. Credit: Tim Stevens

In Formula One, grip is everything. The world’s best engineers devote their careers to designing cars that maximize downforce and grip to squeeze every bit of performance out of a set of four humble tires. These cars punish their drivers by slinging them at six Gs through corners and offer similar levels of abuse in braking.

It’s all wildly impressive, but I’ve long maintained that those drivers are not the ones having the most fun. When it comes to sheer enjoyment, grip is highly overrated, and if you want proof of that, you need to try ice racing.

Should you be lucky enough to live somewhere that gets cold enough consistently enough, all you need is a good set of tires and a car that’s willing and able. That, of course, and a desire to spend more time driving sideways than straight. I’ve been ice racing for well over 20 years now, and I’m here to tell you that there’s no greater thrill on four wheels than sliding through a corner a few inches astern of a hard-charging competitor.

Here’s how you can get started.

A blue Subaru WRX STI on the ice

For street legal classes, you don’t even need a roll cage. Just the right tires and the right attitude.

Credit: Tim Stevens

For street legal classes, you don’t even need a roll cage. Just the right tires and the right attitude. Credit: Tim Stevens

Ice racing basics

There are certainly plenty of professionals out there who have dabbled in ice racing or gotten their start in it, F1 legend Alain Prost and touring car maestro Peter Cunningham being two notable examples. And a European ice racing series called Trophée Andros formerly challenged some of the world’s top professionals to race across a series of purpose-built frozen tracks in Europe and even Quebec.

These days, however, ice racing is an almost entirely amateur pursuit, a low-temp, low-grip hobby where the biggest prize you’re likely to bring home on any given Sunday is a smile and maybe a little trophy for the mantel.

That said, there are numerous types of ice racing. The most common and accessible is time trials, basically autocrosses on ice. The Sports Car Club of Vermont ice time trial series is a reliable, well-run example, but you’ll find plenty of others, too.

Some other clubs step it up by hosting wheel-to-wheel racing on plowed ovals. Lakes Region Ice Racing Club in Moultonborough, New Hampshire, is a long-running group that has been blessed with enough ice lately to keep racing even as temperatures have increased.

At the top tier, though, you’re looking at clubs that plow full-on road courses on the ice, groups like the Adirondack Motor Enthusiast Club (AMEC), based in and around the Adirondack Park. Established in 1954, this is among the oldest ice racing clubs in the world and the one I’ve been lucky to be a member of since 2002.

Will any other discipline of motorsport teach you as much about car control? Credit: Tim Stevens

AMEC offers numerous classes, providing eligibility for everything from a bone-stock Miata to purpose-built sprint cars that look like they made a wrong turn off a dirt oval. Dedicated volunteers plow courses on lakes throughout the ADK, tirelessly searching for ice of sufficient depth and quality.

Different clubs have different requirements, but most like to see a foot of solid, clean ice. That may not sound like much, but according to the US Army Corps of Engineers, it’s plenty for eight-ton trucks. That’s enough to support not only the 60 to 100 racers that AMEC routinely sees on any frigid Sunday but also the numerous tow rigs, trailers, and plow trucks that support the action.

How do you get started? All you need is a set of tires.

Tires

Tires are the most talked-about component of any car competing on the ice, and for good reason. Clubs have different regulations for what is and is not legal for competition, but in general, you can lump ice racing tires into three categories.

The first is unstudded, street-legal tires, such as Bridgestone Blizzaks, Continental WinterContacts, and Michelin X-Ices. These tires generally have chunky, aggressive treads, generous siping, and squishy compounds. Modern snow tires like these are marvelous things, and when there’s a rough surface on the ice or some embedded snow, an unstudded tire can be extremely competitive, even keeping up with a street-legal studded tire.

The second category is street-legal studded tires. These, like the Nokian Hakkapeliitta 10 and the Pirelli Winter Ice Zero, take the chunky, aggressive tread pattern of a normal snow tire and embed some number of metallic studs. These tiny studs, which typically protrude only 1 millimeter from the tire surface, provide a massive boost in grip on smooth, polished ice.

Tim races on Nokian Hakka 10 tires, which are street-legal studded winter tires.

Credit: Tim Stevens

Tim races on Nokian Hakka 10 tires, which are street-legal studded winter tires. Credit: Tim Stevens

Finally, there is what is broadly called a “race stud” tire, which is anything not legal for road use. These tires range from hand-made bolt tires, put together by people who have a lot of patience and who don’t mind the smell of tire sealant, to purpose-built race rubber of the sort you’ll see on a World Rally car snow stage.

These tires offer massive amounts of grip—so much so that the feel they deliver is more like driving on dirt than on ice. Unless you DIY it, the cost typically increases substantially as well. For that reason, going to grippier tires doesn’t necessarily mean more fun for your dollar, but there are plenty of opinions on where you’ll find the sweet spot of smiles per mile.

Driver skills

The other major factor in finding success on the ice is driver skill. If you have some experience in low-grip, car-control-focused driving like rally or drift, you’ll have a head start over someone who’s starting fresh. But if I had a dollar for every rally maestro or drifter I’ve seen swagger their way out onto the ice and then wedge their car straight into the first snowbank, I’d have at least five or six extra dollars to my name.

Ice racing is probably the purest and most challenging form of low-grip driving. On ice, the performance envelope of a normal car on normal tires is extremely small. Driving fast on ice, then, means learning how to make your car do what you want, even when you’re far outside of that envelope.

There are many techniques involved, but it all starts with getting comfortable with entering your car into a slide and sustaining it. Learning to balance your car in a moderate drift, dancing between terminal understeer (plowing into the snowbank nose-first) and extreme oversteer (spinning into the snowbank tail-first), is key. That comfort simply takes time.

Reading the ice

Ruts in the ice made by ice racing

The condition of the track changes constantly.

Credit: Tim Stevens

The condition of the track changes constantly. Credit: Tim Stevens

Once you figure out how to keep your car going in the right direction, and once you stop making sedan-shaped holes in snowbanks, the next trick is to learn how to read the ice.

The grip level of the ice constantly evolves throughout the day. The street-legal tires tend to polish it, wearing down rougher sections into smoothly polished patches with extremely low grip. The race studs, on the other hand, chew it up again, creating a heavily textured surface.

If you’re on the less extreme sorts of tires, you’ll find the most grip on that rough, unused ice. In a race stud, you want to seek out smooth, clean ice because it will give your studs better purchase.

If you’re familiar with road racing, it’s a little like running a rain line: not necessarily driving the shortest path around, but instead taking the one that offers the most grip. Imagine a rain line that changes every lap and you start to get the picture.

How can I try it?

Intrigued? The good news is that ice racing is among the most accessible and affordable forms of motorsport on the planet, possibly second only to autocrossing. Costs vary widely, but in my club, AMEC, a full day of racing costs $70. That’s for three heat races and a practice session. Again, all you need is a set of snow tires, which will last the full season if you don’t abuse them.

The bad news, of course, is that you need to be close to an ice racing club. They’re getting harder and harder to find, and active clubs generally have shorter seasons with fewer events. If you can’t find one locally, you may need to travel, which increases the cost and commitment substantially.

If you don’t live where the lakes freeze, you’ll have to travel. Credit: Tim Stevens

If cost is no issue, you certainly have more opportunities. We’ve already reported on McLaren’s program, but it’s not alone. Exotic brands like Ferrari and Lamborghini also offer winter driving programs, where you can wheel amazing cars in glamorous places like St. Moritz and Livigno. The cost is very much in the “if you have to ask” category.

Dirtfish, one of the world’s greatest rally schools, also offers an ice-driving program in Wisconsin, starting at about $2,000 for a single day. This is a great, if expensive, way to get a feel for the skills you’ll need on ice.

And if you just want the most seat time, look for programs like Lapland Ice Driving or Ice Drive Sweden. The northern wilds of Sweden and Finland are full of frozen lakes where clubs plow out full race courses, sometimes repeating Formula One circuits. If you have the funds, you can rent any manner of sports car and run it sideways all day long on proper studded tires.

Whatever it costs and whatever you have to do to make it happen, ice racing is well worth the effort. I’ve been lucky to drive a long list of amazing cars in amazing places, but nothing comes close to the joy of wheeling my 20-year-old Subaru around a frozen lake.

Zero grip, maximum fun: A practical guide to getting into amateur ice racing Read More »

password-managers’-promise-that-they-can’t-see-your-vaults-isn’t-always-true

Password managers’ promise that they can’t see your vaults isn’t always true


ZERO KNOWLEDGE, ZERO CLUE

Contrary to what password managers say, a server compromise can mean game over.

Over the past 15 years, password managers have grown from a niche tool for the technology savvy into an indispensable security tool for the masses, with an estimated 94 million US adults—or roughly 36 percent of them—having adopted them. They store not only passwords for pension, financial, and email accounts, but also cryptocurrency credentials, payment card numbers, and other sensitive data.

All eight of the top password managers have adopted the term “zero knowledge” to describe the complex encryption system they use to protect the data vaults that users store on their servers. The definitions vary slightly from vendor to vendor, but they generally boil down to one bold assurance: that there is no way for malicious insiders or hackers who manage to compromise the cloud infrastructure to steal vaults or data stored in them. These promises make sense, given previous breaches of LastPass and the reasonable expectation that state-level hackers have both the motive and capability to obtain password vaults belonging to high-value targets.

A bold assurance debunked

Typical of these claims are those made by Bitwarden, Dashlane, and LastPass, which together are used by roughly 60 million people. Bitwarden, for example, says that “not even the team at Bitwarden can read your data (even if we wanted to).” Dashlane, meanwhile, says that without a user’s master password, “malicious actors can’t steal the information, even if Dashlane’s servers are compromised.” LastPass says that no one can access the “data stored in your LastPass vault, except you (not even LastPass).”

New research shows that these claims aren’t true in all cases, particularly when account recovery is in place or password managers are set to share vaults or organize users into groups. The researchers reverse-engineered or closely analyzed Bitwarden, Dashlane, and LastPass and identified ways that someone with control over the server—either administrative or the result of a compromise—can, in fact, steal data and, in some cases, entire vaults. The researchers also devised other attacks that can weaken the encryption to the point that ciphertext can be converted to plaintext.

“The vulnerabilities that we describe are numerous but mostly not deep in a technical sense,” the researchers from ETH Zurich and USI Lugano wrote. “Yet they were apparently not found before, despite more than a decade of academic research on password managers and the existence of multiple audits of the three products we studied. This motivates further work, both in theory and in practice.”

The researchers said in interviews that multiple other password managers they didn’t analyze as closely likely suffer from the same flaws. The only one they were at liberty to name was 1Password. Almost all the password managers, they added, are vulnerable to the attacks only when certain features are enabled.

The most severe of the attacks—targeting Bitwarden and LastPass—allow an insider or attacker to read or write to the contents of entire vaults. In some cases, they exploit weaknesses in the key escrow mechanisms that allow users to regain access to their accounts when they lose their master password. Others exploit weaknesses in support for legacy versions of the password manager. A vault-theft attack against Dashlane allowed reading but not modification of vault items when they were shared with other users.

Staging the old key switcheroo

One of the attacks targeting Bitwarden key escrow is performed during the enrollment of a new member of a family or organization. After a Bitwarden group admin invites the new member, the invitee’s client accesses a server and obtains a group symmetric key and the group’s public key. The client then encrypts the member’s own symmetric key with the group public key and sends the resulting ciphertext to the server. That ciphertext is what’s used to recover the new user’s account. None of this data is integrity-checked when it’s sent from the server to the client during an account enrollment session.

The adversary can exploit this weakness by replacing the group public key with one from a keypair created by the adversary. Since the adversary knows the corresponding private key, it can use it to decrypt the ciphertext and then perform an account recovery on behalf of the targeted user. The result is that the adversary can read and modify the entire contents of the member vault as soon as an invitee accepts an invitation from a family or organization.

Normally, this attack would work only when a group admin has enabled autorecovery mode, which, unlike a manual option, doesn’t require interaction from the member. But since the group policy the client downloads during the enrollment process isn’t integrity-checked, adversaries can set recovery to auto, even if an admin had chosen a manual mode that requires user interaction.

Compounding the severity, the adversary in this attack also obtains a group symmetric key for all other groups the member belongs to since such keys are known to all group members. If any of the additional groups use account recovery, the adversary can obtain the members’ vaults for them, too. “This process can be repeated in a worm-like fashion, infecting all organizations that have key recovery enabled and have overlapping members,” the research paper explained.
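The core failure in these enrollment attacks is that the client encrypts its vault key to whatever public key the server supplies, with no way to verify who holds the matching private key. Here is a minimal Python sketch of that pattern, using the cryptography package; the function names and flow are illustrative assumptions, not Bitwarden’s actual implementation.

```python
# Minimal sketch (not Bitwarden's actual code) of the unauthenticated-key
# problem: the client escrows its vault key to whatever public key the server
# hands it. Names such as server_provided_org_public_key_pem are hypothetical.
import os
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# A malicious server generates its own keypair...
adv_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

def server_provided_org_public_key_pem() -> bytes:
    # ...and serves its public key where the real organization key should be.
    # Nothing ties this blob to the organization, so the client can't tell.
    return adv_key.public_key().public_bytes(
        serialization.Encoding.PEM, serialization.PublicFormat.SubjectPublicKeyInfo)

def client_enrolls_in_account_recovery() -> bytes:
    user_key = os.urandom(32)  # stand-in for the member's vault (user) key
    pk = serialization.load_pem_public_key(server_provided_org_public_key_pem())
    # Recovery ciphertext: the user key encrypted to the (unverified) org key.
    return pk.encrypt(user_key, padding.OAEP(
        mgf=padding.MGF1(hashes.SHA256()), algorithm=hashes.SHA256(), label=None))

crec = client_enrolls_in_account_recovery()
# Because the server chose the keypair, it can open the escrowed vault key.
stolen_user_key = adv_key.decrypt(crec, padding.OAEP(
    mgf=padding.MGF1(hashes.SHA256()), algorithm=hashes.SHA256(), label=None))
print("server recovered the vault key:", stolen_user_key.hex())
```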

A second attack targeting Bitwarden account recovery can be performed when a user rotates vault keys, an option Bitwarden recommends if a user believes their master password has been compromised. When account recovery is on (either manually or automatically), the user client regenerates the recovery ciphertext, which, as described earlier, involves encrypting the new user key with the organization public key. The researchers denote the group public key as pkorg, the public key supplied by the adversary as pkadvorg, the recovery ciphertext as crec, and the user symmetric key as k.

The paper explained:

The key point here is that pkorg is not retrieved from the user’s vault; rather the client performs a sync operation with the server to obtain it. Crucially, the organization data provided by this sync operation is not authenticated in any way. This thus provides the adversary with another opportunity to obtain a victim’s user key, by supplying a new public key pkadvorg, for which they know the skadvorg and setting the account recovery enrollment to true. The client will then send an account recovery ciphertext crec containing the new user key, which the adversary can decrypt to obtain k′.

The third attack on the Bitwarden account recovery allows an adversary to recover a user’s master key. It abuses key connector, a feature primarily used by enterprise customers.

More ways to pilfer vaults

The attack allowing theft of LastPass vaults also targets key escrow, specifically in the Teams and Teams 5 versions, when a member’s master key is reset by a privileged user known as a superadmin. The next time the member logs in through the LastPass browser extension, their client will retrieve an RSA keypair assigned to each superadmin in the organization, encrypt their new key with each one, and send the resulting ciphertext to each superadmin.

Because LastPass also fails to authenticate the superadmin keys, an adversary can once again replace the superadmin public key (pkadm) with their own public key (pkadvadm).

“In theory, only users in teams where password reset is enabled and who are selected for reset should be affected by this vulnerability,” the researchers wrote. “In practice, however, LastPass clients query the server at each login and fetch a list of admin keys. They then send the account recovery ciphertexts independently of enrollment status.” The attack, however, requires the user to log in to LastPass with the browser extension, not the standalone client app.

Several attacks allow reading and modification of shared vaults, which allow a user to share selected items with one or more other users. When Dashlane users share an item, their client apps sample a fresh symmetric key, which either directly encrypts the shared item or, when sharing with a group, encrypts group keys, which in turn encrypt the shared item. In either case, the RSA keypair(s)—belonging to either the shared user or group—isn’t authenticated. The fresh symmetric key is then encrypted with the corresponding public key(s).

An adversary can supply their own keypair, causing the sharer’s client to encrypt the shared symmetric key to the adversary’s public key in the ciphertext sent to the recipients. The adversary then decrypts that ciphertext with the corresponding secret key to recover the shared symmetric key. With that, the adversary can read and modify all shared items. When sharing is used in either Bitwarden or LastPass, similar attacks are possible and lead to the same consequence.

Another avenue for attackers or adversaries with control of a server is to target the backward compatibility that all three password managers provide to support older, less-secure versions. Despite incremental changes designed to harden the apps against the very attacks described in the paper, all three password managers continue to support the versions without these improvements. This backward compatibility is a deliberate decision intended to prevent users who haven’t upgraded from losing access to their vaults.

The severity of these attacks is lower than that of the previous ones described, with the exception of one, which is possible against Bitwarden. Older versions of the password manager used a single symmetric key to encrypt and decrypt the user key from the server and items inside vaults. This design allowed for the possibility that an adversary could tamper with the contents. To add integrity checks, newer versions provide authenticated encryption by augmenting the symmetric key with an HMAC hash function.
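That HMAC augmentation is, in broad strokes, the standard encrypt-then-MAC construction. Below is a hedged, generic sketch of such a scheme, assuming AES-256-CBC plus HMAC-SHA256 over the IV and ciphertext; Bitwarden’s real key hierarchy, formats, and encodings differ.

```python
# Generic encrypt-then-MAC sketch: AES-256-CBC for confidentiality plus
# HMAC-SHA256 over IV||ciphertext for integrity. Illustrative only.
import os, hmac, hashlib
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.primitives import padding

def encrypt(enc_key: bytes, mac_key: bytes, plaintext: bytes) -> bytes:
    iv = os.urandom(16)
    padder = padding.PKCS7(128).padder()
    padded = padder.update(plaintext) + padder.finalize()
    enc = Cipher(algorithms.AES(enc_key), modes.CBC(iv)).encryptor()
    ct = enc.update(padded) + enc.finalize()
    tag = hmac.new(mac_key, iv + ct, hashlib.sha256).digest()
    return iv + ct + tag

def decrypt(enc_key: bytes, mac_key: bytes, blob: bytes) -> bytes:
    iv, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    # Verify the MAC *before* touching the ciphertext; a tampering server
    # (or a padding-oracle attacker) gets no further than this check.
    if not hmac.compare_digest(tag, hmac.new(mac_key, iv + ct, hashlib.sha256).digest()):
        raise ValueError("ciphertext failed integrity check")
    dec = Cipher(algorithms.AES(enc_key), modes.CBC(iv)).decryptor()
    padded = dec.update(ct) + dec.finalize()
    unpadder = padding.PKCS7(128).unpadder()
    return unpadder.update(padded) + unpadder.finalize()

enc_key, mac_key = os.urandom(32), os.urandom(32)
blob = encrypt(enc_key, mac_key, b"hunter2")
assert decrypt(enc_key, mac_key, blob) == b"hunter2"
```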

To protect customers using older app versions, Bitwarden ciphertext has an attribute of either 0 or 1. A 0 designates authenticated encryption, while a 1 supports the older unauthenticated scheme. Older versions also use a key hierarchy that Bitwarden deprecated to harden the app. To support the old hierarchy, newer client versions generate a new RSA keypair for the user if the server doesn’t provide one. The newer version will proceed to encrypt the secret key portion with the master key if no user ciphertext is provided by the server.

This design opens Bitwarden to several attacks. The most severe allows reading (but not modification) of all items created after the attack is performed. At a simplified level, it works because the adversary can forge the ciphertext sent by the server and cause the client to use it to derive a user key known to the adversary.

The modification causes the use of CBC (cipher block chaining), a form of encryption that’s vulnerable to several attacks. An adversary can exploit this weaker form using a padding oracle attack and go on to retrieve the plaintext of the vault. Because HMAC protection remains intact, modification isn’t possible.

Surprisingly, Dashlane was vulnerable to a similar padding oracle attack. The researchers devised a complicated attack chain that would allow a malicious server to downgrade a Dashlane user’s vault to CBC and exfiltrate the contents. The researchers estimate that the attack would require about 125 days to decrypt the ciphertext.
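For readers unfamiliar with padding oracles: unauthenticated CBC decryption lets an attacker learn, for each tampered ciphertext, whether the resulting padding was valid, and that single bit, queried repeatedly, reveals plaintext byte by byte. The sketch below illustrates the principle only; the oracle interface and keys are hypothetical, and it is not the researchers’ actual attack chain against Dashlane.

```python
# Generic illustration of what a CBC padding oracle leaks.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.primitives import padding

key, iv = os.urandom(32), os.urandom(16)

def cbc_encrypt(pt: bytes) -> bytes:
    p = padding.PKCS7(128).padder()
    data = p.update(pt) + p.finalize()
    enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    return enc.update(data) + enc.finalize()

def padding_oracle(ct: bytes) -> bool:
    # Stand-in for any observable difference (error, timing, behavior) between
    # "padding valid" and "padding invalid" after unauthenticated decryption.
    dec = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
    data = dec.update(ct) + dec.finalize()
    u = padding.PKCS7(128).unpadder()
    try:
        u.update(data) + u.finalize()
        return True
    except ValueError:
        return False

ct = cbc_encrypt(b"correct horse battery staple!!")
prev, last = ct[-32:-16], ct[-16:]

# Recover the final byte of the padded plaintext: tweak the last byte of the
# preceding ciphertext block until the last block decrypts to valid 0x01 padding.
# A full attack repeats this for every byte of every block.
for guess in range(256):
    tweaked = bytearray(prev)
    tweaked[-1] ^= guess ^ 0x01
    if padding_oracle(ct[:-32] + bytes(tweaked) + last):
        tweaked[-2] ^= 0xFF  # rule out accidental 0x02 0x02 (etc.) padding
        if padding_oracle(ct[:-32] + bytes(tweaked) + last):
            print("final byte of padded plaintext:", hex(guess))
            break
```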

Still other attacks against all three password managers allow adversaries to greatly reduce the selected number of hashing iterations—in the case of Bitwarden and LastPass, from a default of 600,000 to 2. Repeated hashing of master passwords makes them significantly harder to crack in the event of a server breach that allows theft of the hash. For all three password managers, the server sends the specified iteration count to the client, with no mechanism to ensure it meets the default number. The result is that the adversary receives a 300,000-fold decrease in the time and resources required to crack the hash and obtain the user’s master password.
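In code, the downgrade boils down to the client trusting a server-supplied work factor. The sketch below shows the vulnerable pattern and the obvious client-side floor; the function names are hypothetical, and the 600,000 figure is the default cited above.

```python
# Sketch of the KDF-downgrade problem: the client derives its master key with
# whatever iteration count the server sends.
import hashlib

DEFAULT_ITERATIONS = 600_000

def derive_master_key(password: str, salt: bytes, server_iterations: int) -> bytes:
    # Vulnerable pattern: trusting server_iterations as-is. A malicious server
    # that sends 2 makes the resulting hash ~300,000x cheaper to brute-force.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, server_iterations)

def derive_master_key_safely(password: str, salt: bytes, server_iterations: int) -> bytes:
    # Hypothetical fix: enforce a client-side floor on the work factor.
    if server_iterations < DEFAULT_ITERATIONS:
        raise ValueError(f"suspicious KDF iteration count: {server_iterations}")
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, server_iterations)
```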

Attacking malleability

Three of the attacks—one against Bitwarden and two against LastPass—target what the researchers call “item-level encryption” or “vault malleability.” Instead of encrypting a vault in a single, monolithic blob, password managers often encrypt individual items, and sometimes individual fields within an item. These items and fields are all encrypted with the same key. The attacks exploit this design to steal passwords from select vault items.

An adversary mounts an attack by replacing the ciphertext in the URL field, which stores the link where a login occurs, with the ciphertext for the password. To enhance usability, password managers provide an icon that helps visually recognize the site. To do this, the client decrypts the URL field and sends it to the server. The server then fetches the corresponding icon. Because there’s no mechanism to prevent the swapping of item fields, the client decrypts the password instead of the URL and sends it to the server.

“That wouldn’t happen if you had different keys for different fields or if you encrypted the entire collection in one pass,” Kenny Paterson, one of the paper co-authors, said. “A crypto audit should spot it, but only if you’re thinking about malicious servers. The server is deviating from expected behavior.”
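One standard way to remove this kind of malleability, alongside the per-field keys or single-pass encryption Paterson mentions, is to bind each field’s name to its ciphertext as associated data in an AEAD mode such as AES-GCM. The following is a generic sketch of that idea, not any vendor’s actual scheme.

```python
# Sketch: per-field ciphertexts under one key are swappable unless the field
# name is cryptographically bound to the ciphertext (here, as AES-GCM AAD).
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.exceptions import InvalidTag

key = AESGCM.generate_key(bit_length=256)
aead = AESGCM(key)

def seal(field_name: str, value: bytes) -> bytes:
    nonce = os.urandom(12)
    # The field name rides along as associated data: authenticated, not encrypted.
    return nonce + aead.encrypt(nonce, value, field_name.encode())

def open_field(field_name: str, blob: bytes) -> bytes:
    return aead.decrypt(blob[:12], blob[12:], field_name.encode())

item = {"url": seal("url", b"https://example.com"),
        "password": seal("password", b"hunter2")}

# A malicious server swaps the ciphertexts, hoping the client will decrypt the
# password where it expects a URL (for example, to fetch a site icon) and leak it.
item["url"], item["password"] = item["password"], item["url"]

try:
    open_field("url", item["url"])  # decrypting under the wrong field context...
except InvalidTag:
    print("swap detected: ciphertext was not sealed as a 'url' field")  # ...fails
```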

The following table summarizes the causes and consequences of the 25 attacks they devised:

Credit: Scarlata et al.

Credit: Scarlata et al.

A psychological blind spot

The researchers acknowledge that the full compromise of a password manager server is a high bar. But they defend the threat model.

“Attacks on the provider server infrastructure can be prevented by carefully designed operational security measures, but it is well within the bounds of reason to assume that these services are targeted by sophisticated nation-state-level adversaries, for example via software supply-chain attacks or spearphishing,” they wrote. “Moreover, some of the service providers have a history of being breached—for example, LastPass suffered breaches in 2015 and 2022, and another serious security incident in 2021.”

They went on to write: “While none of the breaches we are aware of involved reprogramming the server to make it undertake malicious actions, this goes just one step beyond attacks on password manager service providers that have been documented. Active attacks more broadly have been documented in the wild.”

Part of the challenge of designing password managers, or any end-to-end encryption service, is the tendency for developers who also build the server to write the client with a false sense of security.

“It’s a psychological problem when you’re writing both client and server software,” Paterson explained. “You should write the client super defensively, but if you’re also writing the server, well of course your server isn’t going to send malformed packets or bad info. Why would you do that?”

Marketing gimmickry or not, “zero-knowledge” is here to stay

In many cases, engineers have already fixed the weaknesses described after receiving private reports from the researchers. Engineers are still patching other vulnerabilities. In statements, Bitwarden, LastPass, and Dashlane representatives noted the high bar of the threat model, despite statements on their websites that assure customers their wares will withstand it. Along with 1Password representatives, they also noted that their products regularly receive stringent security audits and undergo red-team exercises.

A Bitwarden representative wrote:

Bitwarden continually evaluates and improves its software through internal review, third-party assessments, and external research. The ETH Zurich paper analyzes a threat model in which the server itself behaves maliciously and intentionally attempts to manipulate key material and configuration values. That model assumes full server compromise and adversarial behavior beyond standard operating assumptions for cloud services.

LastPass said, “We take a multi‑layered, ongoing approach to security assurance that combines independent oversight, continuous monitoring, and collaboration with the research community. Our cloud security testing is inclusive of the scenarios referenced in the malicious-server threat model outlined in the research.”


A statement from Dashlane read, “Dashlane conducts rigorous internal and external testing to ensure the security of our product. When issues arise, we work quickly to mitigate any possible risk and ensure customers have clarity on the problem, our solution, and any required actions.”

1Password released a statement that read in part:

Our security team reviewed the paper in depth and found no new attack vectors beyond those already documented in our publicly available Security Design White Paper.

We are committed to continually strengthening our security architecture and evaluating it against advanced threat models, including malicious-server scenarios like those described in the research, and evolving it over time to maintain the protections our users rely on.

1Password also says that the zero-knowledge encryption it provides “means that no one but you—not even the company that’s storing the data—can access and decrypt your data. This protects your information even if the server where it’s held is ever breached.” In the company’s white paper linked above, 1Password seems to allow for this possibility when it says:

At present there’s no practical method for a user to verify the public key they’re encrypting data to belongs to their intended recipient. As a consequence it would be possible for a malicious or compromised 1Password server to provide dishonest public keys to the user, and run a successful attack. Under such an attack, it would be possible for the 1Password server to acquire vault encryption keys with little ability for users to detect or prevent it.
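What verifying a public key would look like in practice is the fingerprint-comparison ritual other end-to-end systems already use, such as Signal's safety numbers: both sides compute a short digest of the key and compare it over a channel the server doesn't control. A minimal sketch of the general idea, not a feature of any password manager discussed here:

```python
# Minimal sketch of out-of-band key verification: both parties compute a short
# fingerprint of the public key and compare it over a channel the server doesn't
# control. This is the general technique, not a feature of the products above.
import hashlib

def fingerprint(public_key_bytes: bytes) -> str:
    digest = hashlib.sha256(public_key_bytes).hexdigest()
    # Group the first 32 hex characters so the fingerprint is easy to read aloud.
    return " ".join(digest[i:i + 4] for i in range(0, 32, 4))

# The recipient reads this aloud or shows it as a QR code; the sender confirms it
# matches the key the server handed them before encrypting anything to it.
print(fingerprint(b"-----BEGIN PUBLIC KEY-----example key bytes for illustration"))
```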

1Password’s statement also includes assurances that the service routinely undergoes rigorous security testing.

All four companies defended their use of the term “zero knowledge.” As used in this context, the term can be confused with zero-knowledge proofs, a completely unrelated cryptographic method that allows one party to prove to another party that they know a piece of information without revealing anything about the information itself. An example is a proof that shows a system can determine if someone is over 18 without having any knowledge of the precise birthdate.
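For contrast, here is what an actual zero-knowledge proof looks like in miniature: a toy Schnorr-style proof that you know a secret exponent, with deliberately tiny parameters chosen for readability (real protocols use large, carefully vetted groups):

```python
# Toy Schnorr-style zero-knowledge proof of knowledge of x such that y = g^x mod p.
# Parameters are deliberately tiny for readability; real protocols use large,
# carefully vetted groups and standardized encodings.
import hashlib
import secrets

p, q, g = 23, 11, 2          # g generates a subgroup of prime order q in Z_p*

x = secrets.randbelow(q)     # the prover's secret
y = pow(g, x, p)             # the public value everyone can see

# Prover: commit to a random value, derive the challenge (Fiat-Shamir), respond.
r = secrets.randbelow(q)
t = pow(g, r, p)
c = int.from_bytes(hashlib.sha256(f"{g}:{y}:{t}".encode()).digest(), "big") % q
s = (r + c * x) % q

# Verifier: checks the relation without learning anything about x itself.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("proof verified without revealing the secret")
```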

The adulterated zero-knowledge term used by password managers appears to have come into being in 2007, when a company called SpiderOak used it to describe its cloud infrastructure for securely sharing sensitive data. Interestingly, SpiderOak formally retired the term a decade later after receiving user pushback.

“Sadly, it is just marketing hype, much like ‘military-grade encryption,’” Matteo Scarlata, lead author of the paper, said. “Zero-knowledge seems to mean different things to different people (e.g., LastPass told us that they won’t adopt a malicious server threat model internally). Much unlike ‘end-to-end encryption,’ ‘zero-knowledge encryption’ is an elusive goal, so it’s impossible to tell if a company is doing it right.”

Photo of Dan Goodin

Dan Goodin is Senior Security Editor at Ars Technica, where he oversees coverage of malware, computer espionage, botnets, hardware hacking, encryption, and passwords. In his spare time, he enjoys gardening, cooking, and following the independent music scene. Dan is based in San Francisco. Follow him on Mastodon and on Bluesky. Contact him on Signal at DanArs.82.


sideways-on-the-ice,-in-a-supercar:-stability-control-is-getting-very-good

Sideways on the ice, in a supercar: Stability control is getting very good


To test stability control, it helps to have a wide-open space with very low grip.

A blue McLaren Artura drifting on a frozen lake

You can tell this photo was taken a day or two before we were there because the sun came out. Credit: McLaren

SAARISELKÄ, FINLAND—If you’re expecting it, the feeling in the pit of your stomach when the rear of your car breaks traction and begins to slide is rather pleasant. It’s the same exhilaration we get from roller coasters, but when you’re in the driver’s seat, you’re in charge of the ride.

When you’re not expecting it, though, there’s anxiety instead of excitement and, should the slide end with a crunch, a lot more negative emotions, too.

Thankfully, fewer and fewer drivers will have to experience that kind of scare thanks to the proliferation and sophistication of modern electronic stability and traction control systems. For more than 30 years, these electronic safety nets have grown steadily more capable; they became mandatory in the early 2010s and have prevented countless crashes along the way.

The computers keep a watchful eye on things like lateral acceleration and wheel spin, then cut engine power and individually brake each wheel so that the car goes where the driver wants it rather than sideways or backward into whatever solid object lies along the new path of motion.
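As a rough illustration of that logic, here is a hypothetical sketch of the decision such a system makes many times per second. The bicycle-model math, the threshold, and the choice of which wheel to brake are assumptions for the sake of the example, not values from any production controller:

```python
# Hypothetical sketch of one stability-control decision. The simple bicycle model,
# the yaw-error threshold, and the wheel choices are illustrative assumptions,
# not values or logic from any production system.
def stability_control_step(steer_angle_rad, speed_ms, yaw_rate_meas,
                           wheelbase_m=2.6, yaw_error_limit=0.1):
    """Compare the yaw rate the driver is asking for with what the car is doing."""
    # Desired yaw rate implied by the steering input (kinematic bicycle model).
    yaw_rate_desired = speed_ms * steer_angle_rad / wheelbase_m
    error = yaw_rate_meas - yaw_rate_desired

    actions = []
    if abs(error) > yaw_error_limit:
        actions.append("cut engine torque")
        if error > 0:
            # Rotating more than asked (oversteer): brake the outside front wheel.
            actions.append("brake outside front wheel")
        else:
            # Rotating less than asked (understeer): brake the inside rear wheel.
            actions.append("brake inside rear wheel")
    return actions

# Gentle steering input at roughly 45 mph while the car rotates far faster than requested:
print(stability_control_step(steer_angle_rad=0.05, speed_ms=20, yaw_rate_meas=0.9))
```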

Obviously, the quickest way to find out whether this all works is to turn it off. And then find a slippery road, or just drive like an oaf. Yet even when automakers let journalists loose on racetracks, they invariably require that we keep some of the electronic safety net turned on. Even on track, you can hit things that will crumple a car—or worse—and with modern tire technology being what it is, the speeds involved when cars do let go tend to be quite high, particularly if it’s dry.

An orange McLaren Artura, seen from behind on a frozen lake. The rear is encrusted with snow.

The Artura is probably my favorite McLaren, as it’s smaller and more versatile than the more expensive, more powerful machines in the range. Credit: Jonathan Gitlin

There are few environments that are more conducive to exploring the limits and capabilities of electronic chassis control. Ideally, you want a lot of wide-open space free of wildlife and people and a smooth, low-grip surface. A giant sand dune would work. Or a frozen lake. Which is why you can sometimes find automotive engineers hanging out in these remote, often extreme locations, braving the desert’s heat or an Arctic chill as they work on a prototype or fine-tune the next model.

And it’s no secret that sliding a car on the ice is a lot of fun. So it’s not surprising that a cottage tourism industry exists that—for a suitable fee—will bring you north of the Arctic Circle, where you can work on your car control and get some insight into just how hard those electronics are capable of working.

That explains why I left an extremely cold Washington, DC, to travel to an even colder Saariselkä in Finland, where McLaren operates its Arctic Experience program on a frozen lake in nearby Ivalo. The company does some development work here, though more of it happens across the border in Sweden. But for a few weeks each winter, it welcomes customers to its minimalist lodge to work on their car control. And earlier this month, Ars was among a group of journalists who got an abbreviated version of the experience.

Our car for the day was a Ventura Orange McLaren Artura, the brand’s plug-in hybrid supercar, wearing Pirelli’s Sottozero winter tires, each augmented by a few hundred metal spikes. Its total power and torque output is 671 hp (500 kW) and 531 lb-ft (720 Nm) combined from a 3.0 L twin-turbo V6 that generates 577 hp (430 kW) and 431 lb-ft (584 Nm), plus an axial flux electric motor that contributes an additional 94 hp (70 kW) and 166 lb-ft (225 Nm). All of that is sent to the rear wheels via an eight-speed dual-clutch transmission.

A McLaren Artura winter tire fitted with studs

Winter tires work well on snow, but for ice, you really need studs. Credit: Jonathan Gitlin

Where most hybrids use the electric motor to boost efficiency, McLaren mostly uses it to boost performance, providing an immediate shove and filling gaps in the torque band where necessary. In electric-only mode, the Artura will run on the motor alone, right up to that mode’s 81 mph (130 km/h) speed limit. Being the sort of curious nerd I am, I took the opportunity to try all the different modes.

Once I got control of my stomach, that is.

Are you sure you should drink that?

Our first exercise was ironically the hardest: driving sideways around a plain old circle. A couple of these had been scribed into the ice—which freezes from November until April and was 28 inches (70 cm) thick, we learned—along with more than a dozen other, more involved courses. Even under the best of conditions, the Sun spends barely six hours a day on its shallow curve from horizon to horizon at this time of year. On the day of our visit, the horizon was an indistinct thing as heavy gray skies blended with the snow-covered ice.

The lack of a visual reference, mixed with 15 minutes of steady lateral G-forces, turned out to be unkind to my vestibular system, and about 10 minutes later, I found myself in shirtsleeves at minus-11˚F (minus-23˚C), saying goodbye to a cup of Earl Grey tea I’d perhaps unwisely drunk a little earlier. At least I remembered to face downwind—given the sideways gale, it could have ended worse.

A number of circuits carved into the surface of a frozen lake

These are just some of the circuits that McLaren has carved into the ice in Ivalo. Beware of the innocent-looking circles—they’re deceptively hard and may turn your stomach. Credit: McLaren

Fortified with an anti-emetic and some extremely fresh air, I returned to the ice and can happily report that as long as you slide both left and right, you’re unlikely to get nauseous.

Getting an Artura sideways on a frozen lake is not especially complicated. With the powertrain set to Track, which prioritizes performance and keeps the V6 running the whole time, and with stability and traction control off, you apply enough power to break traction at the rear. Or a dab of brake could do the job, too, followed by some power. You steer more with your right foot than your hands, adding or subtracting power to rein in or amplify the slip angle. Your eyes are crucial to the process; if you look through the corner down the track, that’s probably where you’ll end up. Fixate on the next apex and you may quickly find yourself off-course.

Most of the mid-engined Artura’s 3,303 lbs (1,498 kg) live between its axles, and it’s a relatively easy car to catch once it begins to slide, with plenty of travel for the well-mapped throttle pedal.

As it turns out, that holds true even when you’re using only the electric motor. Its 166 lb-ft is more than enough to get the rear wheels spinning on the ice, but with just 94 hp, there isn’t really enough power to get the car properly sideways. So you can easily control a lazy slide around one of the handling courses, in near silence, to boot. With the electronic aids back on, things got much less dramatic; even with my foot to the floor, the Artura measured out minute amounts of power, keeping the car very much pointed where I steered it rather than requiring any opposite lock.

A person stands next to a McLaren Artura on a frozen lake

It feels like the edge of the world out here. Credit: McLaren

Turn it on, turn it off

Back in track mode, with all 671 hp to play with, there was much more power than necessary to spin. But with the safety net re-enabled, driving around the handling course was barely any more dramatic than with a fraction of the power. The car’s electronic chassis control algorithms would only send as much power to the rear wheels as they could deploy, no matter how much throttle I applied. As each wheel lost grip and began to spin, its brake would intervene. And we went around the course, slowly but safely. As a demonstration of the effectiveness of modern electronic safety systems, it was very reassuring.

As I mentioned earlier, even when journalists are let loose in supercars on track, it’s with some degree of electronic assist enabled. Sportier cars often offer a halfway house between everything buttoned down and all the aids turned off. Here, the idea is to loosen the safety net and allow the car to move around, but only a little. Instead of just using the electronics to make things safe, they also flatter the driver.

In McLaren’s case, that mode is called Variable Drift Control, which is a rather accurate name: you set the maximum slip angle (from 1˚ to 15˚), and the car will not exceed it. A slug of power will get the rear wheels spinning and the rear sliding, but only up to the set angle, at which point the brakes and powertrain intervene as necessary.

It’s very flattering, holding what feels like a lurid slide between turns with ease, without any concern that a lapse in concentration might leave the car requiring recovery after beaching on a few inches of snow. Even when your right foot is pinned to the firewall, the silicon brains running the show apply only as much torque as necessary, with the little icon flashing on the dash letting you know it’s intervening.
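Conceptually, the loop is a slip-angle limiter. The sketch below is hypothetical; the estimator, gain, and numbers are illustrative assumptions, since McLaren doesn't publish how Variable Drift Control is actually implemented:

```python
# Hypothetical slip-angle limiter in the spirit of Variable Drift Control.
# The slip estimate, gain, and cap are illustrative assumptions; McLaren's
# actual implementation is not public.
def drift_control_step(slip_angle_deg, driver_throttle, max_slip_deg=12.0, gain=0.15):
    """Scale back the torque request as the measured slip angle exceeds the chosen cap."""
    overshoot = slip_angle_deg - max_slip_deg
    if overshoot <= 0:
        # Within the selected slip angle: the driver gets exactly what they ask for.
        return driver_throttle
    # Past the cap: trim torque (a real car would also dab individual brakes)
    # in proportion to how far beyond the limit the car has slid.
    reduction = min(1.0, gain * overshoot)
    return driver_throttle * (1.0 - reduction)

# Foot flat to the floor (1.0) with the car at 15 degrees against a 12-degree cap:
print(drift_control_step(slip_angle_deg=15.0, driver_throttle=1.0))  # ~0.55
```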

A man seen drifting a McLaren

If you have the space, there’s little more fun than drifting a car on ice. But it’s good to know that electronic stability control and traction control will help you out when you’re not trying to have fun. Credit: McLaren

I can certainly see why OEMs ask that modes like VDC be the spiciest setting we try when they lend us their cars. They’re just permissive enough to break the rear loose and fire off a burst of adrenaline, yet cosseting enough that the ride almost certainly won’t end in tears. Fun though VDC was to play with, it does feel artificial once you get your eye in—particularly compared to the thrill of balancing an Artura on the throttle as you change direction through a series of corners or the satisfaction of catching and recovering a spin before it’s too late.

But outside of a frozen lake, I’ll be content to keep some degree of driver aids running.

Photo of Jonathan M. Gitlin

Jonathan is the Automotive Editor at Ars Technica. He has a BSc and PhD in Pharmacology. In 2014 he decided to indulge his lifelong passion for the car by leaving the National Human Genome Research Institute and launching Ars Technica’s automotive coverage. He lives in Washington, DC.


platforms-bend-over-backward-to-help-dhs-censor-ice-critics,-advocates-say

Platforms bend over backward to help DHS censor ICE critics, advocates say


Pam Bondi and Kristi Noem sued for coercing platforms into censoring ICE posts.

Credit: Aurich Lawson | Getty Images

Pressure is mounting on tech companies to shield users from unlawful government requests that advocates say are making it harder to reliably share information about Immigration and Customs Enforcement (ICE) online.

Alleging that ICE officers are being doxed or otherwise endangered, Trump officials have spent the last year targeting an unknown number of users and platforms with demands to censor content. Early lawsuits show that platforms have caved, even though experts say they could refuse these demands without a court order.

In a lawsuit filed on Wednesday, the Foundation for Individual Rights and Expression (FIRE) accused Attorney General Pam Bondi and Department of Homeland Security Secretary Kristi Noem of coercing tech companies into removing a wide range of content “to control what the public can see, hear, or say about ICE operations.”

It’s the second lawsuit alleging that Bondi and DHS officials are using regulatory power to pressure private platforms to suppress speech protected by the First Amendment. It follows a complaint from the developer of an app called ICEBlock, which Apple removed from the App Store in October. Officials aren’t rushing to resolve that case—last month, they requested more time to respond—so it may remain unclear until March what defense they plan to offer for the takedown demands.

That leaves community members who monitor ICE in a precarious situation, as critical resources could disappear at the department’s request with no warning.

FIRE says people have legitimate reasons to share information about ICE. Some communities focus on helping people avoid dangerous ICE activity, while others aim to hold the government accountable and raise public awareness of how ICE operates. Unless there’s proof of incitement to violence or a true threat, such expression is protected.

Despite the high bar for censoring online speech, lawsuits trace an escalating pattern of DHS targeting websites, app stores, and platforms—many of which have been willing to remove content the government dislikes.

Officials have ordered ICE-monitoring apps to be removed from app stores and even threatened to sanction CNN for simply reporting on the existence of one such app. Officials have also demanded that Meta delete at least one Chicago-based Facebook group with 100,000 members and made multiple unsuccessful attempts to unmask anonymous users behind other Facebook groups. Even encrypted apps like Signal don’t feel safe from officials’ seeming overreach. FBI Director Kash Patel recently said he has opened an investigation into Signal chats used by Minnesota residents to track ICE activity, NBC News reported.

As DHS censorship threats increase, platforms have done little to shield users, advocates say. Not only have they sometimes failed to reject unlawful orders that simply provided “a bare mention of ‘officer safety/doxing’” as justification, but in one case, Google complied with a subpoena that left a critical section blank, the Electronic Frontier Foundation (EFF) reported.

For users, it’s increasingly difficult to trust that platforms won’t betray their own policies when faced with government intimidation, advocates say. Sometimes platforms notify users before complying with government requests, giving users a chance to challenge potentially unconstitutional demands. But in other cases, users learn about the requests only as platforms comply with them—even when those platforms have promised that would never happen.

Government emails with platforms may be exposed

Platforms could face backlash from users if lawsuits expose their communications with the government, a possibility in the coming months. Last fall, the EFF sued after the DOJ, DHS, ICE, and Customs and Border Protection failed to respond to Freedom of Information Act requests seeking emails between the government and platforms about takedown demands. Other lawsuits may surface emails in discovery. In the coming weeks, a judge will set a schedule for EFF’s litigation.

“The nature and content of the Defendants’ communications with these technology companies” is “critical for determining whether they crossed the line from governmental cajoling to unconstitutional coercion,” EFF’s complaint said.

EFF Senior Staff Attorney Mario Trujillo told Ars that the EFF is confident it can win the fight to expose government demands, but like most FOIA lawsuits, the case is expected to move slowly. That’s unfortunate, he said, because ICE activity is escalating, and delays in addressing these concerns could irreparably harm speech at a pivotal moment.

Like users, platforms are seemingly victims, too, FIRE senior attorney Colin McDonnell told Ars.

They’ve been forced to override their own editorial judgment while navigating implicit threats from the government, he said.

“If Attorney General Bondi demands that they remove speech, the platform is going to feel like they have to comply; they don’t have a choice,” McDonnell said.

But platforms do have a choice and could be doing more to protect users, the EFF has said. Platforms could even serve as a first line of defense, requiring officials to get a court order before complying with any requests.

Platforms may now have good reason to push back against government requests—and to give users the tools to do the same. Trujillo noted that while courts have been slow to address the ICEBlock removal and FOIA lawsuits, the government has quickly withdrawn requests to unmask Facebook users soon after litigation began.

“That’s like an acknowledgement that the Trump administration, when actually challenged in court, wasn’t even willing to defend itself,” Trujillo said.

Platforms could view that as evidence that government pressure only works when platforms fail to put up a bare-minimum fight, Trujillo said.

Platforms “bend over backward” to appease DHS

An open letter from the EFF and the American Civil Liberties Union (ACLU) documented two instances of tech companies complying with government demands without first notifying users.

The letter called out Meta for unmasking at least one user without prior notice, which the groups noted “potentially” occurred due to a “technical glitch.”

More troubling than buggy notifications, however, is the possibility that platforms may be routinely delaying notice until it’s too late.

After Google “received an ICE subpoena for user data and fulfilled it on the same day that it notified the user,” the company admitted that “sometimes when Google misses its response deadline, it complies with the subpoena and provides notice to a user at the same time to minimize the delay for an overdue production,” the letter said.

“This is a worrying admission that violates [Google’s] clear promise to users, especially because there is no legal consequence to missing the government’s response deadline,” the letter said.

Platforms face no sanctions for refusing to comply with government demands that have not been court-ordered, the letter noted. That’s why the EFF and ACLU have urged companies to use their “immense resources” to shield users who may not be able to drop everything and fight unconstitutional data requests.

In their letter, the groups asked companies to insist on court intervention before complying with a DHS subpoena. They should also resist DHS “gag orders” that ask platforms to hand over data without notifying users.

Instead, they should commit to giving users “as much notice as possible when they are the target of a subpoena,” as well as a copy of the subpoena. Ideally, platforms would also link users to legal aid resources and take up legal fights on behalf of vulnerable users, advocates suggested.

That’s not what’s happening so far. Trujillo told Ars that it feels like “companies have bent over backward to appease the Trump administration.”

The tide could turn this year if courts side with app makers behind crowdsourcing apps like ICEBlock and Eyes Up, who are suing to end the alleged government coercion. FIRE’s McDonnell, who represents the creator of Eyes Up, told Ars that platforms may feel more comfortable exercising their own editorial judgment moving forward if a court declares they were coerced into removing content.

DHS can’t use doxing to dodge First Amendment

FIRE’s lawsuit accuses Bondi and Noem of coercing Meta to disable a Facebook group with 100,000 members called “ICE Sightings–Chicagoland.”

The popularity of that group surged during “Operation Midway Blitz,” when hundreds of agents arrested more than 4,500 people over weeks of raids that used tear gas in neighborhoods and caused car crashes and other violence. Arrests included US citizens and immigrants of lawful status, which “gave Chicagoans reason to fear being injured or arrested due to their proximity to ICE raids, no matter their immigration status,” FIRE’s complaint said.

Kassandra Rosado, a lifelong Chicagoan and US citizen of Mexican descent, started the Facebook group and served as admin, moderating content with other volunteers. She prohibited “hate speech or bullying” and “instructed group members not to post anything threatening, hateful, or that promoted violence or illegal conduct.”

Facebook only ever flagged five posts that supposedly violated community guidelines, but in warnings, the company reassured Rosado that “groups aren’t penalized when members or visitors break the rules without admin approval.”

Rosado had no reason to suspect that her group was in danger of removal. When Facebook disabled her group, it told Rosado the group violated community standards “multiple times.” But her complaint noted that, confusingly, “Facebook policies don’t provide for disabling groups if a few members post ostensibly prohibited content; they call for removing groups when the group moderator repeatedly either creates prohibited content or affirmatively ‘approves’ such content.”

Facebook’s decision came after a right-wing influencer, Laura Loomer, tagged Noem and Bondi in a social media post alleging that the group was “getting people killed.” Within two days, Bondi bragged that she had gotten the group disabled while claiming that it “was being used to dox and target [ICE] agents in Chicago.”

McDonnell told Ars it seems clear that Bondi selectively uses the term “doxing” when people post images from ICE arrests. He pointed to “ICE’s own social media accounts,” which share favorable opinions of ICE alongside videos and photos of ICE arrests that Bondi doesn’t consider doxing.

“Rosado’s creation of Facebook groups to send and receive information about where and how ICE carries out its duties in public, to share photographs and videos of ICE carrying out its duties in public, and to exchange opinions about and criticism of ICE’s tactics in carrying out its duties, is speech protected by the First Amendment,” FIRE argued.

The same goes for speech managed by Mark Hodges, a US citizen who resides in Indiana. He created an app called Eyes Up to serve as an archive of ICE videos. Apple removed Eyes Up from the App Store around the same time that it removed ICEBlock.

“It is just videos of what government employees did in public carrying out their duties,” McDonnell said. “It’s nothing even close to threatening or doxing or any of these other theories that the government has used to justify suppressing speech.”

Bondi bragged that she had gotten ICEBlock banned, and FIRE’s complaint confirmed that Hodges’ company received the same notification that ICEBlock’s developer got after Bondi’s victory lap. The notice said that Apple received “information” from “law enforcement” claiming that the apps had violated Apple guidelines against “defamatory, discriminatory, or mean-spirited content.”

Apple did not reach the same conclusion when it independently reviewed Eyes Up prior to government meddling, FIRE’s complaint said. Notably, the app remains available on Google Play, and Rosado now manages a new Facebook group with similar content but somewhat tighter restrictions on who can join. Neither has required urgent intervention from the tech giants or the government.

McDonnell told Ars that it’s harmful for DHS to water down the meaning of doxing when pushing platforms to remove content critical of ICE.

“When most of us hear the word ‘doxing,’ we think of something that’s threatening, posting private information along with home addresses or places of work,” McDonnell said. “And it seems like the government is expanding that definition to encompass just sharing, even if there’s no threats, nothing violent. Just sharing information about what our government is doing.”

Expanding the definition and then using that term to justify suppressing speech is concerning, he said, especially since the First Amendment includes no exception for “doxing,” even if DHS ever were to provide evidence of it.

To suppress speech, officials must show that groups are inciting violence or making true threats. FIRE has alleged that the government has not met “the extraordinary justifications required for a prior restraint” on speech and is instead using vague doxing threats to discriminate against speech based on viewpoint. They’re seeking a permanent injunction barring officials from coercing tech companies into censoring ICE posts.

If plaintiffs win, the censorship threats could subside, and tech companies may feel safe reinstating apps and Facebook groups, advocates told Ars. That could potentially revive archives documenting thousands of ICE incidents and reconnect webs of ICE watchers who lost access to valued feeds.

Until courts possibly end threats of censorship, the most cautious community members are moving local ICE-watch efforts to group chats and listservs that are harder for the government to disrupt, Trujillo told Ars.

Photo of Ashley Belanger

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.


nih-head,-still-angry-about-covid,-wants-a-second-scientific-revolution

NIH head, still angry about COVID, wants a second scientific revolution


Can we pander to MAHA, re-litigate COVID, and improve science at the same time?

Image of a man with grey hair and glasses, wearing a suit, gesturing as he talks.

Bhattacharya speaks before the Senate shortly after the MAHA event. Credit: Chip Somodevilla

At the end of January, Washington, DC, saw an extremely unusual event. The MAHA Institute, which was set up to advocate for some of the most profoundly unscientific ideas of our time, hosted leaders of the best-funded scientific organization on the planet, the National Institutes of Health. Instead of a hostile reception, however, Jay Bhattacharya, the head of the NIH, was greeted as a hero by the audience, receiving a partial standing ovation when he rose to speak.

Over the ensuing five hours, the NIH leadership and MAHA Institute moderators found many areas of common ground: anger over pandemic-era decisions, a focus on the failures of the health care system, the idea that we might eat our way out of some health issues, the sense that science had lost people’s trust, and so on. And Bhattacharya and others clearly shaped their messages to resonate with their audience.

The reason? MAHA (Make America Healthy Again) is likely to be one of the only political constituencies supporting Bhattacharya’s main project, which he called a “second scientific revolution.”

In practical terms, Bhattacharya’s plan for implementing this revolution includes some good ideas that fall far short of a revolution. But his motivation for the whole thing seems to be lingering anger over the pandemic response—something his revolution wouldn’t address. And his desire to shoehorn it into the radical disruption of scientific research pursued by the Trump administration led to all sorts of inconsistencies between his claims and reality.

If this whole narrative seems long, complicated, and confusing, it’s probably a good preview of what we can expect from the NIH over the next few years.

MAHA meets science

Despite the attendance of several senior NIH staff (including the directors of the National Cancer Institute and National Institute of Allergy and Infectious Diseases) and Bhattacharya himself, this was clearly a MAHA event. One of the MAHA Institute’s VPs introduced the event as being about the “reclamation” of a “discredited” NIH that had “gradually given up its integrity.”

“This was not a reclamation that involved people like Anthony Fauci,” she went on to say. “It was a reclamation of ordinary Americans, men and women who wanted our nation to excel in science rather than weaponize it.”

Things got a bit strange. Moderators from the MAHA Institute asked questions about whether COVID vaccines could cause cancer and raised the possibility of a lab leak favorably. An audience member asked why alternative treatments aren’t being researched. A speaker who proudly announced that he and his family had never received a COVID vaccine was roundly applauded. Fifteen minutes of the afternoon were devoted to a novelist seeking funding for a satirical film about the pandemic that portrayed Anthony Fauci as an egomaniacal lightweight, vaccines as a sort of placebo, and Bhattacharya as the hero of the story.

The organizers also had some idea of who might give all of this a hostile review, as reporters from Nature and Science said they were denied entry.

In short, this was not an event you’d go to if you were interested in making serious improvements to the scientific method. But that’s exactly how Bhattacharya treated it, spending the afternoon not only justifying the changes he’s made within the NIH but also arguing that we’re in need of a second scientific revolution—and he’s just the guy to bring it about.

Here’s an extensive section of his introduction to the idea:

I want to launch the second scientific revolution.

Why this grandiose vision? The first scientific revolution you have… very broadly speaking, you had high ecclesiastical authority deciding what was true or false on physical, scientific reality. And the first scientific revolution basically took… the truth-making power out of the hands of high ecclesiastical authority for deciding physical truth. We can leave aside spiritual—that is a different thing—physical truth and put it in the hands of people with telescopes. It democratized science fundamentally, it took the hands of power to decide what’s true out of the hands of authority and put it in the hands of ridiculous geniuses and regular people.

The second scientific revolution, then, is very similar. The COVID crisis, if it was anything, was the crisis of high scientific authority getting to decide not just a scientific truth like “plexiglass is going to protect us from COVID” or something, but also essentially spiritual truth. How should we treat our neighbor? Well, we treat our neighbor as a mere biohazard.

The second scientific revolution, then, is the replication revolution. Rather than using the metrics of how many papers are we publishing as a metric for success, instead, what we’ll look at as a metric for successful scientific idea is ‘do you have an idea where other people [who are] looking at the same idea tend to find the same thing as you?’ It is not just narrow replication of one paper or one idea. It’s a really broad science. It includes, for instance, reproduction. So if two scientists disagree, that often leads to constructive ways forward in science—deciding, well, there are some new ideas that may come out of that disagreement.

That section, which came early in his first talk of the day, hit on themes that would resurface throughout the afternoon: These people are angry about how the pandemic was handled, they’re trying to use that anger to fuel fundamental change in how science is done in the US, and their plan for change has nearly nothing to do with the issues that made them angry in the first place. In view of this, laying everything out for the MAHA crowd actually does make sense. They’re a suddenly powerful political constituency that also wants to see fundamental change in the scientific establishment, and they are completely unbothered by any lack of intellectual coherence.

Some good

The problem Bhattacharya believes he identified in the COVID response has nothing to do with replication problems. Even if better-replicated studies ultimately serve as a more effective guide to scientific truth, it would do little to change the fact that COVID restrictions were policy decisions largely made before relevant studies could even be completed, much less replicated. That’s a serious incoherence that needs to be acknowledged up front.

But that incoherence doesn’t prevent some of Bhattacharya’s ideas on replication and research priorities from being good. If they were all he was trying to accomplish, he could be a net positive.

Although he is a health economist, Bhattacharya correctly recognized something many people outside science don’t: Replication rarely comes from simply repeating the same set of experiments twice. Instead, many forms of replication happen by poking at the same underlying problem from multiple directions—looking in different populations, trying slightly different approaches, and so on. And if two approaches give different answers, it doesn’t mean that either of them is wrong. Instead, the differences could be informative, revealing something fundamental about how the system operates, as Bhattacharya noted.

He is also correct that simply changing the NIH to allow it to fund more replicative work probably won’t make a difference on its own. Instead, the culture of science needs to change so that replication can lead to publications that are valued for prestige, job security, and promotions—something that will only come slowly. He is also interested in attaching similar value to publishing negative results, like failed hypotheses or problems that people can’t address with existing technologies.

The National Institutes of Health campus. Credit: NIH

Bhattacharya also spent some time discussing the fact that NIH grants have become very risk-averse, an issue frequently discussed by scientists themselves. This aversion is largely derived from the NIH’s desire to ensure that every grant will produce some useful results—something the agency values as a way to demonstrate to Congress that its budget is being spent productively. But it leaves little space for exploratory science or experiments that may not work for technical reasons. Bhattacharya hopes to change that by converting some five-year grants to a two-plus-three structure, where the first two years fund exploratory work that must prove successful for the remaining three years to be funded.

I’m skeptical that this would be as useful as Bhattacharya hopes. Researchers who already have reason to believe the “exploratory” portion will work are likely to apply, and others may find ways to frame results from the exploratory phase as a success. Still, it seems worthwhile to try to fund some riskier research.

There was also talk of providing greater support for young researchers, another longstanding issue. Bhattacharya also wants to ensure that the advances driven by NIH-funded research are more accessible to the public and not limited to those who can afford excessively expensive treatments—again, a positive idea. But he did not share a concrete plan for addressing these issues.

All of this is to say that Bhattacharya has some ideas that may be positive for the NIH and science more generally, even if they fall far short of starting a second scientific revolution. But they’re embedded in a perspective that’s intellectually incoherent and seems to demand far more than tinkering around the edges of reproducibility. And the power to implement his ideas comes from two entities—the MAHA movement and the Trump administration—that are already driving changes that go far beyond what Bhattacharya says he wants to achieve. Those changes will certainly harm science.

Why a revolution?

There are many potential problems with deciding that pandemic-era policy decisions necessitate a scientific revolution. The most significant is that the decisions, again, were fundamentally policy decisions, meaning they were value-driven as much as fact-driven. Bhattacharya is clearly aware of that, complaining repeatedly that his concerns were moral in nature. He also claimed that “during the pandemic, what we found was that the engines of science were used for social control” and that “the lockdowns were so far at odds with human liberty.”

He may be upset that, in his view, scientists intrude upon spiritual truth and personal liberty when recommending policy, but that has nothing to do with how science operates. It’s unclear how changing how scientists prioritize reproducibility would prevent policy decisions he doesn’t like. That disconnect means that even when Bhattacharya is aiming at worthwhile scientific goals, he’s doing so accidentally rather than in a way that will produce useful results.

This is all based on a key belief of Bhattacharya and his allies: that they were right about both the science of the pandemic and the ethical implications of pandemic policies. The latter is highly debatable, and many people would disagree with them about how to navigate the trade-offs between preserving human lives and maximizing personal freedoms.

But there are also many indications that these people are wrong about the science. Bhattacharya acknowledged the existence of long COVID but doesn’t seem to have wrestled with what his preferred policy—encouraging rapid infection among low-risk individuals—might have meant for long COVID incidence, especially given that vaccines appear to reduce the risk of developing it.

Matthew Memoli, acting NIH Director prior to Bhattacharya and currently its principal deputy director, shares Bhattacharya’s view that he was right, saying, “I’m not trying to toot my own horn, but if you read the email I sent [about pandemic policy], everything I said actually has come true. It’s shocking how accurate it was.”

Yet he also proudly proclaimed, “I knew I wasn’t getting vaccinated, and my wife wasn’t, kids weren’t. Knowing what I do about RNA viruses, this is never going to work. It’s not a strategy for this kind [of virus].” And yet the benefits of COVID vaccinations for preventing serious illness have been found in study after study—it is, ironically, science that has been reproduced.

A critical aspect of the original scientific revolution was the recognition that people have to deal with facts that are incompatible with their prior beliefs. It’s probably not a great idea to have a second scientific revolution led by people who appear to be struggling with a key feature of the first.

Political or not?

Anger over Biden-era policies makes Bhattacharya and his allies natural partners of the Trump administration and is almost certainly the reason these people were placed in charge of the NIH. But it also puts them in an odd position with reality, since they have to defend policies that clearly damage science. “You hear, ‘Oh well this project’s been cut, this funding’s been cut,’” Bhattacharya said. “Well, there hasn’t been funding cut.”

A few days after Bhattacharya made this statement, Senator Bernie Sanders released data showing that many areas of research have indeed seen funding cuts.

Image of a graph with a series of colored lines, each of which shows a sharp decline at the end.

Bhattacharya’s claims that no funding had been cut appear to be at odds with the data. Credit: Office of Bernard Sanders

Bhattacharya also acknowledged that the US suffers from large health disparities between different racial groups. Yet grants funding studies of those disparities were cut during DOGE’s purge of projects it labeled as “DEI.” Bhattacharya was happy to view that funding as being ideologically motivated. But as lawsuits have revealed, nobody at the NIH ever evaluated whether that was the case; Matthew Memoli, one of the other speakers, simply forwarded on the list of grants identified by DOGE with instructions that they be canceled.

Bhattacharya also did his best to portray the NIH staff as being enthused about the changes he’s making, presenting the staff as being liberated from a formerly oppressive leadership. “The staff there, they worked for many decades under a pretty tight regime,” he told the audience. “They were controlled, and now we were trying to empower them to come to us with their ideas.”

But he is well aware of the dissatisfaction expressed by NIH workers in the Bethesda Declaration (he met with them, after all), as well as the fact that one of the leaders of that effort has since filed for whistleblower protection after being placed on administrative leave due to her advocacy.

Bhattacharya effectively denied both that people had suffered real-world consequences in their jobs and funding and that the decision to sideline them was political. Yet he repeatedly implied that he and his allies suffered due to political decisions because… people left him off some email chains.

“No one was interested in my opinion about anything,” he told the audience. “You weren’t on the emails anymore.”

And he implied this sort of “suppression” was widespread. “I’ve seen Matt [Memoli] poke his head up and say that he was against the COVID vaccine mandates—in the old NIH, that was an act of courage,” Bhattacharya said. “I recognized it as an act of courage because you weren’t allowed to contradict the leader for fear that you were going to get suppressed.” As he acknowledged, though, Memoli suffered no consequences for contradicting “the leader.”

Bhattacharya and his allies continue to argue that it’s a serious problem that they suffered no consequences for voicing ideas they believe were politically disfavored; yet they are perfectly comfortable with people suffering real consequences due to politics. Again, it’s not clear how this sort of intellectual incoherence can rally scientists around any cause, much less a revolution.

Does it matter?

Given that politics has left Bhattacharya in charge of the largest scientific funding agency on the planet, it may not matter how the scientific community views his project. And it’s those politics that are likely at the center of Bhattacharya’s decision to give the MAHA Institute an entire afternoon of his time. The institute was founded specifically to advance the aims of his boss, Secretary of Health Robert F. Kennedy Jr., and represents a group that has become an important component of Trump’s coalition. As such, its members represent a constituency that can provide critical political support for what Bhattacharya hopes to accomplish.

Close-up of sterile single-use syringes individually wrapped in plastic and arranged in a metal tray, each containing a dose of COVID-19 vaccine.

Vaccine mandates played a big role in motivating the present leadership of the NIH. Credit: JEAN-FRANCOIS FORT

Unfortunately, they’re also very keen on profoundly unscientific ideas, such as the idea that ivermectin might treat cancer or that vaccines aren’t thoroughly tested. The speakers did their best not to say anything that might offend their hosts, in one example spending several minutes to gently tell a moderator why there’s no plausible reason to think ivermectin would treat cancer. They also made some supportive gestures where possible. Despite the continued flow of misinformation from his boss, Bhattacharya said, “It’s been really great to be part of administration to work for Secretary Kennedy for instance, whose only focus is to make America healthy.”

He also made a point of naming “vaccine injury” as a medical concern he suggested was often ignored by the scientific community, lumping it in with chronic Lyme disease and long COVID. Several of the speakers noted positive aspects of vaccines, such as their ability to prevent cancers or protect against dementia. Oddly, though, none of these mentions included the fact that vaccines are highly effective at blocking or limiting the impact of the pathogens they’re designed to protect against.

When pressed on some of MAHA’s odder ideas, NIH leadership responded with accurate statements on topics such as plausible biological mechanisms and the timing of disease progression. But the mere fact that they had to answer these questions highlights the challenges NIH leadership faces: Their primary political backing comes from people who have limited respect for the scientific process. Pandering to them, though, will ultimately undercut any support they might achieve from the scientific community.

Managing that tension while starting a scientific revolution would be challenging on its own. But as the day’s talks made clear, the challenges are likely to be compounded by the lack of intellectual coherence behind the whole project. As much as it would be good to see the scientific community place greater value on reproducibility, these aren’t the right guys to make that happen.

Photo of John Timmer

John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.


why-darren-aronofsky-thought-an-ai-generated-historical-docudrama-was-a-good-idea

Why Darren Aronofsky thought an AI-generated historical docudrama was a good idea


We hold these truths to be self-evident

Production source says it takes “weeks” to produce just minutes of usable video.

Artist’s conception of critics reacting to the first episodes of “On This Day… 1776” Credit: Primordial Soup

Last week, filmmaker Darren Aronofsky’s AI studio Primordial Soup and Time magazine released the first two episodes of On This Day… 1776. The year-long series of short-form videos features short vignettes describing what happened on that day of the American Revolution 250 years ago, but it does so using “a variety of AI tools” to produce photorealistic scenes containing avatars of historical figures like George Washington, Thomas Paine, and Benjamin Franklin.

In announcing the series, Time Studios President Ben Bitonti said the project provides “a glimpse at what thoughtful, creative, artist-led use of AI can look like—not replacing craft but expanding what’s possible and allowing storytellers to go places they simply couldn’t before.”

The trailer for “On This Day… 1776.”

Outside critics were decidedly less excited about the effort. The AV Club took the introductory episodes to task for “repetitive camera movements [and] waxen characters” that make for “an ugly look at American history.” CNET said that this “AI slop is ruining American history,” calling the videos a “hellish broth of machine-driven AI slop and bad human choices.” The Guardian lamented that the “once-lauded director of Black Swan and The Wrestler has drowned himself in AI slop,” calling the series “embarrassing,” “terrible,” and “ugly as sin.” I could go on.

But this kind of initial reaction apparently hasn’t deterred Primordial Soup from its still-evolving efforts. A source close to the production, who requested anonymity to speak frankly about details of the series’ creation, told Ars that the quality of new episodes would improve as the team’s AI tools are refined throughout the year and as the team learns to better use them.

“We’re going into this fully assuming that we have a lot to learn, that this process is gonna evolve, the tools we’re using are gonna evolve,” the source said. “We’re gonna make mistakes. We’re gonna learn a lot… we’re going to get better at it, [and] the technology will change. We’ll see how audiences are reacting to certain things, what works, what doesn’t work. It’s a huge experiment, really.”

Not all AI

It’s important to note that On This Day… 1776 is not fully crafted by AI. The script, for instance, was written by a team of writers overseen by Aronofsky’s longtime writing partners Ari Handel and Lucas Sussman, as noted by The Hollywood Reporter. That makes criticisms like the Guardian’s of “ChatGPT-sounding sloganeering” in the first episodes both somewhat misplaced and hilariously harsh.

Our production source says the project was always conceived as a human-written effort and that the team behind it had long been planning and researching how to tell this kind of story. “I don’t think [they] even needed that kind of help or wanted that kind of [AI-powered writing] help,” they said. “We’ve all experimented with [AI-powered] writing and the chatbots out there, and you know what kind of quality you get out of that.”

What you see here is not a real human actor, but his lines were written and voiced by humans. Credit: Primordial Soup

The producers also go out of their way to note that all the dialogue in the series is recorded directly by Screen Actors Guild voice actors, not by AI facsimiles. While recently negotiated union rules might have something to do with that, our production source also said the AI-generated voices the team used for temp tracks were noticeably artificial and not ready for a professional production.

Humans are also directly responsible for the music, editing, sound mixing, visual effects, and color correction for the project, according to our source. The only place the “AI-powered tools” come into play is in the video itself, which is crafted with what the announcement calls a “combination of traditional filmmaking tools and emerging AI capabilities.”

In practice, our source says, that means humans create storyboards, find visual references for locations and characters, and set up how they want shots to look. That information, along with the script, gets fed into an AI video generator that creates individual shots one at a time, to be stitched together and cleaned up by humans in traditional post-production.

That process takes the AI-generated cinema conversation one step beyond Ancestra, a short film Primordial Soup released last summer in association with Google DeepMind (which is not involved with the new project). There, AI tools were used to augment “live-action scenes with sequences generated by Veo.”

“Weeks” of prompting and re-prompting

In theory, having an AI model generate a scene in minutes might save a lot of time compared to traditional filmmaking—scouting locations, hiring actors, setting up cameras and sets, and the like. But our production source said the highly iterative process of generating and perfecting shots for On This Day… 1776 still takes “weeks” for each minutes-long video and that “more often than not, we’re pushing deadlines.”

The first episode of On This Day… 1776 features a dramatic flag raising.

Even though the AI model is essentially animating photorealistic avatars, the source said the process is “more like live action filmmaking” because of the lack of fine-grained control over what the video model will generate. “You don’t know if you’re gonna get what you want on the first take or the 12th take or the 40th take,” the source said.

While some shots take less time to get right than others, our source said the AI model rarely produces a perfect, screen-ready shot on the first try. And while some small issues in an AI-generated shot can be papered over in post-production with visual effects or careful editing, most of the time, the team has to go back and tell the model to generate a completely new video with small changes.
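In outline, the workflow the source describes amounts to an iterate-until-approved loop. Every name in the sketch below (generate_shot, review, and so on) is a stand-in invented for illustration rather than part of any tool Primordial Soup has disclosed:

```python
# Hypothetical sketch of the iterate-until-approved loop described above.
# generate_shot() and review() are stand-ins invented for illustration; they are
# not the names of any tool Primordial Soup or Time Studios has disclosed.
def generate_shot(prompt, references):
    # Stand-in for a call to a video-generation model.
    return {"prompt": prompt, "refs": references}

def review(clip):
    # Stand-in for human review; an empty string means the take is screen-ready.
    return ""

def produce_shot(storyboard_prompt, references, max_takes=40):
    prompt = storyboard_prompt
    for take in range(1, max_takes + 1):
        clip = generate_shot(prompt, references)
        notes = review(clip)              # human eyes on every take
        if not notes:
            return clip, take             # hand off to editing, VFX, and color
        # Otherwise, fold the notes back into the prompt and regenerate the shot.
        prompt = f"{storyboard_prompt}\nAdjust: {notes}"
    raise RuntimeError("shot never converged; back to the storyboard")

clip, takes = produce_shot("Washington raises the flag at dawn, slow push-in", ["ref/ep1"])
print(f"approved on take {takes}")
```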

“It still takes a lot of work, and it’s not necessarily because it’s wrong, per se, so much as trying to get the right control because you [might] want the light to land on the face in the right way to try to tell the story,” the source said. “We’re still, we’re still striving for the same amount of control that we always have [with live-action production] to really maximize the story and the emotion.”

Quick shots and smaller budgets

Though video models have advanced since the days of the nightmarish clip of Will Smith eating spaghetti, hallucinations and nonsensical images are “still a problem” in producing On This Day… 1776, according to our source. That’s one of the reasons the company decided to use a series of short-form videos rather than a full-length movie telling the same essential story.

“It’s one thing to stay consistent within three minutes. It’s a lot harder and it takes a lot more work to stay consistent within two hours,” the source said. “I don’t know what the upper limit is now [but] the longer you get, the more things start to fall off.”

Stills from an AI-generated video of Will Smith eating spaghetti.

We’ve come a long way from the circa-2023 videos of Will Smith eating spaghetti. Credit: chaindrop / Reddit

Keeping individual shots short also allows for more control and fewer “reshoots” for an AI-animated production like this. “When you think about it, if you’re trying to create a 20-second clip, you have all these things that are happening, and if one of those things goes wrong in 20 seconds, you have to start over,” our source said. “And the chance of something going wrong in 20 seconds is pretty high. The chance of something going wrong in eight seconds is a lot lower.”
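The source’s math checks out even with a rough model. Here’s a back-of-the-envelope sketch in Python, assuming each second of generated footage independently has some fixed chance of producing a visible artifact; the 5 percent rate is purely illustrative and not a figure from the production.

```python
# Back-of-the-envelope: probability that at least one visible artifact
# appears somewhere in a generated clip, assuming each second
# independently goes wrong with probability p.
# The 5% per-second rate is an illustrative assumption, not a measured value.
def clip_failure_probability(seconds: int, p_per_second: float = 0.05) -> float:
    return 1 - (1 - p_per_second) ** seconds

for length in (8, 20):
    print(f"{length}s clip: ~{clip_failure_probability(length):.0%} chance of needing a redo")
# 8s clip:  ~34% chance of needing a redo
# 20s clip: ~64% chance of needing a redo
```

Under those assumptions, the odds of a ruined take nearly double when a shot stretches from eight seconds to 20, which tracks with the team’s preference for short, easily regenerated clips.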

While our production source couldn’t give specifics on how much the team was spending to generate so much AI-modeled video, they did suggest that the process was still a good deal cheaper than filming a historical docudrama like this on location.

“I mean, we could never achieve what we’re doing here for this amount of money, which I think is pretty clear when you watch this,” they said. In future episodes, the source promised, “you’ll see where there’s things that cameras just can’t even do” as a way to “make the most of that medium.”

“Let’s see what we can do”

If you’ve been paying attention to how fast things have been moving with AI-generated video, you might think that AI models will soon be able to produce Hollywood-quality cinema with nothing but a simple prompt. But our source said that working on On This Day… 1776 highlights just how important it is for humans to still be in the loop on something like this.

“Personally, I don’t think we’re ever gonna get there [replacing human editors],” they said. “We actually desperately need an editor. We need another set of eyes who can look at the cut and say, ‘If we get out of this shot a little early, then we can create a little bit of urgency. If we linger on this thing a little longer…’ You still really need that.”

AI Ben Franklin and AI Thomas Paine toast to the war propaganda effort. Credit: Primordial Soup

That could be good news for human editors. But On This Day… 1776 also suggests a world where on-screen (or even motion-captured) human actors are fully replaced by AI-generated avatars. When I asked our source why the producers felt that AI was ready to take over that specifically human part of the film equation, though, the response surprised me.

“I don’t know that we do know that, honestly,” they said. “I think we know that the technology is there to try. And I think as storytellers we’re really interested in using… all the different tools that we can to try to get our story across and to try to make audiences feel something.”

“It’s not often that we have huge new tools like this,” the source continued. “I mean, it’s never happened in my lifetime. But when you do [get these new tools], you want to start playing with them… We have to try things in order to know if it works, if it doesn’t work.”

“So, you know, we have the tools now. Let’s see what we can do.”


Kyle Orland has been the Senior Gaming Editor at Ars Technica since 2012, writing primarily about the business, tech, and culture behind video games. He has journalism and computer science degrees from the University of Maryland. He once wrote a whole book about Minesweeper.
