Author name: Rejus Almole


Biotech company Regeneron to buy bankrupt 23andMe for $256M

Biotechnology company Regeneron will acquire 23andMe out of bankruptcy for $256 million, with a plan to keep the DNA-testing company running without interruption and uphold its privacy-protection promises.

In its announcement of the acquisition, Regeneron assured 23andMe’s 15 million customers that their data—including genetic and health information, genealogy, and other sensitive personal information—would be safe and in good hands. Regeneron aims to use the large trove of genetic data to further its own work using genetics to develop medical advances—something 23andMe tried and failed to do.

“As a world leader in human genetics, Regeneron Genetics Center is committed to and has a proven track record of safeguarding the genetic data of people across the globe, and, with their consent, using this data to pursue discoveries that benefit science and society,” Aris Baras, senior vice president and head of the Regeneron Genetics Center, said in a statement. “We assure 23andMe customers that we are committed to protecting the 23andMe dataset with our high standards of data privacy, security, and ethical oversight and will advance its full potential to improve human health.”

Baras said that the Regeneron Genetics Center already has its own genetic dataset from nearly 3 million people.

The safety of 23andMe’s dataset has drawn considerable concern among consumers, lawmakers, and regulators amid the company’s downfall. For instance, in March, California Attorney General Rob Bonta made the unusual move to urge Californians to delete their genetic data amid 23andMe’s financial distress. Federal Trade Commission Chairman Andrew Ferguson also weighed in, making clear in a March letter that “any purchaser should expressly agree to be bound by and adhere to the terms of 23andMe’s privacy policies and applicable law.”



F1 in Imola reminds us it’s about strategy as much as a fast car


Who went home happy from Imola and why? F1’s title race heats up.


In Italy there are two religions, and one of them is Ferrari. Credit: Ryan Pierse/Getty Images


Formula 1’s busy 2025 schedule saw the sport return to its European heartland this past weekend. Italy has two races on the calendar this year, and this was the first, the (deep breath) “Formula 1 AWS Gran Premio Del Made in Italy e Dell’Emilia-Romagna,” which took place at the scenic and historic (another deep breath) Autodromo Enzo e Dino Ferrari, better known as Imola. It’s another of F1’s old-school circuits where overtaking is far from easy, particularly when the grid is as closely matched as it is. But Sunday’s race was no snoozer, and for a couple of teams, there was a welcome change in form.

Red Bull was one. The team has looked a bit shambolic at times this season, with some wondering whether this change in form was the result of a number of high-profile staff departures toward the end of last season. Things looked pretty bleak during the first of three qualifying sessions, when Yuki Tsunoda got too aggressive with a curb and, rather than finding lap time, found himself in a violent crash that tore all four corners off the car and relegated him to starting the race last from the pit lane.

2025 has also been trying for Ferrari. Italy expects a lot from the red team, and the replacement of Mattia Binotto with Frédéric Vasseur as team principal was supposed to result in Maranello challenging for championships. Signing Lewis Hamilton, a bona fide superstar with seven titles already on his CV, hasn’t exactly reduced the amount of pressure on Scuderia Ferrari, either.


Ferrari team principal Frédéric Vasseur. Credit: Alessio Morgese/NurPhoto via Getty Images

Lewis Hamilton was much closer to teammate Charles Leclerc this weekend, which will be encouraging to everyone. Since his exclusion from the Chinese Grand Prix, Hamilton has had to run a higher ride height, which has cost him speed relative to his younger teammate. Now it looks like he’s getting a handle on the car, losing out to Leclerc by just 0.06 seconds in Q1 and 0.16 seconds in Q2. Unfortunately, Leclerc’s time was only good for 11th, and Hamilton’s for 12th.

Sunday brought smiles for the Red Bull and Ferrari teams. In the hands of Verstappen, the Red Bull was about as fast as the black-and-orange McLarens, and while second was the best Verstappen could do in qualifying, the gap to McLaren’s Oscar Piastri was measured in the hundredths of a second.

Verstappen’s initial start from the line looked unremarkable, too—the Mercedes of George Russell seemed more of a threat to the pole man. But Verstappen saw an opportunity and drove around the outside almost before Piastri even registered he was there, seizing the lead of the race. Once the Red Bull driver was in clean air, he was able to stretch the gap to Piastri.


Oscar Piastri is seen here in the lead, but it wouldn’t last more than a corner. Credit: Mark Thompson/Getty Images

Getting past someone is notoriously hard at Imola. In a 2005 classic, Fernando Alonso held off Michael Schumacher’s much faster car for the entire race. Even though the cars are larger and heavier now and more closely matched, overtaking was still possible, like Norris’ pass on Russell.

Undercut? Overcut?

But when overtaking is as hard as it is at a track like Imola, teams will try to use strategy to pass each other with pit stops. Each driver has to make at least one pit stop, as drivers are required to use two different tire compounds during the race. But depending on other factors, like how much the tires degrade, a team might decide to do two or even three stops—the lap time lost in the pits by stopping more often can be less than the time lost running on worn-out rubber.

In recent years, the word “undercut” has crept into F1 vocab, and no, it doesn’t refer to the hairstyles favored by the more flamboyant drivers in the paddock. To undercut a rival means to make your pit stop before them and then, on fresh tires and with a clear track ahead, set fast lap after fast lap so that when your rival makes their stop, they emerge from the pits behind you.
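The arithmetic behind that call is easy to sketch. Here is a toy calculation in Python; the tire delta, degradation rate, and starting gap are invented for illustration and are not real Imola numbers.

```python
# Toy undercut arithmetic -- every number here is invented for illustration,
# not real race data. Both cars eventually pay the same pit-lane time loss,
# so what decides the undercut is the fresh-vs-worn tire delta over the laps
# the rival stays out.
FRESH_TIRE_GAIN = 1.2  # seconds per lap gained on fresh rubber vs. worn
DEGRADATION = 0.15     # worn tires lose roughly this much more each extra lap

def undercut_swing(laps_rival_stays_out: int) -> float:
    """Time gained on a rival who delays their pit stop by N laps."""
    return sum(FRESH_TIRE_GAIN + DEGRADATION * lap
               for lap in range(laps_rival_stays_out))

starting_gap = 1.5  # seconds the undercutting car trails before anyone stops
for laps in (1, 2, 3, 4):
    swing = undercut_swing(laps)
    verdict = "comes out ahead" if swing > starting_gap else "still behind"
    print(f"Rival stays out {laps} lap(s): gain {swing:.1f}s -> {verdict}")
```

In this toy model, letting a rival run just two extra laps on worn rubber is already enough to overturn a 1.5-second gap.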

The undercut doesn’t always work, but in Imola, it initially looked like it did. Charles Leclerc stopped on lap 10 and leapfrogged Russell’s Mercedes, as well as his former Ferrari teammate and now Williams driver Carlos Sainz. Since Piastri wasn’t closing on Verstappen up front, McLaren decided to bring him in for an early stop.


Verstappen’s wins this season are far from inevitable. Credit: Clive Rose/Getty Images

But his advantage on new tires was not enough to eat into Verstappen’s margin, and he did not emerge in clean air but rather had to overtake car after car on track as he sought to regain his position ahead of those who hadn’t stopped. Sometimes, a strategy is the wrong one.

McLaren’s other driver, Lando Norris, couldn’t make a dent in Red Bull’s race, either. Having recognized that the two-stop undercut wouldn’t work, Norris had stayed out, but he was almost 10 seconds behind Verstappen when it was finally time to change tires on lap 29. Shortly afterward, Esteban Ocon pulled his Haas to the side of the track with a powertrain failure, triggering a virtual safety car. With all the cars required to drive around at a prescribed, reduced pace, Verstappen was able to take his pit stop while only losing half as much time as anyone who stopped under green flag conditions.

Victory required a little more. Kimi Antonelli’s Mercedes also ground to a halt in a position that required a full safety car. With some on fresh rubber and others not, there were battles aplenty, but Verstappen wasn’t involved in any and won by seven seconds over Norris, with the recovering Piastri a few more seconds down the road.

Meanwhile, Hamilton had been having a pretty good Sunday of his own. Although he started 12th, he finished fourth, to the delight of the partisan, flag-waving crowd. Some of that was thanks to Leclerc coming together with the Williams of Alex Albon; after that on-track scuffle was sorted, Albon lay fifth, with Leclerc sixth. Albon had reason to feel aggrieved at losing fourth place, but fifth still equalled his best finish of the year.


A fine fourth and a sixth were redemption for the Tifosi. Credit: Bryn Lennon – Formula 1/Formula 1 via Getty Images

Leclerc needed to cede the place to Albon, but at the same time, his complaint about the amount of rules-lawyering that now accompanies every bit of wheel-to-wheel action is getting a bit tedious. If F1 isn’t careful, the rulebook will end up being too constraining, with drivers playing to the letter even if it’s bad for the sport and the show. And sixth place was still a decent result from 11th; the championships already look out of reach for Ferrari for 2025, but at least it’s in no danger of being overtaken by Williams in the tables, even if that is a threat on track.

McLaren is already at 279 points in the constructors’ championship, 132 points ahead of next-best Mercedes, so the constructors’ cup is looking somewhat secure. Things are a lot closer in the drivers’ standings, with Piastri on 146, Norris on 133, and Verstappen still entirely in the fight with 124 points.

Next weekend, it’s time for the Monaco Grand Prix.

Photo of Jonathan M. Gitlin

Jonathan is the Automotive Editor at Ars Technica. He has a BSc and PhD in Pharmacology. In 2014 he decided to indulge his lifelong passion for the car by leaving the National Human Genome Research Institute and launching Ars Technica’s automotive coverage. He lives in Washington, DC.



Sierra made the games of my childhood. Are they still fun to play?


Get ready for some nostalgia.

My Ars colleagues were kicking back at the Orbital HQ water cooler the other day, and—as gracefully aging gamers are wont to do—they began to reminisce about classic Sierra On-Line adventure games. I was a huge fan of these games in my youth, so I settled in for some hot buttered nostalgia.

Would we remember the limited-palette joys of early King’s Quest, Space Quest, or Quest for Glory titles? Would we branch out beyond games with “Quest” in their titles, seeking rarer fare like Freddy Pharkas: Frontier Pharmacist? What about the gothic stylings of The Colonel’s Bequest or the voodoo-curious Gabriel Knight?

Nope. The talk was of acorns. [Bleeping] acorns, in fact!

The scene in question came from King’s Quest III, where our hero Gwydion must acquire some exceptionally desiccated acorns to advance the plot. It sounds simple enough. As one walkthrough puts it, “Go east one screen and north one screen to the acorn tree. Try picking up acorns until you get some dry ones. Try various spots underneath the tree.” Easy! And clear!

Except it wasn’t either one because the game rather notoriously won’t always give you the acorns, even when you enter the right command. This led many gamers to believe they were in the wrong spot, when in reality, they just had to keep entering the “get acorns” command while moving pixel by pixel around the tree until the game finally supplied them. One of our staffers admitted to having purchased the King’s Quest III hint book solely because of this “puzzle.” (The hint book, which is now online, says that players should “move around” the particular oak tree in question because “you can only find the right kind of acorns in one spot.”)
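Mechanically, the infamous puzzle boils down to a tiny hotspot plus a dice roll. The few lines of Python below mimic the effect; the coordinates and odds are made up, and this is an illustration, not Sierra’s actual AGI engine code.

```python
import random

# Hypothetical re-creation of the "dry acorns" puzzle: success requires both
# standing inside a small hotspot under the tree AND winning a random roll.
# Coordinates and odds are invented for illustration.
HOTSPOT = (140, 150, 60, 68)  # x_min, x_max, y_min, y_max

def get_acorns(x: int, y: int) -> str:
    x_min, x_max, y_min, y_max = HOTSPOT
    if not (x_min <= x <= x_max and y_min <= y <= y_max):
        return "There are no acorns here."
    if random.random() < 0.5:  # even in the right spot, the game may refuse
        return "These acorns are too green to be of any use."
    return "You pick up a handful of dry acorns."

print(get_acorns(100, 60))   # wrong spot: always fails
print(get_acorns(145, 64))   # right spot: succeeds only about half the time
```

A player who types the right command once in the wrong spot, or gets unlucky in the right spot, reasonably concludes that the command itself is wrong.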

This wasn’t quite the “fun” I had remembered from these games, but as I cast my mind back, I dimly began to recall similar situations. Space Quest II: Vohaul’s Revenge had been my first Sierra title, and after my brother and I spent weeks on the game only to get stuck and die repeatedly in some pitch-dark tunnels, we implored my dad to call Sierra’s 1-900 pay hint line. He thought about it. I could see it pained him because he had never before (and never since!) called a 1-900 number in his life. In this case, the call cost a piratical 75 cents for the first minute and 50 cents for each additional minute. But after listening to us whine for several days straight, my dad decided that his sanity was worth the fee, and he called.

Much like with the acorn example above, we had known what to do—we had just not done it to the game’s rather exacting and sometimes obscure standards. The key was to use a glowing gem as a light source, which my brother and I had long understood. The problem was the text parser, which demanded that we “put gem in mouth” to use its light in the tunnels. There was no other place to put the gem, no other way to hold or attach it. (We tried them all.) No other attempts to use the light of this shining crystal, no matter how clear, well-intentioned, or succinctly expressed, would work. You put the gem in your mouth, or you died in the darkness.
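The rigidity of those old text parsers is easy to caricature. The sketch below is purely illustrative (it is not the real Sierra parser); it simply shows why every reasonable synonym dead-ends.

```python
# Illustrative sketch of an exact-match parser: only one phrasing is wired to
# the "light the tunnel" action, so every sensible alternative fails.
ACCEPTED = {
    "put gem in mouth": "The gem's glow lights the tunnel ahead.",
}

def parse(command: str) -> str:
    normalized = " ".join(command.lower().split())
    return ACCEPTED.get(normalized, "You can't do that here.")

for attempt in ("hold gem up", "use gem as light", "put gem in hand",
                "put gem in mouth"):
    print(f"> {attempt}\n{parse(attempt)}")
```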

Returning from my reveries to the conversation at hand, I caught Ars Senior Editor Lee Hutchinson’s cynical remark that these kinds of puzzles were “the only way to make 2–3 hours of ‘game’ last for months.” This seemed rather shocking, almost offensive. How could one say such a thing about the games that colored my memories of childhood?

So I decided to replay Space Quest II for the first time in 35 years in an attempt to defend my own past.

Big mistake.

Space Quest II screenshot.

We’re not on Endor anymore, Dorothy.

Play it again, Sam

In my memory, the Space Quest series was filled with sharply written humor, clever puzzles, and enchanting art. But when I fired up the original version of the game, I found that only one of these was true. The art, despite its blockiness and limited colors, remained charming.

As for the gameplay, the puzzles were not so much “clever” as “infuriating,” “obvious,” or (more often) “rather obscure.”

Finding the glowing gem discussed above requires you to swim into one small spot of a multi-screen river, with no indication in advance that anything of importance is in that exact location. Trying to “call” a hunter who has captured you does nothing… until you do it a second time. And the less said about trying to throw a puzzle at a Labian Terror Beast, typing out various word permutations while death bears down upon you, the better.

The whole game was also filled with far more no-warning insta-deaths than I had remembered. On the opening screen, for instance, after your janitorial space-broom floats off into the cosmic ether, you can walk your character right off the edge of the orbital space station he is cleaning. The game doesn’t stop you; indeed, it kills you and then mocks you for “an obvious lack of common sense.” It then calls you a “wing nut” with an “inability to sustain life.” Game over.

The game’s third screen, which features nothing more to do than simply walking around, will also kill you in at least two different ways. Walk into the room still wearing your spacesuit and your boss will come over and chew you out. Game over.

If you manage to avoid that fate by changing into your indoor uniform first, it’s comically easy to tap the wrong arrow key and fall off the room’s completely guardrail-free elevator platform. Game over.

Space Quest II screenshot.

Do NOT touch any part of this root monster.

Get used to it because the game will kill you in so, so many ways: touching any single pixel of a root monster whose branches form a difficult maze; walking into a giant mushroom; stepping over an invisible pit in the ground; getting shot by a guard who zips in on a hovercraft; drowning in an underwater tunnel; getting swiped at by some kind of giant ape; not putting the glowing gem in your mouth; falling into acid; and many more.

I used the word “insta-death” above, but the game is not even content with this. At one key point late in the game, a giant Aliens-style alien stalks the hallways, and if she finds you, she “kisses” you. But then she leaves! You are safe after all! Of course, if you have seen the films, you will recognize that you are not safe, but the game lets you go on for a bit before the alien’s baby inevitably bursts from your chest, killing you. Game over.

This is why the official hint book suggests that you “save your game a lot, especially when it seems that you’re entering a dangerous area. That way, if you die, you don’t have to retrace your steps much.” Presumably, this was once considered entertaining.

When it comes to the humor, most of it is broad. (When you are told to “say the word,” you have to say “the word.”) Sometimes it is condescending. (“You quickly glance around the room to see if anyone saw you blow it.”) Or it might just be potty jokes. (Plungers, jock straps, toilet paper, alien bathrooms, and fouling one’s trousers all make appearances.)

My total gameplay time: a few hours.

“By Grabthar’s hammer!” I thought. “Lee was right!”

When I admitted this to him, Lee told me that he had actually spent time learning to speedrun the Space Quest games during the pandemic. “According to my notes, a clean run of SQ2 in ‘fast’ mode—assuming good typing skills—takes about 20 minutes straight-up,” he said. Yikes.

Space Quest II screenshot.

What a fiendish plot!

And yet

The past was a different time. Computer memory was small, graphics capabilities were low, and computer games had emerged from the “let them live just long enough to encourage spending another quarter” arcade model. Mouse adoption took a while; text parsers made sense even though they created plenty of frustration. So yes—some of these games were a few hours of gameplay stretched out with insta-death, obscure puzzles, and the sheer amount of time it took just to walk across the game’s various screens. (Seriously, “walking around” took a ridiculous amount of the game’s playtime, especially when a puzzle made you backtrack three screens, type some command, and then return.)

Space Quest II screenshot.

Let’s get off this rock.

Judged by current standards, the Sierra games are no longer what I would play for fun.

All the same, I loved them. They introduced me to the joy of exploring virtual worlds and to the power of evocative artwork. I went into space, into fairy tales, and into the past, and I did so while finding the games’ humor humorous and their plotlines compelling. (“An army of life insurance salesmen?” I thought at the time. “Hilarious and brilliant!”)

If the games can feel a bit arbitrary or vexing today, my child-self’s love of repetition was able to treat them as engaging challenges rather than “unfair” design.

Replaying Space Quest II, encountering the half-remembered jokes and visual designs, brought back these memories. The novelist Thomas Wolfe knew that you can’t go home again, and it was probably inevitable that the game would feel dated to me now. But playing it again did take me back to that time before the Internet, when not even hint lines, insta-death, and EGA graphics could dampen the wonder of the new worlds computers were capable of showing us.

Space Quest II screenshot.

Literal bathroom humor.

Space Quest II, along with several other Sierra titles, is freely and legally available online at sarien.net—though I found many, many glitches in the implementation. Windows users can buy the entire Space Quest collection through Steam or Good Old Games. There’s even a fan remake that runs on macOS, Windows, and Linux.

Photo of Nate Anderson



After latest kidnap attempt, crypto types tell crime bosses: Transfers are traceable

The sudden spike in copycat attacks in France, Belgium, and Spain over the last few months suggests that crypto robbery as a tactic has caught the attention of organized crime. (This week’s abduction attempt is already being investigated by the organized crime unit of the Parisian police.)

Crypto industry insiders seem convinced that organized crime likes these attacks because of a (mistaken) belief that crypto transfers are untraceable. So people like Chainalysis CEO Jonathan Levin are trying to clue in the crime bosses.

“For whatever reason, there is a perception that’s out there that crypto is an asset that is untraceable, and that really lends itself to criminals acting in a certain way,” Levin said at a recent conference covered by the trade publication Cointelegraph.

“Apparently, the [knowledge] that crypto is not untraceable hasn’t been received by some of the organized crime groups that are actually perpetrating these attacks, and some of them are concentrated in, you know, France, but not exclusively.”
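The underlying point is structural: on a public blockchain, every transfer is a permanent ledger entry, so following stolen funds is a graph-traversal exercise. Here is a minimal sketch over an invented transaction list; it is not a real chain-analysis tool, and the addresses are placeholders.

```python
from collections import deque

# Toy public ledger: (sender, receiver, amount). All entries are invented.
LEDGER = [
    ("victim_wallet", "attacker_1", 50.0),
    ("attacker_1", "mixer_in", 30.0),
    ("attacker_1", "exchange_deposit_a", 20.0),
    ("mixer_in", "exchange_deposit_b", 29.5),
]

def trace(start: str) -> set:
    """Breadth-first walk of the transaction graph from a starting address."""
    seen, queue = {start}, deque([start])
    while queue:
        addr = queue.popleft()
        for sender, receiver, _amount in LEDGER:
            if sender == addr and receiver not in seen:
                seen.add(receiver)
                queue.append(receiver)
    return seen

print(trace("victim_wallet"))
# Anyone can run this kind of walk over the real ledger, which is exactly
# what investigators and analytics firms rely on.
```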



Apple’s new CarPlay Ultra is ready, but only in Aston Martins for now

It’s a few years later than we were promised, but an advanced new version of Apple CarPlay is finally here. CarPlay is Apple’s way of casting a phone’s video and audio to a car’s infotainment system, and with CarPlay Ultra, it gets a big upgrade. In addition to displaying compatible iPhone apps on the car’s center infotainment screen, CarPlay Ultra will also take over the main instrument panel in front of the driver, replacing OEM-designed dials like the speedometer and tachometer with a number of different Apple designs.

“iPhone users love CarPlay and it has changed the way people interact with their vehicles. With CarPlay Ultra, together with automakers we are reimagining the in-car experience and making it even more unified and consistent,” said Bob Borchers, vice president of worldwide marketing at Apple.

However, to misquote William Gibson, CarPlay Ultra is unevenly distributed. In fact, if you want it today, you’re going to have to head over to the nearest Aston Martin dealership. To begin with, it’s only rolling out in North America with Aston Martin, inside the DBX SUV as well as the DB12, Vantage, and Vanquish sports cars. It’s standard on all new orders, the automaker says, and will be available as a dealer-performed update for existing Aston Martins with the company’s in-house 10.25-inch infotainment system in the coming weeks.

“The next generation of CarPlay gives drivers a smarter, safer way to use their iPhone in the car, deeply integrating with the vehicle while maintaining the very best of the automaker. We are thrilled to begin rolling out CarPlay Ultra with Aston Martin, with more manufacturers to come,” Borchers said.



xAI’s Grok suddenly can’t stop bringing up “white genocide” in South Africa

Where could Grok have gotten these ideas?

The treatment of white farmers in South Africa has been a hobbyhorse of South African X owner Elon Musk for quite a while. In 2023, he responded to a video purportedly showing crowds chanting “kill the Boer, kill the White Farmer” with a post accusing South African President Cyril Ramaphosa of remaining silent while people “openly [push] for genocide of white people in South Africa.” Musk was posting other responses focusing on the issue as recently as Wednesday.

They are openly pushing for genocide of white people in South Africa. @CyrilRamaphosa, why do you say nothing?

— gorklon rust (@elonmusk) July 31, 2023

President Trump has long shown an interest in this issue as well, saying in 2018 that he was directing then-Secretary of State Mike Pompeo to “closely study the South Africa land and farm seizures and expropriations and the large scale killing of farmers.” More recently, Trump granted “refugee” status to dozens of white Afrikaners, even as his administration ends protections for refugees from other countries.

Former American Ambassador to South Africa and Democratic politician Patrick Gaspard posted in 2018 that the idea of large-scale killings of white South African farmers is a “disproven racial myth.”

In launching the Grok 3 model in February, Musk said it was a “maximally truth-seeking AI, even if that truth is sometimes at odds with what is politically correct.” X’s “About Grok” page says that the model is undergoing constant improvement to “ensure Grok remains politically unbiased and provides balanced answers.”

But the recent turn toward unprompted discussions of alleged South African “genocide” has many questioning what kind of explicit adjustments Grok’s political opinions may be getting from human tinkering behind the curtain. “The algorithms for Musk products have been politically tampered with nearly beyond recognition,” journalist Seth Abramson wrote in one representative skeptical post. “They tweaked a dial on the sentence imitator machine and now everything is about white South Africans,” a user with the handle Guybrush Threepwood glibly theorized.

Representatives from xAI were not immediately available to respond to a request for comment from Ars Technica.



Google DeepMind creates super-advanced AI that can invent new algorithms

Google’s DeepMind research division claims its newest AI agent marks a significant step toward using the technology to tackle big problems in math and science. The system, known as AlphaEvolve, is based on the company’s Gemini large language models (LLMs), with the addition of an “evolutionary” approach that evaluates and improves algorithms across a range of use cases.

AlphaEvolve is essentially an AI coding agent, but it goes deeper than a standard Gemini chatbot. When you talk to Gemini, there is always a risk of hallucination, where the AI makes up details due to the non-deterministic nature of the underlying technology. AlphaEvolve uses an interesting approach to increase its accuracy when handling complex algorithmic problems.

According to DeepMind, this AI uses an automatic evaluation system. When a researcher interacts with AlphaEvolve, they input a problem along with possible solutions and avenues to explore. The model generates multiple possible solutions, using the efficient Gemini Flash and the more detail-oriented Gemini Pro, and then each solution is analyzed by the evaluator. An evolutionary framework allows AlphaEvolve to focus on the best solution and improve upon it.

Credit: Google DeepMind
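The generate-evaluate-select loop described above can be sketched in miniature. Everything below is a placeholder: the ‘programs’ are just numbers, propose_variants stands in for the Gemini models, and evaluate stands in for the automatic evaluator; the real system evolves actual code.

```python
import random

# Minimal, purely illustrative sketch of a generate-evaluate-select loop in
# the spirit of the AlphaEvolve description.

def propose_variants(parent: float, n: int = 8) -> list:
    """Stand-in for the LLM step: perturb the current best candidate."""
    return [parent + random.gauss(0, 0.5) for _ in range(n)]

def evaluate(candidate: float) -> float:
    """Stand-in automatic evaluator: higher is better, optimum at 3.0."""
    return -(candidate - 3.0) ** 2

def evolve(generations: int = 25) -> float:
    best = random.uniform(-10.0, 10.0)
    for _ in range(generations):
        population = propose_variants(best) + [best]  # keep the incumbent
        best = max(population, key=evaluate)          # selection pressure
    return best

print(f"Best candidate after evolution: {evolve():.3f}")
```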

Many of the company’s past AI systems, for example, the protein-folding AlphaFold, were trained extensively on a single domain of knowledge. AlphaEvolve, however, is more dynamic. DeepMind says AlphaEvolve is a general-purpose AI that can aid research in any programming or algorithmic problem. And Google has already started to deploy it across its sprawling business with positive results.



Netflix will show generative AI ads midway through streams in 2026

Netflix is joining its streaming rivals in testing the amount and types of advertisements its subscribers are willing to endure for lower prices.

Today, at its second annual upfront to advertisers, the streaming leader announced that it has created interactive mid-roll ads and pause ads that incorporate generative AI. Subscribers can expect to start seeing the new types of ads in 2026, Media Play News reported.

“[Netflix] members pay as much attention to midroll ads as they do to the shows and movies themselves,” Amy Reinhard, president of advertising at Netflix, said, per the publication.

Netflix started testing pause ads in July 2024, per The Verge.

Netflix launched its ad subscription tier in November 2022. Today, it said that the tier has 94 million subscribers, compared to the 300 million total subscribers it claimed in January. The current number of ad subscribers represents a 34 percent increase from November. Half of new Netflix subscribers opt for the $8 per month option rather than ad-free subscriptions, which start at $18 per month, the company says.



Fighting Obvious Nonsense About AI Diffusion

Our government is determined to lose the AI race in the name of winning the AI race.

The least we can do, if prioritizing winning the race, is to try and actually win it.

It is one thing to prioritize ‘winning the AI race’ against China over ensuring that humanity survives, controls and can collectively steer our future. I disagree with that choice, but I understand it. This mistake is very human.

I also believe that more alignment and security efforts at anything like current margins not only do not slow our AI efforts, they would actively help us win the race against China, by enabling better diffusion and use of AI, and ensuring we can proceed with its development. So the current path is a mistake even if you do not worry about humanity dying or losing control over the future.

However, if you look at the idea of building smarter, faster, more capable, more competitive, freely copyable digital minds we don’t understand that can be given goals and think ‘oh that future will almost certainly stay under humanity’s control and not be a danger to us in any way’ (and when you put it like that, um, what are you thinking?) then I understand the second half of this mistake as well.

What is not an understandable mistake, what I struggle to find a charitable and patriotic explanation for, is to systematically cripple or give away many of America’s biggest and most important weapons in the AI race, in exchange for thirty pieces of silver and some temporary market share.

To continue alienating our most important and trustworthy allies with unnecessary rhetoric and putting up trading barriers with them. To attempt to put tariffs even on services like movies where we already dominate and otherwise give the most important markets, like the EU, every reason in their minds to put up barriers to our tech companies and question our reliability as an ally. And simultaneously in the name of building alliances put the most valuable resources with unreliable partners like Malaysia, Saudi Arabia, Qatar and the UAE.

Indeed, we have now scrapped the old Biden ‘AI diffusion’ rule with no sign of its replacement, and where did David Sacks gloat about this? Saudi Arabia, of course. This is what ‘trusted partners’ means to them. Meanwhile, we are warning sternly against use of Huawei’s AI chips, ensuring China keeps all those chips itself. Our future depends on who has the compute, who ends up with the chips. We seem instead to think the future is determined by the revenue from chip manufacturing? Why would that be a priority? What do these people even think is going on?

To not only fail to robustly support and bring down regulatory and permitting barriers to the nuclear power we urgently need to support our data centers, but to actively wipe out the subsidies on which the nuclear industry depends, as the latest budget aims to do with remarkably little outcry via gutting the LPO and tax credits, while China of course ramps up its nuclear power plant construction efforts, no matter what the rhetoric on this might say. Then to use our inability to power the data centers as a reason to put our strategically vital data centers, again, in places like the UAE, because they can provide that power. What do you even call that?

To fail to let our AI companies recruit the best and brightest, who want to come here and help make America great, instead throwing up more barriers and creating a climate of fear that, I’m hearing, is turning many of the best people away.

And most of all, to say that the edge America must preserve, the ‘race’ that we must ‘win,’ is somehow the physical production of advanced AI chips. So, people say, in order to maintain our edge in chip production, we should give that edge entirely away right now, allowing those chips to be diverted to China, as would be inevitable in the very places where we seem most eager to enable sales. Nvidia even outright advocates that it should be allowed to sell to China openly, and no one in Washington seems to hold them accountable for this.

And we are doing all this while many perpetuate the myth that our AI efforts are not very solidly ahead of China in the places that matter most, or that China threatens to lock in the world’s customers, because DeepSeek exists (impressive, but still very clearly substantially behind our top labs) or because TikTok and Temu exist, while forgetting that the much bigger Amazon and Meta also exist.

Temu’s sales are less than a tenth of Amazon’s, and the rest of the world’s top four e-commerce websites are Shopify, Walmart.com and eBay. As worrisome as it is, TikTok is only the fourth largest social media app behind Facebook, YouTube and Instagram, and there aren’t signs of that changing. Imagine if that situation was reversed.

Earlier this week I did an extensive readthrough and analysis of the Senate AI Hearing.

Here, I will directly lay out my response to various claims by and cited by US AI Czar David Sacks about the AI Diffusion situation and the related topics discussed above.

  1. Some of What Is Being Incorrectly Claimed.

  2. Response to Eric Schmidt.

  3. China and the AI Missile Gap.

  4. To Preserve Your Tech Edge You Should Give Away Your Tech Edge.

  5. To Preserve Your Compute Edge You Should Sell Off Your Compute.

  6. Shouting From The Rooftops: The Central Points to Know.

  7. The Longer Explanations.

  8. The Least We Can Do.

There are multiple distinct forms of Obvious Nonsense to address, either as text or very directly implied, whoever you attribute the errors to:

David Sacks (US AI Czar): Writing in NYT, former Google CEO Eric Schmidt warns that “China Tech Is Starting to Pull Ahead”:

“China is at parity or pulling ahead of the United States in a variety of technologies, notably at the A.I. frontier. And it has developed a real edge in how it disseminates, commercializes and manufactures tech. History has shown us that those who adopt and diffuse a technology the fastest win.”

As he points out, diffusing a technology the fastest — and relatedly, I would add, building the largest partner ecosystem — are the keys to winning. Yet when Washington introduced an “AI Diffusion Rule”, it was almost 200 pages of regulation hindering adoption of American technology, even by close partners.

The Diffusion Rule is on its way out, but other regulations loom.

President Trump committed to rescind 10 regulations for every new regulation that is added.

If the U.S. doesn’t embrace this mentality with respect to AI, we will lose the AI race.

Sriram Krishnan: Something @DavidSacks and I and many others here have been emphasizing is the need to have broad partner ecosystems using American AI stack rather than onerous complicated regulations.

If the discussion were ‘a bunch of countries like Mexico, Poland, and Portugal are in Tier 2 that should instead have been in Tier 1,’ then I agree there are a number of countries that probably should have been Tier 1. And I agree that there might well be a simpler implementation waiting to be found.

And yet, why is it that in practice, these ‘broad partner ecosystems using American AI’ always seem to boil down to a handful of highly questionably allied and untrustworthy Gulf States with oil money trying to buy global influence, perhaps with a side of Malaysia and other places that are very obviously going to leak to China? David Sacks literally seems to think that if you do not literally put the data center in specifically China, then that keeps it in friendly hands and out of China’s grasp, and that we can count on our great friendships and permanent alliances with places like Saudi Arabia. Um, no. Why would you think that?

That Eric Schmidt editorial quoted above is a royal mess. For example, you have this complete non-sequitur.

Eric Schmidt and Selina Xu: History has shown us that those who adopt and diffuse a technology the fastest win.

So it’s no surprise that China has chosen to forcefully retaliate against America’s recent tariffs.

China forcefully retaliated against America’s tariffs for completely distinct reasons. The story Schmidt is trying to imply here doesn’t make any sense. His vibe reports are Just So Stories, not backed up at all by economic or other data.

‘By some benchmarks’ you can show pretty much anything, but I mean wow:

Eric Schmidt and Selina Xu: Yet, as with smartphones and electric vehicles, Silicon Valley failed to anticipate that China would find a way to swiftly develop a cheap yet state-of-the-art competitor. Today’s Chinese models are very close behind U.S. versions. In fact, DeepSeek’s March update to its V3 large language model is, by some benchmarks, the best nonreasoning model.

Look. No. Stop.

He then pivots to pointing out that there are other ‘tech’ areas where China is competitive, and goes into full scaremonger mode:

Apps for the Chinese online retailers Shein and Temu and the social media platforms RedNote and TikTok are already among the most downloaded globally. Combine this with the continuing popularity of China’s free open-source A.I. models, and it’s not hard to imagine teenagers worldwide hooked on Chinese apps and A.I. companions, with autonomous Chinese-made agents organizing our lives and businesses with services and products powered by Chinese models.

As I noted above, ‘American online retailers like Amazon and Shopify and the social media platforms Facebook and Instagram are already not only among but the most used globally.’

There is a stronger case one can make with physical manufacturing, when Eric then pivots to electric cars (and strangely focuses on Xiaomi over BYD) and industrial robotics.

Then, once again, he makes the insane ‘the person behind is giving away their inferior tech so we should give away our superior tech to them, that’ll show them’ argument:

We should learn from what China has done well. The United States needs to openly share more of its A.I. technologies and research, innovate even faster and double down on diffusing A.I. throughout the economy.

When you are ahead and you share your model, you give your rivals that model for free, killing your lead and your business for some sort of marketing win, and also you’re plausibly creating catastrophic risk. When you are behind, and you share it, sure, I mean why not.

In any case, he’s going to get his wish. OpenAI is going to release an open weight reasoning model, reducing America’s lead in order to send the clear message that yes we are ahead. Hope you all think it was worth it.

The good AI argument is that China is doing a better job in some ways of AI diffusion, of taking its AI capabilities and using them for mundane utility.

Similarly, I keep seeing forms of an argument that says:

  1. America’s export controls have given us an important advantage in compute.

  2. China’s companies have been slowed down by this, but have managed to stay only somewhat behind us in spite of it (largely because following is much easier).

  3. Therefore, we should lift the controls and give up our compute edge.

I’m sorry, what?

At lunch during Selina’s trip to China, when U.S. export controls were brought up, someone joked, “America should sanction our men’s soccer team, too, so they will do better.” So that they will do better.

It’s a hard truth to swallow, but Chinese tech has become better despite constraints, as Chinese entrepreneurs have found creative ways to do more with less. So it should be no surprise that the online response in China to American tariffs has been nationalistic and surprisingly optimistic: The public is hunkering down for a battle and thinks time is on Beijing’s side.

I don’t know why Eric keeps talking about the general tariffs or trade war with China here, or rather I do and it’s very obviously a conflation designed as a rhetorical trick. That’s a completely distinct issue, and I here take no position on that fight other than to note that our actions were not confined to China, and we very obviously shouldn’t be going after our trading partners and allies in these ways – including by Sacks’s logic.

The core proposal here is that, again:

  1. We gave China less to work with, put them at a disadvantage.

  2. They are managing to compete with us despite (his word) the disadvantage.

  3. Therefore we should take away their disadvantage.

It’s literal text. “America should sanction our men’s soccer team, too, so they will do better.” Should we also go break their legs? Would that help?

Then there’s a strange mix of ‘China is winning so we should become a centrally planned economy,’ mixed with ‘China is winning so we cannot afford to ever have any regulations on everything.’ Often both are coming from the same people. It’s weird.

So, shouting from the rooftops, once more with feeling for the people in the back:

  1. America is ahead of China in AI.

  2. Diffusion rules serve to protect America’s technological lead where it matters.

  3. UAE, Qatar and Saudi Arabia are not reliable American allies, nor are they important markets for our technology. We should not be handing them large shares of the world’s most valuable resource, compute.

  4. The exact diffusion rule is gone but something similar must take its place, to do otherwise would be how America ‘loses the AI race.’

  5. Not having any meaningful regulations at all on AI, or ‘building machines that are smarter and more capable than humans,’ is not a good idea, nor would it mean America would ‘lose the AI race.’

  6. AI is currently virtually unregulated as a distinct entity, so ‘repeal 10 regulations for every one you add’ amounts to not regulating at all the building of machines that are soon likely to be smarter and more capable than humans, or anything else either.

  7. ‘Winning the AI race’ is about racing to superintelligence. It is not about who gets to build the GPU. The reason to ‘win’ the ‘race’ is not market share in selling big tech solutions. It is especially not about who gets to sell others the AI chips.

  8. If we care about American dominance in global markets, including tech markets, stop talking about how what we need to do is not regulate AI, and start talking about the things that will actually help us, or at least stop doing the things that actively hurt us and could actually make us lose.

  1. American AI chips dominate and will continue to dominate. Our access to compute dominates, and will dominate if we enact and enforce strong export controls. American models dominate, we are in 1st, 2nd and 3rd with (in some order) OpenAI, Google and Anthropic. We are at least many months ahead.

    1. There was this one time DeepSeek put out an excellent reasoning model called r1 and an app for it.

    2. Through a confluence of circumstances (including misinterpretation of its true training costs, its making a good clean app where it showed its chain of thought, Google being terrible at marketing, it beating several other releases by a few weeks, OpenAI’s best models being behind paywalls, China ‘missile gap’ background fears, comparing only in the realms where r1 was relevant, acting as if only open models count, etc), this caught fire for a bit.

    3. But after a while it became clear that while r1 was a great achievement and indicated DeepSeek was a serious competitor, it was still even at their highest point 4-6 months behind, fundamentally it was a ‘fast follow’ achievement which is very different from taking a lead or keeping pace, and as training costs are scaled up it will be very difficult for DeepSeek to keep pace.

    4. That doesn’t mean DeepSeek doesn’t matter. Without DeepSeek the company, China would be much further behind than this.

    5. In response to this, a lot of jingoism and fearmongering akin to Kennedy’s ‘missile gap’ happened, which continues to this day.

    6. There are of course other tech and no-tech areas where China is competitive, such as Temu and TikTok in tech. But that’s very different.

    7. China does have advantages, especially its access to energy, and if they were allowed to access large amounts of compute that would be worrisome.

  2. The diffusion rules serve to protect America’s technological lead where it matters.

    1. America makes the best AI chips.

    2. The reason this matters is that it lets us be the ones who have those chips.

    3. America’s lead has many causes but one main cause is that we have far more and better compute, due to superior access to the best AI chips.

  3. Biden’s Diffusion Rule placed some countries in Tier 2 that could reasonably have and probably should have (based on what I know) been placed in Tier 1, or at worst a kind of Tier 1.5 with only mildly harsher supervision.

    1. If you want to move places like the remaining NATO members into Tier 1, or do something with a similar effect? That seems reasonable to me.

    2. However this very clearly does not include the very countries that we keep talking about allowing to build massive data centers with American AI chips, like the UAE, Saudi Arabia and Qatar.

    3. When there is talk of robust American allies and who will build and use our technology, somehow the talk is almost always about gulf states and other unreliable allies that are trying to turn their wealth into world influence.

    4. I leave why this might be so as an exercise to the reader.

    5. Even if such states do stay on our side, you had better believe they will use the leverage this brings to extract various other concessions from us.

    6. There is also very real concern that placing these resources in such locations would cause them to be misused by bad actors, including for terrorism, including via CBRN risks. It is foolish not to realize this.

    7. There is a conflation of selling other countries American AI chips and having them build AI data centers, with those countries using America’s AIs and other American tech company products. We should care mostly about them using our software products. The main reason to build AI data centers in other countries that are not our closest most trustworthy allies is if we are unable to build those data centers in America or in our closest most trustworthy allies, which mostly comes down to issues of permitting and power supply, which we could do a lot more to solve.

    8. If you’re going to say ‘the two are closely related we don’t want to piss off our allies’ right about now, I am going to be rather speechless given what else we have been up to lately including in trade, you cannot be serious right now. Or, if you want to actually get serious about this across the board, good, let’s talk.

    9. Is this a sign we don’t fully trust some of these countries? Yes. Yes it is.

  4. The exact diffusion rule is going away but something similar must and will take its place, to do otherwise would be how America ‘loses the AI race.’

    1. If China could effectively access the best AI chips, that would get rid of one of our biggest and most important advantages. Given their edge in energy, it could over time reverse that advantage.

    2. The point of trying to prevent China from improving its chip production is to prevent China from having the resulting compute. If we sell the chips to prevent this, then they already have the compute now. You lose.

    3. It is very clear that our exports to the ‘tier 2’ countries that look to buy what looks suspiciously like a lot of chips are often diverted to use by China, with the most obvious example being those sold to Malaysia.

    4. We should also worry about what happens to data centers built in places like Saudi Arabia or the UAE.

    5. I will believe the replacement rule will have the needed teeth when I see it.

    6. That doesn’t mean we can’t find a better, simpler implementation that protects American chips from falling into Chinese hands. But we need some diffusion rule that we can enforce, and that in practice actually prevents the Chinese from buying or getting access to our AI chips in quantity.

    7. Yes, if we sell our best AI chips to everyone freely, as Nvidia wants to do, or do it in ways that are effectively the same thing, then that helps protect Nvidia’s profits and market share, and by denying others markets we do gain some edge in the ability to maintain our dominance in making AI chips.

    8. But so what? All we do is make a little money on the AI chips, and China gets to catch up in actually having and using the AI chips, which is what matters. We’d be sacrificing the future on the altar of Nvidia’s stock price. This is the capitalist selling the rope with which to hang him. ‘Winning the race’ to an ordinary tech market is not what matters. If the only way to protect our lead for a little longer there is to give away the benefits of the lead, of what use was the lead?

    9. It also would make very little difference to either Nvidia or its Chinese competitors.

    10. Nvidia can still sell as many chips as it can produce, well above cost. All the chips Nvidia is not allowed to sell to China, even the crippled H20s, will happily be purchased in Western markets at profitable prices, if Nvidia allows it, giving America and its allies more compute and China less compute.

    11. I would be happy, if necessary, to have USG purchase any chips that Nvidia or AMD or anyone else is unable to sell due to diffusion rules. We would have many good uses for them, we can use them for public compute resources for universities and startups or whatever if the military doesn’t want them. The cost is peanuts relative to the stakes. (Disclosure, I am a shareholder of Nvidia, etc, but also I am writing this entire post).

    12. Demand in China for AI chips greatly outstrips supply. They have no need for export markets for their chips, and indeed we should be happy if they choose to export some of them rather than keeping them for domestic use.

    13. China already sees AI chip production as central to its future and national security. They are already pushing as hard as they dare.

  5. Not having any meaningful regulations at all on AI, or ‘building machines that are smarter and more capable than humans,’ is not a good idea, nor would it mean America would ‘lose the AI race.’

    1. This is not a strawman position. The House is trying to impose a 10-year moratorium on state and local enforcement of any laws whatsoever related to AI, even a potential law banning CSAM, without offering anything to replace that in any way, and Congress notoriously can’t pass laws these days. We also have the call to ‘repeal 10 regulations for every new one,’ which is again de facto a call for no regulations at all (see #6).

    2. Highly capable AI represents an existential risk to humanity.

    3. If we ‘win the race’ by simply going ahead as fast as possible, it’s not America that wins the future. The AIs win the future.

    4. I can’t go over all the arguments about that here, but seriously it should be utterly obvious that building more intelligent, capable, competitive, faster, cheaper minds and optimization engines, that can be freely copied and given whatever goals and tasks, is not a safe thing for humanity to do.

    5. I strongly believe it turns out it’s far more dangerous than I made it sound there, for many many reasons. I don’t have the space here to talk about why, but seriously, how do people claim this is a ‘safe’ action? What?

    6. Even if highly capable AI remains under our control, it is going to transform the world and all aspects of our civilization and way of life. The idea that we would not want to steer that at all seems rather crazy.

    7. Regulations do not need to ‘slow down’ AI in a meaningful way. Indeed, a total lack of meaningful regulations would slow down diffusion and practical use of AI, including for national security and core economic purposes, more than wise regulation, because no one is going to use AI they cannot trust.

    8. That goes to both people knowing that they can trust AI, and also to requiring the AIs be made trustworthy. Security is capability. We also need to protect our technology and intellectual property from theft if we want to keep a lead.

    9. A lack of such regulations would also mean falling back upon the unintended consequences of ordinary law as they then happen to apply to AI, which will often be extremely toxic for our ability to apply AI to the most valuable tasks.

    10. If we try to not regulate AI at all, the public will turn against AI. Americans already dislike AI, in a way the Chinese do not. We must build trust.

    11. China, like everyone else, already regulates AI. The idea that if we had a fraction of the regulations they do, or if we interfere with companies or the market a fraction of how much they constantly do so everywhere, that we suddenly ‘lose the race,’ is silly.

    12. We have a substantial lead in AI, despite many efforts to lose it that I discuss later. We are not in danger of ‘losing’ every time we breathe on the situation.

    13. Most of the regulations that are being pushed for are about transparency, often even transparency to the government, so we can know what the hell is going on, and so people can critique the safety and security plans of labs. They are about building state capacity to evaluate models, and using that, which actively benefits AI companies in various ways as discussed above.

    14. There are also real and important mundane harms to deal with now.

    15. Yes, if we were to impose highly onerous, no good, very bad regulations, in the style of the European Union, that would threaten our AI lead and be very bad. This is absolutely a real risk. But this type of accusation consistently gets levied against any bill attempting to do anything, anywhere, for any reason – or that someone is trying to ‘ban math’ or ‘kill AI’ or whatever. Usually this involves outright hallucinations about what is in the bill, or its consequences.

  6. AI is currently virtually unregulated as a distinct entity, so ‘repeal 10 regulations for every one you add’ amounts to not regulating at all the building of machines that are soon likely to be smarter and more capable than humans, or anything else either.

    1. There are many regulations that impact AI in various ways.

    2. Many of those regulations are worth repealing or reforming. For example, permitting reform on power plants and transmission lines. And there are various consequences of copyright, or of common law, that should be reconsidered for the AI age.

    3. What almost all these rules have in common is that they are not rules about AI. They are rules that are already in place in general, for other reasons. And again, I’d be happy to get rid of many of them, in general or for AI in particular.

    4. But yes, you are going to want to regulate AI, and not merely in the ‘light touch’ ways that are code words for doing nothing, or actively working to protect AI from existing laws.

    5. AI is soon going to be the central fact about the world. To suggest this level of non-intervention is not classical liberalism, it is anarchism.

    6. Anarchism does not tend to go well for the uncompetitive and disadvantaged, which in the future age of ASI would be the humans, and it fails to solve various important market failures, collective action and public goods problems and so on.

    7. The reason why a general hands-off approach has in the past tended to benefit humans, so long as you work to correct key market failures and solve particular collective action problems, is that humans are the most powerful optimization engines, and most intelligent and powerful minds, on the planet, and we have various helpful social dynamics and characteristics. All of that, and some other key underpinnings I could go into, often won’t apply to a future world with very powerful AI.

    8. If we don’t do sensible regulations now, while we can all navigate this calmly, it will get done after something goes wrong, and not calmly or wisely.

  7. ‘Winning the AI race’ is not about who gets to build the GPUs. ‘Winning’ the ‘race’ is not important because of who gets market share in selling big tech solutions. It is especially not about who gets to sell others the AI chips. Winning the race is about the race to superintelligence.

    1. The major AI labs say we will likely reach AGI within Trump’s second term, with superintelligence (ASI) following soon thereafter. David Sacks himself endorses this view explicitly.

    2. ‘Winning the race’ to superintelligence is indeed very important. The way in which humanity reaches superintelligence (assuming we do reach it) will determine the future.

    3. That future might be anything from wonderful to worthless. It might or might not involve humanity surviving, or being in control over the future. It might or might not reflect different values, or be something we would find valuable.

    4. If we build a superintelligence before we know how to align it, meaning before we know how to get it to do what we want it to do, everyone dies, or at minimum we lose control over the future.

    5. If we build a superintelligence and know how to align it, but we don’t choose a good thing to align it to, meaning we don’t wisely choose how it will act, then the same thing happens. We die, or we lose control over the future.

    6. If we build a superintelligence and know how to align it, and align it in general to ‘whatever the local human tells it to do,’ even with restrictions on that, and give out copies, this results at best in gradual disempowerment of humanity and us losing control over the future and the future likely losing all value. This problem is hard.

    7. This is very different from a question like ‘who gets better market share for their AI products,’ whether that is hardware or software, and questions about things like commercial adaptation and lock-in or tech stack usage or whatnot, as if AI were some ordinary technology.

    8. AI actually has remarkably little lock-in. You can mostly swap one model out for another at will if someone comes out with a better one. There’s no need to run a model that matches the particular AI chips you own, either. AI itself will be able to simplify the ‘migration’ process or any lock-in issues.

    9. It’s not that whose AI models people use doesn’t matter at all. But in a world in which we will soon reach superintelligence, it’s mostly about market share in the meantime to fund AI development.

    10. If we don’t soon reach superintelligence, then we’re dealing with a far more ‘ordinary’ technology, and yes we want market share, but it’s no longer an existentially important race, it won’t have dramatic lock-in effects, and getting to the better AI products first will still depend on us retaining our compute advantages as long as possible.

  8. If we care about American dominance in global markets, including tech markets, and especially if we care about winning the race to AGI and superintelligence and otherwise protecting American national security, stop talking about how what we need to do is not regulate AI, and start talking about the things that will actually help us, or at least stop doing the things that actively hurt us and could actually make us lose.

    1. Straight talk. While it’s not my primary focus because development of AGI and ASI is more important, I strongly agree that we want American tech, especially American software, being used as widely as possible, especially by allies, across as much of the tech stack as possible. Even more than that, I strongly want America to have the lead in frontier highly capable AI, including AGI and then ASI, in the ways that determine the future.

    2. If we want to do that, what is most important to accomplishing this?

    3. We need allies to work with us and use our tech. Everyone says this. That means we need to have allies! That means working with them, building trust. Make them want to build on our tech stacks, and buy our products.

    4. That also means not imposing tariffs on them, or making them lose trust in us and our technology. Various recent actions have made our allies lose trust, in ways that are causing them to be less trusting of American tech stacks. And when we go to trade wars with them, you know what our main exports are that they will go after? Things like AI.

    5. It also means focusing most on our most important and trustworthy allies that have the most important markets. That means places like our NATO allies, Japan, South Korea and Australia, not Saudi Arabia, Qatar and the UAE. Those latter markets don’t matter zero, but they are relatively tiny.

    6. Yes, it means avoiding hypothetical, sufficiently onerous regulation of AI directly, and I will absolutely be keeping an eye out for this. But most of the regulatory and legal barriers that matter lie elsewhere.

    7. The key barriers are in the world of atoms, not the world of bits.

    8. Energy generation and transmission, permitting reform.

    9. High-skilled immigration, letting talent come to America.

    10. Education reform so AI helps teach rather than helping students cheat.

    11. Want reshoring? Repeal the Jones Act so we can transport the resulting goods. Automate the ports. Allow self-driving cars and trucks broadly. And so on.

    12. Regulations that prevent the application of AI to high value sectors, or otherwise hold back America. Broad versions of YIMBY for housing. Occupational licensing. FDA requirements. The list goes on. Unleash the abundance agenda, it mostly lines up with what AI needs. It’s time to build.

    13. Dealing with various implications of other laws that often were crazy already and definitely don’t make sense in an AI world.

    14. The list goes on.

Or, as Derek Thompson put it:

Derek Thompson: Trump’s new AI directive (quoted below from David Sacks) argues the US should take care to:

– respect our trading partners/allies rather than punish them with dumb rules that restrict trade

– respect “due process”

It’d be interesting to apply these values outside of AI!

Jordan Schneider: It’s an NVDA press release. Just absurd.

David Sacks continues to beat the drum that the diffusion rule ‘undermines the goal of winning the AI race,’ as if the AI race is about Nvidia’s market share. It isn’t.

As for avoiding allocation of resources by governmental decision, overreach of our executive branch authorities to restrict trade, alienation of US allies, and a lack of due process, which are Sacks’s key points here? Yeah, those generally sound like good ideas.

To that end, yes, I do believe we can improve on Biden’s proposed diffusion rules, especially when it comes to US allies we can trust. I like the idea that we should impose fewer trade restrictions on these friendly countries, so long as we can ensure the chips don’t effectively fall into the wrong hands. We can certainly talk price.

Alas, in practice, it seems like the actual plans are to sell massive amounts of AI chips to places like the UAE, Saudi Arabia and Malaysia. Those aren’t trustworthy American allies. Those are places with close China ties. We all know what those sales really mean, and where the chips could easily be going. And those are chips we could have kept in more trustworthy and friendly hands that are eager to buy them, especially with help facilitating putting those chips to good use.

The policy conversations I would like to be having would focus not only on how best to supercharge American AI and the American economy, but also on how to retain humanity’s ability to steer the future and ensure AI doesn’t take control, kill everyone or otherwise wipe out all value. And ideally, on investing enough in AI alignment, security, transparency and reliability that there would start to be a meaningful tradeoff where going safer would also mean going slower.

Alas. We are massively underinvesting in reliability and alignment and security purely from a practical utility perspective, and we are not even having that discussion.

Instead we are having a discussion about how, even if your only goal is ‘America must beat China and let the rest handle itself,’ to stop shooting ourselves in the foot on that basis alone.

The very least we can do is not shoot ourselves in the foot, and not sell out our future for a little bit of corporate market share or some amount of oil money.


Fighting Obvious Nonsense About AI Diffusion Read More »

microsoft-shares-its-process-(and-discarded-ideas)-for-redone-windows-11-start-menu

Microsoft shares its process (and discarded ideas) for redone Windows 11 Start menu

Microsoft put a lot of focus on Windows 11’s design when it released the operating system in 2021, making a clean break with the design language of Windows 10 (which had, itself, simply tweaked and adapted Windows 8’s design language from 2012). Since then, Microsoft has continued to modify the software’s design in bits and pieces, both for individual apps and for foundational UI elements like the Taskbar, system tray, and Windows Explorer.

Microsoft is currently testing a redesigned version of the Windows 11 Start menu, one that reuses most of the familiar elements from the current design but reorganizes them and gives users a few additional customization options. On its Microsoft Design blog today, the company walked through the new design and showed some of the ideas that were tried and discarded in the process.

This discarded Start menu design toyed with an almost Windows XP-ish left-hand sidebar, among other elements. Microsoft

Microsoft says it tested its menu designs with “over 300 Windows 11 fans” in unmoderated studies, “and dozens more” in “live co-creation calls.” These testers’ behavior and reactions informed what Microsoft kept and what it discarded.

Many of the discarded menu ideas include larger previews for recently opened files, more space given to calendar reminders, and recommended “For You” content areas; one has a “create” button that would presumably activate some generative AI feature. Looking at the discarded designs, it’s easier to appreciate that Microsoft went with a somewhat more restrained redesign of the Start menu that remixes existing elements rather than dramatically reimagining it.

Microsoft has also tweaked the side menu that’s available when you have a phone paired to your PC, making it toggleable via a button in the upper-right corner. That area is used to display recent texts and calls and other phone notifications, recent contacts, and battery information, among a couple other things.

Microsoft’s team wanted to make sure the new menu “felt like it belonged on both a [10.5-inch] Surface Go and a 49-inch ultrawide,” a nod to the variety of hardware Microsoft needs to consider when making any design changes to Windows. The menu the team landed on is essentially what has been visible in Windows Insider Preview builds for a month or so now: two rows of pinned icons, a “Recommended” section with recently installed apps, recently opened files, a (sigh) Windows Store app that Microsoft thinks you should try, and a few different ways to access all the apps on your PC. By default, these will be arranged by category, though you can also view a hierarchical alphabetized list like you can in the current Start menu; the big difference is that this view is at the top level of the Start menu in the new version, rather than being tucked away behind a button.

For more on the history of the Start menu from its inception in the early ’90s through the release of Windows 10, we’ve collected tons of screenshots and other reminiscences here.



Microsoft shares its process (and discarded ideas) for redone Windows 11 Start menu Read More »

fcc-threatens-echostar-licenses-for-spectrum-that-spacex-wants-to-use

FCC threatens EchoStar licenses for spectrum that SpaceX wants to use

“If SpaceX had done a basic search of public filings, it would know that EchoStar extensively utilizes the 2 GHz band and that the Commission itself has confirmed the coverage, utilization, and methodology for assessing the quality of EchoStar’s 5G network based on independent drive-tests,” EchoStar told the FCC. “EchoStar’s deployment already reaches over 80 percent of the United States population with over 23,000 5G sites deployed.”

There is also a pending petition filed by Vermont-based VTel Wireless, which asked the FCC to reconsider a 2024 decision to extend EchoStar construction deadlines for several spectrum bands. VTel was outbid by Dish in auctions for licenses to use AWS H Block and AWS-3 bands.

“In this case, teetering on the verge of bankruptcy, EchoStar found itself unable to meet the commitments previously made to the Commission in connection with its approval of T-Mobile’s merger with Sprint—an approval predicated on EchoStar constructing a fourth nationwide 5G broadband network by June 14, 2025,” VTel wrote in its October 2024 petition. “But with no notice to or input from the public, WTB [the FCC’s Wireless Telecommunications Bureau] apparently cut a deal with EchoStar to give it yet more time to complete that network and finally put its wireless licenses to use.”

FCC seeks public input

Carr’s letter said he asked FCC staff to investigate EchoStar’s compliance with construction deadlines and “to issue a public notice seeking comment on the scope and scale of MSS [mobile satellite service] utilization in the 2 GHz band that is currently licensed to EchoStar or its affiliates.” The AWS-4 band (2000-2020 MHz and 2180-2200 MHz) was originally designated for satellite service. The FCC decided to also allow terrestrial use of the frequencies in 2012 to expand mobile broadband access.

The FCC Space Bureau announced yesterday that it is seeking comment on EchoStar’s use of the 2 GHz spectrum, and the Wireless Telecommunications Bureau is seeking comment on VTel’s petition for reconsideration.

“In 2019, EchoStar’s predecessor, Dish, agreed to meet specific buildout obligations in connection with a number of spectrum licenses across several different bands,” Carr wrote. “In particular, the FCC agreed to relax some of EchoStar’s then-existing buildout obligations in exchange for EchoStar’s commitment to put its licensed spectrum to work deploying a nationwide 5G broadband network. EchoStar promised—among other things—that its network would cover, by June 14, 2025, at least 70 percent of the population within each of its licensed geographic areas for its AWS-4 and 700 MHz licenses, and at least 75 percent of the population within each of its licensed geographic areas for its H Block and 600 MHz licenses.”

FCC threatens EchoStar licenses for spectrum that SpaceX wants to use Read More »

dutch-scientists-built-a-brainless-soft-robot-that-runs-on-air 

Dutch scientists built a brainless soft robot that runs on air 

Most robots rely on complex control systems, AI-powered or otherwise, that govern their movement. These centralized electronic brains need time to react to changes in their environment and produce movements that are often awkwardly, well, robotic.

It doesn’t have to be that way. A team of Dutch scientists at the FOM Institute for Atomic and Molecular Physics (AMOLF) in Amsterdam built a new kind of robot that can run, go over obstacles, and even swim, all driven only by the flow of air. And it does all that with no brain at all.

Sky-dancing physics

“I was in a lab, working on another project, and had to bend a tube to stop air from going through it. The tube started oscillating at very high frequency, making a very loud noise,” says Alberto Comoretto, a roboticist at AMOLF and lead author of the study. To see what was going on with the tube, Comoretto set up a high-speed camera and recorded the movement. He found that the movement resulted from the interplay between the air pressure inside the tube and the state of the tube itself.

When there was a kink in the tube, the increasing pressure pushed that kink along the tube’s length. That caused the pressure to decrease, which enabled a new kink to appear and the cycle to repeat. “We were super excited because we saw this self-sustaining, periodic, asymmetric motion,” Comoretto told Ars.

The first reason for Comoretto’s excitement was that the flapping tube in his lab was driven by the kind of airflow physics that Peter Minshall, Doron Gazit, and Aireh Dranger harnessed to build their famous dancing “Fly Guys” for the Olympic Games in Atlanta in 1996. The second reason was that the asymmetry and periodicity he saw in the tube’s movement pattern were also present in the way all living things move, from single-celled organisms to humans.
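
To make that kink-and-pressure cycle concrete, here is a minimal, purely illustrative relaxation-oscillator sketch in Python. It is not the AMOLF team’s model; the inflow, release rate, thresholds and tube parameters are invented values, chosen only to show how a constant air supply plus a pressure-triggered kink release can produce self-sustaining, periodic motion with no controller in the loop.

    # Toy relaxation-oscillator sketch of the kink-and-pressure cycle described
    # above. This is NOT the researchers' model; every constant here is a
    # made-up illustrative value.

    def simulate(steps=2000, dt=0.01,
                 inflow=1.0,         # hypothetical constant air supply rate
                 release_rate=8.0,   # how fast pressure vents while a kink travels
                 p_release=1.5,      # pressure at which the kink starts to move
                 p_reset=0.5,        # pressure at which a new kink forms
                 tube_length=1.0,
                 kink_speed=0.6):
        pressure = 0.0
        kink_pos = 0.0
        kink_moving = False
        history = []
        for step in range(steps):
            if kink_moving:
                # A traveling kink vents pressure as it moves along the tube.
                pressure -= release_rate * dt
                kink_pos += kink_speed * dt
                if pressure <= p_reset or kink_pos >= tube_length:
                    # Pressure has dropped (or the kink reached the end), so a
                    # new kink forms at the inlet and the cycle restarts.
                    kink_moving = False
                    kink_pos = 0.0
            else:
                # The kink blocks flow, so the constant inflow builds pressure.
                pressure += inflow * dt
                if pressure >= p_release:
                    kink_moving = True
            history.append((step * dt, pressure, kink_pos))
        return history

    if __name__ == "__main__":
        for t, p, x in simulate()[::100]:
            print(f"t={t:5.2f}  pressure={p:4.2f}  kink position={x:4.2f}")

Running it prints a repeating sawtooth: pressure builds, a kink releases it, and the pattern repeats on its own, the same flavor of self-sustaining, brainless oscillation Comoretto saw in his tube.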

Dutch scientists built a brainless soft robot that runs on air  Read More »