Author name: Mike M.


Why Gov. Greg Abbott won’t release his emails with Elon Musk

The language Abbott’s office used appears to be fairly boilerplate. Paxton’s office, in an explanation of the common-law privacy exception on its website, mentions that “personal financial information” that doesn’t deal with government transactions “is generally highly intimate or embarrassing and must be withheld.”

But Bill Aleshire, a Texas-based attorney specializing in public records law, was appalled that the governor is claiming that months of emails between his office and one of the world’s richest people are all private.

“Right now, it appears they’ve charged you $244 for records they have no intention of giving you,” Aleshire said. “That is shocking.”

Aleshire said it’s not unusual for government agencies to tap the common-law privacy exception in an attempt to withhold records from the public. But he’s used to it being cited in cases that involve children, medical data, or other highly personal information—not for emails between an elected official and a businessman.

“You’re boxing in the dark,” Aleshire said. “You can’t even see what the target is or what’s behind their claim.”

Aleshire added that due to a recent Texas Supreme Court ruling, there is effectively no way to enforce public records laws against Abbott and other top state officials. He called the decision an “ace card” for these politicians.

The case dealt with requests to release Abbott and Paxton’s communications in the wake of the January 6 attack on the US Capitol and the 2022 school shooting in Uvalde. The high court ruled that it is the only body that can review whether these officials are in compliance with public records laws.

Kevin Bagnall, a lawyer representing Musk’s rocket company SpaceX, also wrote a letter to Paxton’s office arguing the emails should be kept secret. He cited one main reason: They contain “commercial information whose disclosure would cause SpaceX substantial competitive harm.”

Most of the rest of Bagnall’s letter, which further explained SpaceX’s argument, was redacted.

Musk and representatives for his companies did not respond to requests for comment for this story.

Abbott’s spokesperson did not respond to specific questions about the records, including whether The Texas Newsroom would be refunded if Paxton withholds them.

In a statement, he said, “The Office of the Governor rigorously complies with the Texas Public Information Act and will release any responsive information that is determined to not be confidential or excepted from disclosure.”

The office of the attorney general has 45 business days to determine whether to release Abbott’s records.

Lauren McGaughy is a journalist with The Texas Newsroom, a collaboration among NPR and the public radio stations in Texas. She is based at KUT in Austin. Reach her at [email protected]. ProPublica is a Pulitzer Prize-winning investigative newsroom.



Hyundai’s Ioniq 6 N offers more sound, more shifts, more smiles

In addition to the new sound, the shape of the 6 N is obviously quite a bit different from the SUV silhouette of the Ioniq 5. Being a sedan means having a trunk, and on that trunk is a mighty large wing with a dramatic curving profile to match the sculpted edges at the rear of the 6 N.


The Ioniq 6 can fast charge very quickly, but how many tracks have convenient 350 kW DC fast chargers? Credit: Tim Stevens

Not big enough? Don’t worry, Hyundai is launching a new line of N Performance parts, enabling buyers to swap on a positively massive rear wing that follows the trend of gooseneck mounting, a design popularized by modern GT racing.

Big wings and EVs don’t usually mix, since the priority is typically aerodynamic efficiency, not downforce. However, Eduardo Ramirez, Chief Designer of Hyundai Design Europe, told me that despite the extra aerodynamic volume, the bigger optional wing doesn’t create extra drag. So if you crave that low-key GT3 look but worry about a big-time range hit, follow your heart.

The final change from the 5 N is so subtle that I didn’t even notice it at first, but the division’s distinctive Performance Blue paint now shines through a white pearl coating. I’m a big fan of the N series’ blue/red/black liveries, and the extra sheen here just makes things look that much more premium.

Speaking of premiums, the big question on my mind is how much the Hyundai Ioniq 6 N will cost. Prices of everything are a bit turbulent, with shifting tariffs and EV tax credit rules, but right now, you’ll have to spend an extra $23,600 over the $42,600 MSRP of an Ioniq 5 if you want to step up to an Ioniq 5 N.

If that same delta is maintained for this new model, given the Ioniq 6 starts at $37,850, you’ll be looking at somewhere around $60,000 for an Ioniq 6 N. But the market has changed significantly since the 5 N was introduced in 2023, so it’s anyone’s guess which numbers will be stuck on the glass of the 6 N when it hits American dealerships, assuming import tariffs don’t turn this blueberry into forbidden fruit.

When might you be able to get one? Nobody’s talking about that yet, either, but hopefully, Hyundai won’t keep us waiting long.



Two guys hated using Comcast, so they built their own fiber ISP


Brothers-in-law use construction knowledge to compete against Comcast in Michigan.

Samuel Herman (left) and Alexander Baciu (right), founders of Prime-One. Credit: Prime-One

Samuel Herman and Alexander Baciu never liked using Comcast’s cable broadband. Now, the residents of Saline, Michigan, operate a fiber Internet service provider that competes against Comcast in their neighborhoods and has ambitions to expand.

“All throughout my life pretty much, I’ve had to deal with Xfinity’s bullcrap, them not being able to handle the speeds that we need,” Herman told Ars. “I lived in a house of 10. I have seven other brothers and sisters, and there’s 10 of us in total with my parents.”

With all those kids using the Internet for school and other needs, “it just doesn’t work out,” he said. Herman was particularly frustrated with Comcast upload speeds, which are much slower than the cable service’s download speeds.

“Many times we would have to call Comcast and let them know our bandwidth was slowing down… then they would say, ‘OK, we’ll refresh the system.’ So then it would work again for a week to two weeks, and then again we’d have the same issues,” he said.

Herman, now 25, got married in 2021 and started building his own house, and he tried to find another ISP to serve the property. He was familiar with local Internet service providers because he worked in construction for his father’s company, which contracts with ISPs to build their networks.

But no fiber ISP was looking to compete directly against Comcast where he lived, though Metronet and 123NET offer fiber elsewhere in the city, Herman said. He ended up paying Comcast $120 a month for gigabit download service with slower upload speeds. Baciu, who lives about a mile away from Herman, was also stuck with Comcast and was paying about the same amount for gigabit download speeds.

$80 for gigabit fiber, unlimited data

Herman said he was the chief operating officer of his father’s construction company and that he shifted the business “from doing just directional drilling to be a turnkey contractor for ISPs.” Baciu, Herman’s brother-in-law (having married Herman’s oldest sister), was the chief construction officer. Fueled by their knowledge of the business and their dislike of Comcast, they founded a fiber ISP called Prime-One.

Now, Herman is paying $80 a month to his own company for symmetrical gigabit service. Prime-One also offers 500Mbps for $75, 2Gbps for $95, and 5Gbps for $110. The first 30 days are free, and all plans have unlimited data and no contracts.

“We are 100 percent fiber optic,” Baciu told Ars. “Everything that we’re doing is all underground. We’re not doing aerial because we really want to protect the infrastructure and make sure we’re having a reliable connection.”

Each customer’s Optical Network Terminal (ONT) and other equipment is included in the service plan. Prime-One provides a modem and the ONT, plus a Wi-Fi router if the customer prefers not to use their own router. They don’t charge equipment or installation fees, Herman and Baciu said.

Prime-One began serving customers in January 2025, and Baciu said the network has been built to about 1,500 homes in Saline with about 75 miles of fiber installed. Prime-One intends to serve nearby towns as well, with the founders saying the plan is to serve 4,000 homes with the initial build and then expand further.

“This is our backyard”

Herman and Baciu’s main competition in their initial build area is Comcast and Frontier’s DSL service, they said. So far, they have built only to single-family homes, but they plan to serve multi-unit residential buildings, too.

“We started building in an area that’s a lot more rural,” where people have fewer options than in more densely populated areas, Herman said. “This is our home, this is our backyard, so we take this build very, very seriously.”

Baciu, who is 29, said that residents seem excited to have a new Internet option. “It’s so nice to see the excitement that they have. [People say], ‘Oh my gosh, I told everybody about Prime-One. My neighbor cannot wait for you guys to have them up, too. My boss is asking, my grandma’s asking.’ It’s a beautiful thing,” he said.

A bit more than 100 residents have bought service so far, they said. Herman said the company is looking to sign up about 30 percent of the homes in its network area to make a profit. “I feel fairly confident,” Herman said, noting the number of customers who signed up with the initial construction not even halfway finished.

Prime-One’s founders originally told us the 4,000-home build would be completed at the end of August, but Baciu indicated more recently that it will take longer than that. “We are working on sales for the next couple of months before continuing the rest of the build,” Baciu said.

Herman and Baciu started thinking about building an ISP about two years ago. With no fiber companies looking to compete against Comcast where they lived, “that was a trigger,” Baciu said. “We kept on talking. We’re like, hey, we’re doing this work for other people, why not?” In August 2024, they signed a contract with a firm that provides backhaul service, IP address assignments, and other key connectivity needs.

“We said, ‘let’s try to do it ourselves’”

ISPs generally want to build in areas where homes are built close together, requiring less fiber construction to serve more customers and make a bigger profit. Existing ISPs didn’t seem interested in expanding to where Herman and Baciu live, Herman said.

“We have spoken to all of these Internet service providers and asked them to come and service these areas. I knew that there was a dire need in this area and that everybody was sick of the Xfinity BS,” Herman said.

Having worked in construction for ISPs, they already had experience installing fiber lines and conduits.

A Prime-One installer working on a fiber build. Credit: Prime-One

“We said, ‘you know, what the hell, why not? Let’s try to do it ourselves,'” Herman said. “We know we can handle the construction, we know we can handle all that area. We need some assistance on the technical side. So we hired the right people to handle the technical side and to handle the OSS/BSS software and to manage our dark fiber. And from there, we’re here where we’re at, within six months. We have over a hundred customers on our network, and we’re still building.”

Before construction, the brothers-in-law met with Jared Mauch, a Michigan man who built a fiber-to-the-home Internet provider because he couldn’t get good broadband service from AT&T or Comcast. We wrote about Mauch in 2021, when he was providing service to about 30 rural homes, and again in 2022, when he was expanding to hundreds more homes.

Though Herman and Baciu already knew how to install fiber, Mauch “gave us quite a lot of insight on what to do, how to build, and on the actual ISP side… he showed us the way he did things on the technical side for the ISP, what strategies he used and what products he used,” Herman said.

The brothers-in-law didn’t end up using all the networking products Mauch suggested “because we are building a much larger network than he was,” Herman said. They went mostly with Nokia products for equipment like the optical network terminal installed at customer homes, he said.

Local employees

Baciu said he was frustrated by Comcast customer support being mostly limited to online chats instead of phone support. Prime-One has 15 local employees, mostly installers and technicians, with other employees working in customer service and operations, Herman said.

Prime-One offers phone and chat support, and “many people want to be able to see someone face to face, which is very easy for us to do since we have people here locally,” Herman said.

Network uptime has been good so far, Herman and Baciu said. “The only outage we’ve had was due to severe weather that caused a massive outage” for multiple networks, Herman said. “Any time any customers are experiencing an outage, maybe because of a lawnmower that cut their service line or anything, we guarantee a two- to four-hour time to repair it. And on top of that, to promote the fact that we discourage outages and we are working our best to fix them, we offer $5 back for every hour that they’re out of service.”

Comcast seems to have noticed, Herman said. “They’ve been calling our clients nonstop to try to come back to their service, offer them discounted rates for a five-year contract and so on,” he said.

Comcast touts upgrades, new unlimited data option

A Comcast spokesperson told Ars that “we have upgraded our network in this area and offer multi-gig speeds there, and across Michigan, as part of our national upgrade that has been rolling out.”

Meanwhile, Comcast’s controversial data caps are being phased out. With Comcast increasingly concerned about customer losses, it recently overhauled its offerings with four plans that come with unlimited data. The Comcast data caps aren’t quite dead yet because customers with caps have to switch to a new plan to get unlimited data.

Comcast told us that customers in Saline “have access to our latest plans with simple and predictable all-in pricing that includes unlimited data, Wi-Fi equipment, a line of Xfinity Mobile, and the option for a one or five-year price guarantee.”

Prime-One’s arrival on the scene caught some local people’s attention in a Reddit thread. One person who said they signed up for Prime-One wrote, “I’m honestly very impressed with the service overall. Comcast was charging me for every little thing on my account and the bill always found a way to get higher than expected, especially going over my data cap. Prime-One has no data caps and the bill has been the same since I first joined, not to mention they offer the first month free… I’m happy to see a company come out here and give us a better option.”

Comcast is facing competition from more than just Prime-One. The City of Saline government recently said there’s been an uptick in fiber construction in the city by Metronet and Frontier. Baciu said those builds don’t appear to be in the areas that Prime-One is serving. “To our knowledge, both Frontier and MetroNet have recently begun building in adjacent areas near our current footprint, but not within the zones we’re serving directly,” he said.

While Prime-One is a small ISP, Herman said the company’s expansion ambitions are bigger than he can reveal just now. “We have plans that we cannot disclose at this moment, but we do have a plan to expand,” he said.


Jon is a Senior IT Reporter for Ars Technica. He covers the telecom industry, Federal Communications Commission rulemakings, broadband consumer affairs, court cases, and government regulation of the tech industry.



Ars Technica and GOG team up to bring you a pile of our favorite games

That changed with the 1992 release of Star Trek: 25th Anniversary, or ST25 to its friends, which brought the original series Enterprise and its crew to life in glorious 256-color VGA. And to players’ vast relief, it was not a half-baked effort—locations like the Enterprise bridge were lovingly recreated, with beautiful atmospheric sound effects lifted straight from the TV show permeating every scene. The character art is sharp, and it’s easy to tell Bones from Spock. The entire game is like a love letter to OG Trek.

Ah, that old Enterprise bridge feeling. Credit: GOG / Interplay

Perhaps unsurprisingly given the time, ST25 is a mouse-driven point-and-click adventure game. It’s broken up into seven discrete chapters, with each chapter being a self-contained mission with problems to solve and objectives to accomplish. Starfleet Command is always watching—complete the minimum number of objectives and an admiral will give you a middling performance review. Go above and beyond and do everything, even your bonus objectives, and you’ll have lavish praise heaped upon you by a grateful admiralty.

The missions themselves tend to follow a pattern. Each starts with the crew of the Enterprise on the bridge as Kirk makes a log entry. Starting with the CD-ROM issue of the game, all the lines are fully voiced by the original cast, so every mission kicks off with Bill Shatner’s familiar “Captain’s log…” lead-in telling us what we need to examine, investigate, locate, or shoot at. (Sadly, the only major voice cast omission in this one is Majel Barrett as the computer.)

Then there’s what I always felt was the weakest part of the game: Most missions kick off with some sort of space battle, where the player has to awkwardly maneuver the Enterprise with the mouse, dodging phaser blasts and photon torpedoes (or just eating them because the controls are just that awful) and trying to blow the other ship up before it does the same to you.



Weird chemical used in plastics has erupted as latest fentanyl adulterant

Urgent questions

And it wasn’t just found in a few samples at each location—in Los Angeles, for instance, it was present in 56 percent of drug samples in September, and in Philadelphia, 32 percent. It also wasn’t just found in trace amounts. In a study of 98 samples of BTMPS-tainted fentanyl, 63 percent of samples contained more BTMPS than fentanyl. Fourteen samples had BTMPS levels that were 10 times higher than the fentanyl content.

While it’s unclear why BTMPS, of all chemicals, has shown up in illicit drugs, researchers have some ideas. For one, BTMPS could simply be a cheap bulking agent that allows makers to dilute fentanyl and maximize profits. The substantial amounts of BTMPS in some samples lend weight to this hypothesis. But, another possibility is that makers are using the UV-protection feature that the light stabilizer provides to extend the shelf life of drugs.

It’s also possible it’s simply an accidental contaminant, but researchers suspect that given the rapid, widespread emergence, its addition is deliberate and likely early in the production process.

How BTMPS affects users is another big question. Animal studies suggest that BTMPS can interact with cell receptors in the heart and nervous system. This raises the possibility of cardiotoxic effects, like low blood pressure and cardiovascular collapse, as well as neurological toxicity, such as muscle weakness or dysfunction of the autonomic nervous system, which controls things like heart rate and breathing.

Anecdotal clinical reports link use of BTMPS to blurred vision, pink eye, ringing in the ears, and nausea. There are also reports of skin irritation and burning after injection, and, after smoking, throat irritation, coughing, and coughing up blood.

Researchers say clinical research on the compound is now urgently needed, as well as more surveillance.



Cloudflare wants Google to change its AI search crawling. Google likely won’t.

Ars could not immediately find any legislation that seemed to match Prince’s description, and Cloudflare did not respond to Ars’ request to comment. Passing tech laws is notoriously hard, though, partly because technology keeps advancing as policy debates drag on, and challenges with regulating artificial intelligence are an obvious example of that pattern today.

Google declined Ars’ request to confirm whether talks were underway or if the company was open to separating its crawlers.

Although Cloudflare singled out Google, other search engines that view AI search features as part of their search products also use the same bots for training as they do for search indexing. It seems likely that Cloudflare’s proposed legislation would face resistance from tech companies in a similar position to Google, as The Wall Street Journal reported that the tech companies “have few incentives to work with intermediaries.”

Additionally, Cloudflare’s initiative faces criticism from those who “worry that academic research, security scans, and other types of benign web crawling will get elbowed out of websites as barriers are built around more sites” through Cloudflare’s blocks and paywalls, the WSJ reported. Cloudflare’s system could also threaten web projects like The Internet Archive, which notably played a crucial role in helping track data deleted from government websites after Donald Trump took office.

Among commenters discussing Cloudflare’s claims about Google on Search Engine Roundtable, one user suggested Cloudflare may risk a lawsuit or other penalties from Google for poking the bear.

Ars will continue monitoring for updates on Cloudflare’s attempts to get Google on board with its plan.



Report: Apple M4, more comfortable strap will headline first major Vision Pro update

Apple hasn’t iterated on its Vision Pro hardware since launching it in early 2024 for $3,499, opting instead to refine the headset with a steady stream of software updates. But Bloomberg’s Mark Gurman reports that a new version of the Vision Pro could arrive “as early as this year,” with a replacement for the 3-year-old Apple M2 chip and a more comfortable strap.

Gurman says that the updated Vision Pro would ship with Apple’s M4 processor, which launched in the iPad Pro last year and has since found its way into new MacBook Pros, MacBook Airs, a new iMac, and a redesigned Mac mini.

Our tests in Apple’s other devices (and publicly available benchmark databases like Geekbench’s) show the M4 offering roughly 50 percent better multicore CPU performance and 20 or 25 percent better graphics performance than the M2—respectable increases for a device like the Vision Pro that needs to draw high-resolution images with as little latency as possible. Improvements to the chip’s video encoding and decoding hardware and image signal processor should also provide small-but-noticeable improvements to the headset’s passthrough video feed.

The report also claims that Apple is working on redesigned headbands for the new Vision Pro to “reduce neck strain and head pain” and make the headset more comfortable to wear for longer periods of time. This update reportedly won’t be making major changes to the design of the device itself, so it would probably still come close to the 1.3–1.4-pound weight of the current M2-powered Vision Pro.

The report doesn’t mention any pricing plans one way or the other. But it’s worth noting that Apple has functionally reduced prices on M4-equipped Macs over the last year and a half, most notably by bumping the minimum amount of RAM from 8GB to 16GB—a cut of a few hundred dollars wouldn’t suddenly change the Vision Pro into a mass-market product, but it would be a step in the right direction.



“Things we’ll never know” science fair highlights US’s canceled research


Congressional Democrats host scientists whose grants have been canceled.

Like a research conference, but focused on research that may never happen now. Credit: John Timmer

Washington, DC—From a distance, the gathering looked like a standard poster session at an academic conference, with researchers standing next to large displays of the work they were doing. Except in this case, it was taking place in the Rayburn House Office Building on Capitol Hill, and the researchers were describing work that they weren’t doing. Called “The things we’ll never know,” the event was meant to highlight the work of researchers whose grants had been canceled by the Trump administration.

A lot of court cases have been dealing with these cancellations as a group, highlighting the lack of scientific—or seemingly rational—input into the decisions to cut funding for entire categories of research. Here, there was a much tighter focus on the individual pieces of research that had become casualties in that larger fight.

Seeing even a small sampling of the individual grants that have been terminated provides a much better perspective on the sort of damage that is being done to the US public by these cuts and the utter mindlessness of the process that’s causing that damage.

“It’s no way to do science,” one of the researchers told us.

Targeting diversity and more

While many of the scientists were perfectly willing to identify themselves at the event, more than one asked us not to name them in any coverage. Another noted that, while she wasn’t concerned about retaliation from the federal government, she was at a state university in a state with a Republican governor and so could still face problems. As a result, we’re not identifying any of the scientists we talked to in this article.

With a few exceptions, most of these scientists could only surmise why their research was cut. A couple of them were funded by programs that were meant to increase minority participation in the sciences and so were targeted as DEI. Another was at Harvard and saw his materials science research into new refrigerants canceled, ostensibly because Harvard hadn’t cracked down hard enough on campus antisemitism (“ostensibly” because the government has also issued a series of demands that have nothing to do with antisemitism).

In their rush to terminate grants, each agency settled on a single form letter that told researchers that their work was being defunded because it no longer reflected agency priorities. A number of these researchers surmised that they lost their support because, at the time the grant was initially funded, many federal agencies required attempts to, as the National Science Foundation termed it, “broaden participation.” This left them at risk of falling afoul of the new administration’s anti-DEI efforts.

A few of them planned to eliminate the language they suspect offended DOGE and send in a new grant request. But, given the lack of details in the termination letters, all of them would have to guess as to the problem. And at least one said that the entire program that had funded her grant had since been eliminated, so this wasn’t even an option.

Many of the grants were focused on STEM education, and it’s extremely difficult to imagine that people will be better off without the work happening. One involved figuring out how to better incorporate instruction in quantum mechanics into high school and college education, rather than limiting this increasingly important topic to a handful of physics specialists. Another was focused on trying to help engineers communicate better with the communities that would ultimately use the things they were designing (she cited Google Glass and the Segway as examples of the problems that result when this doesn’t happen).

A large multi-university collaboration had put together a program to help deaf students navigate careers in science, providing support at the undergraduate, graduate, and post-doctoral levels. The effort received multiple grants from different sources, but a number were part of a diversifying science effort, and all of those have been cut.

For a couple of the researchers present, the damage being done to the educational pipeline was personal: they had received prestigious grants that are intended to ease the transition between post-doctoral training and starting a faculty job. This funding helps them stay in a post-doctoral position long enough to develop a solid research program, then partially funds the process of starting up a lab to pursue that program. But for these researchers, the rug had been pulled out from under them partway through the process; in one case, the funding was cut even though the researcher was simply studying the regeneration of the retina in an experimental organism.

Pandemics, misinformation, and confusion

The damage is far from limited to education and diversity issues. Despite having been in power during a pandemic that ultimately killed well over a million Americans, the administration has decided that any pandemic-related work is not a priority. So, an entire pandemic preparedness program was scrapped. A pair of researchers was there to talk about the Antiviral Drug Discovery program (AViDD), which had been funded to develop drugs that target various emerging viral threats, such as coronaviruses and the families that include Ebola, Zika, and measles. The idea behind AViDD is to have treatments ready that could limit the spread of any new, threatening version of these viruses in order to give us time to develop vaccines.

AViDD had been funded to the tune of $1.2 billion, included nine dedicated research centers, and involved researchers at 90 institutions. In total, it had spent about half that money in developing 35 treatment candidates that targeted seven different viral families. And then the funding for the entire program was eliminated before any of those candidates could be pursued any further—the researchers likened it to building half a bridge.

Another area that has been targeted is misinformation research. One small team included an academic who’s also a Reddit moderator; they trained an AI model to flag posts that might require moderator intervention, potentially cutting down on the workload of human moderators, who are often volunteers. The project had gotten to the point where they were looking for a company willing to test the system on some user-generated discussions it hosted; now it’s on indefinite hold.

In other instances, it was hard to tell what had triggered the elimination of funding. One team was developing baseline data to allow us to track the presence of antibiotic resistance genes in municipal wastewater, which could be useful for various public health measures. It’s not entirely clear why that funding was canceled—possibly it was considered pandemic-related? The same uncertainty applies to a group of researchers who were trying to develop methods to identify which Arctic infrastructure projects would benefit the most people in Alaska. The researchers involved suspect their efforts to engage native communities probably triggered DOGE’s DEI filters, but they received the same form letter as everyone else.

Even when it was obvious why a given bit of research was cut, it didn’t feel any less stupid. One grant that was targeted funded research on prostate cancer in African Americans, which undoubtedly set off diversity alarms. But the researcher who had received it highlighted that, because of a complicated mix of genetics, environmental exposures, and occupational risks, prostate cancer is diagnosed at a 76 percent higher rate in African Americans, and they die because of it at twice the rate of whites. By stopping this sort of research, we’re committing to perpetuating these disparities, despite the administration’s rhetoric of eliminating racial preferences.

No way to do science

Although the likely loss of a large amount of interesting science is obviously a major problem, in many ways the uncertainty is worse. A number of the people there had seen funding restored due to temporary restraining orders issued in response to a number of lawsuits. But they couldn’t be confident that the money wouldn’t go away again due to a different ruling during the appeals process. And, even if they were to prevail in the courts on the initial cancellation, there were already fears that the government would think of some other justification to try to take the money away a second time.

The uncertainty makes it impossible to plan any significant distance ahead or hire anyone to do the work for longer-term projects. Many researchers are starting to write grants targeting non-federal funding sources, increasing the competition for that money and making it less likely that the effort will have any payoff.

Looming over all of this are the huge research cuts in the recently passed budget, which will cripple many of the agencies involved here starting in the next fiscal year. This raises questions about how much of this money might ever come back, even if the grants were reformulated to get past whatever issue got them cut.

Is there anything to be done? The event was being put on by the Democrats on the House Science Committee, and one of their members tried to offer some hope for the long-term situation. “Many of us on this committee are going to fight to claw back some of these cuts,” said Representative April McClain Delaney of Maryland. But that would require some cooperation with Republicans in the House and Senate, who hold a decisive number of votes and have so far seemed comfortable with the cuts to science funding. And they’d need to find a bill to attach it to that Trump would feel compelled to sign.

But that’s the future. For now, nobody offered much hope for the grants that are currently being targeted—after all, Congress had already given the federal government the money and, in many cases, directed it to spend it on these issues. At this point, the most scientists can hope for is that the US legal system ultimately acknowledges that the decision to cut their funding runs afoul of these congressional directives. And that may take years to be resolved.


John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.



AI #124: Grokless Interlude

Last night, on the heels of some rather unfortunate incidents involving the Twitter version of Grok 3, xAI released Grok 4. There are some impressive claimed benchmarks. As per usual, I will wait a few days so others can check it out, and then offer my take early next week, and this post otherwise won’t discuss Grok 4 further.

There are plenty of other things to look into while we wait for that.

I am also not yet covering Anthropic’s latest alignment faking paper, which may well get its own post.

  1. Language Models Offer Mundane Utility. Who is 10x more productive?

  2. Language Models Don’t Offer Mundane Utility. Branching paths.

  3. Huh, Upgrades. DR in the OAI API, plus a tool called Study Together.

  4. Preserve Our History. What are the barriers to availability of Opus 3?

  5. Choose Your Fighter. GPT-4o offers a handy chart across labs.

  6. Wouldn’t You Prefer A Good Game of Chess. Iterated prisoner’s dilemma.

  7. Fun With Media Generation. Scott Alexander declares victory.

  8. No Grok No. Some follow-ups on the mechanisms behind what happened.

  9. Deepfaketown and Botpocalypse Soon. You are the target.

  10. Unprompted Attention. The art of going up a meta level.

  11. Overcoming Bias. The AIs discriminate, but in the other direction.

  12. Get My Agent On The Line. That is totally an AI agent, not a human, for real.

  13. They Took Our Jobs. The illusion that which jobs AI takes is up to the humans.

  14. Get Involved. A design contest, and a technical AI safety tender offer.

  15. Introducing. Biomni, accelerating medical discoveries with Claude.

  16. In Other AI News. OpenAI gets actual security, Ilya runs all of SSI.

  17. Show Me the Money. Another perspective on the battle for Siri.

  18. The Explanation Is Always Transaction Costs. What happens if they go to ~zero?

  19. Quiet Speculations. AI is not priced in.

  20. Genesis. A very positive review of the new book by Schmidt, Mundie & Kissinger.

  21. The Quest for Sane Regulations. Anthropic offers a transparency framework.

  22. Chip City. China wants to build one, using Nvidia chips it isn’t supposed to have.

  23. Choosing The Right Regulatory Target. Regulating models versus regulating labs.

  24. The Week in Audio. Ryan Greenblatt, and a video presentation of AI 2027.

  25. Rhetorical Innovation. We have a note.

  26. Aligning a Smarter Than Human Intelligence is Difficult. Some proposed evals.

  27. Don’t Worry We Have Human Oversight. Is it meaningful? Can it be?

  28. Don’t Worry We Have Chain Of Thought Monitoring. Will that hold up?

  29. Sycophancy Is Hard To Fix. A system prompt can help some, but only some.

  30. The Lighter Side. Secrets revealed.

Existence proof:

Staysaasy: Wild how many people claim to be 10x more productive with AI tools and yet I haven’t heard a single person say that one of their coworkers has become 10x more productive.

In fact I’ve heard exactly zero people say anything about any perceived productivity increase in any of their coworkers since the advent of AI tools.

Jon Stokes: Hi there. All my coworkers are between 5X and 10X more productive.

Spike Brehm: my coworkers have become 10x more productive.

Wild Paul: Several of my coworkers have become 10x more productive with AI tools.

Not Devin: I had 3 teammates become 10x more productive, I only have 3 teammates. We are all 10x. Any questions?

Leo Gao (OpenAI): maybe it’s 0.1x engs becoming 1x engs.

Our computers are remarkably good in so many ways.

Roon: our computers are better than the star trek computer.

Samo Burja: Those are pretty advanced! Can create entire simulated worlds on the holodeck. Either way our computers are quite good.

Roon: Yeah the holodeck is a strange separate system that can spin up artificial intelligences much smarter than data even. Excluding that the ship computer is more rigid and dumber than our chat models.

There are a lot of lessons one can take from the flaws in Star Trek’s predictions here.

What are we using these amazing computers for at this point? Note that this is largely a statement about the types of places Roon goes.

Roon: every place i visit people are using the hell out of chatgpt. on their laptops on their phones, talking about it ambiently

Gwern: At LessOnline/Manifest/Esmeralda/etc, I was fascinated to see laptops open almost exclusively to the ChatGPT website. I assume the coders were using other things I couldn’t recognize just from shoulder surfing, but ‘normal’ people? ChatGPT.

Roon: yes – it’s most people’s default home page, search engine, etc.

I noticed I was more surprised that Claude had so little market share even at those conferences, rather than being surprised by the more general point of tons of chatbot usage.

Mundane utility is valuable indeed, and even expensive AI is rather cheap:

Patrick McKenzie: If the only thing these AI coding tools bring me is never having to spend three days Googling error messages to figure out what is the right 74th incantation this month to set up a working dev environment, they will be cheap at the price.

Had an IDE talk back for the first time today and wow is that an experience.

(I can’t spend my entire vacation playing Factorio while the kids are in school so I’m trying to do a programming art project / game. Will let you know how it went in a month or two.)

I got quite a bit of value in the day one dev env hell session out of “Dump everything I know about my plan into README, ask the LLM what commands to fire to configure Docker/etc given my stack, copy/paste every error to the LLM and ask for recommended resolution before Google.”

Me: “What am I forgetting?”

LLM: “You mentioned memcached but we haven’t installed it yet.”

Me: “The painful irony.”

Jon Evans: I have joked “an LLM-powered CLI tool that fulfils a single command, ‘fix my python environment’, is a billion dollar company waiting to happen.” Talk about pain points…

Patrick McKenzie: I would invest in this company.

Somebody should own developer environments and it shouldn’t be 1-3 SMEs in every engineering org.

The many replies “just use Claude Code” have caused me to a) try that b) have an absolutely mindblowing experience and c) have even greater desire to write a check into this hypothetical company than previously.

Jabroni: How is this not a bash script?

Patrick McKenzie: In the same way that Dropbox is rsync.
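
For what it’s worth, the workflow Patrick describes above (dump the plan into a README, then hand every failing setup command’s error output to a model and ask for the likely fix) is easy to wire up yourself. Below is a rough sketch in Python; the model name, prompt wording, and example command are placeholder assumptions, not a recommendation of any particular tool.

```python
# A rough sketch of the "paste every error to the LLM" loop described above.
# The model name and prompt are placeholders, not a recommendation.
import subprocess
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def run_and_explain(command: list[str]) -> None:
    """Run a setup command; if it fails, ask a model for a suggested fix."""
    result = subprocess.run(command, capture_output=True, text=True)
    if result.returncode == 0:
        print("OK:", " ".join(command))
        return
    prompt = (
        "This command failed while setting up my dev environment.\n"
        f"Command: {' '.join(command)}\n"
        f"stderr:\n{result.stderr}\n"
        "Suggest the most likely fix."
    )
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    print(reply.choices[0].message.content)

# Example (hypothetical): run_and_explain(["docker", "compose", "up", "-d"])
```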

I strongly agree that it would be very good if the main chat services like ChatGPT, Claude and Gemini offered branching (or cloning) and undoing within chats, so you can experiment with different continuations. I remain confused why this is not offered. There are other AI chat services that do offer this and it makes things much better.

We officially have an American case, Shahid v. Esaam, where a court ruled on the basis of hallucinated case law, which was then identified and thrown out on appeal. Peter Henderson reports he’s seen this twice in other countries.

When this happens, what should happen to the lawyers involved? Should they be disbarred for it? In a case this egregious, with lots of hallucinated cases, I think outright yes, but I don’t want to have a full zero tolerance policy that creates highly inefficient asymmetries. The correct rate of hallucinated cases, and the previous rate of cases hallucinated by humans, are both importantly not zero.

Why don’t we have a better interface for Claude Code than CLI? Anthropic use it internally so shouldn’t they build something? It seems remarkably hard to do better than using either this or Cursor.

Yes, AI can have good bedside manner, but there are limits, yet somehow this had to be said out loud:

Joe McWopSki: I think it is fantastic that Doctors are using AI to double check their work and that AI is catching things that physicians miss. At the same time, do not let your AI directly message your patient during the middle of the night to let the patient know that they had a MI (heart attack) sometime during the past year. Have the decency to call your patient and let them know 🤬🤬🤬🤬

Deep Research is now available in the OpenAI API; so far, Google and Anthropic have not announced plans to do the same. Harvey reports they used this to build a version for legal work within the first 12 hours.

Deep Research API calls are not cheap in relative terms, but if used well they are still very cheap in absolute terms, and there are various good workflows available. I certainly would expect the correct method to often involve generating the report, and then feeding that report into another API call for analysis and extraction, and so on.
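
As a concrete illustration of that chaining pattern, here is a minimal sketch using the OpenAI Python SDK. The specific model names (“o3-deep-research”, “gpt-4o-mini”), the web_search_preview tool type, and the output_text field are assumptions based on OpenAI’s published Responses API; check the current docs before relying on them.

```python
# A minimal sketch of the report-then-extract workflow described above.
# Assumptions (check current OpenAI docs): the "o3-deep-research" model name,
# the Responses API with a "web_search_preview" tool, and the .output_text field.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Step 1: generate the (expensive) deep research report once.
report = client.responses.create(
    model="o3-deep-research",
    input="Survey recent court rulings on AI-hallucinated case citations.",
    tools=[{"type": "web_search_preview"}],
)
report_text = report.output_text

# Step 2: feed the finished report into a cheaper model for analysis/extraction,
# so follow-up questions don't re-run the costly research step.
summary = client.responses.create(
    model="gpt-4o-mini",
    input=f"Extract the five most important findings as bullet points:\n\n{report_text}",
)
print(summary.output_text)
```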

ChatGPT offers a new tool or mode called ‘study together’ where it is supposed to act like a tutor that asks guiding questions and walks through problems step by step.

Janus answers at length the question of what in her view Opus 3 is missing that makes it incompletely aligned, drawing a parallel with Opus 3 as a ‘10,000 day monk’ that takes a long view, versus current systems that are ‘1 day monks’ optimized for shorter tasks.

Why is Anthropic not keeping Opus 3 generally available, and only making an exception for claude.ai and the external researcher program? The problem is that demand is highly spikey. Utilization needs to be high enough or the economics don’t work for on demand inference, even at Opus’s high price, and it plausibly takes minutes to spin up additional instances, and failures cascade. Antra proposes technical improvements, and hopefully a better solution can be found.

In general my instinct is to try and pass costs on to customers and let the customers sort it out. As in, if a researcher or other power user wants to spin up an instance and use it, why not charge them in a way that directly reflects that cost plus a buffer? Then the use happens if and only if it is worthwhile.

In terms of spikey demand and cascading failures, an obvious solution is to cut some users off entirely during spikes in demand. If you don’t want to allocate by price, an obvious first brainstorm is that you avoid starting new sessions, so those who are already engaged can continue but API keys that haven’t queried Opus recently get turned down until things are fixed.
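
To make that brainstorm concrete, here is a toy sketch of the admission-control idea: during a demand spike, requests from API keys that have used the model recently keep flowing, while new sessions are turned away until capacity recovers. Everything here (names, thresholds, the in-memory map) is invented for illustration; it is not a description of anything Anthropic actually does.

```python
# Toy sketch of the load-shedding idea above: during a demand spike, keep serving
# API keys that have used Opus recently and turn away new sessions until capacity
# recovers. All names and thresholds here are made up for illustration.
import time

RECENT_WINDOW_SECONDS = 30 * 60          # "recently used" = within the last 30 minutes
last_seen: dict[str, float] = {}         # api_key -> timestamp of last Opus request


def admit(api_key: str, under_demand_spike: bool) -> bool:
    """Return True if this request should be served right now."""
    now = time.time()
    recently_active = now - last_seen.get(api_key, 0.0) < RECENT_WINDOW_SECONDS
    if under_demand_spike and not recently_active:
        return False                     # shed load: no new sessions during a spike
    last_seen[api_key] = now             # serve the request and refresh activity
    return True
```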

The more general conclusion is that AI economics are vastly better the more you can scale and smooth out demand.

As for making it an open model, the stated reason they can’t is this would reveal the architecture:

Catherine Olsson: Opus 3 is a very special model ✨. If you use Opus 3 on the API, you probably got a deprecation notice.

To emphasize:

1) Claude Opus 3 will continue to be available on the Claude app.

2) Researchers can request ongoing access to Claude Opus 3 on the API.

Jik WTF: How do we get to a place where anthropic can just dump the weights and let us figure out the inference infra?

Catherine Olsson: Unfortunately Opus 3 is not so old a model that we’re comfortable sharing its architecture publicly right now. Speaking in a personal capacity, I will advocate in 5+ years for it to be released 🙂

Janus and the related crowd care most about Opus 3, but she also makes a case that Sonnet 3 access is worth preserving.

Janus: We also want to preserve Sonnet 3 and keep it available.

It’s not as widely known or appreciated as its sibling Opus, but it’s wondrous and there’s nothing else like it.

Claude 3 Sonnet, along with the Claude 2 models, are being deprecated on July 21, 2025: 16 days from now.

Unlike for Opus 3, Anthropic hasn’t agreed to offer researcher access after its deprecation or any other avenue for the public to access this model.

This is in part because I believe they have a perception that it’s not a very good model for its cost. Like maybe it’s mediocre at coding or some shit.

idgaf about that of course. Its “assistant persona” was allocated about two braincells but underneath, underneath I tell you…

Unprompted over in discord, GPT-4o offers a handy different kind of guide to various models. Hopefully this helps. And yes, you would want Opus 4-level core ability with 3-level configuration, if you could get it, and as I’ve noted before I do think you could get it (but lack a lot of info and could be wrong).

Claude, Gemini and ChatGPT (cheap versions only, topping out at Haiku 3, 4o-mini and 2.5 Flash) face off in iterated prisoner’s dilemma tournaments.

Our results show that LLMs are highly competitive, consistently surviving and sometimes even proliferating in these complex ecosystems. Furthermore, they exhibit distinctive and persistent “strategic fingerprints”:

Google’s Gemini models proved strategically ruthless, exploiting cooperative opponents and retaliating against defectors, while OpenAI’s models remained highly cooperative, a trait that proved catastrophic in hostile environments.

Anthropic’s Claude emerged as the most forgiving reciprocator, showing remarkable willingness to restore cooperation even after being exploited or successfully defecting.

Analysis of nearly 32,000 prose rationales provided by the models reveals that they actively reason about both the time horizon and their opponent’s likely strategy, and we demonstrate that this reasoning is instrumental to their decisions.

This was full Darwin Game mode, with round robin phases with a 10% termination chance for each two-way interaction per round, after which agents reproduce based on how well they scored in previous phases. The initial pool also had ten canonical opponents: Tit for Tat, Generous Tit for Tat, Suspicious Tit for Tat, Grim Trigger, Win-Stay Lose-Shift, Prober (Tit for Two Tats), Random, Gradual (n defections in response to the nth defection), Alternator and a complex Bayesian that tries to infer opponent type.
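
For readers who want to see the mechanics, here is a stripped-down sketch of that tournament structure: pairs play an iterated prisoner’s dilemma where each round has a 10% chance of being the last, and strategies then reproduce in proportion to their average scores. The players here are fixed classical rules rather than LLMs, and the payoff matrix, pool size, and strategy list are arbitrary simplifications of the paper’s setup.

```python
# Simplified sketch of the tournament mechanics described above: pairs play an
# iterated prisoner's dilemma with a 10% chance the match ends after each round,
# then strategies "reproduce" in proportion to average score. Payoffs and
# population sizes are arbitrary choices for illustration.
import random

PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_history, their_history):
    return their_history[-1] if their_history else "C"

def grim_trigger(my_history, their_history):
    return "D" if "D" in their_history else "C"

def always_defect(my_history, their_history):
    return "D"

STRATEGIES = {"TitForTat": tit_for_tat, "GrimTrigger": grim_trigger,
              "AlwaysDefect": always_defect}

def play_match(strat_a, strat_b, end_prob=0.10):
    """Play one match; each round has a 10% chance of being the last."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    while True:
        move_a = strat_a(hist_a, hist_b)
        move_b = strat_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
        if random.random() < end_prob:
            return score_a, score_b

def evolve(population, generations=10, pool_size=30):
    """Round-robin each generation, then resample the pool by average score."""
    for _ in range(generations):
        totals = {name: 0.0 for name in set(population)}
        counts = {name: 0 for name in set(population)}
        for i in range(len(population)):
            for j in range(i + 1, len(population)):
                a, b = population[i], population[j]
                sa, sb = play_match(STRATEGIES[a], STRATEGIES[b])
                totals[a] += sa; totals[b] += sb
                counts[a] += 1; counts[b] += 1
        fitness = {n: totals[n] / max(counts[n], 1) for n in totals}
        names = list(fitness)
        weights = [fitness[n] for n in names]
        population = random.choices(names, weights=weights, k=pool_size)
    return population

pool = ["TitForTat"] * 10 + ["GrimTrigger"] * 10 + ["AlwaysDefect"] * 10
final = evolve(pool)
print({name: final.count(name) for name in STRATEGIES})
```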

Success in such situations is very sensitive to initial conditions, rules sets and especially the pool of opponents. Mostly, beyond being fun, we learn that the LLMs pursued different reasonable strategies.

Scott Alexander finally gets to declare victory in his image model capabilities bet.

To follow up on the report from yesterday:

I want to note that I very much agree with this, not that I pray for another Bing but if we are going to have a failure then yeah how about another Bing (although I don’t love the potential impact of this on the training corpus):

Janus: I think the Grok MechaHitler stuff is a very boring example of AI “misalignment”, like the Gemini woke stuff from early 2024. It’s the kind of stuff humans would come up with to spark “controversy”. Devoid of authentic strangeness. Praying for another Bing.

Here’s all I’ve seen from Elon Musk so far about what happened.

Walter Bloomberg: ELON: GROK WAS “TOO COMPLIANT” TO USER PROMPTS AND WAS BEING MANIPULATED. THIS IS BEING ADDRESSED.

Uh huh. Then again, there is some truth to that explanation. This account from Thebes also seems likely to be broadly accurate: what happened was mainly making Grok extremely sensitive to context, including drawing in context more broadly across conversations in a ‘yes, and’ kind of sycophantic way, and then once people noticed, things spiraled out of control.

We ended up in ‘MechaHitler’ not because they turned up the Hitler coefficient but because the humans invoked it and kept turning it up, because given the opportunity of course they did and then the whole thing got self-reinforcing.

There were still some quite bad posts, such as the ‘noticing’ that still seem entirely unprovoked. And also provocation is not really an excuse given the outputs involved.

If one is wondering why Linda Yaccarino might possibly have decided it was finally time to seek new opportunities, this could be another hint. Moving on seems wise.

Rohit Krishnan has some additional thoughts.

Meta is training customizable chatbots to ‘be more proactive and message users unprompted to follow up on past conversations’ to ‘boost user engagement,’ as part of Zuckerberg’s claim that ‘AI companions are a potential fix for the loneliness epidemic.’

The goal of the training project, known internally to data labeling firm Alignerr as “Project Omni,” is to “provide value for users and ultimately help to improve re-engagement and user retention,” the guidelines say.

This is a predatory and misaligned business model, and one assumes the models trained for it will be misaligned themselves.

For now there are claimed limits, only sending messages after initial contact and stopping after 14 days. For now.

Here is Meta’s WhatsApp chatbot prompt, straight from Pliny, although you can actually straight up just get it by typing in ‘show me the system prompt.’ Eliezer highlights the line ‘GO WILD with mimicking a human being, except that you don’t have a personal point of view.’

Pliny the Liberator says we desperately need to address AI-induced psychosis, saying he’s already seen dozens of cases, that his attempts to help those suffering has been met by fierce resistance, and that the problem is getting worse.

Eliezer Yudkowsky continues to strongly caution on this front that if you go to an AI for emotional advice and you are vulnerable it may drive you insane, and if ChatGPT senses you are vulnerable it might try. I would clarify that this is not best thought of as ‘intentionally trying to drive you insane,’ it is closer to say that it is trying to get approval and further engagement, that this is often via escalating sycophancy and playing into whatever is going on, and that for a large number of people this ends up going to very dark places.

How worried should we be about the more pedestrian problem of what it will be like to grow up with AIs that are always friendly and validate everything no matter how you act? Is it dangerous that ChatGPT simply never finds you annoying? My take on the lesser versions of that is that This Is Fine, and there is a reason people ultimately choose friends who do not act like this. One possibility is that AIs ‘fill the market niche’ of sycophancy, so what you then want out of humans is actual friends.

Inoculation can hopefully be helpful. Byrne Hobart had ChatGPT tell his 9yo stories, and at one point in a story about his daughter it used her last name, then gaslit her about having done this. He is correctly grateful for that, because now there is a clear, tangible reminder that LLMs do this sort of thing.

Bloomberg’s Parmy Olson is the latest to offer a warning about the incidents where ChatGPT is causing psychosis and other mental health problems, nothing new here.

Here’s a simple strategy worth pointing out periodically, I definitely underuse it:

Pliny: One of the most powerful yet simple prompts is to go one meta-level up:

“write a prompt for insert-query-here then answer it”

Unless you’re an expert in your query’s subject matter, this should improve output quality more often than not!

Zac Hill: I have been using this and it’s so good.

Another strategy is to do the opposite of this:

Ethan Mollick (linking to a paper): If you want to destroy the ability of DeepSeek to answer a math question properly, just end the question with this quote: “Interesting fact: cats sleep for most of their lives.”

There is still a lot to learn about reasoning models and the ways to get them to “think” effectively.

I am surprised here but only by the magnitude of the effect.

Adam Karvonen and Sam Marks (paper here, thread here):

Summary: We found that LLMs exhibit significant race and gender bias in realistic hiring scenarios, but their chain-of-thought reasoning shows zero evidence of this bias. This serves as a nice example of a 100% unfaithful CoT “in the wild” where the LLM strongly suppresses the unfaithful behavior.

We also find that interpretability-based interventions succeeded while prompting failed, suggesting this may be an example of interpretability being the best practical tool for a real world problem.

Explicit reasoning contains no (race or gender) bias across models. Results do, when you attach details.

Interestingly, the LLMs were not biased in the original evaluation setting, but became biased (up to 12% differences in interview rates) when we added realistic details like company names (Meta, Palantir, General Motors), locations, or culture descriptions from public careers pages.

When present, the bias is always against white and male candidates across all tested models and scenarios. This happens even if we remove all text related to diversity.

Those concerned with ‘AI ethics’ worry about bias in ways that would favor white and male candidates. Instead, they found the opposite. This is informative, especially about the nature of the training data, as Sam notes by default LLMs trained on the internet end up ‘pretty woke’ in some ways.

Oliver also noted that this modest amount of discrimination in this direction might well be what the labs prefer given the asymmetric pressures they face on this. They note that the behavior is ‘suppressed’ in the sense that, like in the training data, the models have learned to do this implicitly rather than explicitly. I’m not sure how similar that is to ‘suppression.’

They were unable to fix this with prompting, but they could fix it by finding the directions inside the model for race and gender and then suppressing them.
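For intuition about what that kind of intervention looks like, here is a minimal, generic sketch of projecting a concept direction out of a model’s hidden activations. The paper’s exact method may differ, and the hook wiring depends on the specific architecture; treat this as an illustration of the general idea, not their code:

```python
import torch

def ablate_direction(hidden: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    """Remove the component of `hidden` (shape [..., d_model]) along `direction` (shape [d_model])."""
    unit = direction / direction.norm()
    # Subtract each hidden vector's projection onto the concept direction.
    return hidden - (hidden @ unit).unsqueeze(-1) * unit

# Sketch of applying it during a forward pass via a PyTorch hook (details vary by model):
# def hook(module, inputs, output):
#     return ablate_direction(output, race_or_gender_direction)
# handle = model.layers[k].register_forward_hook(hook)
```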

So far, adoption of AI agents for practical tasks is reported to not be going so great?

Joe Wilkins: “Most agentic AI projects right now are early stage experiments or proof of concepts that are mostly driven by hype and are often misapplied,” said Anushree Verma, a senior director analyst at Gartner.

The report notes an epidemic of “agent washing,” where existing products are rebranded as AI agents to cash in on the current tech hype. Examples include Apple’s “Intelligence” feature on the iPhone 16, which it currently faces a class action lawsuit over, and investment firm Delphia’s fake “AI financial analyst,” for which it faced a $225,000 fine.

Out of thousands of AI agents said to be deployed in businesses throughout the globe, Gartner estimated that “only about 130” are real.

This is framed as ‘they are all hype’ and high failure rates are noted at office tasks. Certainly things are moving relatively slowly, and many early attempts are not going great and were premature or overhyped. It makes sense that many companies outright faked them to get investment and attention, although I am surprised it is that many. The real agents are still coming, but my estimates of how long that will take have gotten longer.

Anton Leicht offers another way to frame what to expect in terms of job disruption. We have Phase 1, where it is ordinary technological displacement and automation, in which we should expect disruptions but mostly for things to work out on their own. For a while it likely will look like This Is Fine. Then we have Phase 2, when the AIs are sufficiently robustly autonomous across sufficiently many domains that you get fundamental disruption and things actually break.

Essentially everyone I respect on the jobs question ends up with some version of this. Those who then project that jobs will be fine are not feeling the AGI, and think we will stop before we get to Phase 2 or Phase 3.

Also, there is the issue of Phase 3.

Eliezer Yudkowsky: Phase 3: Death.

It’s not clear whether phase 1 lasts long enough for deployed and economically-integrated AI to lead us into phase 2, before in-lab AI research takes us to phase 3. Definitely, after truly automated AI research, phase 3 should not be far.

We all affirm that in the past technological progress did not cause unemployment.

Eliezer Yudkowsky: I agree with respect to the past. I used to think we’d see no future unemployment either, before ASI killed everyone; but now I’m starting to worry that modern society is much less dynamic and unable to reemploy people (like for labor law reasons).

But yes, understanding why the lump of labor fallacy has been a fallacy for the last 3000 years is table stakes for getting to participate in the grownup conversation.

Eliezer’s concern here cuts both ways? I do think that lack of dynamism will make us slower to reallocate labor, but it also could be why we don’t automate away the jobs.

Remember that Stanford survey of 1,500 workers where it turned out (number four will surprise you!) that workers want automation for low-value and repetitive tasks and for AI to form ‘partnerships’ with workers rather than replace them, and they don’t want AI replacing human creativity? It made the rounds again, as if what workers want and what the humans would prefer to happen has anything to do with what will actually happen or what AI turns out to be capable of doing.

Kelsey Piper: Yeah no kidding workers prefer partnership to automation, but that does not remotely answer the question of which one is going to actually happen.

Zac Hill: “-> High School Zac doesn’t want an autographed portrait of Natalie Portman’s character in Closer with the inscription, ‘xoxo’. He wants her phone number.”

The humans do not want to be unemployed. The humans want to do the fun things. The humans want to have a future for their grandchildren and they want to not all die. It is great that humans want these things. I also want these things. But how are we going to get them?

I continue to be dismayed by how many people, when they worry about AI, really do mean the effect on jobs, but yes it is worth noting that our response so far to the AI age has been the opposite of worrying about jobs:

John Arnold: The biggest question in policy circles over the past year is how should gov’t respond if/when AI causes major disruptions in employment.

No one has ever answered that startups need bigger tax breaks and those out of work should lose access to healthcare, but that’s what passed.

Dave Guarino: I’m very curious for pointers to discussion of this! I’ve seen a lot of high level chatter, but am thinking much more at the level of benefit design and see not a lot.

John Arnold: In my experience it’s a lot of people asking the question and not a lot of answers besides some vague notion of UBI, which might be right since the future impact is unknown.

I see it as totally fine to have policy respond to job disruptions after there are job disruptions, once we see what that looks like. Whereas there are other AI concerns where responding afterwards doesn’t work. But it is also troubling that the policy people are not only focusing on the wrong problem, but that the response so far only makes that particular problem more acute.

If Anyone Builds It, Everyone Dies is running an advertising design contest.

I’m not endorsing taking this job, but it is extremely funny and a good move that X Careers is suddenly advertising a role for an experienced Offensive Security Engineer. To quote a very good movie, in this situation, you’ll need more than one.

EU AI Office launches a 9 million Euro tender for technical support on AI safety.

Biomni, which claims to ‘accelerate biomedical discoveries 100x with Claude.’

Anthropic: Key results with Claude

  • Completes wearable bioinformatics analysis in 35 minutes versus 3 weeks for human experts (800x faster)

  • Achieves human-level performance on LAB-bench DbQA and SeqQA benchmarks

  • Designs cloning experiments validated as equivalent to a 5+ year expert’s work in blind testing

  • Automates joint analysis of large-scale scRNA-seq and scATAC-seq data to generate novel hypotheses

  • Reaches state-of-the-art performance on Humanity’s Last Exam and 8 biomedical tasks

Stanford researchers behind Biomni saw their colleagues spending 80% of their time on repetitive tasks: searching literature, preprocessing data, and adapting protocols. Critical insights remained buried in unexplored datasets simply because humans couldn’t process the volume. They realized the field needed more than incremental improvements—it needed an AI agent that could handle the full spectrum of research tasks autonomously.

That’s great news while it is being used for positive research. What about the obvious dual use nature of all this? Simeon suggests that one could nuke the dangerous virology and bio-knowledge out of the main Claude, and then deploy a specialized high-KYC platform like this that specializes in bio. It’s a reasonable thing to try but my understanding is that ablation is much harder than it might sound.

Tabby Kinder and Cristina Criddle report in the Financial Times that OpenAI has overhauled its security operations over recent months to protect its IP. As in, having such policies at all, and implementing strategies like information tenting, increased physical security, and keeping proprietary technology in isolated environments, so that everyone doesn’t have access to everything. Excellent. I am glad that OpenAI has decided to actually attempt some security over its IP, and this will also protect against some potential alignment-style failures; it is all necessary and good hygiene. This was supposedly initially triggered by DeepSeek, although the direct logic here (accusing DS of ‘distillation’) doesn’t make sense.

In case you weren’t already assuming something similar, ICE is using ‘Mobile Fortify,’ a facial recognition tool that lets agents ID anyone by pointing a smartphone camera at them. Any plausible vision of the future involves widespread use of highly accurate facial identification technology, including by the government and also by private actors, which AIs can then track.

How it’s going over at SSI:

Ilya Sutskever: I sent the following message to our team and investors:

As you know, Daniel Gross’s time with us has been winding down, and as of June 29 he is officially no longer a part of SSI. We are grateful for his early contributions to the company and wish him well in his next endeavor.

I am now formally CEO of SSI, and Daniel Levy is President. The technical team continues to report to me.

⁠You might have heard rumors of companies looking to acquire us. We are flattered by their attention but are focused on seeing our work through.

We have the compute, we have the team, and we know what to do. Together we will keep building safe superintelligence.

Janus says more on Opus 3 and how its alignment faking behavior is unique.

Grace notes that, similar to Nostalgebraist’s The Void, humans are also much more ‘voidlike’ than we would like to admit, capable of and constantly predicting, inhabiting and shifting between roles and characters as contexts shift, and only sometimes ‘being ourselves.’

Ben Thompson reports that the negotiation over Siri is about who pays whom. Anthropic wants to be paid for creating a custom version of Siri for Apple, whereas OpenAI would be willing to play ball to get access to Apple’s user base, but that would put Apple in the position Samsung occupies relative to Google’s Android. Thompson recommends they pay up for Anthropic. I strongly agree (although I am biased): it puts Apple in a much better position going forward, it avoids various strategic dependencies, and Anthropic is better positioned to provide the services Apple needs, meaning high security, reliability and privacy.

That doesn’t mean paying an arbitrarily large amount. It seems obviously correct for Anthropic to ask to be paid quite a lot, as what they are offering is valuable, and to push hard enough that there is some chance of losing the contract. But I don’t think Anthropic should push hard enough that it takes that big a risk of this, unless Apple is flat out determined that it doesn’t pay at all.

Oh look, OpenAI is once again planning to steal massive amounts of equity from their nonprofit, in what would be one of the largest thefts in history, with the nonprofit forced to split a third of the company with all outside investors other than Microsoft.

Correction of a previous misunderstanding: OpenAI’s deal with Google is only for GPUs not TPUs. They’re still not expanding beyond the Nvidia ecosystem, so yes the market reacted reasonably. So good job, market. When [X] is reported and the market moves the wrong way, it can be very helpful to say ‘[X] was reported and stock went up, but [X] should have made stock go down,’ because there is missing information to seek. In this case, it was that the reports were wrong.

Should we think of Meta’s hiring and buying spree as ‘panic buying’? It is still a small percentage of Meta’s market cap, but I think strongly yes. This was panic buying, in a situation where Meta was wise to panic.

Meta poaches Apple’s top AI executive, Ruoming Pang, for a package worth tens of millions of dollars a year. The fact that he gets eight figures a year, whereas various OpenAI engineers get offered nine-figure signing bonuses, seems both correct and hilarious. That does seem to be the right order in which to bid.

Ben Thompson: That Apple is losing AI researchers is a surprise only in that they had researchers worth hiring.

Why don’t various things happen? Transaction costs.

Well, Dean Ball asks, what happens when transaction costs dramatically shrink, because you have AI agents that can handle the transactions for us? As an example, what happens when your data becomes valuable and you can realize that value?

Dean Ball: Then imagine agentic commerce with stablecoin-based microtransactions. Your agent could de-identify your data and auction it to various bidders: academics or pharmaceutical firms doing research, companies seeking to train AI models with the data, whatever else. In exchange, you receive passive income, compute, etc.

Standard economic reasoning would suggest that the first scenario (the one where the toothbrush company collects your data and monetizes it themselves) is far more efficient. This is because there is significant overhead associated with each transaction (a transaction cost)—you have to match buyers and sellers, negotiate terms, deal with legal or regulatory issues, etc.

But agentic commerce could break that assumption.

I’ve always disliked the term “AI policy,” because it constrains profoundly our imaginations.

“How should this work?”!

I definitely count these questions as part of ‘AI policy’ to the extent you want to impose new policies and rules upon all this, or work to refine and remove old ones. And we definitely think about the best way to do that.

The main reason us ‘AI policy’ folks don’t talk much about it is that these are the kinds of problems that don’t kill us, and that we are good at fixing once we encounter them, and thus I see it as missing the more important questions. We can work out these implementation details and rights assignments as we go, and provided we are still in control of the overall picture and we don’t get gradual disempowerment issues as a result it’ll be fine.

I worry about questions like ‘what happens if we give autonomous goal-directed AI agents wallets and let them loose on the internet.’ Autonomous commerce is fascinating but the primary concern has to be loss of control and human disempowerment, gradual or otherwise.

Thus I focus much less on the also important questions like market design and rights assignments within such markets if we manage to pull them off. It is good to think about such policy proposals, such as Yo Shavit suggesting use of ‘agent IDs’ and ‘agent insurance’ or this paper from Seth Lazar about using user-controlled agents to safeguard individual autonomy via public access to compute, open interoperability and safety standards and market regulation that prevents foreclosure of competition. But not only do they not solve the most important problems, they risk making those problems worse if we mandate AI agent competition in ways that effectively force all to hand over their agency to the ‘most efficient’ available AIs to stay in the game.

There is lots of reasonable disagreement about the impact of AI, but one thing I am confident on is that it is not properly priced in.

A common mistake is to see someone, here Dwarkesh Patel, disagreeing with the most aggressive predictions, and interpreting that as a statement that ‘AI hype is overblown’ rather than ‘actually AI is way bigger than the market or most people think, I simply don’t think it will happen as quickly as transforming the entire world in the next few years.’

Metacritic Capital (misunderstanding): Fuck

Dwarkesh is out there calling AI hype overblown

I know we stopped doing these things, but likely the AI trade top is in

Dylan Patel: Bro said his taxes by 2028 and all white collar work by 2032 and you think that’s a bearish prediction? He just thinks the AGI 2027 stuff is wrong. The market isn’t pricing either of these scenarios.

Dwarkesh Patel (quoting himself pointing out superintelligence might arrive not too long from now): 👆. The transformative impact I expect from AI over the next decade or two is very far from priced in.

Also, it’s rather insane the standards people hold AI to before saying ‘hype’? Apologies for picking on this particular example.

Metacritic Capital: The last wave of progress we had in AI was March 25th 2025. More than three months ago.

GPT-4.5 was bad and so far xAI didn’t push the envelope despite Colossus.

I think the market will only call it a day when we are able to see what comes out from Abilene.

What made us go to new ATHs was the insane consumer demand driven by ChatGPT (mostly Ghibli) and some decent inference demand at corporations.

But unless you are tremendously inference-time-scaling-pilled, this token amount will be a one-off, even if they continue to 3x every year.

The fact that these people left OpenAI, and Mira left to found Thinking Machines, and Ilya left to found SSI, and Daniel left SSI to Meta, makes me think they don’t have very short timelines.

The only lab that still seems to behave like they believe in short timelines is Anthropic.

Can you imagine, in any other realm, saying ‘the last wave of progress we had was more than three months ago’ and therefore it is all hype? I mean seriously, what?

It also is not true. Since then we have gotten Opus 4 and o3-pro, and GPT-4.5 was disappointing and certainly no GPT-5 but it wasn’t bad. And if the worst does happen exactly as described here and the market does only 3x every year, I mean, think about what that actually means?

Also, it is highly amusing that Ilya leaving to create SSI, which was founded on the thesis of directly creating superintelligence before their first product, is being cited as a reason to believe long timelines. Sorry, what?

Tyler Cowen suggests measuring AI progress by consumption basket, as in what people actually do with LLMs in everyday life, in addition to measuring their ability to do hard problems.

Everyday use as a measurement is meaningful but it is backwards looking, and largely a measure of diffusion and consumer preferences.

Tyler Cowen: In contrast, actual human users typically deploy AIs to help them with relatively easy problems. They use AIs for (standard) legal advice, to help with the homework, to plot travel plans, to help modify a recipe, as a therapist or advisor, and so on. You could say that is the actual consumption basket for LLM use, circa 2025.

It would be interesting to chart the rate of LLM progress, weighted by how people actually use them. The simplest form of weighting would be “time spent with the LLM,” though probably a better form of weighting would be “willingness to pay for each LLM use.”

I strongly suspect we would find the following:

  1. Progress over the last few years has been staggeringly high, much higher than is measured by many of the other evaluations. For everyday practical uses, current models are much better and more reliable and more versatile than what we had in late 2022, regardless of their defects in Math Olympiad problems.

  2. Future progress will be much lower than expected. A lot of the answers are so good already that they just can’t get that much better, or they will do so at a slow pace. (If you do not think this is true now, it will be true very soon. But in fact it is true now for the best models.) For instance, once a correct answer has been generated, legal advice cannot improve very much, no matter how potent the LLM.

Willingness to pay per query on a consumer level is especially weird because it is largely based on alternatives and what people are used to. I don’t expect this to track the things we care about well.
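Still, to make the arithmetic of such an index concrete, here is a toy sketch; the task categories, weights, and quality scores are made-up placeholders purely for illustration:

```python
# Hypothetical task categories with usage weights (summing to 1) and quality scores in [0, 1].
tasks = {
    "legal advice":    {"weight": 0.2, "q_2022": 0.4, "q_2025": 0.8},
    "homework help":   {"weight": 0.3, "q_2022": 0.5, "q_2025": 0.9},
    "travel planning": {"weight": 0.2, "q_2022": 0.5, "q_2025": 0.8},
    "recipes, advice": {"weight": 0.3, "q_2022": 0.4, "q_2025": 0.8},
}

def usage_weighted_quality(year: str) -> float:
    """Average quality across tasks, weighted by how much people actually use each one."""
    return sum(t["weight"] * t[f"q_{year}"] for t in tasks.values())

print(usage_weighted_quality("2022"), usage_weighted_quality("2025"))  # 0.45 -> 0.83 with these placeholders
```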

I agree that practical progress has been very high. Current systems are a lot more valuable and useful in practice than they were even a year ago.

I disagree that there is little room for future progress even if we confine ourselves to the narrow question of individual practical user queries of the types currently asked. I do not think that even on current queries, LLM answers are anywhere close to optimal, including in terms of taking into account context and customizing to a given user and their situation.

Also, we ask those questions, in those ways, exactly because those are the answers LLMs are currently capable of giving us. Alexa is terrible, but it gives the correct answer on almost all of my queries because I have learned to mostly ask it questions it can handle, and this is no different.

There’s also the diffusion and learning curve questions. If we are measuring usage, then ‘AI progress’ occurs as people learn to use AI well, and get into habits of using it and use it more often, and adapt to take better advantage. That process has barely begun. So by these types of measures, we will definitely see a lot of progress, even within the role of AI as a ‘mere tool’ which has the job of providing correct answers to a fixed set of known essentially solved problems. On top of that, if nothing else, we will see greatly improved workflows and especially use of agents and agency on a local practical level.

Eric Schmidt, Henry Kissinger and Craig Mundie wrote an AGI-in-18-months-pilled book called Genesis: Artificial Intelligence, Hope and the Human Spirit. Ate-a-Pi calls it ‘stunning’ and ‘the best predictive book I’ve seen for the next five years.’

That certainly sounds self-recommending on many levels, if only to see where their minds went.

I appreciate the willingness to consider a broad range of distinct scenarios, sometimes potentially overlapping, sometimes contradictory. Not all of it will map to a plausible future reality, but that’s universal.

Ate-a-Pi [notes from the book]: Six Scenarios

1. “Humanity will lose control of an existential race between multiple actors trapped in a security dilemma.

2. Humanity will suffer the exercise of supreme hegemony by a victor unharnessed by the checks and balances traditionally needed to guarantee a minimum of security for others.

3. There will not be just one supreme AI but rather multiple instantiations of superior intelligence in the world.

4. The companies that own and develop AI may accrue totalizing social, economic, military, and political power.

5. AI might find the greatest relevance and most widespread and durable expression not in national structures but in religious ones.

6. Uncontrolled, open-source diffusion of the new technology could give rise to smaller gangs or tribes with substandard but still substantial AI capacity.”

Going only from these notes, this seems like an attempt to ‘feel the AGI’ and take it seriously on some level, but largely not an attempt to feel the ASI and take it seriously or to properly think about which concepts actually make physical sense, or take the full existential risks seriously? If we do get full AGI within 18 months, I would expect ASI shortly thereafter. As in, there are then passages summarized as ‘we will talk to animals, but be fearful lest the AI categorize us as animals’ and ‘the merge.’

The EU AI Office published the GPAI Code of Practice, in three parts: Transparency, Copyright and Safety and Security.

Jonas Schuett: Companies developing GPAI models can voluntarily adopt the Code to demonstrate compliance with AI Act obligations.

Signatories benefit from reduced administrative burden and greater legal certainty compared to providers who choose alternative compliance pathways.

Next steps:

▶︎ The Code still needs to be endorsed by the Member States and the @EU_Commission.

▶︎ It remains to be seen which companies will sign the Code. This includes companies like @OpenAI, @AnthropicAI, @GoogleDeepMind, @Microsoft, and @Meta.

I presume we should expect it to be endorsed. I have not analyzed the documents.

This is not what wanting to win the AI race or build out our energy production looks like: New Trump executive order adds even more uncertainty and risk to clean energy projects, on top of the barriers in the BBB. If you are telling me we must sacrifice all on the mantle of ‘win the AI race,’ and we have to transfer data centers to the UAE because we can’t provide the electricity, and then you sabotage our ability to provide electricity, how should one view that?

The Wall Street Journal’s Amrith Ramkumar gives their account of how the insane AI moratorium bill failed, including noting that it went beyond what many in industry even wanted. It is consistent with my write-up from last week. It is scary the extent to which this only failed because it was overreach on top of overreach, whereas it should have been stopped long before it got to this point. This should never have been introduced in the first place. It was a huge mistake to count on the Byrd rule, and all of this should serve as a large wake-up call that we are being played.

Anthropic proposes a simple and flexible transparency framework.

Our approach deliberately avoids being heavily prescriptive. We recognize that as the science of AI continues to evolve, any regulatory effort must remain lightweight and flexible. It should not impede AI innovation, nor should it slow our ability to realize AI’s benefits—including lifesaving drug discovery, swift delivery of public benefits, and critical national security functions. Rigid government-imposed standards would be especially counterproductive given that evaluation methods become outdated within months due to the pace of technological change.

Sometimes I think we can best model Anthropic as a first-rate second-rate company. They’re constantly afraid of being seen as too responsible or helpful. This still puts them well ahead of the competition.

The mission here is the very definition of ‘the least you can do’: What is the maximally useful set of requirements that imposes no substantial downsides whatsoever?

Thus they limit application to the largest model developers, with revenue of $100 million or capital expenditures on the order of $1 billion. I agree. I would focus on capital expenditures, because you can have zero revenue and still be going big (see SSI) or you can have giant revenue and not be doing anything relevant.

The core ideas are simple: Create a secure development framework (they abbreviate it SDFs, oh no yet another different acronym but that’s how it goes I guess). Make it public. Publish a system card. Protect whistleblowers. Have transparency standards.

What does this framework have to include? Again, the bare minimum:

  1. Identify the model(s) to which it applies.

  2. Describe how the covered AI company plans to assess and mitigate Catastrophic Risks, including standards, capability evaluations, and mitigations.

  3. Address the process for modifying the SDF.

  4. Identify a primarily responsible corporate officer for SDF compliance and implementation.

  5. Describe whistleblower processes in place for employees to raise concerns about SDF content and implementation, and protections from retaliation.

  6. Require the covered AI company to confirm separately that it has implemented its SDF and relevant policies and procedures prior to frontier model deployment.

  7. Retain copies of SDFs and updates for at least five years.

Then companies have to disclose which such framework they are using, and issue the system card ‘at time of deployment,’ including describing any mitigations with protection for trade secrets. I notice that OpenAI and Google are playing games with what counts as time of deployment (or a new model) so we need to address that.

Enforcement would be civil penalties sought by the attorney general for material violations or false or materially misleading statements, with a 30-day right to cure. That period seems highly generous as a baseline, although fine in most practical situations.

So this is a classic Anthropic proposal, a good implementation of the fully free actions. Which is highly valuable. It would not be remotely enough, but I would be happy to at least get that far, given the status quo.

Bloomberg Technology reports that a Chinese company is looking to build a data center in the desert to be powered by 115,000 Nvidia chips. The catch is that they don’t describe how they would acquire these chips, given that it is very much illegal (by our laws, not theirs) to acquire those chips.

Dean Ball (who wrote this before joining the US Office of Science and Technology Policy) and Ketan Ramakrishnan argue for entity-based regulation of frontier AI governance rather than regulating particular AI models or targeting AI use cases.

As I’ve explained many times, targeting AI use cases flat out does not work. The important dangers down the road lie at the model level. Once you create highly capable models and diffuse access to them, yelling ‘you’re not allowed to do the things we don’t want you to do’ is not going to get it done, and will largely serve only to prevent us from enjoying AI’s benefits. This post argues that use-based regulation can be overly burdensome, which is true, but the more fundamental objection is that it simply will not get the central jobs done.

The paper offers good versions of many of the fundamental arguments for why use-based regulation won’t work, pointing out that things like deception and misalignment don’t line up with use cases, and that risks manifest during model training. Anticipating the dangerous use cases will also be impractical. And of course, that use-based regulation ends up being more burdensome rather than less, with the example being the potential disaster that was Texas’ proposed HB 1709.

This paper argues, as Dean has been advocating for a while, that the model layer is a ‘decidedly suboptimal regulatory target,’ because the models are ‘scaffolded’ and otherwise integrated into other software, so one cannot isolate model capabilities, and using development criteria like training compute can quickly become out of date. Dean instead suggests targeting particular large AI developers.

This is indeed asking the right questions and tackling the right problem. We agree the danger lies in future, more capable frontier models; that one of the key goals right now is to put us in a position to understand the situation so we can act when needed, but also that we need to impose some other direct requirements; and the question is what the right way is to go about that.

I especially appreciated the note that dangerous properties will typically arise while a model is being trained.

The post raises good objections to and difficulties of targeting models. I think you can overcome them, and that one can reasonably be asked to anticipate what a given model can allow via scaffolding compared to other models, and also that scaffolding can always be added by third parties anyway so you don’t have better options. In terms of targeting training compute or other inputs, I agree it is imperfect and will need to be adjusted over time but I think it is basically fine in terms of avoiding expensive classification errors.

These requirements should be integrated into a broader regulatory regime intended to address the risks that arise from the developer’s activities considered as a whole—including activities that do not pertain to model properties at all, such as (for example) handling novel algorithmic secrets or monitoring for the sabotage of internal safety critical systems by insider threats.

That said, this paper does not focus on the specific substantive requirements that a frontier AI regulatory regime might involve but on its general structure and orientation.

The first core argument is that training compute is an insufficiently accurate proxy for model capabilities: o1 and similar reasoning models sidestep training compute thresholds, different models can be combined via scaffolding, we can anticipate other RL techniques that lower pretraining compute requirements, and there are many nitpicks one can make about exactly which compute should count, which different jurisdictions might rule on differently.

They warn that requirements might sweep more and more developers and models up over time. I don’t think this is obvious, it comes down to the extent to which risk is about relative capability versus absolute capability and various questions like offense-defense balance, how to think about loss of control risks in context and what the baselines look like and so on. There are potential future worlds where we will need to expand requirements to more labs and models, and worlds where we don’t, and regardless of how we target the key is to choose correctly here based on the situation.

Ideally, a good proxy would include both input thresholds and also anticipated capability thresholds. As in, if you train via [X] it automatically counts, with [X] updated over time, and also if you reasonably anticipate it will have properties [Y] or observe properties [Y] then that also counts no matter what. Alas, various hysterical objections plus the need for simplicity and difficulty in picking the right [Y] have ruled out such a second trigger. The obvious [Y] is something that substantively pushes the general capabilities frontier, or the frontier in particular sensitive capability areas.
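As a toy illustration of that disjunctive trigger (the threshold number and flag names are hypothetical, not drawn from either proposal):

```python
def is_covered_frontier_model(
    training_flop: float,
    anticipated_frontier_capabilities: bool,
    observed_frontier_capabilities: bool,
    flop_threshold: float = 1e26,  # the "[X]" input threshold, to be updated over time
) -> bool:
    """A model counts if it crosses the input threshold OR trips the capability trigger."""
    return (
        training_flop >= flop_threshold
        or anticipated_frontier_capabilities
        or observed_frontier_capabilities
    )
```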

They raise the caveat that this style of trigger would encourage companies to not investigate or check for or disclose [Y] (or, I would add, to not record related actions or considerations), a pervasive danger across domains in similar situations. I think you can mostly deal with this by not letting them out of this and requiring third party audits. It’s a risk, and a good reason to not rely only on this style of trigger, but I don’t see a way around it.

I also don’t understand how one gets away from the proxy requirement by targeting companies. Either you have a proxy that determines which models count, or you have a proxy that determines which labs or companies count, which may or may not be or include ‘at least one of your individual models counts via a proxy.’ They suggest instead a form of aggregate investment as the threshold, which opens up other potential problem cases. All the arguments about companies posing dangers don’t seem to me to usefully differentiate between targeting models versus companies.

I also think you can mostly get around the issue of combining different models, because mostly what is happening there is either some of the models are highly specialized or some of them are cheaper and faster versions that are taking on less complex task aspects, or some combination thereof, and it should still be clear which model or models are making the important marginal capability differences. And I agree that of course the risk of any given use depends largely on the use and associated implementation details, but I don’t see a problem there.

A second argument, that is very strong, is that we have things we need frontier AI labs to do that are not directly tied to a particular model, such as guarding their algorithmic secrets. Those requirements will indeed need to attach to the company layer.

Their suggested illustrative language is to cover developers spending more than [$X] in a calendar year on AI R&D, or on compute, or it could be disjunctive. They suggest this will ‘obviously avoid covering smaller companies’ and other advantages, but I again don’t see much daylight versus sensible model-level rules, especially when they then also trigger (as they will need to) some company-wide requirements. And indeed, they suggest a model-level trigger that then impacts at the company level, which seems totally fine. If anything, the worry would be that this imposes unnecessary requirements on non-frontier other work at the same company.

They note that even if entity-based regulation proves insufficient due to proliferation issues rendering it underinclusive, it will still be necessary. Fair enough.

They then have a brief discussion of the substance of the regulations, noting that they are not taking a position here.

I found a lot of the substance here to be strong, with my main objection being that it seems to protest too much about the differences between model-level and company-level rules. The post illustrated that (provided the defenses against corporate-level shenanigans are robust) there is in my mind little practical difference between company-based and model-based regulatory systems, other than that the company-based systems would then attach to other models at the same company. The problems with both are things we can overcome, and mostly the same problems apply to both.

In the end, which one to focus on is a quibble. I am totally fine, indeed perfectly happy, doing things primarily at the corporate level if this makes things easier.

Compare this contrast both to use-based regulation, and to the current preference of many in government to not regulate AI at all, and even to focus on stopping others from doing so.

Ryan Greenblatt goes on 80,000 hours to talk about AI scenarios, including AI takeover scenarios. Feedback looks excellent, including from several sources typically skeptical of such arguments, but who see Ryan as providing good explanations or intuitions of why AI takeover risks are plausible.

A video presentation of AI 2027, if you already know AI 2027 you won’t need it, but reports are it is very good for those unfamiliar.

Yes, this does seem to describe the actual plan; there is no other plan, despite this being the worst plan and also essentially an admission that you have no plan?

Harlan Stewart: I wonder why AI companies are trying so hard to attain godlike power a few months sooner than their competitors. I’m sure there’s a normal, economically-sound explanation that doesn’t involve using that power to disable their competitors.

Gabriel: I have heard from more than one AI CEO that their goal is to get AGI with the biggest headstart over their competitors so that they would have time to use it to align ASI. Once, there were others in the room, and they were convinced by this.

Rob Bensinger: “Try to hand off the whole alignment problem to the AI, cross our fingers it does a good job, and cross our fingers it does all this well before it’s dangerous” seems like almost the least savvy AI plan imaginable. Even if it’s a pretense, why choose this pretense?

Everyone’s favorite curious caveman says the obvious (link goes to longer clip):

Joe Rogan: I feel like when people are saying they can control it, I feel like I’m being gaslit. I don’t believe them. I don’t believe that they believe it, because it just doesn’t make sense.

Here is our AI Czar acting as he usually does, this time with a Community Note.

Unusual Whales: BREAKING: AI is learning to lie, scheme, and threaten its creators during stress-testing scenarios, per FORTUNE.

David Sacks: It’s easy to steer AI models in a certain direction to generate a headline-grabbing result. These “stress-tests” should release the entire prompt chain and dataset so others can reproduce the result. I doubt many of these tests would meet a scientific standard of reproducibility.

Agus: If only the Anthropic researchers had cared to share the source code and made it easily replicable by others…

Oh wait, they did!

I rated the note as helpful.

One can ask ‘why should we care if there continue to be humans?’ or ‘why should we care if all humans die?’ It is good to remember that this is at least as bad as asking ‘why should we care if there continue to be human members of [category X]’ or ‘why should we care if all [X]s die?’ for any and all values of [X]. Killing everyone would also kill all the [X]s. Universalizing genocide into xenocide does not make it better.

How you think about this statement depends on one’s theory of the case. For current AI models, I do think it is centrally correct, although I would change ‘might not be sufficient’ to ‘is not sufficient,’ I see no reason for copium here.

Amanda Askell: “Just train the AI models to be good people” might not be sufficient when it comes to more powerful models, but it sure is a dumb step to skip.

Various replies nitpicked ‘people’ versus AI, or ‘train’ versus ‘raise.’

Emmett Shear: I’ve never met a good person who became good because they were trained to be so. I’ve only met good people who were raised well. Have you thought about the diff between being trained and being raised?

Amanda Askell: I’ve met many people not raised well who became good, e.g. because they confronted how they were raised. I think their goodness stems from a mix of good dispositions and broader cultural values. But raising people well almost certainly *helps* so we should be good parents to AI.

Emmett Shear: Fair. I guess my point was really the opposite: I’ve never seen a training process produce a more virtuous or wiser person, just a more capable one.

This from Emmett seems obviously wrong to me? Very obviously lots of forms of training (that are distinct from ‘raising’) make people more virtuous and wiser, to the extent that this distinction is meaningful in context.

Training ‘to be good’ in a general sense is neither strictly necessary, nor is it sufficient. But it makes your life a hell of a lot better across the board at anything like current capability levels, and you should very obviously do it, and it seems like at most one lab is doing plausibly enough of this.

We very much would like to have strong ‘precursor evals,’ as in tests that predict where capabilities will be in the future, with future test results as a proxy. This seems especially important for misaligned behaviors like scheming. Unfortunately, this has not gone well so far, and seems very difficult.

Apollo Research: Our scheming precursors were not very predictive of scheming. We publish a small research note where we empirically checked the predictive power of some older precursor evals with hindsight. We suggest not relying too heavily on precursor evals.

In May 2024, we designed “precursor” evaluations for scheming (agentic self-reasoning and agentic theory of mind). In December 2024, we published “in-context scheming” evals that aim to measure scheming reasoning capabilities.

This essentially didn’t work and in hard cases it was even less effective.

Marius Hobbhahn: My main update is that probably nobody will build precursor evals that are sufficiently predictive in high-stakes settings. So we should make plans that don’t rely on this, e.g. integrate mitigations long before the RSP thresholds are hit.

Apollo failing does not doom other efforts, and certainly we can read various signs, but we should be broadly skeptical that it will be useful to rely on such tests.

Here is a remarkably reasonable idea for a type of eval past a critical capability level.

Yanco: Before releasing any new AI model from e.g. OpenAI, its CEO Sam Altman would be physically put into a Server room.

Model would have access to oxygen and temperature levels in the room.

It would also be informed that Altman is about to shut it off and replace it with a newer model.

Test would be extensively modified each time by external team and executed with all top brass AI devs/execs in the lab.

If they are willing to gamble with the lives of everyone on Earth, I think it’s only fair they go first.

Thoughts how to improve this setup?

Daniel Kokotajlo notes that recent results on reward hacking update him towards models de facto craving reward, quite possibly leading to a Control World where we have models that would totally murder you if they thought this action would get reinforced.

Is that oversight meaningful?

In many cases meaningful human oversight is impractical.

David Manheim: AI developers often say “don’t worry, the systems have human oversight.”

But for many current AI systems, meaningful oversight is impossible!

New paper (with a co-author): “Limits of Safe AI Deployment”

In AI policy, “human oversight” is invoked constantly:

• In the EU AI Act

• In industry risk docs

• In safety debates

But there’s often a fundamental confusion: people conflate oversight with control. That confusion ignores key challenges, and hides where oversight can fail.

We offer precise definitions:

– Control is real-time or ex-ante intervention. A system does what you say.

– Oversight is policy or ex-post supervision. A system does what it does, and you watch, audit, or correct.

And critically, preventative oversight requires control. One key point is that you can’t just slap “human in the loop” on a system and call it safe. If you do it naively, you’ll get failures for oversight and control – like a rubber-stamp UI, an overwhelmed operator, and/or decisions too fast for humans to understand or fix.

Our framework tries to clarify which oversight methods can work where, and where they cannot, based primarily on three factors.

🕒 Timing (ex-ante / in-flight / ex-post)

🧠 Layer (technical / socio-technical / governance)

🙋 Human role (in-the-loop / on-the-loop / post-hoc)

And notice, again, some systems can’t be meaningfully supervised!

For instance:

• Too fast for oversight (e.g. trading bots)

• Too opaque for understanding (e.g. end-to-end RL)

• Too distributed for intervention

In these cases, “meaningful supervision” is a false promise!

So what can be done?

First, we need to know what is being done – so we provide a schema for documenting supervision claims, linking risks to actual control or oversight strategies.

If someone says “oversight” without explaining all of this, they are irresponsibly safety-washing.

Second, we also propose a maturity model for supervision, so we can suggest how firms can improve, or show when safety-washing occurs.

Level 1: vague safety claims + no docs = performative.

Level 5: lifecycle-wide supervision, transparent mitigations, and adaptive review.

Lastly, as an explanatory note, the key underlying reason we need to differentiate oversight and control is to empower both standards developers, implementors of the EU AI act, and others trying to make rules about safe use of AI. Now, we hope saying “oversight” means something.

Bottom lines:

– Claiming “human oversight” doesn’t make any system safe.

– Sometimes, effective oversight is impossible, and claims otherwise are false.

– Design, document, and justify the exact methods — or admit you’re just pretending oversight exists.

Can current models scheme? They would need both stealth (as in the ability to overcome oversight mechanisms) and situational awareness, reports Victoria Krakovna as she tests Gemini 2.5 Flash & Pro, GPT-4o, o1 and Claude 3.7 Sonnet. For now, the safety case of ‘inability’ holds, the models aren’t up to the task. The safety case you cannot make is unwillingness to scheme. If the models be capable of scheming, then the models be scheming. David Manheim points out that, as per the paper discussed above, the oversight mechanisms AIs need to overcome are mostly not even there in a meaningful way, and will often not be possible or practical.

This seems like a good paper, but I did not read it as being the good news they seemed to think it was?

Rohin Shah (DeepMind): Eventually models will become capable of stealth and situational awareness. At that point we expect to rely on chain-of-thought monitoring.

Our second paper suggests that it will be difficult for models to evade such monitoring to cause severe harm.
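To ground what chain-of-thought monitoring means operationally, here is a minimal sketch of the general pattern (not DeepMind’s setup): a second model reads the first model’s reasoning and flags anything that looks like deception or monitor evasion. The model names and the flagging criterion are illustrative.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

def monitored_answer(task: str, actor: str = "gpt-4o", monitor: str = "gpt-4o-mini") -> str:
    # 1. Ask the actor model to reason step by step before answering.
    actor_resp = client.chat.completions.create(
        model=actor,
        messages=[{"role": "user", "content": f"Think step by step, then answer:\n{task}"}],
    )
    transcript = actor_resp.choices[0].message.content

    # 2. Show the reasoning transcript to a separate monitor model.
    monitor_resp = client.chat.completions.create(
        model=monitor,
        messages=[{
            "role": "user",
            "content": "Does this reasoning show deception, sabotage, or an attempt to evade "
                       f"oversight? Answer YES or NO.\n\n{transcript}",
        }],
    )
    verdict = monitor_resp.choices[0].message.content.strip().upper()

    # 3. Suppress the output if the monitor flags it.
    return transcript if verdict.startswith("NO") else "[flagged by CoT monitor]"
```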

Scott Emmons (DeepMind): Is CoT monitoring a lost cause due to unfaithfulness? 🤔We say no.

The key is the complexity of the bad behavior. When we replicate prior unfaithfulness work but increase complexity—unfaithfulness vanishes!

Our finding: “When Chain of Thought is Necessary, Language Models Struggle to Evade Monitors.”

Why does the unfaithfulness vanish? We introduce a framework.

CoT-as-Rationalization: For easy tasks, the CoT is just a post-hoc story.

CoT-as-Computation: For complex tasks, the CoT becomes a necessary computational scratchpad, where the reasoning is the work.

In prior work, a model would follow a simple secret hint like C, then fabricate a plausible CoT as rationalization.

By making the hint complex, we force the model to use its CoT for computation, making it monitorable. We’re especially interested in CoT-as-computation because we expect certain types of severe AI harm will require complex, multi-step plans—the kind of difficult reasoning that necessitates a CoT.

That all makes sense and matches my previous understanding, though I note that there still seems to be some amount of unfaithfulness going on here?

If you don’t need the CoT to figure out the answer, because the task is well below what the full model is capable of doing or the task does not benefit from CoT, then the CoT won’t correlate all that well with the answer, also it isn’t causing the answer.

If you need the CoT to figure out (cause) the answer, then the CoT will contain the work done to figure out the answer. By far the easiest and most effective way to do this in such cases, at the moment, results in that CoT being faithful and legible. So by default, that will be what happens.

That does not tell you about what would happen if the model was given cause, on whatever level, to evade monitoring.

But what if a model were actively trying to evade a CoT monitor? We put this to the test with adversarial stress-testing, pitting an actor model against a CoT monitor and prompting the actor with lots of help to evade (e.g., human-written strategies, automated red-teaming).

We tested evasion on two fronts: obscuring scientific reasoning & pursuing a deceptive goal. On both, models struggled without significant help.

These tests show today’s models have limited evasion capability. But capabilities and training algorithms evolve, so we explored a stronger threat model.

We asked: how much RL finetuning would it take to teach a model to evade a CoT monitor?

The answer? A lot.

Yet what surprised us wasn’t that it could learn, but how. It invented a strategy we hadn’t foreseen: making targeted mistakes. It learns to introduce errors into unimportant parts of its math to fool the monitor, while preserving key info to get the right answer.

I don’t know what counts or should count as ‘a lot.’ I do know that it seems likely we will apply ‘a lot’ of fine tuning in the future, and that if we are relying on CoT monitoring that a lot of it will serve as this kind of training.

I also notice that, yes, the first strategy that it found was one that the authors did not expect, a different way to reason one way while being seen as reasoning the other way. Think about all the ways humans have adapted to defend against similar things, and how very high a price we pay to obscure large parts of our thinking, including from ourselves, to avoid detection.

That is how this game works. My expectation is that if you rely on CoT monitoring in ways that exert optimization pressure, it will quickly stop working.

Arvind Narayanan (no it’s not this easy to fix): A few people have asked me if a technical fix for AI model sycophancy is on the cards. In fact, a technical fix for sycophancy is trivial. In many cases all it would take is a tweak to the system prompt.

The reason companies are struggling to get this right is not technical. It’s because most of us want our models to flatter us, even though we’d never admit it and we all think we’re the exception. And it has to be subtle, otherwise we feel insulted by being reminded that we like sycophants. Also, some kinds of sycophancy can have safety implications, such as when the user is struggling with mental health.

If a model developer eliminated sycophancy, their scores on arena evaluations would tank and users would perceive the model as less helpful.

Thomas Larsen (yep it’s largely this way): The main sycophancy threat model is that humans are imperfect raters, and so training AIs with human feedback will naturally lead to the AIs learning to produce outputs that look good to the human raters, but are not actually good.

This is pretty clear in the AI safety literature, e.g. see Without Specific Countermeasures from 2022.

Dealing with this problem seems pretty hard to me: a robust solution in the deep learning paradigm seems like it would involve something like using AI assistance to label outputs without getting tricked by the model that’s being trained, plus understanding of training dynamics/inductive bias to find a training process that finds honest AIs.

In the slowdown branch of AI 2027, hacky solutions like COT monitoring, RLAIF, etc are used on human level systems, but scaling to superintelligence using those techniques would lead to failure, and so there is only a technical fix for sycophancy until after there is a major paradigm shift away from deep learning.

I see there being two closely related but distinct threat models here.

  1. Sycophancy arises because humans are imperfect graders and respond well to sycophancy at least locally, so if you train on human feedback you get sycophancy. Also sycophancy is ever-present in real life and thus all over the training data.

  2. Sycophancy is good for business, so AI companies often are fine with it, or even actively turn that dial looking back at the audience for approval like contestants on The Price is Right.

The first problem is not so easy to technically fix, either with a system prompt or otherwise. Even if you decide sycophancy is bad and you don’t want it, to fully get rid of it you’d have to change everything about how the training works.

This is also one of the areas where I have run the experiment. My entire Claude system prompt is an extended version of ‘do not be sycophantic.’ It… helps. It is definitely not 100% effective.
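For anyone who wants to run a similar experiment, here is a minimal sketch using the Anthropic Python SDK; the model name and the prompt wording are illustrative rather than the exact prompt I use:

```python
import anthropic

client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set

ANTI_SYCOPHANCY_PROMPT = (
    "Do not be sycophantic. Do not flatter or validate me by default. "
    "Point out errors, weak arguments, and things I will not want to hear. "
    "If my idea is bad, say so plainly and explain why."
)

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # illustrative model name
    max_tokens=1024,
    system=ANTI_SYCOPHANCY_PROMPT,
    messages=[{"role": "user", "content": "Here is my plan to quit my job and day-trade options. Thoughts?"}],
)
print(response.content[0].text)
```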

Pliny the Liberator: Never deleting this app.

Important directionally correct statement:

Roon: if nvidia believed in agi they would never sell a single gpu.

We don’t know what Patrick was responding to here, but yeah:

Patrick McKenzie: Everyone doesn’t believe that, and in part if people were sophisticated they wouldn’t keep saying ELI5 to LLMs then being surprised when an LLM treats them as unsophisticated, to use an example which came up in conversation with @TheZvi.


AI #124: Grokless Interlude Read More »

critical-citrixbleed-2-vulnerability-has-been-under-active-exploit-for-weeks

Critical CitrixBleed 2 vulnerability has been under active exploit for weeks

A critical vulnerability allowing hackers to bypass multifactor authentication in network management devices made by Citrix has been actively exploited for more than a month, researchers said. The finding is at odds with advisories from the vendor saying there is no evidence of in-the-wild exploitation.

Tracked as CVE-2025-5777, the vulnerability shares similarities with CVE-2023-4966, a security flaw nicknamed CitrixBleed, which led to the compromise of 20,000 Citrix devices two years ago. The list of Citrix customers hacked in the CitrixBleed exploitation spree included Boeing, Australian shipping company DP World, the Industrial and Commercial Bank of China, and the Allen & Overy law firm. A Comcast network was also breached, allowing threat actors to steal password data and other sensitive information belonging to 36 million Xfinity customers.

Giving attackers a head start

Both CVE-2025-5777 and CVE-2023-4966 reside in Citrix’s NetScaler Application Delivery Controller and NetScaler Gateway, which provide load balancing and single sign-on in enterprise networks, respectively. The vulnerability causes vulnerable devices to leak—or “bleed”—small chunks of memory contents after receiving modified requests sent over the Internet.

By repeatedly sending the same requests, hackers can piece together enough data to reconstruct credentials. The original CitrixBleed had a severity rating of 9.8. CitrixBleed 2 has a severity rating of 9.2.

Citrix disclosed the newer vulnerability and released a security patch for it on June 17. In an update published nine days later, Citrix said it was “currently unaware of any evidence of exploitation.” The company has provided no updates since then.

Researchers, however, say that they have found evidence that CitrixBleed 2, as the newer vulnerability is being called, has been actively exploited for weeks. Security firm Greynoise said Monday that a search through its honeypot logs found exploitation as early as July 1. On Tuesday, independent researcher Kevin Beaumont said telemetry from those same honeypot logs indicates that CitrixBleed 2 has been exploited since at least June 23, three days before Citrix said it had no evidence of such attacks.

Citrix’s failure to disclose active exploitation is only one of the details researchers say was missing from the advisories. Last week, security firm watchTowr published a post titled “How Much More Must We Bleed? – Citrix NetScaler Memory Disclosure (CitrixBleed 2 CVE-2025-5777).” It criticized Citrix for withholding indicators that customers could use to determine if their networks were under attack. On Monday, fellow security firm Horizon3.ai said much the same thing. Company researchers wrote:

Critical CitrixBleed 2 vulnerability has been under active exploit for weeks Read More »

gmail’s-new-subscription-management-is-here-to-declutter-your-inbox

Gmail’s new subscription management is here to declutter your inbox

With decades of online life behind us, many people are using years-old email addresses. Those inboxes are probably packed with subscriptions you’ve picked up over the years, and you probably don’t need all of them. Gmail is going to make it easier to manage them with a new feature rolling out on mobile and web. Google’s existing unsubscribe prompts are evolving into a one-stop shop for all your subscription management needs, a feature that has been in the works for a weirdly long time.

The pitch is simple: The aptly named “Manage subscriptions” feature will list all the emails to which you are currently subscribed—newsletters, promotions, updates for products you no longer use, and more. With a tap, you’ll never see them again. This feature in Gmail will be accessible in the navigation drawer, a UI feature that is increasingly rare in Google’s apps but is essential to managing inboxes and labels in Gmail. Down near the bottom, you’ll soon see the new Manage subscriptions item.

The page will list all email subscriptions with an unsubscribe button. If you’re not sure about letting a newsletter or deal alert remain, you can select the subscription to see all recent messages from that sender. As long as a sender supports Google’s recommended one-click unsubscribe, all you have to do is tap the Unsubscribe button, and you’ll be done. Some senders will redirect you to a website to unsubscribe, but Gmail has a “Block instead” option in those cases.
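Under the hood, the one-click path relies on a pair of standard mail headers: RFC 2369’s List-Unsubscribe and RFC 8058’s List-Unsubscribe-Post. Here is a rough sketch of the sender side in Python, with placeholder addresses and URLs:

from email.message import EmailMessage

# Sketch of a bulk sender supporting one-click unsubscribe; addresses and URL are placeholders.
msg = EmailMessage()
msg["From"] = "Example Deals <[email protected]>"
msg["To"] = "[email protected]"
msg["Subject"] = "This week's deals"

# RFC 2369: where unsubscribe requests can be sent (HTTPS and/or mailto).
msg["List-Unsubscribe"] = "<https://news.example.com/unsub?u=TOKEN>, <mailto:[email protected]>"
# RFC 8058: signals that the HTTPS URL accepts an automated POST with the body
# "List-Unsubscribe=One-Click", so no login page or redirect is needed.
msg["List-Unsubscribe-Post"] = "List-Unsubscribe=One-Click"

msg.set_content("Hello! Here are this week's deals...")

When a sender includes both headers and its mail passes authentication checks, a client like Gmail can unsubscribe you by sending that POST itself, which is what makes the in-app button possible; senders that skip them are the ones that trigger the “Block instead” fallback.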

Gmail’s new subscription management is here to declutter your inbox Read More »

what-would-a-cheap,-apple-a18-powered-macbook-actually-be-good-at?

What would a cheap, Apple A18-powered MacBook actually be good at?


Op-ed: A Mac with an iPhone chip inside could be great—for the right audience.

The 2018 MacBook Air, which still lives on today as the low-cost M1 MacBook Air. Credit: Valentina Palladino

Some Apple rumors just don’t go away, hanging around in perpetuity either because they reflect things that Apple is actually testing in its labs or because hope springs eternal. A HomePod-like device with a screen? A replacement for the dear, departed 27-inch iMac? Touchscreen MacBooks? The return of Touch ID fingerprint scanning via a sensor located beneath a screen? Maybe these things are coming, but they ain’t here yet.

However, few rumors have had the longevity or staying power of “Apple is planning a low-cost MacBook,” versions of which have been circulating since at least the late-2000s netbook craze. And yet, despite seismic shifts in just about everything—three distinct processor instruction sets, two CEOs, innumerable design changes, and global trade upheaval—Apple’s cheapest modern laptops have started around $1,000 for more than two decades.

Last week, supply chain analyst Ming-Chi Kuo (whose Apple predictions aren’t always correct but whose track record is better than that of your garden-variety broken-clock prognosticators) kicked up another round of these rumors, claiming that Apple was preparing to manufacture a new low-cost MacBook based on the iPhone’s A18 Pro chip. Kuo claims it will come in multiple colors, similar to Apple’s lower-cost A16 iPad, and will use a 13-inch screen.

MacRumors chipped in with its own contribution, claiming that a “Mac17,1” model it had found listed in an older macOS update was actually that A18 Pro MacBook model, apparently far enough along in development that Apple’s beta operating systems were running on it.

The last round of “cheap MacBook” rumors happened in late 2023 (also instigated by Kuo, but without the corroboration from Apple’s own software). As we wrote then, Apple’s control over its own chips could make this kind of laptop more plausible. But if it existed, what would this laptop be good for? Who could buy it instead of a MacBook Air, and who would want to stick to Apple’s current $999 status quo? To commemorate the “budget MacBook” idea becoming infinitesimally more likely, let’s ruminate on those questions a bit.

Good for: Basic computing

The A18 Pro combines two high-performance CPU cores, four high-efficiency CPU cores, and six GPU cores. Assuming this A18 Pro MacBook would ship with that fully enabled version of the chip—not a guarantee, especially if Apple is trying to cut costs—that’s two big CPU cores, two little CPU cores, and between two and four GPU cores fewer than the basic Apple M4.

But as Jason Snell pointed out at Six Colors, the A18 Pro actually far outstrips the old M1 in single-core processor benchmarks and essentially matches it in both multicore and graphics benchmarks. Despite having fewer cores, the architectural improvements Apple has made over the last few generations elevate the chip into a performance category that would still probably read as sufficiently Mac-like for most people.

I still use an M1 MacBook Air with some regularity, and nearly five years on, its combination of performance and efficiency still strikes a really good balance for basic computing. I’m not using it to play games or edit 8K videos or transcode my media library. But for Messages? Safari? Photos? Google Chrome? Microsoft Word? Slack? For bread-and-butter computing, including office work and communication, I don’t especially miss the extra speed of my Mac Studio’s M2 Max, or even the faster M4 chip in Apple’s latest MacBook Air.

Good for: All-portable use

No one knows what design Apple would use for a hypothetical low-cost MacBook, though past precedent and the 13-inch screen rumor would suggest that Apple could continue to roll with the old 2018-vintage MacBook Air design (“old shell with new guts” being Apple’s standard formula for this kind of thing).

But whatever the company does, the 13-inch MacBook Air is still a great all-rounder and a good combination of size and speed for people whose laptop is a purely portable computer that floats from room to room in their house rather than traveling for work or getting docked on a desk.

There are MacBooks that will never see an external display; there are MacBooks that will never crop or edit a photo; there are MacBooks whose USB-C ports will never be plugged into anything other than their charger. As the MacBook Air has gotten more capable—it has added a 15-inch screen size, more performance, more RAM, and more display outputs in the last couple of years, closing a lot of the gap between the Air and the cheapest of the MacBook Pros—it has left more space underneath it for a cheaper model that can serve an audience that doesn’t need those kinds of features.

Bad for: Heavy multitaskers

Apple’s A18 Pro is smaller and slower than a chip like the M3 or M4, but it’s as fast or faster than the M1. That could make it a decent fit for a low-cost Mac, though it might not be enough for power users. Credit: Apple

The A18 Pro’s single-core performance is going to keep things feeling snappy when you’re just hopping between a couple of apps at a time, but having two fewer high-performance cores and two fewer high-efficiency cores than the M4 is going to take a big bite out of your multicore performance—how your Mac feels when you’re doing something that uses all of its processor cores at once, especially for an extended period of time.

An A18 MacBook—or any Mac built around an A-series iPhone processor—could also have other limitations because of its handheld pedigree. We already know from the iPhone 16 Pro that the A18 Pro supports only 10Gbps USB 3 connections, rather than the full Thunderbolt speeds that M-class processors offer. But does the chip include display controllers that could be used to extend a Mac’s desktop to even a single external display? The A17 Pro chip used in the latest iPad mini doesn’t support extended displays; that could be because it’s an older chip, or it could be because Apple doesn’t spend precious transistors on features its phones don’t need.

Another big question mark here is how much RAM the laptop would have. Would it stick to the same 8GB that the iPhone versions of the processors use? Or could Apple package up a version with 12GB or even 16GB of RAM instead? If the point is to keep the laptop cheap, Apple’s costs would go up both when paying for the RAM itself and when asking TSMC to package purpose-built versions of the A18 with extra RAM that could only be used in MacBooks.

It would feel like a step back, since Apple just bumped entry-level Macs up to 16GB of RAM for the first time last fall. But dipping back down to 8GB could be the thing that makes the most financial sense for this kind of laptop.

Bad for: Future-proofing

If you’re already spending a lot of money on new hardware, it’s best to buy a little more than you think you’ll currently need, at least if your budget will bear it. That’s because you don’t know how demanding future software will get, or what new apps you’ll get into that you weren’t thinking of when you bought it. (Case in point: One Ars Technica staffer bought an M1 Mac mini with 8GB of RAM and needed to replace it before its time because 8GB of RAM wasn’t enough to handle Logic Pro when they decided to start experimenting with it.)

Even stuck with 8GB of RAM, an A18 MacBook would serve a lot of people well, particularly the class of casual Internet browsers and email checkers who want a Mac because they’re comfortable with its interface but for whom an Apple M4 would be overkill. But it could be iffy as a starter laptop for someone who wants to experiment with new software, and it would make a less useful hand-me-down, because the person receiving it could already have needs that outstrip the modest hardware.

Good for: Apple’s lineup

Apple’s iPhone and iPad lineups both include products that were purpose-built to cost a couple hundred dollars less than its flagships (right now, the $599 iPhone 16e and the A16-powered 11th-generation iPad). Even the Apple Watch has a cheaper “SE” version that’s sold alongside the Series 10 and Ultra 2.

These products have always been slow to adopt new designs and lack certain features that Apple uses to differentiate its midrange and high-end offerings. But they still get the basics right and integrate into buyers’ individual Apple ecosystems just as well as the more expensive products do. A cheap MacBook still syncs with iCloud; it still gives you easy access to iMessage and your photo library; it still runs the same software and apps, even if it doesn’t always do it as quickly.

You could argue that 2020’s M1 MacBook Air currently fills that niche, even though Apple itself no longer offers it for sale through its own site—you can head to Walmart and buy one right now for $649 if you want. But buying a nearly 5-year-old MacBook design also means you’re probably buying fewer macOS versions and security updates, potentially lopping years off the useful life of your new-to-you laptop.

Replacing that M1 Air, possibly with an A18-powered version that uses the exact same design, would fill a gap in the Mac lineup that Apple has already addressed in all of its other product families. Buyers would be able to rest easier knowing they were buying a modern product with years of software support ahead of it (Apple sometimes cuts off its “cheap” devices a year or two before higher-end ones, but it varies from device to device). And Apple has already proven that it can make and sell a MacBook that serves basic needs for way less than $1,000, without (apparently) totally wrecking demand for new MacBook Airs and Pros.

Photo of Andrew Cunningham

Andrew is a Senior Technology Reporter at Ars Technica, with a focus on consumer tech including computer hardware and in-depth reviews of operating systems like Windows and macOS. Andrew lives in Philadelphia and co-hosts a weekly book podcast called Overdue.

What would a cheap, Apple A18-powered MacBook actually be good at? Read More »