Author name: Mike M.


Titan sub implosion caused by absolutely bonkers “toxic workplace environment”

In a 300-plus-page final report released today, the US Coast Guard analyzed the 2023 Titan sub implosion from every conceivable angle and came to a clear conclusion: OceanGate CEO Stockton Rush was a dangerous and deeply unpleasant boss.

His company used “intimidation tactics” to sidestep regulatory scrutiny, it was a “toxic” workplace, and its safety culture was “critically flawed.” The Titan itself was “undocumented, unregistered, non-certificated, [and] unclassed.” As for Rush, he managed to “completely ignore vital inspections, data analyses, and preventative maintenance procedures.” The result was a “catastrophic event” that occurred when 4,930 pounds per square inch of water pressure cracked the sub open and crushed its five occupants during a dive to the Titanic wreckage site.

Had Rush somehow survived, the report says, he would have been referred for prosecution.


OceanGate CEO Stockton Rush shows David Pogue the 2010-era game controller used to pilot the Titan sub during a CBS Sunday Morning segment broadcast in November 2022. Credit: CBS Sunday Morning

Throwing the controller

One small story about a video game controller shows what Rush was like to work for. You may remember Rush from an infamous 2022 CBS Sunday Morning segment in which he showed journalist David Pogue around the Titan sub. “We run the whole thing with this game controller,” Rush said, holding up a Logitech F710 controller with 3D-printed thumbstick extensions. Pogue chuckled, saying, “Come on!” as he covered his face with his hand.

The game controller had been used in OceanGate subs for years by that point; a 2014 video showed one being used to control the company’s earlier Cyclops I submersible. In 2016, OceanGate took the Cyclops I to dive the wreck of the Andrea Doria outside of Nantucket, Massachusetts. (Seinfeld fans will remember that an entire episode is taken up with George’s quest to get an apartment that was about to go to an Andrea Doria survivor.)

The OceanGate team spent two days at the site, running 2D and 3D scans of the sunken ship, until Rush got the Cyclops I “stuck under the bow of the Andrea Doria wreckage”—and he couldn’t get the sub free. According to the report, Rush then “experienced a ‘meltdown’ and refused to let [the assistant pilot] assist in resolving the situation. When a mission specialist suggested that Mr. Rush hand over the controller to the assistant pilot, the assistant pilot reported that the controller was thrown at him. Upon obtaining the controller, the assistant pilot was able to free the Cyclops I from the wreckage.”



OpenAI releases its first open source models since 2019

OpenAI is releasing new generative AI models today, and no, GPT-5 is not one of them. Depending on how you feel about generative AI, these new models may be even more interesting, though. The company is rolling out gpt-oss-120b and gpt-oss-20b, its first open weight models since the release of GPT-2 in 2019. You can download and run these models on your own hardware, with support for simulated reasoning, tool use, and deep customization.

When you access the company’s proprietary models in the cloud, they run on powerful server infrastructure that even large enterprises would struggle to replicate. The new open models come in two variants (120b and 20b) designed to run on less powerful hardware. Both are transformers with configurable chain-of-thought (CoT) reasoning, supporting low, medium, and high settings. The lower settings are faster and use fewer compute resources, but outputs improve at the highest setting. You can set the CoT level with a single line in the system prompt.
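That single line really is the whole switch. Here’s a minimal sketch; the “Reasoning: &lt;level&gt;” phrasing follows OpenAI’s published gpt-oss guidance, while the helper function itself is just an illustrative assumption:

```python
# Minimal sketch: the reasoning level rides along as one line in the system
# prompt. The "Reasoning: <level>" wording follows OpenAI's gpt-oss guidance;
# build_messages is an illustrative helper, not part of any official API.
def build_messages(question: str, effort: str = "medium") -> list[dict]:
    if effort not in ("low", "medium", "high"):
        raise ValueError(f"unknown reasoning effort: {effort}")
    return [
        {"role": "system", "content": f"Reasoning: {effort}"},
        {"role": "user", "content": question},
    ]

# Pass the result to whatever chat-completions-style runtime is serving
# gpt-oss locally (Ollama, vLLM, etc.).
msgs = build_messages("Summarize the plot of Moby-Dick.", effort="high")
```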

The smaller gpt-oss-20b has a total of 21 billion parameters, utilizing mixture-of-experts (MoE) to reduce that to 3.6 billion parameters per token. As for gpt-oss-120b, its 117 billion parameters come down to 5.1 billion per token with MoE. The company says the smaller model can run on a consumer-level machine with 16GB or more of memory. To run gpt-oss-120b, you need 80GB of memory, which is more than you’re likely to find in the average consumer machine. It should fit on a single AI accelerator GPU like the Nvidia H100, though. Both models have a context window of 128,000 tokens.
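Those memory figures line up with a quick back-of-the-envelope estimate, assuming the roughly 4.25-bit MXFP4 weight quantization OpenAI ships these models in (an assumption worth checking against the model card; real-world usage also adds KV-cache and activation overhead on top of the weights):

```python
def weight_gb(params_billion: float, bits_per_param: float = 4.25) -> float:
    """Approximate weight storage in GB at a given quantization width."""
    return params_billion * 1e9 * bits_per_param / 8 / 1e9

small = weight_gb(21)    # gpt-oss-20b: ~11 GB, comfortably under 16 GB
large = weight_gb(117)   # gpt-oss-120b: ~62 GB, fits an 80 GB H100
```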

Credit: OpenAI

The team says users of gpt-oss can expect robust performance similar to its leading cloud-based models. The larger one benchmarks between the o3 and o4-mini proprietary models in most tests, with the smaller version running just a little behind. It gets closest in math and coding tasks. In the knowledge-based Humanity’s Last Exam, o3 is far out in front with 24.9 percent (with tools), while gpt-oss-120b only manages 19 percent. For comparison, Google’s leading Gemini Deep Think hits 34.8 percent in that test.



Enough is enough—I dumped Google’s worsening search for Kagi


I like how the search engine is the product instead of me.


“Won’t be needing this anymore!” Credit: Aurich “The King” Lawson


Mandatory AI summaries have come to Google, and they gleefully showcase hallucinations while confidently insisting on their truth. I feel about them the same way I felt about mandatory G+ logins when all I wanted to do was access my damn YouTube account: I hate them. Intensely.

But unlike those mandatory G+ logins—on which Google eventually relented before shutting down the G+ service—our reading of the tea leaves suggests that, this time, the search giant is extremely pleased with how things are going.

Fabricated AI dreck polluting your search? It’s the new normal. Miss your little results page with its 10 little blue links? Too bad. They’re gone now, and you can’t get them back, no matter what ephemeral workarounds or temporarily functional flags or undocumented, could-fail-at-any-time URL tricks you use.

And the galling thing is that Google expects you to be a good consumer and just take it. The subtext of the company’s (probably AI-generated) robo-MBA-speak non-responses to criticism and complaining is clear: “LOL, what are you going to do, use a different search engine? Now, shut up and have some more AI!”

But like the old sailor used to say: “That’s all I can stands, and I can’t stands no more.” So I did start using a different search engine—one that doesn’t constantly shower me with half-baked, anti-consumer AI offerings.

Out with Google, in with Kagi.

What the hell is a Kagi?

Kagi was founded in 2018, but its search product has only been publicly available since June 2022. It purports to be an independent search engine that pulls results from around the web (including from its own index) and is aimed at returning search to a user-friendly, user-focused experience. The company’s stated purpose is to deliver useful search results, full stop. The goal is not to blast you with AI garbage or bury you in “Knowledge Graph” summaries hacked together from posts in a 12-year-old Reddit thread between two guys named /u/WeedBoner420 and /u/14HitlerWasRight88.

Kagi’s offerings (it has a web browser, too, though I’ve not used it) are based on a simple idea. There’s an (oversimplified) axiom that if a good or service (like Google search, for example, or good ol’ Facebook) is free for you to use, it’s because you’re the product, not the customer. With Google, you pay with your attention, your behavioral metrics, and the intimate personal details of your wants and hopes and dreams (and the contents of your emails and other electronic communications—Google’s got most of that, too).

With Kagi, you pay for the product using money. That’s it! You give them some money, and you get some service—great service, really, which I’m overall quite happy with and which I’ll get to shortly. You don’t have to look at any ads. You don’t have to look at AI droppings. You don’t have to give perpetual ownership of your mind-palace to a pile of optioned-out tech bros in sleeveless Patagonia vests while you are endlessly subjected to amateur AI Rorschach tests every time you search for “pierogis near me.”

How much money are we talking?

I dunno, about a hundred bucks a year? That’s what I’m spending as an individual for unlimited searches. I’m using Kagi’s “Professional” plan, but there are others, including a free offering so that you can poke around and see if the service is worth your time.


This is my account’s billing page, showing what I’ve paid for Kagi in the past year. (By the time this article runs, I’ll have renewed my subscription!)

Credit: Lee Hutchinson


I’d previously bounced off two trial runs with Kagi in 2023 and 2024 because the idea of paying for search just felt so alien. But that was before Google’s AI enshittification rolled out in full force. Now, sitting in the middle of 2025 with the world burning down around me, a hundred bucks to kick Google to the curb and get better search results feels totally worth it. Your mileage may vary, of course.

The other thing that made me nervous about paying for search was the idea that my money was going to enrich some scumbag VC fund, but fortunately, there’s good news on that front. According to the company’s “About” page, Kagi has not taken any money from venture capitalist firms. Instead, it has been funded by a combination of self-investment by the founder, selling equity to some Kagi users in two rounds, and subscription revenue:

Kagi was bootstrapped from 2018 to 2023 with ~$3M initial funding from the founder. In 2023, Kagi raised $670K from Kagi users in its first external fundraise, followed by $1.88M raised in 2024, again from our users, bringing the number of users-investors to 93… In early 2024, Kagi became a Public Benefit Corporation (PBC).

What about DuckDuckGo? Or Bing? Or Brave?

Sure, those can be perfectly cromulent alternatives to Google, but honestly, I don’t think they go far enough. DuckDuckGo is fine, but it largely utilizes Bing’s index; and while DuckDuckGo exercises considerable control over its search results, the company is tied to the vicissitudes of Microsoft by that index. It’s a bit like sitting in a boat tied to a submarine. Sure, everything’s fine now, but at some point, that sub will do what subs do—and your boat is gonna follow it down.

And as for Bing itself, perhaps I’m nitpicky [Ed. note: He is!], but using Bing feels like interacting with 2000-era MSN’s slightly perkier grandkid. It’s younger and fresher, yes, but it still radiates that same old stanky feeling of taste-free, designed-by-committee artlessness. I’d rather just use Google—which is saying something. At least Google’s search home page remains uncluttered.

Brave Search is another fascinating option I haven’t spent a tremendous amount of time with, largely because Brave’s cryptocurrency ties still feel incredibly low-rent and skeevy. I’m slowly warming up to the Brave Browser as a replacement for Chrome (see the screenshots in this article!), but I’m just not comfortable with Brave yet—and likely won’t be unless the company divorces itself from cryptocurrencies entirely.

More anonymity, if you want it

The feature that convinced me to start paying for Kagi was its Privacy Pass option. Based on a clean-sheet Rust implementation of the Privacy Pass standard (IETF RFCs 9576, 9577, and 9578) by Raphael Robert, this is a technology that uses cryptographic token-based auth to send an “I’m a paying user, please give me results” signal to Kagi, without Kagi knowing which user made the request. (There’s a much longer Kagi blog post with actual technical details for the curious.)

To search using the tool, you install the Privacy Pass extension (linked in the docs above) in your browser, log in to Kagi, and enable the extension. This causes the extension to request a bundle of tokens from the search service. After that, you can log out and/or use private windows, and those tokens are used whenever you do a Kagi search.
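The unlinkability trick behind this flow can be illustrated with the classic blind-RSA construction (RFC 9578 defines a blind-RSA token type for Privacy Pass). This toy uses textbook RSA with tiny numbers purely to show the shape of the protocol; it is nowhere near the real, hardened implementation:

```python
# Toy blind-RSA flow with textbook-sized numbers (illustration only).
# Server keypair: n = 61 * 53, e = 17, d = e^-1 mod phi(n) = 2753.
n, e, d = 3233, 17, 2753

# 1. Client picks a token value m and a blinding factor r, sends m * r^e.
m, r = 65, 7
blinded = (m * pow(r, e, n)) % n

# 2. Server signs the blinded value without ever seeing m.
blind_sig = pow(blinded, d, n)

# 3. Client unblinds by dividing out r, leaving a valid signature on m.
r_inv = pow(r, -1, n)
token_sig = (blind_sig * r_inv) % n

# 4. Later, an anonymous request presents (m, token_sig); the server can
#    verify the signature but cannot link it back to the issuance in step 2.
valid = pow(token_sig, e, n) == m
```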


Privacy Pass is enabled, allowing me to explore the delicious mystery of pierogis with some semblance of privacy.

Credit: Lee Hutchinson


The obvious flaw here is that Kagi still records source IP addresses along with Privacy Pass searches, potentially de-anonymizing them, but there’s a path around that: Privacy Pass functions with Tor, and Kagi maintains a Tor onion address for searches.

So why do I keep using Privacy Pass without Tor, in spite of the opsec flaw? Maybe it’s the placebo effect in action, but I feel better about putting at least a tiny bit of friction in the way of someone with root attempting to casually browse my search history. Like, I want there to be at least a SQL JOIN or two between my IP address and my searches for “best Mass Effect alien sex choices” or “cleaning tips for Garrus body pillow.” I mean, you know, assuming I were ever to search for such things.

What’s it like to use?

Moving on with embarrassed rapidity, let’s look at Kagi a bit and see how using it feels.

My anecdotal observation is that Kagi doesn’t favor Reddit-based results nearly as much as Google does, but sometimes it still has them near or at the top. And here is where Kagi curb-stomps Google with quality-of-life features: Kagi lets you prioritize or de-prioritize a website’s prominence in your search results. You can even pin that site to the top of the screen or block it completely.

This is a feature I’ve wanted Google to get for about 25 damn years but that the company has consistently refused to properly implement (likely because allowing users to exclude sites from search results notionally reduces engagement and therefore reduces the potential revenue that Google can extract from search). Well, screw you, Google, because Kagi lets me prioritize or exclude sites from my results, and it works great—I’m extraordinarily pleased to never again have to worry about Quora or Pinterest links showing up in my search results.

Further, Kagi lets me adjust these settings both for the current set of search results (if you don’t want Reddit results for this search but you don’t want to drop Reddit altogether) and also globally (for all future searches):


Goodbye forever, useless crap sites.

Credit: Lee Hutchinson


Another tremendous quality-of-life improvement comes via Kagi’s image search, which does a bunch of stuff that Google should and/or used to do—like giving you direct right-click access to save images without having to fight the search engine with workarounds, plugins, or Tampermonkey-esque userscripts.

The Kagi experience is also vastly more customizable than Google’s (or at least, how Google’s has become). The widgets that appear in your results can be turned off, and the “lenses” through which Kagi sees the web can be adjusted to influence what kinds of things do and do not appear in your results.

If that doesn’t do it for you, how about the ability to inject custom CSS into your search and landing pages? Or to automatically rewrite search result URLs to taste, doing things like redirecting reddit.com to old.reddit.com? Or breaking free of AMP pages and always viewing originals instead?
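Kagi has its own configuration syntax for these rewrite rules (check its docs for the exact format), but the effect is an ordinary regex substitution, something like this hypothetical helper:

```python
import re

# Illustrative only: this mimics what a reddit.com -> old.reddit.com rewrite
# rule does; Kagi's actual rules use its own configuration syntax.
def to_old_reddit(url: str) -> str:
    return re.sub(r"^https://(?:www\.)?reddit\.com",
                  "https://old.reddit.com", url)

rewritten = to_old_reddit("https://www.reddit.com/r/AskHistorians/")
```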


Imagine all the things Ars readers will put here.

Credit: Lee Hutchinson


Is that all there is?

Those are really all the features I care about, but there are loads of other Kagi bits to discover—like a Kagi Maps tool (it’s pretty good, though I’m not ready to take it up full time yet) and a Kagi video search tool. There are also tons of classic old-Google-style inline search customizations, including verbatim mode, where instead of trying to infer context about your search terms, Kagi searches for exactly what you put in the box. You can also add custom search operators that do whatever you program them to do, and you get API-based access for doing programmatic things with search.

A quick run-through of a few additional options pages. This is the general customization page. Credit: Lee Hutchinson

I haven’t spent any time with Kagi’s Orion browser, but it’s there as an option for folks who want a WebKit-based browser with baked-in support for Privacy Pass and other Kagi functionality. For now, Firefox continues to serve me well, with Brave as a fallback for working with Google Docs and other tools I can’t avoid and that treat non-Chromium browsers like second-class citizens. However, Orion is probably on the horizon for me if things in Mozilla-land continue to sour.

Cool, but is it any good?

Rather than fill space with a ton of comparative screenshots between Kagi and Google or Kagi and Bing, I want to talk about my subjective experience using the product. (You can do all the comparison searches you want—just go and start searching—and your comparisons will be a lot more relevant to your personal use cases than any examples I can dream up!)

My time with Kagi so far has included about seven months of casual opportunistic use, where I’d occasionally throw a query at it to see how it did, and about five months of committed daily use. In the five months of daily usage, I can count on one hand the times I’ve done a supplementary Google search because Kagi didn’t have what I was looking for on the first page of results. I’ve done searches for all the kinds of things I usually look for in a given day—article fact-checking queries, searches for details about the parts of speech, hunts for duck facts (we have some feral Muscovy ducks nesting in our front yard), obscure technical details about Project Apollo, who the hell played Dupont in Equilibrium (Angus Macfadyen, who also played Robert the Bruce in Braveheart), and many, many other queries.


A typical afternoon of Kagi searches, from my Firefox history window.

Credit: Lee Hutchinson


For all of these things, Kagi has responded quickly and correctly. The time to service a query feels more or less like Google’s service times; according to the timer at the top of the page, my Kagi searches complete in between 0.2 and 0.8 seconds. Kagi handles misspellings in search terms with the grace expected of a modern search engine and has had no problem figuring out my typos.

Holistically, taking search customizations into account on top of the actual search performance, my subjective assessment is that Kagi gets me accurate, high-quality results on more or less any given query, and it does so without festooning the results pages with features I find distracting and irrelevant.

I know that’s not a data-driven assessment, and it doesn’t fall back on charts or graphs or figures, but it’s how I feel after using the product every single day for most of 2025 so far. For me, Kagi’s search performance is firmly in the “good enough” category, and that’s what I need.

Kagi and AI

Unfortunately, the thing that’s stopping me from being completely effusive in my praise is that Kagi is exhibiting a disappointing amount of “keeping-up-with-the-Joneses” by rolling out a big ol’ pile of (optional, so far) AI-enabled search features.

A blog post from founder Vladimir Prelovac talks about the company’s use of AI, and it says all the right things, but at this point, I trust written statements from tech company founders about as far as I can throw their corporate office buildings. (And, dear reader, that ain’t very far.)


No thanks. But I would like to exclude AI images from my search results, please.

Credit: Lee Hutchinson


The short version is that, like Google, Kagi has some AI features: There’s an AI search results summarizer, an AI page summarizer, and an “ask questions about your results” chatbot-style function where you can interactively interrogate an LLM about your search topic and results. So far, all of these things can be disabled or ignored. I don’t know how good any of the features are because I have disabled or ignored them.

If the existence of AI in a product is a bright red line you won’t cross, you’ll have to turn back now and find another search engine alternative that doesn’t use AI and also doesn’t suck. When/if you do, let me know, because the pickings are slim.

Is Kagi for you?

Kagi might be for you—especially if you’ve recently typed a simple question into Google and gotten back a pile of fabricated gibberish in place of those 10 blue links that used to serve so well. Are you annoyed that Google’s search sucks vastly more now than it did 10 years ago? Are you unhappy with how difficult it is to get Google search to do what you want? Are you fed up? Are you pissed off?

If your answer to those questions is the same full-throated “Hell yes, I am!” that mine was, then perhaps it’s time to try an alternative. And Kagi’s a pretty decent one—if you’re not averse to paying for it.

It’s a fantastic feeling to type in a search query and once again get useful, relevant, non-AI results (that I can customize!). It’s a bit of sanity returning to my Internet experience, and I’m grateful. Until Kagi is bought by a value-destroying vampire VC fund or implodes into its own AI-driven enshittification cycle, I’ll probably keep paying for it.

After that, who knows? Maybe I’ll throw away my computers and live in a cave. At least until the cave’s robot exclusion protocol fails and the Googlebot comes for me.


Lee is the Senior Technology Editor, and oversees story development for the gadget, culture, IT, and video sections of Ars Technica. A long-time member of the Ars OpenForum with an extensive background in enterprise storage and security, he lives in Houston.



At $250 million, top AI salaries dwarf those of the Manhattan Project and the Space Race


A 24-year-old AI researcher will earn 327x what Oppenheimer made while developing the atomic bomb.

Silicon Valley’s AI talent war just reached a compensation milestone that makes even the most legendary scientific achievements of the past look financially modest. When Meta recently offered AI researcher Matt Deitke $250 million over four years (an average of $62.5 million per year)—with potentially $100 million in the first year alone—it shattered every historical precedent for scientific and technical compensation we can find on record. That includes salaries during the development of major scientific milestones of the 20th century.

The New York Times reported that Deitke had cofounded a startup called Vercept and previously led the development of Molmo, a multimodal AI system, at the Allen Institute for Artificial Intelligence. His expertise in systems that juggle images, sounds, and text—exactly the kind of technology Meta wants to build—made him a prime target for recruitment. But he’s not alone: Meta CEO Mark Zuckerberg reportedly also offered an unnamed AI engineer $1 billion in compensation to be paid out over several years. What’s going on?

These astronomical sums reflect what tech companies believe is at stake: a race to create artificial general intelligence (AGI) or superintelligence—machines capable of performing intellectual tasks at or beyond the human level. Meta, Google, OpenAI, and others are betting that whoever achieves this breakthrough first could dominate markets worth trillions. Whether this vision is realistic or merely Silicon Valley hype, it’s driving compensation to unprecedented levels.

To put these salaries in a historical perspective: J. Robert Oppenheimer, who led the Manhattan Project that ended World War II, earned approximately $10,000 per year in 1943. Adjusted for inflation using the US Government’s CPI Inflation Calculator, that’s about $190,865 in today’s dollars—roughly what a senior software engineer makes today. The 24-year-old Deitke, who recently dropped out of a PhD program, will earn approximately 327 times what Oppenheimer made while developing the atomic bomb.
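The 327x figure checks out with a quick bit of arithmetic on the numbers quoted above:

```python
# Sanity-checking the 327x claim using the figures cited in this article.
deitke_avg_per_year = 250_000_000 / 4       # $62.5M/year over four years
oppenheimer_2025_usd = 190_865              # $10,000 in 1943, CPI-adjusted
ratio = deitke_avg_per_year / oppenheimer_2025_usd   # ~327
```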

Many top athletes can’t compete with these numbers. The New York Times noted that Steph Curry’s most recent four-year contract with the Golden State Warriors was $35 million less than Deitke’s Meta deal (although soccer superstar Cristiano Ronaldo will make $275 million this year as the highest-paid professional athlete in the world). The comparison prompted observers to call this an “NBA-style” talent market—except the AI researchers are making more than NBA stars.

Racing toward “superintelligence”

Mark Zuckerberg recently told investors that Meta plans to continue throwing money at AI talent “because we have conviction that superintelligence is going to improve every aspect of what we do.” In a recent open letter, he described superintelligent AI as technology that would “begin an exciting new era of individual empowerment,” despite declining to define what superintelligence actually is.

This vision explains why companies treat AI researchers like irreplaceable assets rather than well-compensated professionals. If these companies are correct, the first to achieve artificial general intelligence or superintelligence won’t just have a better product—they’ll have technology that could invent endless new products or automate away millions of knowledge-worker jobs and transform the global economy. The company that controls that kind of technology could become the richest company in history by far.

So perhaps it’s not surprising that even the highest salaries of employees from the early tech era pale in comparison to today’s AI researcher salaries. Thomas Watson Sr., IBM’s legendary CEO, received $517,221 in 1941—the third-highest salary in America at the time (about $11.8 million in 2025 dollars). The modern AI researcher’s package represents more than five times Watson’s peak compensation, despite Watson building one of the 20th century’s most dominant technology companies.

The contrast becomes even more stark when considering the collaborative nature of past scientific achievements. During Bell Labs’ golden age of innovation—when researchers developed the transistor, information theory, and other foundational technologies—the lab’s director made about 12 times what the lowest-paid worker earned. Meanwhile, Claude Shannon, who created information theory at Bell Labs in 1948, worked on a standard professional salary while creating the mathematical foundation for all modern communication.

The “Traitorous Eight” who left William Shockley to found Fairchild Semiconductor—the company that essentially birthed Silicon Valley—split ownership of just 800 shares out of 1,325 total when they started. Their seed funding of $1.38 million (about $16.1 million today) for the entire company is a fraction of what a single AI researcher now commands.

Even Space Race salaries were far cheaper

The Apollo program offers another striking comparison. Neil Armstrong, the first human to walk on the moon, earned about $27,000 annually—roughly $244,639 in today’s money. His crewmates Buzz Aldrin and Michael Collins made even less, earning the equivalent of $168,737 and $155,373, respectively, in today’s dollars. Current NASA astronauts earn between $104,898 and $161,141 per year. Meta’s AI researcher will make more in three days than Armstrong made in a year for taking “one giant leap for mankind.”

The engineers who designed the rockets and mission control systems for the Apollo program also earned modest salaries by modern standards. A 1970 NASA technical report provides a window into these earnings by analyzing salary data for the entire engineering profession. The report, which used data from the Engineering Manpower Commission, noted that these industry-wide salary curves corresponded directly to the government’s General Schedule (GS) pay scale on which NASA’s own employees were paid.

According to a chart in the 1970 report, a newly graduated engineer in 1966 started with an annual salary of between $8,500 and $10,000 (about $84,622 to $99,555 today). A typical engineer with a decade of experience earned around $17,000 annually ($169,244 today). Even the most elite, top-performing engineers with 20 years of experience peaked at a salary of around $278,000 per year in today’s dollars—a sum that a top AI researcher like Deitke can now earn in just a few days.

Why the AI talent market is different


This isn’t the first time technical talent has commanded premium prices. In 2012, after three University of Toronto academics published AI research, they auctioned themselves to Google for $44 million (about $62.6 million in today’s dollars). By 2014, a Microsoft executive was comparing AI researcher salaries to NFL quarterback contracts. But today’s numbers dwarf even those precedents.

Several factors explain this unprecedented compensation explosion. We’re in a new realm of industrial wealth concentration unseen since the Gilded Age of the late 19th century. Unlike previous scientific endeavors, today’s AI race features multiple companies with trillion-dollar valuations competing for an extremely limited talent pool. Only a small number of researchers have the specific expertise needed to work on the most capable AI systems, particularly in areas like multimodal AI, which Deitke specializes in. And AI hype is currently off the charts as “the next big thing” in technology.

The economics also differ fundamentally from past projects. The Manhattan Project cost $1.9 billion total (about $34.4 billion adjusted for inflation), while Meta alone plans to spend tens of billions annually on AI infrastructure. For a company approaching a $2 trillion market cap, the potential payoff from achieving AGI first dwarfs Deitke’s compensation package.

One executive put it bluntly to The New York Times: “If I’m Zuck and I’m spending $80 billion in one year on capital expenditures alone, is it worth kicking in another $5 billion or more to acquire a truly world-class team to bring the company to the next level? The answer is obviously yes.”

Young researchers maintain private chat groups on Slack and Discord to share offer details and negotiation strategies. Some hire unofficial agents. Companies not only offer massive cash and stock packages but also computing resources—the NYT reported that some potential hires were told they would be allotted 30,000 GPUs, the specialized chips that power AI development.

Also, tech companies believe they’re engaged in an arms race where the winner could reshape civilization. Unlike the Manhattan Project or Apollo program, which had specific, limited goals, the race for artificial general intelligence ostensibly has no ceiling. A machine that can match human intelligence could theoretically improve itself, creating what researchers call an “intelligence explosion” that could potentially offer cascading discoveries—if it actually comes to pass.

Whether these companies are building humanity’s ultimate labor replacement technology or merely chasing hype remains an open question, but we’ve certainly traveled a long way from the $8 per diem that Neil Armstrong received for his moon mission—about $70.51 in today’s dollars—before deductions for the “accommodations” NASA provided on the spacecraft. After Deitke accepted Meta’s offer, Vercept co-founder Kiana Ehsani joked on social media, “We look forward to joining Matt on his private island next year.”


Benj Edwards is Ars Technica’s Senior AI Reporter and founder of the site’s dedicated AI beat in 2022. He’s also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.

At $250 million, top AI salaries dwarf those of the Manhattan Project and the Space Race Read More »


RIP Corporation for Public Broadcasting: 1967–2026

Despite the protests of millions of Americans, the Corporation for Public Broadcasting (CPB) announced it will be winding down its operations after the White House deemed NPR and PBS a “grift” and pushed for a Senate vote that eliminated its entire budget.

The vote rescinded $1.1 billion that Congress had allocated to CPB to fund public broadcasting for fiscal years 2026 and 2027. In a press release, CPB explained that the cuts “excluded funding for CPB for the first time in more than five decades.” CPB president and CEO Patricia Harrison said the corporation had no choice but to prepare to shut down.

“Despite the extraordinary efforts of millions of Americans who called, wrote, and petitioned Congress to preserve federal funding for CPB, we now face the difficult reality of closing our operations,” Harrison said.

Concerned Americans also rushed to donate to NPR and PBS stations to offset the funding cuts, The New York Times reported. But those donations, estimated at around $20 million, ultimately amounted to too little, too late to cover what CPB lost.

As CPB takes steps to close, it expects that “the majority of staff positions will conclude with the close of the fiscal year on September 30, 2025.” After that, a “small transition team” will “ensure a responsible and orderly closeout of operations” by January 2026. That team “will focus on compliance, final distributions, and resolution of long-term financial obligations, including ensuring continuity for music rights and royalties that remain essential to the public media system.”

“CPB remains committed to fulfilling its fiduciary responsibilities and supporting our partners through this transition with transparency and care,” Harrison said.

NPR mourns loss of CPB

In a statement, NPR’s president and CEO, Katherine Maher, mourned the loss of CPB, warning that it was a “vital source of funding for local stations, a champion of educational and cultural programming, and a bulwark for independent journalism.”


Under RFK Jr, CDC skips study on vaccination rates, quietly posts data on drop

Vaccination rates among the country’s kindergartners have fallen once again, with coverage of the measles, mumps, and rubella (MMR) vaccination dropping from 92.7 percent in the 2023–2024 school year to 92.5 percent in 2024–2025. The percentage changes are small across the board, but they represent thousands of children and an ongoing downward trend that makes the country more vulnerable to outbreaks.

In the latest school year, an estimated 286,000 young children were not fully protected against measles. At the same time, the country has seen numerous explosive measles outbreaks, with case counts in 2025 already higher than any other year since the highly infectious disease was declared eliminated in 2000. In fact, the case count is at a 33-year high.

The latest small decline is one in a series that is eroding the nation’s ability to keep bygone infectious diseases at bay. In the 2019–2020 school year, 95 percent of kindergartners were protected against measles and other serious childhood diseases, such as polio. That 95 percent coverage is the target that health experts say prevents an infectious disease from spreading in a community. But amid the pandemic, vaccination rates fell, dropping to 93.9 percent MMR coverage in the 2020–2021 year, and have kept creeping downward.

Anti-vaccine era

At the height of the pandemic, some slippage in immunization coverage could be blamed on disrupted access. But anti-vaccine sentiments and misinformation are clearly playing a large role as vaccination continues to decline and access has largely resumed. For the 2024–2025 school year, nonmedical exemptions for childhood vaccinations once again hit a new high. These are exemptions driven by ideology and have risen with the influence of anti-vaccine voices, including current health secretary and fervent anti-vaccine advocate Robert F. Kennedy Jr.


Ukraine rescues soldier via drone delivery of complete e-bike

Details from a frontline war zone are almost impossible to verify, but the brigade has shared plenty of footage, including shots of the drone lifting the bike and a soldier riding it back to safety along a treeline. (Both sides are now making widespread use of e-bikes and motorcycles for quick infantry assaults after three years of drone warfare have wiped out many of the traditional armored vehicles.)

The drone command center that ran the operation.

In their telling, a soldier with the callsign “Tankist” was holding a frontline position that came under attack, and a number of his comrades were killed. Tankist found himself cut off from safety and had to hold the position alone for several days.

To retrieve him, brigade staff devised a plan to deliver an e-bike via heavy bomber drone. The first drone was shot down, while the second failed under the weight. But the third attempt was successful, and Tankist was finally able to zip back toward Ukrainian lines. (He apparently hit a landmine on the way and survived that, too, finishing the trip on a second delivered e-bike.)

Amazon, of course, has had “drone delivery” in view for years and is currently testing delivery drones at locations around the US, including Pontiac, Michigan; Phoenix, Arizona; and Waco, Texas.

But these drones will only deliver packages weighing under 5 lbs—an e-bike weighs considerably more.


The curious case of Russia’s charm offensive with NASA this week

Although NASA and its counterpart in Russia, Roscosmos, continue to work together on a daily basis, the leaders of the two organizations have not held face-to-face meetings since the middle of the first Trump administration, back in October 2018.

A lot has changed in the nearly eight years since then, including the Russian invasion of Ukraine, the rocky 2022 departure of Roscosmos leader Dmitry Rogozin, who was subsequently dispatched to the front lines of the war, several changes in NASA leadership, and more.

This drought in high-level meetings was finally broken this week when the relatively new head of Roscosmos, Director General Dmitry Bakanov, visited the United States to view the Florida launch of the Crew-11 mission, whose crew included cosmonaut Oleg Platonov. Bakanov has also met with some of NASA’s human spaceflight leaders at Johnson Space Center in Houston.

Notably, NASA has provided almost no coverage of the visit. However, the state-operated Russian news service, TASS, has published multiple updates. For example, on Thursday at Kennedy Space Center, TASS reported that Bakanov and Acting NASA Administrator Sean Duffy discussed the future of the International Space Station.

Future of ISS partnership

“The conversation went quite well,” Bakanov is quoted as saying. “We agreed to continue using the ISS until 2028. It’s important that the new NASA chief confirmed this. We will work on the deorbiting process until 2030.”

A separate TASS report also quoted Duffy as saying NASA and Roscosmos should continue to work together despite high geopolitical tensions on Earth.

“What’s unique is we might find disagreement with conflict here, which we have,” Duffy said. “We have wild disagreement with the Russians on Ukraine, but what you see is we find points of agreement and points of partnership, which is what we have with the International Space Station and Russians, and so through hard times, we don’t throw those relationships away. We’re going to continue to work on the problems that we have here, but we’re going to continue to build alliances and partnerships and friendships as humanity continues to advance in space exploration.”


Delta denies using AI to come up with inflated, personalized prices

Delta scandal highlights value of transparency

According to Delta, the company has “zero tolerance for discriminatory or predatory pricing” and only feeds its AI system aggregated data “to enhance our existing fare pricing processes.”

Rather than basing fare prices on customers’ personal information, Carter clarified that “all customers have access to the same fares and offers based on objective criteria provided by the customer such as origin and destination, advance purchase, length of stay, refundability, and travel experience selected.”

The AI use can result in higher or lower prices, but not personalized fares for different customers, Carter said. Instead, Delta plans to use AI pricing to “enhance market competitiveness and drive sales, benefiting both our customers and our business.”

Factors weighed by the AI system, Carter explained, include “customer demand for seats and purchasing data at an aggregated level, competitive offers and schedules, route performance, and cost of providing the service inclusive of jet fuel.” That could potentially mean a rival’s promotion or schedule change could trigger the AI system to lower prices to stay competitive, or it might increase prices based on rising fuel costs to help increase revenue or meet business goals.

“Given the tens of millions of fares and hundreds of thousands of routes for sale at any given time, the use of new technology like AI promises to streamline the process by which we analyze existing data and the speed and scale at which we can respond to changing market dynamics,” Carter wrote.

He explained the AI system helps Delta aggregate purchasing data for specific routes and flights, adapt to new market conditions, and factor in “thousands of variables simultaneously.” AI could also eventually be used to assist with crew scheduling, improve flight availability, or help reservation specialists answer complex questions or resolve disputes.

But “to reiterate, prices are not targeted to individual consumers,” Carter emphasized.

Delta further pointed out that the company does not require customers to log in to search for tickets, which means customers can search for flights without sharing any personal information.

For AI companies paying attention to the Delta backlash, there may be a lesson in the value of transparency. Critics noted that Delta was among the first to admit it was using AI to influence pricing, but the vague explanation on the earnings call stoked confusion over exactly how, and Delta seemed to drag its feet amid calls from groups like Consumer Watchdog for more transparency.


Amazon is considering shoving ads into Alexa+ conversations

Since 2023, Amazon has been framing Alexa+ as a monumental evolution of its voice assistant that will make it more conversational, capable, and, for Amazon, lucrative. Amazon said in a press release on Thursday that it has given “millions” of people early access to the generative AI voice assistant. The product isn’t publicly available yet, and some advertised features are still unavailable, but Amazon’s CEO is already considering loading the chatbot up with ads.

During an investors call yesterday, as reported by TechCrunch, Andy Jassy noted that Alexa+ started rolling out as early access to some customers in the US and that a broader rollout, including internationally, should happen later this year. An analyst on the call asked Amazon executives about Alexa+’s potential for “increasing engagement” long term.

Per a transcript of the call, Jassy responded by saying, in part:

I think over time, there will be opportunities, you know, as people are engaging in more multi-turn conversations to have advertising play a role to help people find discovery and also as a lever to drive revenue.

Like other voice assistants, Alexa has yet to monetize users. Amazon is hoping to finally make money off the service through Alexa+, which is eventually slated to play a bigger role in e-commerce, including by booking restaurant reservations, keeping track of and ordering groceries, and recommending streaming content based on stated interests. But with Alexa reportedly costing Amazon $25 billion across four years, Amazon is eyeing additional routes to profitability.

Echo Show devices already show ads, and Echo speaker users may hear ads when listening to music. Advertisers have shown interest in advertising with Alexa+, but the inclusion of ads in a new offering like Alexa+ could drive people away.


Google releases Gemini 2.5 Deep Think for AI Ultra subscribers

Google is unleashing its most powerful Gemini model today, but you probably won’t be able to try it. After revealing Gemini 2.5 Deep Think at the I/O conference back in May, Google is making this AI available in the Gemini app. Deep Think is designed for the most complex queries, which means it uses more compute resources than other models. So it should come as no surprise that only those subscribing to Google’s $250 AI Ultra plan will be able to access it.

Deep Think is based on the same foundation as Gemini 2.5 Pro, but it increases the “thinking time” with greater parallel analysis. According to Google, Deep Think explores multiple approaches to a problem, even revisiting and remixing the various hypotheses it generates. This process helps it create a higher-quality output.

Deep Think benchmarks. Credit: Google

Like some other heavyweight Gemini tools, Deep Think takes several minutes to come up with an answer. This apparently makes the AI more adept at design aesthetics, scientific reasoning, and coding. Google has exposed Deep Think to the usual battery of benchmarks, showing that it surpasses the standard Gemini 2.5 Pro and competing models like OpenAI o3 and Grok 4. Deep Think shows a particularly large gain in Humanity’s Last Exam, a collection of 2,500 complex, multi-modal questions that cover more than 100 subjects. Other models top out at 20 or 25 percent, but Gemini 2.5 Deep Think managed a score of 34.8 percent.


Backpage survivors will receive $200M to cover medical, health bills

Survivors, or their representatives, must submit claims by February 2, 2026. To receive compensation, claims must include at least one document showing they “suffered monetary and/or behavioral health losses,” the claims form specified.

Documents can include emails, texts, screenshots, or advertisements. Claims may be further strengthened by sharing receipts from doctors’ visits, as well as medical or psychological exam results, summaries, or plans.

Medical expenses that survivors can claim include any costs paid out of pocket, such as dental expenses, tattoo removals, or even future medical costs referenced in doctors’ referrals, an FAQ noted. Similarly, counseling or therapy costs can be covered, as well as treatment for substance use, alternative behavioral treatments, and future behavioral health plans recommended by a professional.

The FAQ also clarified that lost wages can be claimed, including any documentation of working overtime. Survivors only need to show approximate dates and times of abuse, since the DOJ said that it “appreciates that you may not remember exact number of hours you were trafficked during the relevant timeframe.” However, no future economic losses can be claimed, the FAQ said, and survivors will not be compensated for pain and suffering, despite the DOJ understanding that “your experience was painful and traumatic.”

Consulting the DOJ’s FAQ can help survivors assess the remission process. It noted that any “information regarding aliases, email addresses used, phone numbers, and trafficker names” can “be used to verify your eligibility.” Survivors are also asked to share any prior compensation already received from Backpage or through other lawsuits. To get answers to any additional questions, they can call the administrator in charge of dispensing claims, Epiq Global, at 1-888-859-9206 toll-free or at 1-971-316-5053 for international calls, the DOJ noted.

If you are in immediate danger or need resources because of a trafficking situation, please call 911 or the National Human Trafficking Hotline, toll-free at 1-888-373-7888.
