Author name: Mike M.

Ted Cruz bill: States that regulate AI will be cut out of $42B broadband fund

BEAD changes: No fiber preference, no low-cost mandate

The BEAD program is separately undergoing an overhaul because Republicans don’t like how it was administered by Democrats. The Biden administration spent about three years developing rules and procedures for BEAD and then evaluating plans submitted by each US state and territory, but the Trump administration has delayed grants while it rewrites the rules.

While Biden’s Commerce Department decided to prioritize the building of fiber networks, Republicans have pushed for a “tech-neutral approach” that would benefit cable companies, fixed wireless providers, and Elon Musk’s Starlink satellite service.

Secretary of Commerce Howard Lutnick previewed changes in March, and today he announced more details of the overhaul that will eliminate the fiber preference and various requirements imposed on states. One notable but unsurprising change is that the Trump administration won’t let states require grant recipients to offer low-cost Internet plans at specific rates to people with low incomes.

The National Telecommunications and Information Administration (NTIA) “will refuse to accept any low-cost service option proposed in a [state or territory’s] Final Proposal that attempts to impose a specific rate level (i.e., dollar amount),” the Trump administration said. Instead, ISPs receiving subsidies will be able to continue offering “their existing, market driven low-cost plans to meet the statutory low-cost requirement.”

The Benton Institute for Broadband & Society criticized the overhaul, saying that the Trump administration is investing in the cheapest broadband infrastructure instead of the best. “Fiber-based broadband networks will last longer, provide better, more reliable service, and scale to meet communities’ ever-growing connectivity needs,” the advocacy group said. “NTIA’s new guidance is shortsighted and will undermine economic development in rural America for decades to come.”

The Trump administration’s overhaul drew praise from cable lobby group NCTA-The Internet & Television Association, whose members will find it easier to obtain subsidies. “We welcome changes to the BEAD program that will make the program more efficient and eliminate onerous requirements, which add unnecessary costs that impede broadband deployment efforts,” NCTA said. “These updates are welcome improvements that will make it easier for providers to build faster, especially in hard-to-reach communities, without being bogged down by red tape.”

Millions of low-cost Android devices turn home networks into crime platforms

Millions of low-cost devices for media streaming, in-vehicle entertainment, and video projection are infected with malware that turns consumer networks into platforms for distributing malware, concealing nefarious communications, and performing other illicit activities, the FBI has warned.

The malware infecting these devices, known as BadBox, is based on Triada, a malware strain discovered in 2016 by Kaspersky Lab, which called it “one of the most advanced mobile Trojans” the security firm’s analysts had ever encountered. It employed an impressive kit of tools, including rooting exploits that bypassed security protections built into Android and functions for modifying the Android OS’s all-powerful Zygote process. Google eventually updated Android to block the methods Triada used to infect devices.

The threat remains

A year later, Triada returned, only this time, devices came pre-infected before they reached consumers’ hands. In 2019, Google confirmed that the supply-chain attack affected thousands of devices and that the company had once again taken measures to thwart it.

In 2023, security firm Human Security reported on BadBox, a Triada-derived backdoor it found preinstalled on thousands of devices manufactured in China. The malware, which Human Security estimated was installed on 74,000 devices around the world, facilitated a range of illicit activities, including advertising fraud, residential proxy services, the creation of fake Gmail and WhatsApp accounts, and infecting other Internet-connected devices.

OpenAI is retaining all ChatGPT logs “indefinitely.” Here’s who’s affected.

In the copyright fight, Magistrate Judge Ona Wang granted the order within one day of the NYT’s request. She agreed with the news plaintiffs that ChatGPT users, spooked by the lawsuit, might be setting their chats to delete when using the chatbot to skirt NYT paywalls. Because OpenAI wasn’t sharing deleted chat logs, the news plaintiffs had no way of proving that, she suggested.

Now, OpenAI is not only asking Wang to reconsider but has “also appealed this order with the District Court Judge,” the Thursday statement said.

“We strongly believe this is an overreach by the New York Times,” Lightcap said. “We’re continuing to appeal this order so we can keep putting your trust and privacy first.”

Who can access deleted chats?

To protect users, OpenAI provides an FAQ that clearly explains why their data is being retained and how it could be exposed.

For example, the statement noted that the order doesn’t impact OpenAI API business customers under Zero Data Retention agreements because their data is never stored.

And for users whose data is affected, OpenAI noted that their deleted chats could be accessed, but they won’t “automatically” be shared with The New York Times. Instead, the retained data will be “stored separately in a secure system” and “protected under legal hold, meaning it can’t be accessed or used for purposes other than meeting legal obligations,” OpenAI explained.

Of course, with the court battle ongoing, the FAQ did not have all the answers.

Nobody knows how long OpenAI may be required to retain the deleted chats. Likely seeking to reassure users—some of whom appeared to be considering switching to a rival service until the order lifts—OpenAI noted that “only a small, audited OpenAI legal and security team would be able to access this data as necessary to comply with our legal obligations.”

Google’s nightmare: How a search spinoff could remake the web


Google has shaped the Internet as we know it, and unleashing its index could change everything.

Google may be forced to license its search technology when the final antitrust ruling comes down. Credit: Aurich Lawson

Google wasn’t around for the advent of the World Wide Web, but it successfully remade the web on its own terms. Today, any website that wants to be findable has to play by Google’s rules, and after years of search dominance, the company has lost a major antitrust case that could reshape both it and the web.

The closing arguments in the case just wrapped up last week, and Google could be facing serious consequences when the ruling comes down in August. Losing Chrome would certainly change things for Google, but the Department of Justice is pursuing other remedies that could have even more lasting impacts. During his testimony, Google CEO Sundar Pichai seemed genuinely alarmed at the prospect of being forced to license Google’s search index and algorithm, the so-called data remedies in the case. He claimed this would be no better than a spinoff of Google Search. The company’s statements have sometimes derisively referred to this process as “white labeling” Google Search.

But does a white label Google Search sound so bad? Google has built an unrivaled index of the web, but the way it shows results has become increasingly frustrating. A handful of smaller players in search have tried to offer alternatives to Google’s search tools. They all have different approaches to retrieving information for you, but they agree that spinning off Google Search could change the web again. Whether or not those changes are positive depends on who you ask.

The Internet is big and noisy

As Google’s search results have changed over the years, more people have been open to other options. Some have simply moved to AI chatbots to answer their questions, hallucinations be damned. But for most people, it’s still about the 10 blue links (for now).

Because of the scale of the Internet, there are only three general web search indexes: Google, Bing, and Brave. Every search product (including AI tools) relies on one or more of these indexes to probe the web. But what does that mean?

“Generally, a search index is a service that, when given a query, is able to find relevant documents published on the Internet,” said Brave’s search head Josep Pujol.

A search index is essentially a big database, and that’s not the same as search results. According to JP Schmetz, Brave’s chief of ads, it’s entirely possible to have the best and most complete search index in the world and still show poor results for a given query. Sound like anyone you know?
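The distinction between an index and results can be made concrete with a toy example. The sketch below (purely illustrative; the document URLs and whitespace tokenizer are invented) builds a tiny inverted index and answers conjunctive queries. Note that this is pure retrieval, with none of the ranking that separates a good search engine from a bad one:

```python
from collections import defaultdict

# A toy corpus: maps a URL to its (pre-cleaned) text.
docs = {
    "ars.example/1": "google search index antitrust ruling",
    "ars.example/2": "brave search index browser",
    "ars.example/3": "fiber broadband grants",
}

# Inverted index: each word maps to the set of documents containing it.
index = defaultdict(set)
for url, text in docs.items():
    for word in text.split():
        index[word].add(url)

def search(query):
    """Return documents containing every query term (no ranking)."""
    terms = query.split()
    results = set(index.get(terms[0], set()))
    for term in terms[1:]:
        results &= index.get(term, set())
    return sorted(results)
```

A real index layers ranking signals (link graphs, freshness, click data) on top of this lookup structure, which is where the quality gap Schmetz describes comes from.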

Google’s technological lead has allowed it to crawl more websites than anyone else. It has all the important parts of the web, plus niche sites, abandoned blogs, sketchy copies of legitimate websites, copies of those copies, and AI-rephrased copies of the copied copies—basically everything. And the result of this Herculean digital inventory is a search experience that feels increasingly discombobulated.

“Google is running large-scale experiments in ways that no rival can because we’re effectively blinded,” said Kamyl Bazbaz, head of public affairs at DuckDuckGo, which uses the Bing index. “Google’s scale advantage fuels a powerful feedback loop of different network effects that ensure a perpetual scale and quality deficit for rivals that locks in Google’s advantage.”

The size of the index may not be the only factor that matters, though. Brave, which is perhaps best known for its browser, also has a search engine. Brave Search is the default in its browser, but you can also just go to the URL in your current browser. Unlike most other search engines, Brave doesn’t need to go to anyone else for results. Pujol suggested that Brave doesn’t need the scale of Google’s index to find what you need. And admittedly, Brave’s search results don’t feel meaningfully worse than Google’s—they may even be better when you consider the way that Google tries to keep you from clicking.

Brave’s index spans around 25 billion pages, but it leaves plenty of the web uncrawled. “We could be indexing five to 10 times more pages, but we choose not to because not all the web has signal. Most web pages are basically noise,” said Pujol.

The freemium search engine Kagi isn’t worried about having the most comprehensive index. Kagi is a meta search engine. It pulls in data from multiple indexes, like Bing and Brave, but it has a custom index of what founder and CEO Vladimir Prelovac calls the “non-commercial web.”

When you search with Kagi, some of the results (it tells you the proportion) come from its custom index of personal blogs, hobbyist sites, and other content that is poorly represented on other search engines. It’s reminiscent of the days when huge brands weren’t always clustered at the top of Google—but even these results are being pushed out of reach in favor of AI, ads, Knowledge Graph content, and other Google widgets. That’s a big part of why Kagi exists, according to Prelovac.
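The meta-search idea can be sketched as a simple merge: interleave ranked lists from several upstream sources, deduplicate, and track how much each source contributed (the proportion Kagi reports). This is a hypothetical illustration of the concept, not Kagi’s actual blending logic, and the source names and URLs are invented:

```python
def merge_results(sources):
    """sources: {name: ordered list of URLs}. Interleave, dedupe, tally."""
    seen, merged = set(), []
    counts = {name: 0 for name in sources}
    iters = {name: iter(urls) for name, urls in sources.items()}
    while iters:
        # Take one result from each still-live source per round.
        for name in list(iters):
            try:
                url = next(iters[name])
            except StopIteration:
                del iters[name]  # this source is exhausted
                continue
            if url not in seen:
                seen.add(url)
                merged.append(url)
                counts[name] += 1
    return merged, counts

merged, counts = merge_results({
    "commercial_index": ["a.com", "b.com", "c.com"],
    "small_web_index": ["x.blog", "b.com"],
})
```

Round-robin interleaving is the simplest policy; a production meta-search engine would instead re-rank the pooled results with its own scoring.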

A Google spinoff could change everything

We’ve all noticed the changes in Google’s approach to search, and most would agree that they have made finding reliable and accurate information harder. Regardless, Google’s incredibly deep and broad index of the Internet is in demand.

Even with Bing and Brave available, companies are going to extremes to syndicate Google Search results. A cottage industry has emerged to scrape Google searches as a stand-in for an official index. These companies are violating Google’s terms, yet they appear in Google Search results themselves. Google could surely do something about this if it wanted to.

The DOJ calls Google’s mountain of data the “essential raw material” for building a general search engine, and it believes forcing the firm to license that material is key to breaking its monopoly. The sketchy syndication firms will evaporate if the DOJ’s data remedies are implemented, which would give competitors an official way to utilize Google’s index. And utilize it they will.

Google CEO Sundar Pichai decried the court’s efforts to force a “de facto divestiture” of Google’s search tech. Credit: Ryan Whitwam

According to Prelovac, this could lead to an explosion in search choices. “The whole purpose of the Sherman Act is to proliferate a healthy, competitive marketplace. Once you have access to a search index, then you can have thousands of search startups,” said Prelovac.

The Kagi founder suggested that licensing Google Search could allow entities of all sizes to have genuinely useful custom search tools. Cities could use the data to create deep, hyper-local search, and people who love cats could make a cat-specific search engine, in both cases pulling what they want from the most complete database of online content. And, of course, general search products like Kagi would be able to license Google’s tech for a “nominal fee,” as the DOJ puts it.

Prelovac didn’t hesitate when asked if Kagi, which offers a limited number of free searches before asking users to subscribe, would integrate Google’s index. “Yes, that is something we would do,” he said. “And that’s what I believe should happen.”

There may be some drawbacks to unleashing Google’s search services. Judge Amit Mehta has expressed concern that blocking Google’s search placement deals could reduce browser choice, and there is a similar issue with the data remedies. If Google is forced to license search as an API, its few competitors in web indexing could struggle to remain afloat. In a roundabout way, giving away Google’s search tech could actually increase its influence.

The Brave team worries about how open access to Google’s search technology could impact diversity on the web. “If implemented naively, it’s a big problem,” said Brave’s ad chief JP Schmetz. “If the court forces Google to provide search at a marginal cost, it will not be possible for Bing or Brave to survive until the remedy ends.”

The landscape of AI-based search could also change. We know from testimony given during the remedy trial by OpenAI’s Nick Turley that the ChatGPT maker tried and failed to get access to Google Search to ground its AI models—it currently uses Bing. If Google were suddenly an option, you can be sure OpenAI and others would rush to connect Google’s web data to their large language models (LLMs).

The attempt to reduce Google’s power could actually grant it new monopolies in AI, according to Brave Chief Business Officer Brian Brown. “All of a sudden, you would have a single monolithic voice of truth across all the LLMs, across all the web,” Brown said.

What if you weren’t the product?

If white labeling Google does expand choice, even at the expense of other indexes, it will give more kinds of search products a chance in the market—maybe even some that shun Google’s focus on advertising. You don’t see much of that right now.

For most people, web search is and always has been a free service supported by ads. Google, Brave, DuckDuckGo, and Bing offer all the search queries you want for free because they want eyeballs. It’s been said often, but it’s true: If you’re not paying for it, you’re the product. This is an arrangement that bothers Kagi’s founder.

“For something as important as information consumption, there should not be an intermediary between me and the information, especially one that is trying to sell me something,” said Prelovac.

Kagi search results acknowledge the negative impact of today’s advertising regime. Kagi users see a warning next to results with a high number of ads and trackers. According to Prelovac, that is by far the strongest indication that a result is of low quality. That icon also lets you adjust the prevalence of such sites in your personal results. You can demote a site or completely hide it, which is a valuable option in the age of clickbait.

Kagi search gives you a lot of control. Credit: Ryan Whitwam

Kagi’s paid approach to search changes its relationship with your data. “We literally don’t need user data,” Prelovac said. “But it’s not only that we don’t need it. It’s a liability.”

Prelovac admitted that getting people to pay for search is “really hard.” Nevertheless, he believes ad-supported search is a dead end. So Kagi is planning for a future in five or 10 years when more people have realized they’re still “paying” for ad-based search with lost productivity time and personal data, he said.

We know how Google handles user data (it collects a lot of it), but what does that mean for smaller search engines like Brave and DuckDuckGo that rely on ads?

“I’m sure they mean well,” said Prelovac.

Brave said that it shields user data from advertisers, relying on first-party tracking to attribute clicks to Brave without touching the user. “They cannot retarget people later; none of that is happening,” said Brave’s JP Schmetz.

DuckDuckGo is a bit of an odd duck—it relies on Bing’s general search index, but it adds a layer of privacy tools on top. It’s free and ad-supported like Google and Brave, but the company says it takes user privacy seriously.

“Viewing ads is privacy protected by DuckDuckGo, and most ad clicks are managed by Microsoft’s ad network,” DuckDuckGo’s Kamyl Bazbaz said. He explained that DuckDuckGo has worked with Microsoft to ensure its network does not track users or create any profiles based on clicks. He added that the company has a similar privacy arrangement with TripAdvisor for travel-related ads.

It’s AI all the way down

We can’t talk about the future of search without acknowledging the artificially intelligent elephant in the room. As Google continues its shift to AI-based search, it’s tempting to think of the potential search spin-off as a way to escape that trend. However, you may find few refuges in the coming years. There’s a real possibility that search is evolving beyond the 10 blue links and toward an AI assistant model.

All non-Google search engines have AI integrations, with the most prominent being Microsoft Bing, which has a partnership with OpenAI. But smaller players have AI search features, too. The folks working on these products agree with Microsoft and Google on one important point: They see AI as inevitable.

Today’s Google alternatives all have their own take on AI Overviews, which generates responses to queries based on search results. They’re generally not as in-your-face as Google AI, though. While Google and Microsoft are intensely focused on increasing the usage of AI search, other search operators aren’t pushing for that future. They are along for the ride, though.

AI Overviews are integrated with Google’s search results, and most other players have their own version. Credit: Google

“We’re finding that some people prefer to start in chat mode and then jump into more traditional search results when needed, while others prefer the opposite,” Bazbaz said. “So we thought the best thing to do was offer both. We made it easy to move between them, and we included an off switch for those who’d like to avoid AI altogether.”

The team at Brave views AI as a core means of accessing search and one that will continue to grow. Brave generates AI answers for many searches and prominently cites sources. You can also disable Brave’s AI if you prefer. But according to search chief Josep Pujol, the move to AI search is inevitable for a pretty simple reason: It’s convenient, and people will always choose convenience. So AI is changing the web as we know it, for better or worse, because LLMs can save a smidge of time, especially for more detailed “long-tail” queries. These AI features may give you false information while they do it, but that’s not always apparent.

This is very similar to the language Google uses when discussing agentic search, although it expresses it in a more nuanced way. By understanding the task behind a query, Google hopes to provide AI answers that save people time, even if the model needs a few ticks to fan out and run multiple searches to generate a more comprehensive report on a topic. That’s probably still faster than running multiple searches and manually reviewing the results, and it could leave traditional search as an increasingly niche service, even in a world with more choices.
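The fan-out behavior described above can be sketched in a few lines: split a broad question into sub-queries, run each against a search backend, and pool the evidence for a model to synthesize. The backend here is a stub standing in for a real search API, and the function names are invented for illustration:

```python
def stub_search(query):
    # Stand-in for a real index lookup; returns fake result snippets.
    return [f"result for '{query}' #{i}" for i in range(2)]

def fan_out(question, sub_queries):
    """Run each sub-query and pool the results for later synthesis."""
    pooled = []
    for q in sub_queries:
        pooled.extend(stub_search(q))
    # An LLM would now summarize `pooled` into one answer for `question`;
    # here we just return the evidence list.
    return pooled

evidence = fan_out(
    "best fiber broadband options",
    ["fiber ISP availability", "fiber vs cable speed"],
)
```

The time savings Google touts come from running these sub-queries concurrently; a real implementation would issue them in parallel rather than in a loop.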

“Will the 10 blue links continue to exist in 10 years?” Pujol asked. “Actually, one question would be, does it even exist now? In 10 years, [search] will have evolved into more of an AI conversation behavior or even agentic. That is probably the case. What, for sure, will continue to exist is the need to search. Search is a verb, an action that you do, and whether you will do it directly or whether it will be done through an agent, it’s a search engine.”

Vlad from Kagi sees AI becoming the default way we access information in the long term, but his search engine doesn’t force you to use it. On Kagi, you can expand the AI box for your searches and ask follow-ups, and the AI will open automatically if you use a question mark in your search. But that’s just the start.

“You watch Star Trek, nobody’s clicking on links there—I do believe in that vision in science fiction movies,” Prelovac said. “I don’t think my daughter will be clicking links in 10 years. The only question is if the current technology will be the one that gets us there. LLMs have inherent flaws. I would even tend to say it’s likely not going to get us to Star Trek.”

If we think of AI mainly as a way to search for information, the future becomes murky. With generative AI in the driver’s seat, questions of authority and accuracy may be left to language models that often behave in unpredictable and difficult-to-understand ways. Whether we’re headed for an AI boom or bust—for continued Google dominance or a new era of choice—we’re facing fundamental changes to how we access information.

Maybe if we get those thousands of search startups, there will be a few that specialize in 10 blue links. We can only hope.

Ryan Whitwam is a senior technology reporter at Ars Technica, covering the ways Google, AI, and mobile technology continue to change the world. Over his 20-year career, he’s written for Android Police, ExtremeTech, Wirecutter, NY Times, and more. He has reviewed more phones than most people will ever own. You can follow him on Bluesky, where you will see photos of his dozens of mechanical keyboards.

Endangered classic Mac plastic color returns as 3D-printer filament

On Tuesday, classic computer collector Joe Strosnider announced the availability of a new 3D-printer filament that replicates the iconic “Platinum” color scheme used in classic Macintosh computers from the late 1980s through the 1990s. The PLA filament (PLA is short for polylactic acid) allows hobbyists to 3D-print nostalgic novelties, replacement parts, and accessories that match the original color of vintage Apple computers.

Hobbyists commonly feed this type of filament into commercial desktop 3D printers, which heat the plastic and extrude it in a computer-controlled way to fabricate new plastic parts.

The Platinum color, which Apple used in its desktop and portable computer lines starting with the Apple IIgs in 1986, has become synonymous with a distinctive era of classic Macintosh aesthetic. Over time, original Macintosh plastics have become brittle and discolored with age, so matching the “original” color can be a somewhat challenging and subjective experience.

A close-up of “Retro Platinum” PLA filament by Polar Filament. Credit: Polar Filament

Strosnider, who runs a website about his extensive vintage computer collection in Ohio, worked for years to color-match the distinctive beige-gray hue of the Macintosh Platinum scheme. The result is a spool of hobby-ready plastic produced by Polar Filament and priced at $21.99 per kilogram.

According to a forum post, Strosnider paid approximately $900 to develop the color and purchase an initial 25-kilogram supply of the filament. Rather than keeping the formulation proprietary, he arranged for Polar Filament to make the color publicly available.

“I paid them a fee to color match the speaker box from inside my Mac Color Classic,” Strosnider wrote in a Tinkerdifferent forum post on Tuesday. “In exchange, I asked them to release the color to the public so anyone can use it.”

Science PhDs face a challenging and uncertain future


Smaller post-grad classes are likely due to research budget cuts.

Credit: Thomas Barwick/Stone via Getty Images

Since the National Science Foundation first started collecting postgraduation data nearly 70 years ago, the number of PhDs awarded in the United States has consistently risen. Last year, more than 45,000 students earned doctorates in science and engineering, about an eight-fold increase compared to 1958.

But this level of PhD production in science and engineering is now in question. Facing significant cuts to federal science funding, some universities have reduced or paused their PhD admissions for the upcoming academic year. In response, experts are beginning to wonder about the short- and long-term effects those shifts will have on the number of doctorates awarded and the consequent impact on science if PhD production does drop.

Such questions touch on longstanding debates about academic labor. PhD training is a crucial part of nurturing scientific expertise. At the same time, some analysts have worried about an oversupply of PhDs in some fields, while students have suggested that universities are exploiting them as low-cost labor.

Many budding scientists go into graduate school with the goal of staying in academia and ultimately establishing their own labs. For at least 30 years, there has been talk of a mismatch between the number of doctorates and the limited academic job openings. According to an analysis conducted in 2013, only 3,000 faculty positions in science and engineering are added each year—even though more than 35,000 PhDs are produced in these fields annually.

Decades of this asymmetrical dynamic have created a hypercompetitive and high-pressure environment in the academic world, said Siddhartha Roy, an environmental engineer at Rutgers University who co-authored a recent study on tenure-track positions in engineering. “If we look strictly at academic positions, we have a huge oversupply, and it’s not sustainable,” he said.

But while the academic job market remains challenging, experts point out that PhD training also prepares individuals for career paths in industry, government, and other science and technology fields. If fewer doctorates are awarded and funding continues to be cut, some argue, American science will weaken.

“The immediate impact is there’s going to be less science,” said Donna Ginther, a social researcher who studies scientific labor markets at the University of Kansas. In the long run, that could mean scientific innovations, such as new drugs or technological advances, will stall, she said: “We’re leaving that scientific discovery on the table.”

Historically, one of the main goals of training PhD students has been to retain those scientists as future researchers in their respective fields. “Academia has a tendency to want to produce itself, reproduce itself,” said Ginther. “Our training is geared towards creating lots of mini-mes.”

But it is no secret in the academic world that tenure-track faculty positions are scarce, and the road to obtaining tenure is difficult. Although it varies across different STEM fields, the number of doctorates granted each year consistently surpasses the number of tenure-track positions available. A survey gathering data from the 2022-2023 academic year, conducted by the Computing Research Association, found that around 11 percent of PhD graduates in computational science (for which employment data was reported) moved on to tenure-track faculty positions.

Roy found a similar figure for engineering: Around one out of every eight individuals who obtain their doctorate—12.5 percent—will eventually land a tenure-track faculty position, a trend that remained stable between 2014 and 2021, the last year for which his team analyzed data. The bottleneck in faculty positions, according to one recent study, leads about 40 percent of postdoctoral researchers to leave academia.

However, in recent years, researchers who advise graduate students have begun to acknowledge careers beyond academia, including positions in industry, nonprofits, government, consulting, science communication, and policy. “We, as academics, need to take a broader perspective on what and how we prepare our students,” said Ginther.

As opposed to faculty positions, some of these labor markets can be more robust and provide plenty of opportunities for those with a doctorate, said Daniel Larremore, a computer scientist at the University of Colorado Boulder who studies academic labor markets, among other topics. Whether there is a mismatch between the number of PhDs and employment opportunities will depend on the subject of study and which fields are growing or shrinking, he added. For example, he pointed out that there is currently a boom in machine learning and artificial intelligence, so there is a lot of demand from industry for computer science graduates. In fact, commitments to industry jobs after graduation seem to be at a 30-year high.

But not all newly minted PhDs immediately find work. According to the latest NSF data, students in biological and biomedical sciences experienced a decline in job offers in the past 20 years, with 68 percent having definite commitments after graduating in 2023, compared to 72 percent in 2003. “The dynamics in the labor market for PhDs depends very much on what subject the PhD is in,” said Larremore.

Still, employment rates reflect that postgraduates benefit from greater opportunities compared to the general population. In 2024, the unemployment rate for college graduates with a doctoral degree in the US was 1.2 percent, less than half the national average at the time, according to the Bureau of Labor Statistics. In NSF’s recent survey, 74 percent of science and engineering graduating doctorates had definite commitments for employment or postdoctoral study or training positions, three points higher than it was in 2003.

“Overproducing for the number of academic jobs available? Absolutely,” said Larremore. “But overproducing for the economy in general? I don’t think so.”

The experts who spoke with Undark described science PhDs as a benefit for society: Ultimately, scientists with PhDs contribute to the economy of a nation, be it through academia or alternative careers. Many are now concerned about the impact that cuts to scientific research may have on that contribution. Already, there are reports of universities scaling back graduate student admissions in light of funding uncertainties, worried that they might not be able to cover students’ education and training costs. Those changes could result in smaller graduating classes in future years.

Smaller classes of PhD students might not be a bad thing for academia, given the limited faculty positions, said Roy. And for most non-academic jobs, Roy said, a master’s degree is more than sufficient. However, people with doctorates do contribute to other sectors like industry, government labs, and entrepreneurship, he added.

In Ginther’s view, fewer scientists with doctoral training could deal a devastating blow to the broader scientific enterprise. “Science is a long game, and the discoveries now take a decade or two to really hit the market, so it’s going to impinge on future economic growth.”

These long-term impacts of reductions in funding might be hard to reverse and could lead to the withering of the scientific endeavor in the United States, Larremore said: “If you have a thriving ecosystem and you suddenly halve the sunlight coming into it, it simply cannot thrive in the way that it was.”

This article was originally published on Undark. Read the original article.



Shopper denied $51 refund for 20TB HDD that’s mostly a weighted plastic box

Many Arsians are the go-to IT support representative for family and friends. If you’re lucky, your loved ones’ problems are easily resolved with a reset, update, or new cable. That wasn’t the case for a son who recently had to break the news to his father that the 20TB portable hard drive he purchased for about $50 was mostly just a plastic box with weights and a PCB.

Ars Technica spoke with the Reddit user who posted about his father bringing him a “new 20T[B] HDD to see if I could figure out what was wrong with it.” The Redditor, who asked that we refer to him by his first name, Martin, revealed that his dad paid £38 (about $51.33) for what he thought was a portable HDD. That’s a red flag. HDDs have gotten cheaper over the years, but not that cheap. A 20TB external HDD typically costs over $200, and they’re usually much larger than the portable-SSD-sized device that Martin’s father received. A 20TB HDD in a portable form factor is rarer and can cost well over $300.

Taking a hammer to the device revealed that the chassis was nearly empty, save for some iron wheel weights sloppily attached to the black plastic with hefty globs of glue and a small PCB with some form of flash storage that could connect to a system via USB-A.

The “HDD” opened up. Credit: The__Unflushable/Reddit

As with other PC storage scams we’ve seen online, Windows read the so-called HDD as a “19TB drive, but would then just hang if you try to do anything with it,” Martin said on Reddit.

Programming the board’s firmware so that the drive appears as a high-capacity storage device on Windows is a clever trick that could convince users they’re to blame. But as Windows-savvy users would point out, Windows reports drive capacities in gibibytes or tebibytes, so a real 20TB HDD would appear as approximately 18.2TB in Windows.
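The shortfall comes purely from the unit bases; a quick back-of-the-envelope check (my illustration, not from the original article) confirms the roughly 18.2 figure:

```python
# Drive makers advertise decimal terabytes (1 TB = 10**12 bytes),
# while Windows measures capacity in binary tebibytes (1 TiB = 2**40 bytes)
# but labels them "TB" -- hence the apparent shortfall on a genuine drive.
advertised_bytes = 20 * 10**12          # a "20TB" drive as marketed
windows_reported = advertised_bytes / 2**40
print(f"{windows_reported:.1f}")        # → 18.2
```

So a drive that reports a full 19TB in Windows is, if anything, claiming more capacity than a real 20TB unit ever could.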

Martin told Ars:

The device appeared to mount on the desktop with the device name in Mandarin (turned out this simply said “Hard Disc”). I tried copying a file, and the name did appear on the “hard disc.” It was only when I tried to open this file from the hard disc that the problems began to emerge. The file could not be opened, no matter how hard I tried, including reformatting the hard disc. At that point nothing was “working” at all.

Sketchy online listing

Martin told Ars that his father purchased the fake HDD from a website called UK.Chicntech, which appears to primarily sell car supplies, kitchen supplies, and home textiles. Currently, the website does not list any PC components or peripherals, but overall, its stock is pretty limited. Chicntech currently lists some other electronics, like a “[With Starry Sky Lid]AI Nano Mist Intelligent Car Aromatherapy Device” (linking for explanatory purposes only; we don’t recommend shopping on this website) for $56.



In Which I Make the Mistake of Fully Covering an Episode of the All-In Podcast

I have been forced recently to cover many statements by US AI Czar David Sacks.

Here I will do so again, for the third time in a month. I would much prefer to avoid this. In general, when people go on a binge of repeatedly making such inaccurate, inflammatory statements, in such a combative way, I ignore them.

Alas, under the circumstances of his attacks on Anthropic, I felt an obligation to engage once more. The All-In Podcast did indeed go almost all-in (they left at least one chip behind) to go after anyone worried about AI killing everyone or otherwise opposing the administration’s AI strategies, in ways that are often Obvious Nonsense.

To their credit, they also repeatedly agreed AI existential risk is real, which also makes this an opportunity to extend an olive branch. And some of the disagreements clearly stem from real confusions and disagreements, especially around them not feeling the AGI or superintelligence and thinking all of this really is about jobs and also market share.

If anyone involved wants to look for ways to work together, or simply wants to become less confused, I’m here. If not, I hope to be elsewhere.

  1. Our Continuing Coverage.

  2. Important Recent Context.

  3. The Point of This Post.

  4. Summary of the Podcast.

  5. Part 1 (The Part With the Unhinged Attacks on Anthropic and also other targets).

  6. Other Related Obvious Nonsense.

  7. Part 2 – We Do Mean the Effect on Jobs.

  8. Part 3 – The Big Beautiful Bill.

  9. Where Does This Leave Us.

I first covered many of his claims in Fighting Obvious Nonsense About AI Diffusion. Then I did my best to do a fully balanced look at the UAE-KSA chips deal, in America Makes AI Chip Diffusion Deal with UAE and KSA. As I said then, depending on details of the deal and other things we do not publicly know, it is possible that from the perspective of someone whose focus in AI is great power competition, this deal advanced American interests. The fact that many of Sacks’s arguments in favor of the deal were Obvious Nonsense, and many seemed to be in clearly bad faith, had to be addressed but did not mean the deal itself had to be an error.

This third post became necessary because of recent additional statements by Sacks on the All-In Podcast. Mostly they are not anything he has not said before, and are things he is likely to say many times again in the future, and they are largely once again Obvious Nonsense, so why cover them? Doesn’t Sacks rant his hallucinations about the supposed ‘AI Existential Risk Industrial Complex’ all the time?

Yes. Yes, he does. Mostly he falsely rants, and he repeats himself, and I ignore it.

What was different this time was the context.

The Trump Administration is attempting to pass what they call the ‘Big Beautiful Bill.’

Primarily this bill is a federal budget, almost none of which has to do with AI.

It also contains a provision that would impose a 10-year moratorium on state- and local-level civil enforcement of almost any laws related to AI.

Many people, including myself and Anthropic CEO Dario Amodei, are not afraid to say that this is a bonkers crazy thing to do, and that perhaps we might want to take some modest actions on AI prior to it transforming the world rather than after.

Dario Amodei (CEO Anthropic): You can’t just step in front of the train and stop it. The only move that’s going to work is steering the train – steer it 10 degrees in a different direction from where it was going. That can be done. That’s possible, but we have to do it now.

Putting this provision in the BBB is also almost certainly a violation of the Byrd rule, but Congress chose to put it in anyway, likely as a form of ‘reconnaissance in force.’

It is not entirely clear that the administration even wants this moratorium in this form. Maybe yes, maybe no. But they very much do care about the BBB.

Thus, someone leaked to Semafor, and we got this article with the title ‘Anthropic emerges as an adversary to Trump’s big bill,’ claiming that Anthropic is lobbying against the BBB due to the AI provision, and this and other Anthropic actions are making Trump world very angry.

The other main trigger, Semafor reports, was Anthropic’s hiring of two Biden AI staffers, Elizabeth Kelly and Tarun Chhabra, and Biden AI advisor Ben Buchanan, although it is noted by Semafor that Anthropic also employs Republican-aligned policy staff, like Benjamin Merkel and Mary Croghan. Buchanan, the architect of the Biden Diffusion rules, has (as one would expect) personally opposed the UAE-KSA deal and other ways in which Biden administration rules have been reversed.

Bizarrely, the Trump administration also expressed annoyance at Anthropic CEO Dario Amodei warning about imminent loss of up to half of white collar jobs. I think that projection was too aggressive, but I am confident he believes it.

Semafor bizarrely frames these lobbying tactics as potentially savvy business moves?

Reed Albergotti: Opposing the bill preempting state AI laws may not be necessary anyway, because it faces high hurdles in both Congress and the courts.

In other words, Anthropic’s federal lobbying probably won’t make much of a difference. Influencing the White House on its executive orders would have been the best shot.

In the long run, though, maybe it’s a smart strategy. AI researchers may see Anthropic as more principled and it could help with recruiting. The Trump administration won’t be around forever and Anthropic may be better positioned when the next president takes office.

Yeah, look, no, obviously not. If you agree with Reed (and I do) that Anthropic can’t have a substantial impact on the BBB proceedings, then this was clearly a misstep given the reaction. Why would anyone think ‘antagonize the Trump administration’ was good business for Anthropic? To help a bit with recruiting because they would look slightly more ‘principled,’ at the risk of facing a hostile White House?

Anthropic and the White House being enemies would help only OpenAI and China.

Anthropic’s lobbying of course is partly motivated by what they believe is good for America and humanity, and partly by what is good for Anthropic.

Anthropic had, until recently, seemingly been pursuing a very deliberate insider strategy. They were careful not to antagonize anyone. They continue to downplay public statements about AI existential and catastrophic risks. They have offered only very careful and measured support for any AI regulations. Dario has very much publicly gotten behind and emphasized the ‘need to beat China’ framework. Not only does Anthropic not call for AI to ‘slow down’ or ‘pause,’ they call upon American AI to accelerate. On SB 1047, Anthropic called for and got major softening of the bill and then still refused to endorse it.

This has been extremely frustrating for those who are worried about AI killing everyone, many of whom think Anthropic should speak up far louder and make the case for what is actually necessary. They see Anthropic as having largely sold out on this and often other fronts. Because such an approach is very obviously good for Anthropic’s narrow business interests.

What was said on the All-In Podcast recently, and is being reiterated even more than usual on Sacks’s Twitter recently, is a frankly rather unhinged attack against anyone and everyone Sacks dislikes in the AI space, in an attempt to associate all of it together into a supposed grand, diabolical, conspiratorial ‘AI Existential Risk Industrial Complex’ that, quite frankly, does not exist.

What is different this time is primarily the targeting of Anthropic.

Presumably the message is, loud and clear: Back the hell off. Or else.

This post has five primary objectives.

  1. Actually look concretely at the arguments being made in case they have a point.

  2. Have a reference point for this event and for this general class of claims and arguments, explaining that they simply are not a description of reality and illustrating the spirit in which they are being offered to us, such that I can refer others back to this post, and link back to it in the future.

  3. Extend an olive branch and offer of help to Sacks and those at the All-In Podcast.

  4. Ensure that Anthropic understands the messages being sent here.

  5. Provide a response to the podcast’s discussion on jobs in their Part 2.

For various reasons, I am, shall we say, writing this with the maximum amount of charity and politeness that I can bring myself to muster.

You should proceed to the rest of this post if and only if this post is relevant to you.

I used the YouTube transcript. This was four podcasts in one.

  1. A rather misinformed and unhinged all-out attack on and an attempt to conflate through associations and confusions and vibes some combination of Anthropic, diffusion controls on advanced AI chips, anyone supporting diffusion controls, anyone opposing the UAE deal especially if they are a China hawk, more generally anyone who has a different opinion on how best to beat China, anyone worried about AI job losses, anyone worried about AI existential risk (while admitting to their credit that AI is indeed an existential risk several times), those who cause AIs to create black George Washingtons, several distinct classes of people referred to as ‘doomers,’ EA, The Biden Administration, anyone previously employed by the Biden Administration at least in AI, OpenPhil, Dustin Moskovitz, Reid Hoffman, woke agendas, a full on dystopian government with absolute power, a supposed plot to allocate all compute to a few chosen companies that was this close to taking over the world if Trump had lost.

    1. This was then extended to Barack Obama via Twitter.

    2. As presented this was presumably in large part a warning to Anthropic, that their recent activities have pissed people off more than they might realize, in ways I presume Anthropic did not intend.

  2. A much better discussion about AI job losses and economic growth, in which new startups and new jobs and cheap goods will save us all and everything will be great and we’ll all work fewer hours and be wealthier. I largely disagree.

    1. It also makes clear that yes, by existential they do (often) mean the effect on jobs and they do not in any way feel or expect superintelligence or even AGI. Or at minimum, they often speak and think in ways that assume this.

  3. A discussion of the ‘big beautiful bill’ also known as the budget, without reference to the attempted 10-year moratorium on any local or state enforcement of any civil law related to AI. Mostly here I just note key claims and attitudes. I thought a lot of the talk was confused but it’s not relevant to our interests here.

  4. A discussion of other matters outside our scope. I won’t comment.

If those involved believe what they are saying in part one and what David Sacks often says on Twitter on related topics, then they are deeply, deeply misinformed and confused about many things. That would mean this is a great opportunity for us all to talk, learn and work together. We actually agree on quite a lot, and that ‘we’ extends also to many of the others they are attacking here.

I would be happy to talk to any combination of the All-In hosts, in public, in private or on the podcast in any combination, to help clear all this up along with anything else they are curious about. We all benefit from that. I would love to do all this cooperatively. However differently we go about it, we all want all the good things and there are some signs there is underlying appreciation here for the problems ahead.

However it ended up in the podcast – again, this could all be a big misunderstanding – there was a lot of Obvious Nonsense here, including a lot of zombie lies, clearly weaponized. They say quite a lot of things that are not so, and frame things in ways that serve to instill implications that are not true, and equate things that should not be equated, and so on. I can’t pretend otherwise.

There’s also a profound failure to ‘feel the AGI’ and definitely a failure to feel the ASI (artificial superintelligence), or even to feel that others might truly feel it, which seems to be driving a lot of the disagreement.

There’s a conflation, that I believe is largely genuine, of any and all skepticism of technology under the umbrella term ‘Doomer.’ Someone worries about job loss? Doomer. Someone worries about existential risk (by which perhaps you mean the effect on jobs?)? Doomer. Someone worries about AI ethics? Doomer. Someone worries about climate change? Doesn’t come up, but also doomer, presumably.

But guys, seriously, if you actually believe all this, call me, let’s clear this up. I don’t know how you got this confused but we can fix it, even if we continue to disagree about important things too.

If you don’t believe it, of course, then stop saying it. And whether or not you intend to stop, you can call me anyway, let’s talk off the record and see if there’s anything to be done about all this.

The transcript mostly doesn’t make clear who is saying what, but also there don’t seem to be any real disagreements between the participants, so I’m going to use ‘they’ throughout.

I put a few of these notes into logical order rather than order in the transcript where it made more sense, but mostly this is chronological. I considered moving a few jobs-related things into the jobs section but decided not to do this.

As per my podcast standard, I will organize this as a series of bullet points. Anything in the main bullet point is my description of what was importantly said. Anything in the secondary sections is me responding to what was said.

  1. They start off acknowledging that employment concerns are real; they explicitly say people are concerned about ASI, and yes, they do mean the effect on jobs.

  2. Then they start going hard after ‘doomers,’ starting with Dario Amodei’s aggressive claims about white collar job losses, accusing him of hype.

    1. Pot? Crypto-kettle?

    2. I do actually think that particular claim was too aggressive, but if Dario is saying that it is because he believes it (and has confusions about diffusion, probably).

    3. Later they say ‘Anthropic’s warnings coincide with key moments in their fundraising journey’ right after Anthropic recently closed their Series E and is now finally warning us about AI risks.

    4. They are repeating the frankly zombie lie that Anthropic and OpenAI talk about AI existential risk or job loss as hype for fundraising, that it’s a ‘smart business strategy.’ That it is a ‘nefarious strategy.’ This is Obvious Nonsense. It is in obvious bad faith. OpenAI and Anthropic have in public been mostly actively downplaying existential risk concerns for a while now, in ways I know them not to believe. Stop it.

  3. Then claim broader AI risk concerns expressed at the first AI safety summit ‘have been discredited,’ while agreeing that the risks are real they simply haven’t arrived yet. Then they go on about an ‘agenda’ you should be ‘concerned about.’

  4. They essentially go all Jevons Paradox on labor: the more we automate (without loss of generality) coding, the better the returns, so you’ll actually end up using more. They state this like it is fact, even in the context of multipliers like 20x productivity.

    1. This claim seems obviously too strong. I won’t reiterate my position on jobs.

  5. These venture capitalists think that venture capitalists will always just create a lot more jobs than we lose even if e.g. all the truck drivers are out of work because profits, while investing in a bunch of one-person tech companies and cryptos.

  6. ‘Fear is a way of getting people into power and they’re going to create a new kind of control.’ I… I mean… given who is doing this podcast do I even have to say it?

  7. They claim Effective Altruism ‘astroturfs.’

    1. This is complete lying Obvious Nonsense, and rather rich coming from venture capitalists who engage in exactly this in defense of their books, with disingenuous corporate lobbying efforts from the likes of a16z and Meta massively outspending all worried people combined, lying their asses off outright on the regular, and also being in control in the White House.

    2. Every survey says that Americans are indeed worried about AI (although it is low salience) and AI is unpopular.

  8. They then outright accuse OpenPhil, EA in general, Anthropic and so on of being in a grand conspiracy seeking ‘global AI governance,’ then conflate this with basic compute governance, then conflate this with the overall Biden AI agenda and DEI.

    1. Which again is Obvious Nonsense, at best such efforts are indifferent to DEI.

    2. I assure everyone Anthropic does not care about a woke agenda or about DEI.

    3. My experience with EA reflects this same attitude in almost all cases.

  9. Then they claim this ‘led to woke AI like the black George Washington.’

    1. I refer to what happened with that as The Gemini Incident.

    2. The causal claim here is Obvious Nonsense. Google was being stupid and woke all on its own for well documented reasons, and you can be mad at Google’s employees if you want about this.

  10. They make it sound as sinister as possible that Anthropic hired several ex-Biden AI policy people.

    1. I get why this is a bad look from the All-In Podcast perspective.

    2. However, what they are clearly implying here is not true, and Anthropic has hired people from both sides of the aisle as per Semafor, and is almost certainly simply snapping up talent that was available.

  11. They accuse ‘EA’ or OpenPhil or even Anthropic of advocating ‘for a pause.’

    1. This is unequivocally false for OP, for Anthropic and for the vast majority of EA efforts. Again, lies or deep deep confusion, Obvious Nonsense.

    2. Anthropic CEO Dario Amodei has put out extensive essays about the need to beat China and all that. He is actively trying to build transformational AI.

    3. A ‘pause’ would damage or destroy Anthropic and he thinks a pause would be obviously unwise right now. Which I agree with.

    4. I am very confident the people making these claims know the claims are false.

  12. They say ‘x-risk is not the only risk we have to beat China.’

    1. And I agree! We all agree! Great that we can agree these are two important goals. Can we please stop with the claims that we don’t agree with this?

    2. Dario also agrees very explicitly, out loud, in public, so much so it makes a lot of worried people and likely many of his employees highly uneasy and he’s accused of selling out.

    3. David Sacks in particular has accused anyone who opposes his approach to ‘beating China’ of not caring about beating China. He either needs to understand that a lot of other people genuinely worried about China strongly disagree about the right way to beat China and think keeping compute out of the wrong hands is important here, or he needs to stop lying about this.

  13. Someone estimates 30% chance China ‘wins the AI race’ but thinks existential risk is lower than 30%.

    1. I disagree on both percentages, but yes, that is a position one might reasonably take. But we can and must then work on both, and also while both these outcomes are very bad, one is much much worse than the other, and I hope we agree on which is which.

  14. They say Claude kicks ass, great product.

    1. I definitely agree with that.

  15. The pull quote comes around (19:00) where they accuse everyone involved of being ‘funded by hardcore leftists’ and planning on some ‘Orwellian future where AI is controlled by the government’ that they ‘use to control all of us’ and using this to spread their ‘woke’ or ‘left-wing’ values.

    1. Seriously no, stop.

    2. I go into varying degrees of detail about this throughout this and other posts, but please, seriously, no, this is simply false on all counts.

    3. It is true that there are other people, including people who were in the Biden administration, who on the margin will prioritize doing things that promote ‘left-wing’ values and ‘woke’ agendas. Those are different people.

  16. They even claim that before Trump was elected they were on a path to ‘global compute governance’ restricted to 2-3 companies that then forced the AIs to be woke.

    1. This is again all such complete Obvious Nonsense.

    2. I believe this story originated with Marc Andreessen.

    3. At best it is a huge willful misunderstanding of something that was said by someone in the Biden Administration.

    4. It’s insane that they are still claiming this and harping on it, it makes it so hard to treat anything they say as if it maps to reality.

    5. At this point I seriously can’t even with painting people advocating for ‘maybe we should figure out what is the best thing to do with our money and do that’ and ‘we should prevent China from getting access to our compute’ and ‘if we are going to make digital minds that are potentially smarter than us that will transform the world that might not be a safe thing to do and is going to require some regulations at some point’ as ‘we should dictate all the actions of everyone on Earth in some Orwellian government conspiracy for Woke World Domination these people would totally pull off if it wasn’t for Trump’ and seriously just stop.

  17. They ask ‘should you fear government regulation or should you fear autocomplete.’

    1. It is 2025. Are you still calling this ‘autocomplete’? You cannot be serious.

    2. We agree this thing is going to be pivotal to the future and that it presents existential risk. What the hell, guys. You are making a mockery of yourselves.

    3. I cannot emphasize enough that if you people could just please be normal on these fronts where we all want the same things then the people worried about AI killing everyone would mostly be happy to work together, and would largely be willing to overlook essentially everything else we disagree about.

    4. I honestly don’t even know why these people think they need to be spending their time, effort and emotional energy on these kinds of attacks right now. They must really think that they have some sort of mysterious super powerful enemy here and it’s a mirage.

    5. These are the same people pushing for their ‘big beautiful bill’ that includes a full pre-emption of any state or local regulations on AI (in a form that presumably won’t survive the Byrd rule, but they’re trying anyway), with the intended federal action to fill that void being nothing at all.

    6. Then they’re getting angry when people react as if that proposal is extreme and insane, and treat those opposed to it as being in an enemy camp.

  18. They do some reiteration of their defenses of the UAE-KSA chips deal.

    1. I’ve already said my piece on this extensively; again, reasonable people can disagree on what is the best strategic approach, and reasonable people would recognize this.

David Sacks in particular continues to repeat a wide variety of highly unhinged claims about Effective Altruism. Here he includes Barack Obama in this grand conspiracy, then links to several even worse posts that are in transparently obvious bad faith.

David Sacks (2025, saying Obvious Nonsense): Republicans should understand that when Obama retweets hyperbolic and unproven claims about AI job loss, it’s not an accident, it’s part of an influence operation. The goal: to further “Global AI Governance,” a massive power grab by the bureaucratic state and globalist institutions.

The organizers: “Effective Altruist” billionaires with a long history of funding left-wing causes and Trump hatred. Of course, it’s fine to be concerned about a technology as transformational as AI, but if you repeat their claims uncritically, you may be falling for an astroturfed campaign by the “AI Existential Risk Industrial Complex.”

Claims about job loss (what I call They Took Our Jobs) are a mundane problem, calling for mundane solutions, and have nothing whatsoever to do with existential risk or ‘effective altruism,’ what are you even talking about. Is this because the article quotes Dario Amodei’s claims about job losses, therefore it is part of some grand ‘existential risk industrial complex’?

Seriously, do you understand how fully unhinged you sound to anyone with any knowledge of the situation?

David Sacks does not even disagree that we will face large scale job loss from AI, only about the speed and net impact. This same All-In Podcast talks about the possibility of large job losses in Part 2, not dissimilar in size to what Dario describes. Everyone who talks about this on the podcast seems to agree that massive job losses via AI automation are indeed coming, except they say This Is Good, Actually because technology will always also create more jobs to replace them. The disagreement here is highly reasonable and is mainly talking price, and the talking price is almost entirely about whether new jobs will replace the old ones.

Indeed, they talk about a ‘tough job market for new grads’ and warn that if you don’t embrace the AI tools, you’ll be left behind and won’t find work. That’s basically the same claim as Kevin Roose is making.

What did Barack Obama do and say? The post I saw was that he retweeted a New York Times article by Kevin Roose that talks about job losses and illustrates some signs of it, including reporting the newsworthy statement from Dario Amodei, and then Obama made this statement:

Barack Obama: Now’s the time for public discussions about how to maximize the benefits and limit the harms of this powerful new technology.

Do you disagree with Obama’s statement here, Sacks? Do you think it insufficiently expresses the need to provide miniature American flags for others and be twirling, always twirling towards freedom? Obama’s statement is essentially content-free.

EDIT: I then realized after I hit post later that yes, Obama did also retweet the Axios article that quoted Dario, saying this:

Barack Obama: At a time when people are understandably focused on the daily chaos in Washington, these articles describe the rapidly accelerating impact that AI is going to have on jobs, the economy, and how we live.

That is at least a non-trivial statement, although his follow-up Call to Action is the ultimate trivial statement. This very clearly is not part of some conspiracy to make us ‘have public discussions about how to maximize the benefits and limit the harms of this powerful technology.’

How do these people continue to claim that this all-powerful ‘Effective Altruism’ was somehow the astroturfing lobbyist group and they are the rogue resistance, when the AI industry has more lobbyists in Washington and Brussels than the fossil fuel industry and the tobacco industry combined? When almost all of that industry lobbying, including from OpenAI, Google, Meta and a16z, is exactly what you would expect, opposition to regulations and attempts to get their bag of subsidies.

What is most frustrating is that David Sacks very clearly understands that AGI presents an existential risk. AI existential risk is even explicitly affirmed multiple times during this podcast!

He has been very clear on this in the past, as in, for example:

David Sacks (2024, saying helpful things): AI is a wonderful tool for the betterment of humanity; AGI is a potential successor species.

I’m all in favor of accelerating technological progress, but there is something unsettling about the way OpenAI explicitly declares its mission to be the creation of AGI.

Despite this, Sacks seems to have decided that reiterating these bizarre conspiracy theories and unhinged attacks is a good strategy for whatever his goals might be.

Here is another recent absurdity that I got forcibly put in front of me via Tyler Cowen:

David Sacks (June 2025, saying untrue things): Nobody was caught more off guard by the DeepSeek moment than the AI Doomers.

They had been claiming:

— that the U.S. was years ahead in AI;

— that PRC leadership didn’t care much about AI;

— that China would prioritize stability over disruption; and

— that if the U.S. slowed down AI development, China would slow down too.

All of this turned out to be profoundly wrong. Now, ironically, many of the Doomers — who prior to DeepSeek had tried to ban American models now currently in use — are trying to rebrand as “China Hawks.” If they had their way, the U.S. would have already lost the AI race!

David Sacks has to know exactly what he is doing here. This is in obvious bad faith. At best, this is the tactic of ‘take a large group of people, and treat the entire group as saying anything that its most extreme member once said, and state it in the most negative way possible.’

To state the obvious, going point by point, how false all of this is:

  1. The USA remains ahead in AI, but yes China has closed this gap somewhat, as one would broadly expect, at least in terms of fast following. The impact of the DeepSeek moment was largely that various people, including Sacks, totally blew what happened out of proportion. Some of that was obvious at the time, some only became clear in retrospect. But the rhetoric is full on ‘missile gap.’ Also, this is like saying ‘you claimed Alice was two miles ahead of Bob, but then Bob caught up to Alice, so you were lying.’ That is not how anything works.

  2. The PRC leadership was, as far as I can tell, highly surprised by DeepSeek. They were indeed far more caught off guard than the ‘AI Doomers,’ many of whom had already been following DeepSeek and had noticed v3 and expected this. The PRC then noticed, and yes they now care about AI more, but for a long time they very much did not appreciate what was going on, what are you even talking about.

  3. China seems to have favored stability over disruption far more than America has in this case, they are absolutely caring about stability in the ways China cares about, and this is not what a China that was actually AGI-pilled would look like. China is happy to ‘disrupt’ in places where what they are disrupting is us. Sure.

  4. This is a complete non sequitur. This claims that ‘we’ said [X] → [Y], where [X] is ‘America slows down’ and [Y] is ‘China slows down.’ [X] did not happen! At all! So how can you possibly say that [X]→[Y] turned out to be profoundly wrong? You have absolutely no idea. I also note that we almost always didn’t even make this claim, that X→Y, we said it would be good if both X and Y were true and we should try to get that to happen. For example, I did not say ‘If we slow down, China slows down.’ I said things of the form ‘it would be good to open a dialogue about whether, if we slowed down, China would also slow down, because we haven’t even tried that yet.’

  5. The reference to ‘attempts to ban models currently in use’ as if this applies broadly to the group in question, rather than to a very small number of people who were widely criticized at the time, including repeatedly by myself very very explicitly, for overreach because of this exact request.

  6. The repetition of the false claim that there is an attempted ‘rebrand as China Hawks’ which I have discussed previously, and then the claim that these are the same people who tried to ban current models, which they aren’t.

I sincerely wish that David Sacks would stop. I do not expect him to stop. Given that I do not expect him to stop, I sincerely wish that I can go back to avoiding responding when he continues.

The discussion of the future of jobs and employment in Part 2 was much better.

There seemed to be a problem with scale throughout Part 2.

This all seems to take place in a tech and startup bubble where everyone can be founding a new startup or deeply steeping themselves in AI tools to get one of those cool new AI jobs.

This is great advice for podcast listeners in terms of career development, but it simply doesn’t scale the way they want it to, nor does it then broaden out as fast or far in terms of jobs as they pitch it as doing.

There’s ‘what can a bright young listener to this podcast who is into tech and startups and is situationally aware do’ and ‘what is going to happen to a typical person.’ You cannot, in fact, successfully tell most people to ‘learn to code’ by adding in the word vibe.

  1. They assert ‘technology always means more jobs,’ and see concerns about job loss as largely looking at union jobs or those of particular groups like truck drivers that Biden cares about or coal miners that Trump cares about.

    1. I think the worries are mostly far more general. I find it interesting they focus primarily on the non-LLM job loss from self-driving rather than the wider things coming.

    2. I see union jobs as likely far more protected, especially government protected unions, as unions have leverage to prevent diffusion, until they are disrupted by non-union rivals, and similar for jobs protected by license regimes.

  2. They point out that we will all be richer and the benefits will come quickly, not only the job losses.

    1. True, although it will likely be cold comfort to many during the transition, the gains won’t flow through in ease of making ends meet the way one might hope unless we make that happen.

  3. They emphasize that costs of goods will fall.

    1. I think this is largely very right and yes people are underestimating this, but goods we can make without regulatory barriers are not where people are struggling and are a remarkably low percentage of costs.

    2. In the past, getting cheaper food and clothing was a huge deal because that was 50%+ of expenses and it shrunk dramatically, which is great.

    3. But now food is about 10% and clothing is trivial, the prices can’t go that much lower, and labor income might be falling quite a lot if there’s enough competition for jobs.

    4. If the price of food is cut in half that is great, I do agree it would be good to automate food prep (and truck driving and so on) when we can, but this actually doesn’t save all that much money.

    5. I think a lot of people’s focus on the price of food is essentially generational, historical and evolutionary memory of different times when food costs were central to survival.

  4. They correctly ask the right question, what allows for the same lifestyle.

    1. In the past, the main constraint on lifestyle was ability to purchase goods, so cutting goods costs via increased productivity means you need to work less to match lifestyle.

    2. But now it is mostly services, and the goods with restricted supply, and also we are ratcheting up what counts as the baseline lifestyle and what is the required basket of goods.

    3. The key question about lifestyle isn’t quality of goods. It’s about quality of life, it’s about ability to raise a family, as I will soon discuss in ‘Is Life Getting Harder?’

    4. Their model seems to boil down to something not that different from ‘startups are magic’ or ‘lump of income and labor fallacy?’ As in, if you have a bunch of wealth and investment then of course that will create tons of jobs through new startups and investment.

    5. But in a rapidly automating world, especially one in which the best startups will often be disruptors via automation, we’re talking about the need for tens of millions of new jobs over the course of a few years, and then those jobs start getting automated too, and AI keeps improving as this happens. If you think there really are this many ‘shadow jobs’ waiting for us I want a much more concrete model of how that can be true.

    6. Note that if you think we don’t need more gears here, then think about why you think that is true here and where else that might apply.

    7. Reminder: My expectation is that for a while unemployment won’t change that much, although there will be some extra unemployment due to transitional effects, until we exhaust the ‘shadow jobs’ that previously weren’t worth hiring people for, but then this will run out – there is a lot of ruin in the job market but not forever.

  5. Prediction that we will ‘take our profits’ in 30 hour work weeks, speculation about 10% GDP growth if we have 10%-20% white collar job loss (one time?!). None of this seems coherent, other than a general ‘we will all be rich and trends of richness continue’ intuition.

    1. Note the lack of ambition here. If only 20% of current white collar jobs or tasks get automated over a medium term then that isn’t that big. There’s no reason to think that causes persistent 10% growth.

    2. I do think there is a good chance of persistent 10%+ growth but if so it will involve far more transformational changes.

    3. I also don’t see why we should expect people to ‘take our profits’ in shorter work weeks unless we use government to essentially force this.

  6. ‘People say jobs are going to go away but I am on the ground and I see more startups than ever and they’re making a million dollars per employee.’

    1. The statement is true, and I buy that the startup world is going great, but in terms of responding to the threat of massive job losses? These people seem to be in a bubble. Do they even hear themselves? Can they imagine a Democratic politician talking like that in this context?

    2. Do they understand the relative scales of these employment opportunities and economic impacts? You do not want ‘the ground’ to mean the startup world in San Francisco.

  7. They talk about how it is hard to automate all of a customer service job because some parts are hard for AI.

    1. This is a distinct lack of thinking ahead.

    2. In general it does not seem like this discussion is baking in future AI progress, and also still leaves room for most such jobs to go away anyway.

  8. They say yes if we have 20% job loss government will have to step in but it is a ‘total power grab’ to demand the government ‘act now’ about potential future unemployment.

    1. What is this word salad specter of Andrew Yang or something? How does this relate to anything that anyone is seriously asking for?

    2. The thing about unemployment is that you can indeed respond after it happens. I strongly agree that we should wait and see before doing anything major about this, but also I don’t see serious calls to do otherwise.

  9. Based on various statements where they seem to conflate the two:

    1. I think that by existential risk they might literally mean the effect on jobs? No, seriously, literally, they think it means the effect on jobs? Or they are at least confused here? I can’t make sense of this discussion any other way. Not in a bad faith way, just it seems like they’re legitimately deeply confused about this.

  10. They say diffusion rules wouldn’t solve existential risk but they’re open to suggestions?

    1. I mean no they won’t do that on their own, the primary goal of diffusion rules is to hold back China so we can both win the race and giving ourselves enough freedom of action (and inaction) to have a chance to find a solution to existential risk, why is this so confusing.

    2. And what is this doing in the middle of a discussion about job loss and economic growth rates?

  11. More talk about ‘glorified autocomplete.’

    1. You can stop any time, guys.

  12. (36:52) ‘tough job market for new grads in the established organizations and so what should new grads do they should probably, steep themselves in the tools and go to younger companies or start a company i think that’s the only solution for them.’

    1. This is great advice but I don’t think they understand how grim that is. The vast majority of people are not going to be able to do a startup, I wish this were possible and it’s good advice for their audience sure but this is innumerate to suggest for the population as a whole.

    2. So the only thing, as they say, that young people can do in this type of future is deeply steep themselves in these AI tools to outcompete those that don’t do it, but obviously only a small portion of such people can go that route at once, this works exactly because everyone else mostly won’t do it. The vast majority of grads will be screwed on an epic level.

    3. This is the same as the whole ‘learn to code’ message that, shall we say, did not win the votes of the coal miners. Yes, any individual sufficiently capable person could learn to code, but not everyone can, and there were never that many slots. Similarly, for a long time ‘learn to play poker and grind it out’ has been a very viable path for anyone who has the discipline, but very obviously that is not a solution at scale because it would stop working (also it doesn’t produce anything).

  13. Again speculation that ‘the people who benefit the most’ are new coders willing to embrace the tech.

    1. I mean tell that to the current SWE market, this is not at all obvious, but yes in an AI-is-super-productive world the handful of people who most embrace this opportunity will do well. They’re right that the people who embrace the tools will beat the people who push back, okay, sure.

    2. I will never get the python love they also express here, or the hate for OOP. I really wish we weren’t so foolish as to build the AI future on python, but here we are.

  14. (40:57) Again the conflation where blaming a layoff on AI is a ‘doomer story.’

    1. This is, once again, a distinct very different concern. Both are real.

    2. So they’re confirming that by ‘doomer’ they often simply mean someone who by existential risk does mean the effect on jobs.

    3. That’s a mostly different group of people, and that’s not how the term is typically used, and it’s clear that they’re either being fooled by the conflation or using it strategically or both.

    4. Pick a lane, I’m fine with either, but this trying to equate both camps to use each to attack the other? No.

  15. They insist that when layoffs happen so far they’re not due to AI.

    1. Okay, I mean, the companies do often say otherwise, and you agree AI is making us all a lot more productive. But maybe they’re all lying and everyone only cuts management now, except they also say management jobs aren’t being eliminated due to AI yet.

    2. Alternatively they are also telling the ‘the layoffs are due to AI because the people who won’t embrace AI now need to be fired and this is good, actually’ story, which is also plausible but you can’t have it both ways.

    3. This all sounds like throwing everything at the wall that sounds like ‘AI is good’ and seeing what sticks.

    4. This is perhaps related to throwing everything that sounds like ‘AI is bad’ into a pot and claiming all of it is the same people in a grand conspiracy?

  16. As I understand them: The AI race is an infinite race with no finish line but it is still a race to see who is stronger and maybe USA wins maybe China wins maybe it’s a tie maybe ‘open source wins’ and nuclear deterrence led to peace and was good actually but this is better because it’s a system of productivity not destruction and everyone will have to compete vigorously but we have to watch out for something like 5G where Huawei ‘weren’t worried about diffusion’ they wanted to get their tech out, the race is about market share and whose technology people are using, and the pace of improvement is ‘holy shit.’

    1. I covered a (more coherent but logically identical) version of this when I previously covered Sacks. This does not address what matters, and the ‘AI race’ is not about market share. This reflects, like the rest of this podcast, a profound failure to ‘feel the AGI’ and certainly to ‘feel the ASI.’

It seems worth a few notes while I am here. I will divide the ‘BBB’ into two things.

  1. The attempted 10-year moratorium on enforcement of any AI anything on the local or state level whatsoever. This is, in my humble opinion and also that of Anthropic’s CEO, deeply stupid, bonkers crazy, a massive overreach, an ‘of course you know this means war’ combined with ‘no one could have predicted a break in the levees’ level move. Also an obvious violation of the Byrd rule when placed within the budget, although sadly not in practice a violation of the 10th amendment.

  2. Everything else in the bill, which is what they discuss here. The most important note is that they only talk about the rest of the BBB without the moratorium.

I am not an expert on Congressional budget procedure or different types of appropriations but it seemed like no one here was one either, and the resulting discussion seemed like it would benefit from someone who understands how any of this works.

They are very keen to blame anything and everything they can on Biden, the rest on Congress, and nothing on Trump.

They seem very excited by making the DOGE cuts permanent for reasons that are not explained.

I notice that there is a prediction that this administration will balance the Federal budget. Are we taking wagers on that? There’s a lot of talk of the need to get the deficit down, and they blame the bill not doing this on Congress, essentially.

It seems this expectation is based on creating lots of economic growth, largely via AI. Very large gains from AI does seem to me to be the only sane way we might balance the budget any time soon. I agree that there should be lots of emphasis on GDP growth. They are very confident, it seems, that lower taxes will pay for themselves and spur lots of growth, and they think the CBO is dumb and simplistic.

There’s a concrete prediction for a very hot Q2 GDP print, 3%-4%. I hope it happens. It seems they generally think the economy will do better than predicted, largely due to AI but also I think due to Trump Is Magic Economy Catnip?

They talk about the need for more energy production, and some details are discussed on timing and sizing. I agree, and would be doing vastly more to move projects forward, but from what I have seen the BBB does not seem to be net positive on this front. They are right to emphasize this, but it is not cashing out in much actual action to create new energy production.

I don’t have anything to say about Part 4, especially given it is out of my scope here.

I hope that Anthropic understands the reaction that they seem to be causing, and chooses wisely how to navigate given this. Given how often Sacks makes similar claims and how much we all have learned to tune those claims out most of the time, it would be easy to miss that something important has changed there.

I presume that David Sacks will continue to double down on this rhetoric, as will many others who have chosen to go down similar rhetorical paths. I expect them to continue employing these Obvious Nonsense vibe-based strategies and accusations of grand conspiracies indefinitely, without regard to whether they map onto reality.

I expect it to be part of a deliberate strategy to brand anyone opposing them, in the style of a certain kind of politics, as long as such styles are ascendant. Notice when someone makes or amplifies such claims. Update on that person accordingly.

I would love to be wrong about that. I do see signs that, underneath it all, something better might indeed be possible. But assuming I’m not wrong, it is what it is.

My realistic aspiration is to not have to keep having that conversation this way, and in particular not having to parse claims from such arguments as if they were attempting to be words that have meaning, that are truthful, or that map into physical reality. It is not fun for anyone, and there are so many other important things to do.

If they want to have a different kind of conversation, I would welcome that.


In Which I Make the Mistake of Fully Covering an Episode of the All-In Podcast


11 things you probably didn’t know the Switch 2 can do


Our first quick dive into the system-level settings and the new GameChat multiplayer.

Let’s-a go! Credit: Kyle Orland

Eight years ago, just before the release of the Nintendo Switch, we provided an in-depth review of the hardware thanks to early production units provided by Nintendo. This year, Nintendo has opted not to provide such unrestricted early press access to the Switch 2 hardware, citing a “day-one update” to the system software and some launch games that would supposedly make pre-release evaluation more difficult.

As such, we won’t be able to provide our full thoughts on the Switch 2 until well after the system is in players’ hands. While that’s not an ideal situation for readers looking to make an early purchase decision, we’ll do our best to give you our hands-on impressions as soon as possible after launch day.

In lieu of review access, though, we were able to get some extended hands-on time with the final Switch 2 hardware at a daylong preview event held by Nintendo last week. This event provided our first look at the console’s system-level menu and settings, as well as features like GameChat (which was hard to fully evaluate in an extremely controlled environment).

While this access was far from sufficient for a full review, it did let us discover a few interesting features that we weren’t aware of beforehand. Here are some of the new tidbits we stumbled across during our day with the Switch 2 hardware.

GameChat can generate captions for live speech

One of the most unexpected accessibility features of the Switch 2 is the system’s ability to automatically generate on-screen captions for what friends are saying during a GameChat session. These captions appear in their own box that can be set to the side of the main gameplay. The captioning system seemed pretty fast and accurate in our test and could even update captions from multiple speakers at the same time.

GameChat can automatically update captions for multiple speakers at once. Credit: Kyle Orland

While this is obviously useful for hard-of-hearing players, we could also see the feature being a boon for managing crosstalk among rowdy GameChat parties or for quickly referring back to something someone said a few seconds ago.

You can generate spoken speech from text messages

In a reverse of the auto-captioning system discussed above, GameChat also has a feature buried deep in its menus that lets you type a message on the on-screen keyboard and have it spoken aloud to the other participants in a slightly robotic voice. This could come in handy when you’re playing in an environment where you have to be quiet but still want to quickly convey detailed information to your fellow players.

The camera has built-in head-tracking

During GameChat sessions, you can make the connected camera show only your face instead of your entire body and/or the background behind it. This mode keeps your face centered in a small, circular frame even as you move around during gameplay, though there is a slight delay in the tracking if you move your head too quickly.

While you can also activate a similar face display during local multiplayer sessions of Mario Kart World, the game doesn’t seem to track your movements, meaning you can easily fall out of frame if you don’t hold your body still.

The system can detect the angle of the kickstand

Wonderful!

This was a cute little surprise I discovered in a Switch 2 Welcome Tour mini-game that asks you to set the kickstand as close as possible to a given angle. This mini-game works even if the Joy-Cons are not attached, suggesting that there is a sensor in the kickstand or tablet itself that measures the angle. It did take a few seconds of stillness for the game to fully confirm the system’s resting angle, though, so don’t expect to be tilting the kickstand rapidly to control action games or anything.

You can use mouse mode to navigate system menus

I stumbled on this feature when I was holding the Joy-Cons normally and one of my fingers accidentally passed over the mouse sensor, activating a mouse pointer on the system menu screen. When I put the controller down on its edge, I found that the pointer could scroll and click through those menus, often much more quickly than flicking a joystick.

Mouse mode also lets you zoom in on specific areas of the screen with a quick double-click, which should be useful for both vision-impaired players and those playing on tiny and/or far-off screens.

You can adjust the mouse mode sensitivity

The system menu lets you adjust the mouse sensor’s sensitivity between “low,” “medium,” and “high.” While that’s a lot less precise than the fully adjustable DPI settings you might be used to with a computer mouse, it’s still a welcome option.

In some quick testing, I found the high-sensitivity mode to be especially useful when using the mouse on a small surface, such as the top of my thigh. At this setting, the pointer could move from one end of the screen to the other with the slightest wrist adjustment. Low sensitivity mode, on the other hand, proved useful in more precise situations, such as in a Welcome Tour mini-game where I had to move a ball quickly and precisely through a large, electrified maze.

You can play sounds to find lost controllers

Find lost controllers easily with this menu option.

Lose a Joy-Con somewhere in the depths of your couch? Not to worry—a new menu option on the Switch 2 lets you play a distinctive sound through that Joy-Con’s improved HD Rumble 2 motor to help you find its precise location. While we confirmed that this feature also works with the new Pro Controller 2, we were unable to determine whether it can be used for original Switch controllers that are synced with a Switch 2.

You can set a system-wide security PIN

Your unique PIN code must be entered any time the system comes out of sleep mode, making the hardware functionally useless to anyone who doesn’t have the PIN. This should be great for kids who want to keep siblings away and parents who are worried about their kids sneaking in extra Switch 2 time when they shouldn’t be.

You can limit the battery charging level

A new system-level option will prevent the Switch 2 from charging as soon as it hits 90 percent of capacity, a move intended to increase the longevity of the internal battery. This is already a common feature on many smartphones and portable gaming devices, so it’s nice to see Nintendo joining the bandwagon here. Thus far, though, it appears that the 90 percent battery capacity is the only cutoff point available, with no further options for customization.

You can adjust the size of menu text

MAXIMUM TEXT SIZE. Credit: Kyle Orland

As you can see in the photo above, setting the system text size to “MAXIMUM” lets menu options be seen easily from roughly the moon. You can set the system text to bold and high-contrast for even more legibility, and there’s also an option to make the system menu text smaller than the default, for whatever reason.

You can swap the A and B buttons at the system level

With this menu option activated, the B button is used to “confirm” and the A button is used to “cancel” in system menus. This should be welcome news for players more used to the button layout on Xbox, PlayStation, and Steam Deck controllers, which all have the “confirm” and “cancel” options in reversed positions from the Nintendo default.


Kyle Orland has been the Senior Gaming Editor at Ars Technica since 2012, writing primarily about the business, tech, and culture behind video games. He has journalism and computer science degrees from University of Maryland. He once wrote a whole book about Minesweeper.



Google settles shareholder lawsuit, will spend $500M on being less evil

“Over the years, we have devoted substantial resources to building robust compliance processes,” a Google spokesperson said. “To avoid protracted litigation we’re happy to make these commitments.”

This case is what’s known as a consolidated derivative litigation, where multiple shareholder lawsuits are combined into a single action. The litigation stretches back to 2021, when a Michigan pension fund accused Google of harming the company’s future by triggering widespread antitrust and regulatory actions with “prolonged and ongoing monopolistic and anticompetitive business practices.” That accusation has only gained more weight in the years since it was made.

Today, Google is coming off three major antitrust losses. In 2023, Google lost an antitrust case brought by Epic Games that accused it of anticompetitive practices in app distribution. In 2024, the US Department of Justice successfully showed that Google has illegally maintained a search monopoly. Finally, Google lost the advertising antitrust case earlier this year, putting its primary revenue driver at risk.

These legal salvos could cost the company billions in fines and force major changes to its business. Google is facing a world in which it might need to open Google Play to other app stores, hand over advertising data to competitors, license its search index, and even sell the Chrome browser. Perhaps the reforms will lead to a changed company, but that won’t undo the damage from the current spate of antitrust actions.



Texas AG loses appeal to seize evidence for Elon Musk’s ad boycott fight

If MMFA is made to endure Paxton’s probe, the media company could face civil penalties of up to $10,000 per violation of Texas’ unfair trade law, a fine or confinement if requested evidence was deleted, or other penalties for resisting sharing information. However, Edwards agreed that even the threat of the probe apparently had “adverse effects” on MMFA. Reviewing evidence, including reporters’ sworn affidavits, Edwards found that MMFA’s reporting on X was seemingly chilled by Paxton’s threat. MMFA also provided evidence that research partners had ended collaborations due to the looming probe.

Importantly, Paxton never contested claims that he retaliated against MMFA, instead seemingly hoping to dodge the lawsuit on technicalities by disputing jurisdiction and venue selection. But Edwards said that MMFA “clearly” has standing, as “they are the targeted victims of a campaign of retaliation” that is “ongoing.”

The problem with Paxton’s argument is that it “ignores the body of law that prohibits government officials from subjecting individuals to retaliatory actions for exercising their rights of free speech,” Edwards wrote, suggesting that Paxton arguably launched a “bad-faith” probe.

Further, Edwards called out the “irony” of Paxton “readily” acknowledging in other litigation “that a state’s attempt to silence a company through the issuance and threat of compelling a response” to a civil investigative demand “harms everyone.”

With the preliminary injunction won, MMFA can move forward with its lawsuit after defeating Paxton’s motion to dismiss. In her concurring opinion, Circuit Judge Karen L. Henderson noted that MMFA may need to show more evidence that partners have ended collaborations over the probe (and not for other reasons) to ultimately clinch the win against Paxton.

Watchdog celebrates court win

In a statement provided to Ars, MMFA President and CEO Angelo Carusone celebrated the decision as a “victory for free speech.”

“Elon Musk encouraged Republican state attorneys general to use their power to harass their critics and stifle reporting about X,” Carusone said. “Ken Paxton was one of those AGs who took up the call, and his attempt to use his office as an instrument for Musk’s censorship crusade has been defeated.”

MMFA continues to fight against X over the same claims—as well as a recently launched Federal Trade Commission probe—but Carusone said the media company is “buoyed that yet another court has seen through the fog of Musk’s ‘thermonuclear’ legal onslaught and recognized it for the meritless attack to silence a critic that it is.”

Paxton’s office did not immediately respond to Ars’ request to comment.

Texas AG loses appeal to seize evidence for Elon Musk’s ad boycott fight Read More »


Testing a robot that could drill into Europa and Enceladus


We don’t currently have a mission to put it on, but NASA is making sure it’s ready.

Geysers on Saturn’s moon Enceladus Credit: NASA

Europa and Enceladus are two ocean moons that scientists have concluded have liquid water oceans underneath their outer icy shells. The Europa Clipper mission should reach Europa around April of 2030. If it collects data hinting at the moon’s potential habitability, robotic lander missions could be the only way to confirm whether there’s really life there.

To make these lander missions happen, NASA’s Jet Propulsion Laboratory team has been working on a robot that could handle the search for life and already tested it on the Matanuska Glacier in Alaska. “At this point this is a pretty mature concept,” says Kevin Hand, a planetary scientist at JPL who led this effort.

Into the unknown

There are only a few things we know for sure about conditions on the surface of Europa, and nearly all of them bode poorly for lander missions. First, Europa is exposed to very harsh radiation, which is a problem for electronics. Because of how Europa orbits Jupiter, the window of visibility—when a potential robotic lander could contact Earth—lasts less than half of the roughly 85 hours it takes the moon to complete its day-night cycle. So, for more than half the mission, the robot would need to fend for itself, with no human ground teams to get it out of trouble. The lander would also need to run on non-rechargeable batteries, because the vast distance to the Sun would make solar panels prohibitively massive.

And that’s just the beginning. Unlike on Mars, we don’t have any permanent orbiters around Europa that could provide a communication infrastructure, and we don’t have high-resolution imagery of the surface, which would make the landing particularly tricky. “We don’t know what Europa’s surface looks like at the centimeter to meter scale. Even with the Europa Clipper imagery, the highest resolution will be about half a meter per pixel across a few select regions,” Hand explains.

Because Europa has an extremely thin atmosphere that provides no insulation, temperatures on top of its ice shell are estimated to range from minus-160° Celsius at the daytime maximum to minus-220° C at night, which means the ice the lander would be there to sample is most likely as hard as concrete. In building their robot, Hand’s team had to come up with a design that could cope with all of these issues.

The work on the robotic system for the Europa lander mission began more than 10 years ago. Back then, the 2013–2022 decadal strategy for planetary science cited the Europa Clipper as the second-highest priority large-scale planetary mission, so a lander seemed like a natural follow-up.

Autonomy and ice drilling

The robot developed by Hand’s team has legs that enable it to stabilize itself on various types of surfaces, from rock-hard ice to loose, soft snow. To orient itself in the environment, it uses a stereoscopic camera with an LED light source for illumination hooked to computer-vision algorithms—a system similar to the one currently used by the Perseverance rover on Mars. “Stereoscopic cameras can triangulate points in an image and build a digital surface topography model,” explains Joseph Bowkett, a JPL researcher and engineer who worked on the robot’s design.
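The depth-from-stereo principle behind such a camera system can be sketched in a few lines. This is an illustrative example only, not mission code; the focal length, camera baseline, and disparity values below are hypothetical.

```python
# Illustrative sketch of stereo triangulation: for a rectified stereo pair,
# a point's depth follows Z = f * B / d, where f is the focal length in
# pixels, B the distance between the two cameras, and d the disparity
# (how far the point shifts between the left and right images).

def triangulate_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth in meters from a rectified stereo pair: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a visible point")
    return focal_px * baseline_m / disparity_px

# Hypothetical numbers: cameras 0.2 m apart, 1000 px focal length,
# and a surface feature shifted 50 px between the two images.
depth = triangulate_depth(focal_px=1000.0, baseline_m=0.2, disparity_px=50.0)
# depth == 4.0 meters
```

Repeating this for many matched points is what lets the system build the digital surface topography model Bowkett describes.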

The team built an entirely new robotic arm with seven degrees of freedom. Force torque sensors installed in most of its joints act a bit like a nervous system, informing the robot when key components sustain excessive loads to prevent it from damaging the arm or the drill. “As we press down on the surface [and] conduct drilling and sampling, we can measure the forces and react accordingly,” Bowkett says. The finishing touch was the ICEPICK, a drilling and sampling tool the robot uses to excavate samples from the ice up to 20 centimeters deep.
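The load-limiting idea behind those force torque sensors can be illustrated with a simple check: compare each joint’s measured torque against a safety threshold and back off before anything is damaged. The joint names, limits, and readings below are invented for illustration, not actual specifications of the JPL arm.

```python
# Hedged sketch of a force-torque safety check: flag any joint whose
# measured load exceeds its configured limit. All values are hypothetical.

JOINT_LIMITS_NM = {"shoulder": 80.0, "elbow": 60.0, "wrist": 30.0}

def overloaded_joints(readings_nm: dict) -> list:
    """Return the joints whose measured torque exceeds the configured limit."""
    return [joint for joint, torque in readings_nm.items()
            if torque > JOINT_LIMITS_NM.get(joint, float("inf"))]

readings = {"shoulder": 42.0, "elbow": 63.5, "wrist": 12.0}
flagged = overloaded_joints(readings)
# flagged == ["elbow"]: the elbow exceeds its 60.0 Nm limit, so the
# controller would ease off the drill before the arm is damaged.
```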

Because of the long periods the lander would need to operate without any human supervision, the team also gave it a wide range of autonomous systems, which operate at two different levels. High-level autonomy is responsible for scheduling and prioritizing tasks within a limited energy budget. The robot can drill into a sampling site, analyze samples with onboard instruments, and decide whether it makes sense to keep drilling at the same spot or choose a different sampling site. The high-level system is also tasked with choosing the most important results for downlink back to Earth.

Low-level autonomy breaks all these high-level tasks down into step-by-step decisions on how to operate the drill and how to move the arm in the safest and most energy-efficient way.
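The high-level scheduling problem described above (pick the most valuable work that fits a fixed energy budget) can be sketched with a simple greedy scheduler. The task names, energy costs, and science values here are made up for illustration; the actual JPL planner is far more sophisticated.

```python
# Hypothetical sketch of energy-budgeted task prioritization: rank tasks by
# expected science value per unit energy, then take what fits the budget.

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    energy_wh: float   # estimated energy cost in watt-hours (invented)
    value: float       # expected science value, arbitrary units (invented)

def schedule(tasks, budget_wh):
    """Greedy plan: best value-per-energy first, skipping tasks that don't fit."""
    plan, remaining = [], budget_wh
    for t in sorted(tasks, key=lambda t: t.value / t.energy_wh, reverse=True):
        if t.energy_wh <= remaining:
            plan.append(t.name)
            remaining -= t.energy_wh
    return plan, remaining

tasks = [
    Task("drill_site_A", 40.0, 8.0),
    Task("image_panorama", 10.0, 4.0),
    Task("analyze_sample", 25.0, 9.0),
    Task("downlink_results", 15.0, 6.0),
]
plan, left = schedule(tasks, budget_wh=60.0)
# With a 60 Wh budget, the expensive drill task is skipped in favor of
# imaging, downlink, and sample analysis, leaving 10 Wh in reserve.
```

The low-level autonomy would then translate each selected task into concrete arm and drill motions.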

The robot was tested in simulation software first, then indoors at JPL’s facilities, and finally at the Matanuska Glacier in Alaska, where it was lowered from a helicopter that acted as a proxy for a landing vehicle. It was tested at three different sites, ranked from easiest to most challenging, and it completed all the baseline activities as well as all of the extras. The latter included tasks like drilling 27 centimeters into the ice at the most difficult site, where it was awkwardly positioned on an eight-to-12-degree slope. The robot passed every test with flying colors.

And then it got shelved.

Switching the ocean worlds

Hand’s team put their Europa landing robot through the Alaskan field test campaign between July and August 2022. But when the new decadal strategy for planetary science came out in 2023, it turned out that the Europa lander was not among the missions selected. The National Academies committee responsible for formulating these decadal strategies did not recommend giving it a go, mainly because they believed harsh radiation in the Jovian system would make detecting biosignatures “challenging” for a lander.

An Enceladus lander, on the other hand, remained firmly on the table. “I was also on the team developing EELS, a robot intended for a potential Enceladus mission, so thankfully I can speak about both. The radiation challenges are indeed far greater for Europa,” Bowkett says.

Another argument for changing our go-to ocean world is that water plumes containing salts along with carbon- and nitrogen-bearing molecules have already been observed on Enceladus, which means there is a slight chance biosignatures could be detected by a flyby mission. The surface of Enceladus, according to the decadal strategy document, should be capable of preserving biogenic evidence for a long time and seems more conducive to a lander mission. “Luckily, many of the lessons on how to conduct autonomous sampling on Europa, we believe, will transfer to Enceladus, with the benefit of a less damaging radiation environment,” Bowkett told Ars.

The dream of a Europa landing is not completely dead, though. “I would love to get into Europa’s ocean with a submersible and further down to the seafloor. I would love for that to happen,” Hand says. “But technologically it’s quite a big leap, and you always have to balance your dream missions with the number of technological miracles that need to be solved to make these missions possible.”

Science Robotics, 2025. DOI: 10.1126/scirobotics.adi5582


Jacek Krywko is a freelance science and technology writer who covers space exploration, artificial intelligence research, computer science, and all sorts of engineering wizardry.

Testing a robot that could drill into Europa and Enceladus Read More »