Author name: Paul Patrick

ChatGPT users shocked to learn their chats were in Google search results

Faced with mounting backlash, OpenAI removed a controversial ChatGPT feature that caused some users to unintentionally allow their private—and highly personal—chats to appear in search results.

Fast Company exposed the privacy issue on Wednesday, reporting that thousands of ChatGPT conversations were found in Google search results and likely only represented a sample of chats “visible to millions.” While the indexing did not include identifying information about the ChatGPT users, some of their chats did share personal details—like highly specific descriptions of interpersonal relationships with friends and family members—perhaps making it possible to identify them, Fast Company found.

OpenAI’s chief information security officer, Dane Stuckey, explained on X that all users whose chats were exposed opted in to indexing their chats by clicking a box after choosing to share a chat.

Fast Company noted that users often share chats on WhatsApp or select the option to save a link to visit the chat later. But as Fast Company explained, users may have been misled into sharing chats due to how the text was formatted:

“When users clicked ‘Share,’ they were presented with an option to tick a box labeled ‘Make this chat discoverable.’ Beneath that, in smaller, lighter text, was a caveat explaining that the chat could then appear in search engine results.”

At first, OpenAI defended the labeling as “sufficiently clear,” Fast Company reported Thursday. But Stuckey confirmed that “ultimately,” the AI company decided that the feature “introduced too many opportunities for folks to accidentally share things they didn’t intend to.” According to Fast Company, that included chats about their drug use, sex lives, mental health, and traumatic experiences.

Carissa Veliz, an AI ethicist at the University of Oxford, told Fast Company she was “shocked” that Google was logging “these extremely sensitive conversations.”

OpenAI promises to remove Google search results

Stuckey called the feature a “short-lived experiment” that OpenAI launched “to help people discover useful conversations.” He confirmed that the decision to remove the feature also included an effort to “remove indexed content from the relevant search engine” through Friday morning.

Citing “market conditions,” Nintendo hikes prices of original Switch consoles

Slowed tech progress, inflation, and global trade wars are doing a number on game console pricing this year, and the bad news keeps coming. Nintendo delayed preorders of the Switch 2 in the US and increased accessory prices, and Microsoft gave its Series S and X consoles across-the-board price hikes in May.

Today, Nintendo is back for more, increasing prices on the original Switch hardware, as well as some Amiibo, the Alarmo clock, and some Switch and Switch 2 accessories. The price increases will formally take effect on August 3.

The company says that there are currently no price increases coming for the Switch 2 console, Nintendo Online memberships, and physical and digital Switch 2 games. But it didn’t take future price increases off the table, noting that “price adjustments may be necessary in the future.”

Nintendo didn’t announce how large the price increases would be, but some retailers were already listing higher prices as of Friday. Target now lists the Switch Lite for $229.99, up from $199.99; the original Switch for $339.99, up from $299.99; and the OLED model of the Switch for a whopping $399.99, up from $349.99 and just $50 less than the price of the much more powerful Switch 2 console.
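Taken together, the retailer-listed bumps work out to a fairly uniform increase of roughly 13–15 percent across the line. A quick sketch of that arithmetic, using the Target prices reported above:

```python
# Old vs. new Switch prices as listed by Target (per the article).
prices = {
    "Switch Lite": (199.99, 229.99),
    "Switch": (299.99, 339.99),
    "Switch OLED": (349.99, 399.99),
}

for model, (old, new) in prices.items():
    pct = (new - old) / old * 100  # percentage increase over the old price
    print(f"{model}: +${new - old:.2f} ({pct:.1f}%)")
```

This prints increases of $30 (15.0%), $40 (13.3%), and $50 (14.3%) respectively, so the OLED model saw the largest dollar hike but the Lite the largest proportional one.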

The military’s squad of satellite trackers is now routinely going on alert


“I hope this blows your mind because it blows my mind.”

A Long March 3B rocket carrying a new Chinese Beidou navigation satellite lifts off from the Xichang Satellite Launch Center on May 17, 2023. Credit: VCG/VCG via Getty Images

This is Part 2 of our interview with Col. Raj Agrawal, the former commander of the Space Force’s Space Mission Delta 2.

If it seems like there’s a satellite launch almost every day, the numbers will back you up.

The US Space Force’s Mission Delta 2 is a unit that reports to Space Operations Command, with the job of sorting out the nearly 50,000 trackable objects humans have launched into orbit.

Dozens of satellites are being launched each week, primarily by SpaceX to continue deploying the Starlink broadband network. The US military has advance notice of these launches—most of them originate from Space Force property—and knows exactly where they’re going and what they’re doing.

That’s usually not the case when China or Russia (and occasionally Iran or North Korea) launches something into orbit. With rare exceptions, like human spaceflight missions, Chinese and Russian officials don’t publish any specifics about what their rockets are carrying or what altitude they’re going to.

That creates a problem for military operators tasked with monitoring traffic in orbit and breeds anxiety among US forces responsible for making sure potential adversaries don’t gain an edge in space. Will this launch deploy something that can destroy or disable a US satellite? Will this new satellite have a new capability to surveil allied forces on the ground or at sea?

Of course, this is precisely the point of keeping launch details under wraps. The US government doesn’t publish orbital data on its most sensitive satellites, such as spy craft collecting intelligence on foreign governments.

But you can’t hide in low-Earth orbit, a region extending hundreds of miles into space. Col. Raj Agrawal, who commanded Mission Delta 2 until earlier this month, knows this all too well. Agrawal handed over command to Col. Barry Croker as planned after a two-year tour of duty at Mission Delta 2.

Col. Raj Agrawal, then-Mission Delta 2 commander, delivers remarks to audience members during the Mission Delta 2 redesignation ceremony in Colorado Springs, Colorado, on October 31, 2024. Credit: US Space Force

Some space enthusiasts have made a hobby of tracking US and foreign military satellites as they fly overhead, stringing together a series of observations over time to create fairly precise estimates of an object’s altitude and inclination.

Commercial companies are also getting in on the game of space domain awareness. But most are based in the United States or allied nations and have close partnerships with the US government. Therefore, they only release information on satellites owned by China and Russia. This is how Ars learned of interesting maneuvers underway with a Chinese refueling satellite and suspected Russian satellite killers.

Theoretically, there’s nothing to stop a Chinese company, for example, from taking a similar tack on revealing classified maneuvers conducted by US military satellites.

The Space Force has an array of sensors scattered around the world to detect and track satellites and space debris. The 18th and 19th Space Defense Squadrons, which were both under Agrawal’s command at Mission Delta 2, are the units responsible for this work.

Preparing for the worst

One of the most dynamic times in the life of a Space Force satellite tracker is when China or Russia launches something new, according to Agrawal. His command pulls together open source information, such as airspace and maritime warning notices, to know when a launch might be scheduled.

This is not unlike how outside observers, like hobbyist trackers and space reporters, get a heads-up that something is about to happen. These notices tell you when a launch might occur, where it will take off from, and which direction it will go. What’s different for the Space Force is access to top-secret intelligence that might clue military officials in on what the rocket is actually carrying. China, in particular, often declares that its satellites are experimental, when Western analysts believe they are designed to support military activities.

That’s when US forces swing into action. Sometimes, military forces go on alert. Commanders develop plans to detect, track, and target the objects associated with a new launch, just in case they are “hostile,” Agrawal said.

We asked Agrawal to take us through the process his team uses to prepare for and respond to one of these unannounced, or “non-cooperative,” launches. This portion of our interview is published below, lightly edited for brevity and clarity.

Ars: Let’s say there’s a Russian or Chinese launch. How do you find out there’s a launch coming? Do you watch for NOTAMs (Notices to Airmen), like I do, and try to go from there?

Agrawal: I think the conversation starts the same way that it probably starts with you and any other technology-interested American. We begin with what’s available. We certainly have insight through intelligence means to be able to get ahead of some of that, but we’re using a lot of the same sources to refine our understanding of what may happen, and then we have access to other intel.

The good thing is that the Space Force is a part of the Intelligence Community. We’re plugged into an entire Intelligence Community focused on anything that might be of national security interest. So we’re able to get ahead. Maybe we can narrow down NOTAMs; maybe we can anticipate behavior. Maybe we have other activities going on in other domains or on the Internet, the cyber domain, and so on, that begin to tip off activity.

Certainly, we’ve begun to understand patterns of behavior. But no matter what, it’s not the same level of understanding as those who just cooperate and work together as allies and friends. And if there’s a launch that does occur, we’re not communicating with that launch control center. We’re certainly not communicating with the folks that are determining whether or not the launch will be safe, if it’ll be nominal, how many payloads are going to deploy, where they’re going to deploy to.

I certainly understand why a nation might feel that they want to protect that. But when you’re fielding into LEO [low-Earth orbit] in particular, you’re not really going to hide there. You’re really just creating uncertainty, and now we’re having to deal with that uncertainty. We eventually know where everything is, but in that meantime, you’re creating a lot of risk for all the other nations and organizations that have fielded capability in LEO as well.

Find, fix, track, target

Ars: Can you take me through what it’s like for you and your team during one of these launches? When one comes to your attention, through a NOTAM or something else, how do you prepare for it? What are you looking for as you get ready for it? How often are you surprised by something with one of these launches?

Agrawal: Those are good questions. Some of it, I’ll be more philosophical on, and others I can be specific on. But on a routine basis, our formation is briefed on all of the launches we’re aware of, to varying degrees, with the varying levels of confidence, and at what classifications have we derived that information.

In fact, we also have a weekly briefing where we go into depth on how we have planned against some of what we believe to be potentially higher threats. How many organizations are involved in that mission plan? Those mission plans are done at a very tactical level by captains and NCOs [non-commissioned officers] that are part of the combat squadrons that are most often presented to US Space Command…

That integrated mission planning involves not just Mission Delta 2 forces but also presented forces by our intelligence delta [Space Force units are called deltas], by our missile warning and missile tracking delta, by our SATCOM [satellite communications] delta, and so on—from what we think is on the launch pad, what we think might be deployed, what those capabilities are. But also what might be held at risk as a result of those deployments, not just in terms of maneuver but also what might these even experimental—advertised “experimental”—capabilities be capable of, and what harm might be caused, and how do we mission-plan against those potential unprofessional or hostile behaviors?

As you can imagine, that’s a very sophisticated mission plan for some of these launches based on what we know about them. Certainly, I can’t, in this environment, confirm or deny any of the specific launches… because I get access to more fidelity and more confidence on those launches, the timing and what’s on them, but the precursor for the vast majority of all these launches is that mission plan.

That happens at a very tactical level. That is now posturing the force. And it’s a joint force. It’s not just us, Space Force forces, but it’s other services’ capabilities as well that are posturing to respond to that. And the truth is that we even have partners, other nations, other agencies, intel agencies, that have capability that have now postured against some of these launches to now be committed to understanding, did we anticipate this properly? Did we not?

And then, what are our branch plans in case it behaves in a way that we didn’t anticipate? How do we react to it? What do we need to task, posture, notify, and so on to then get observations, find, fix, track, target? So we’re fulfilling the preponderance of what we call the kill chain, for what we consider to be a non-cooperative launch, with a hope that it behaves peacefully but anticipating that it’ll behave in a way that’s unprofessional or hostile… We have multiple chat rooms at multiple classifications that are communicating in terms of “All right, is it launching the way we expected it to, or did it deviate? If it deviated, whose forces are now at risk as a result of that?”

A spectator takes photos before the launch of the Long March 7A rocket carrying the ChinaSat 3B satellite from the Wenchang Space Launch Site in China on May 20, 2025. Credit: Meng Zhongde/VCG via Getty Images

Now, we even have down to the fidelity of what forces on the ground or on the ocean may not have capability… because of maneuvers or protective measures that the US Space Force has to take in order to deviate from its mission because of that behavior. The conversation, the way it was five years ago and the way it is today, is very, very different in terms of just a launch because now that launch, in many cases, is presenting a risk to the joint force.

We’re acting like a joint force. So that Marine, that sailor, that special operator on the ground who was expecting that capability now is notified in advance of losing that capability, and we have measures in place to mitigate those outages. And if not, then we let them know that “Hey, you’re not going to have the space capability for some period of time. We’ll let you know when we’re back. You have to go back to legacy operations for some period of time until we’re back into nominal configuration.”

I hope this blows your mind because it blows my mind in the way that we now do even just launch processing. It’s very different than what we used to do.

Ars: So you’re communicating as a team in advance of a launch and communicating down to the tactical level, saying that this launch is happening, this is what it may be doing, so watch out?

Agrawal: Yeah. It’s not as simple as a ballistic missile warning attack, where it’s duck and cover. Now, it’s “Hey, we’ve anticipated the things that could occur that could affect your ability to do your mission as a result of this particular launch with its expected payload, and what we believe it may do.” So it’s not just a general warning. It’s a very scoped warning.

As that launch continues, we’re able to then communicate more specifically on which forces may lose what, at what time, and for how long. And it’s getting better and better as the rest of the US Space Force, as they present capability trained to that level of understanding as well… We train this together. We operate together and we communicate together so that the tactical user—sometimes it’s us at US Space Force, but many times it’s somebody on the surface of the Earth that has to understand how their environment, their capability, has changed as a result of what’s happening in, to, and from space.

Ars: The types of launches where you don’t know exactly what’s coming are getting more common now. Is it normal for you to be on this alert posture for all of the launches out of China or Russia?

Agrawal: Yeah. You see it now. The launch manifest is just ridiculous, never mind the ones we know about. The ones that we have to reach out into the intelligence world and learn about, that’s getting ridiculous, too. We don’t have to have this whole machine postured this way for cooperative launches. So the amount of energy we’re expending for a non-cooperative launch is immense. We can do it. We can keep doing it, but you’re just putting us on alert… and you’re putting us in a position where we’re getting ready for bad behavior with the entire general force, as opposed to a cooperative launch, where we can anticipate. If there’s an anomaly, we can anticipate those and work through them. But we’re working through it with friends, and we’re communicating.

We’re not having to put tactical warfighters on alert every time … but for those payloads that we have more concern about. But still, it’s a very different approach, and that’s why we are actively working with as many nations as possible in Mission Delta 2 to get folks to sign on with Space Command’s space situational awareness sharing agreements, to go at space operations as friends, as allies, as partners, working together. So that way, we’re not posturing for something higher-end as a result of the launch, but we’re doing this together. So, with every nation we can, we’re getting out there—South America, Africa, every nation that will meet with us, we want to meet with them and help them get on the path with US Space Command to share data, to work as friends, and use space responsibly.

A Long March 3B carrier rocket carrying the Shijian 21 satellite lifts off from the Xichang Satellite Launch Center on October 24, 2021. Credit: Li Jieyi/VCG via Getty Images

Ars: How long does it take you to sort out and get a track on all of the objects for an uncooperative launch?

Agrawal: That question is a tough one to answer. We can move very, very quickly, but there are times when we have made a determination of what we think something is, what it is and where it’s going, and intent; there might be some lag to get it into a public catalog due to a number of factors, to include decisions being made by combatant commanders, because, again, our primary objective is not the public-facing catalog. The primary objective is, do we have a risk or not?

If we have a risk, let’s understand, let’s figure out to what degree do we think we have to manage this within the Department of Defense. And to what degree do we believe, “Oh, no, this can go in the public catalog. This is a predictable elset (element set)”? What we focus on with (the public catalog) are things that help with predictability, with spaceflight safety, with security, spaceflight security. So you sometimes might see a lag there, but that’s because we’re wrestling with the security aspect of the degree to which we need to manage this internally before we believe it’s predictable. But once we believe it’s predictable, we put it in the catalog, and we put it on space-track.org. There’s some nuance in there that isn’t relative to technology or process but more on national security.

On the flip side, what used to take hours and days is now getting down to seconds and minutes. We’ve overhauled—not 100 percent, but to a large degree—and got high-speed satellite communications from sensors to the centers of SDA (Space Domain Awareness) processing. We’re getting higher-end processing. We’re now duplicating the ability to process, duplicating that capability across multiple units. So what used to just be human labor intensive, and also kind of dial-up speed of transmission, we’ve now gone to high-speed transport. You’re seeing a lot of innovation occur, and a lot of data fusion occur, that’s getting us to seconds and minutes.

Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.

The Week in AI Governance

There was enough governance-related news this week to spin it out.

Anthropic, Google, OpenAI, Mistral, Aleph Alpha, Cohere and others commit to signing the EU AI Code of Practice. Google has now signed. Microsoft says it is likely to sign.

xAI signed the AI safety chapter of the code, but is refusing to sign the others, citing them as overreach, especially as it pertains to copyright.

The only company that said it would not sign at all is Meta.

This was the underreported story. All the important AI companies other than Meta have gotten behind the safety section of the EU AI Code of Practice. This represents a considerable strengthening of their commitments, and introduces an enforcement mechanism. Even Anthropic will be forced to step up parts of their game.

That leaves Meta as the rogue state defector that once again gives zero anythings about safety, as in whether we all die, and also safety in its more mundane forms. Lol, we are Meta, indeed. So the question is, what are we going to do about it?

xAI took a middle position. I see the safety chapter as by far the most important, so as long as xAI is signing that and taking it seriously, great. Refusing the other parts is a strange flex, and I don’t know exactly what their problem is since they didn’t explain. They simply called it ‘unworkable,’ which is odd when Google, OpenAI and Anthropic all declared they found it workable.

Then again, xAI finds a lot of things unworkable. Could be a skill issue.

This is a sleeper development that could end up being a big deal. When I say ‘against regulations’ I do not mean against AI regulations. I mean against all ‘regulations’ in general, no matter what, straight up.

From the folks who brought you ‘figure out who we technically have the ability to fire and then fire all of them, and if something breaks maybe hire them back, this is the Elon way, no seriously’ and also ‘whoops we misread something so we cancelled PEPFAR and a whole lot of people are going to die,’ Doge is proud to give you ‘if a regulation is not technically required by law it must be an unbridled bad thing we can therefore remove, I wonder why they put up this fence.’

Hannah Natanson, Jeff Stein, Dan Diamond and Rachel Siegel (WaPo): The tool, called the “DOGE AI Deregulation Decision Tool,” is supposed to analyze roughly 200,000 federal regulations to determine which can be eliminated because they are no longer required by law, according to a PowerPoint presentation obtained by The Post that is dated July 1 and outlines DOGE’s plans.

Roughly 100,000 of those rules would be deemed worthy of trimming, the PowerPoint estimates — mostly through the automated tool with some staff feedback. The PowerPoint also suggests the AI tool will save the United States trillions of dollars by reducing compliance requirements, slashing the federal budget and unlocking unspecified “external investment.”

The conflation here is absolute. There are two categories of regulations: The half ‘required by law,’ and the half ‘worthy of trimming.’ Think of the trillions you can save.

They then try to hedge and claim that’s not how it is going to work.

Asked about the AI-fueled deregulation, White House spokesman Harrison Fields wrote in an email that “all options are being explored” to achieve the president’s goal of deregulating government.

No decisions have been completed on using AI to slash regulations, a HUD spokesperson said.

The spokesperson continued: “The intent of the developments is not to replace the judgment, discretion and expertise of staff but be additive to the process.”

That would be nice. I’m far more ‘we would be better off with a lot less regulations’ than most. I think it’s great to have an AI tool that splits off the half we can consider cutting from the half we are stuck with. I still think that ‘cut everything that a judge wouldn’t outright reverse if you tried cutting it’ is not a good strategy.

I find the ‘no we will totally consider whether this is a good idea’ talk rather hollow, both because of track record and also they keep telling us what the plan is?

“The White House wants us higher on the leader board,” said one of the three people. “But you have to have staff and time to write the deregulatory notices, and we don’t. That’s a big reason for the holdup.”

That’s where the AI tool comes in, the PowerPoint proposes. The tool will save 93 percent of the human labor involved by reviewing up to 500,000 comments submitted by the public in response to proposed rule changes. By the end of the deregulation exercise, humans will have spent just a few hours to cancel each of the 100,000 regulations, the PowerPoint claims.

They then close by pointing out that the AI makes mistakes even on the technical level it is addressing. Well, yeah.

Also, welcome to the future of journalism:

China has its own AI Action Plan and is calling for international cooperation on AI. Wait, what do they mean by that? If you look in the press, that depends who you ask. All the news organizations will be like ‘the Chinese released an AI Action Plan’ and then not link to the actual plan, I had to have o3 dig it up.

Here’s o3’s translation of the actual text. This is almost all general gestures in the direction of capabilities, diffusion, infrastructure and calls for open models. It definitely is not an AI Action Plan in the sense that America offered an AI Action Plan, which had lots of specific actionable proposals. This is more of a general outline of a plan and statement of goals, at best. At least it doesn’t talk about or call for a ‘race,’ but a call for everything to be open and accelerated is not obviously better.

  • Seize AI opportunities together. Governments, international organizations, businesses, research institutes, civil groups, and individuals should actively cooperate, accelerate digital‑infrastructure build‑out, explore frontier AI technologies, and spread AI applications worldwide, fully unlocking AI’s power to drive growth, achieve the UN‑2030 goals, and tackle global challenges.

  • Foster AI‑driven innovation. Uphold openness and sharing, encourage bold experimentation, build international S‑and‑T cooperation platforms, harmonize policy and regulation, and remove technical barriers to spur continuous breakthroughs and deep “AI +” applications.

  • Empower every sector. Deploy AI across manufacturing, consumer services, commerce, healthcare, education, agriculture, poverty reduction, autonomous driving, smart cities, and more; share infrastructure and best practices to supercharge the real economy.

  • Accelerate digital infrastructure. Expand clean‑energy grids, next‑gen networks, intelligent compute, and data centers; create interoperable AI infrastructure and unified compute‑power standards; support especially the Global South in accessing and applying AI.

  • Build a pluralistic open‑source ecosystem. Promote cross‑border open‑source communities and secure platforms, open technical resources and interfaces, improve compatibility, and let non‑sensitive tech flow freely.

  • Supply high‑quality data. Enable lawful, orderly, cross‑border data flows; co‑create top‑tier datasets while safeguarding privacy, boosting corpus diversity, and eliminating bias to protect cultural and ecosystem diversity.

  • Tackle energy and environmental impacts. Champion “sustainable AI,” set AI energy‑ and water‑efficiency standards, promote low‑power chips and efficient algorithms, and scale AI solutions for green transition, climate action, and biodiversity.

  • Forge standards and norms. Through ITU, ISO, IEC, and industry, speed up standards on safety, industry, and ethics; fight algorithmic bias and keep standards inclusive and interoperable.

  • Lead with public‑sector adoption. Governments should pioneer reliable AI in public services (health, education, transport), run regular safety audits, respect IP, enforce privacy, and explore lawful data‑trading mechanisms to upgrade governance.

  • Govern AI safety. Run timely risk assessments, create a widely accepted safety framework, adopt graded management, share threat intelligence, tighten data‑security across the pipeline, raise explainability and traceability, and prevent misuse.

  • Implement the Global Digital Compact. Use the UN as the main channel, aim to close the digital divide—especially for the Global South—and quickly launch an International AI Scientific Panel and a Global AI Governance Dialogue under UN auspices.

  • Boost global capacity‑building. Through joint labs, shared testing, training, industry matchmaking, and high‑quality datasets, help developing countries enhance AI innovation, application, and governance while improving public AI literacy, especially for women and children.

  • Create inclusive, multi‑stakeholder governance. Establish public‑interest platforms involving all actors; let AI firms share use‑case lessons; support think tanks and forums in sustaining global technical‑policy dialogue among researchers, developers, and regulators.

What does it have to say about safety or dealing with downsides? We have ‘forge standards and norms’ with a generic call for safety and ethics standards, which seems to mostly be about interoperability and ‘bias.’

Mainly we have ‘Govern AI safety,’ which is directionally nice to see I guess but essentially content free and shows no sign that the problems are being taken seriously on the levels we care about. Most concretely, in the ninth point, we have a call for regular safety audits of AI models. That all sounds like ‘the least you could do.’

Here’s one interpretation of the statement:

Brenda Goh (Reuters): China said on Saturday it wanted to create an organisation to foster global cooperation on artificial intelligence, positioning itself as an alternative to the U.S. as the two vie for influence over the transformative technology.

Li did not name the United States but appeared to refer to Washington’s efforts to stymie China’s advances in AI, warning that the technology risked becoming the “exclusive game” of a few countries and companies.

China wants AI to be openly shared and for all countries and companies to have equal rights to use it, Li said, adding that Beijing was willing to share its development experience and products with other countries, particularly the “Global South”. The Global South refers to developing, emerging or lower-income countries, mostly in the southern hemisphere.

The foreign ministry released online an action plan for global AI governance, inviting governments, international organisations, enterprises and research institutions to work together and promote international exchanges including through a cross-border open source community.

As in, we notice you are ahead in AI, and that’s not fair. You should do everything in the open so you let us catch up in all the ways you are ahead, so we can bury you using the ways in which you are behind. That’s not an unreasonable interpretation.

Here’s another.

The Guardian: Chinese premier Li Qiang has proposed establishing an organisation to foster global cooperation on artificial intelligence, calling on countries to coordinate on the development and security of the fast-evolving technology, days after the US unveiled plans to deregulate the industry.

Li warned Saturday that artificial intelligence development must be weighed against the security risks, saying global consensus was urgently needed.

“The risks and challenges brought by artificial intelligence have drawn widespread attention … How to find a balance between development and security urgently requires further consensus from the entire society,” the premier said.

Li said China would “actively promote” the development of open-source AI, adding Beijing was willing to share advances with other countries, particularly developing ones in the global south.

So that’s a call to keep security in mind, but every concrete reference is mundane and deals with misuse, and then they call for putting everything out into the open, with the main highlighted ‘risk’ to coordinate on being that America might get an advantage, and encouraging us to give it away via open models to ‘safeguard multilateralism.’

A third here, from the Japan Times, frames it as a call for an alliance to take aim at an American AI monopoly.

Director Michael Kratsios: China’s just-released AI Action Plan has a section that drives at a fundamental difference between our approaches to AI: whether the public or private sector should lead in AI innovation.

I like America’s odds of success.

He quotes point nine, which his translation has as ‘the public sector takes the lead in deploying applications.’ Whereas o3’s translation says ‘governments should pioneer reliable AI in public services (health, education, transport), run regular safety audits, respect IP, enforce privacy, and explore lawful data‑trading mechanisms to upgrade governance.’

Even in Michael’s preferred translation, this is saying government should aggressively deploy AI applications to improve government services. The American AI Action Plan, correctly, fully agrees with this. Nothing in the Chinese statement says to hold the private sector back. Quite the contrary.

The actual disagreement we have with point nine is the rest of it, where the Chinese think we should run regular safety audits, respect IP and enforce privacy. Those are not parts of the American AI Action Plan. Do you think we were right not to include those provisions, sir? If so, why?

Suppose in the future, we learned we were in a lot more danger than we think we are in now, and we did want to make a deal with China and others. Right now the two sides would be very far apart but circumstances could quickly change that.

Could we do it in a way that could be verified?

It wouldn’t be easy, but we do have tools.

This is the sort of thing we should absolutely be preparing to be able to do, whether or not we ultimately decide to do it.

Mauricio Baker: For the last year, my team produced the most technically detailed overview so far. Our RAND working paper finds: strong verification is possible—but we need ML and hardware research.

You can find the paper here and on arXiv. It includes a 5-page summary and a list of open challenges.

In the Cold War, the US and USSR used inspections and satellites to verify nuclear weapon limits. If future, powerful AI threatens to escape control or endanger national security, the US and China would both be better off with guardrails.

It’s a tough challenge:

– Verify narrow restrictions, like “no frontier AI training past some capability,” or “no mass-deploying if tests show unacceptable danger”

– Catch major state efforts to cheat

– Preserve confidentiality of models, data, and algorithms

– Keep overhead low

Still, reasons for optimism:

– No need to monitor all computers—frontier AI needs thousands of specialized AI chips.

– We can build redundant layers of verification. A cheater only needs to be caught once.

– We can draw from great work in cryptography and ML/hardware security.

One approach is to use existing chip security features like Confidential Computing, built to securely verify chip activities. But we’d need serious design vetting, teardowns, and maybe redesigns before the US could strongly trust Huawei’s chip security (or frankly, NVIDIA’s).

“Off-chip” mechanisms could be reliable sooner: network taps or analog sensors (vetted, limited use, tamper evident) retrofitted onto AI data centers. Then, mutually secured, airgapped clusters could check if claimed compute uses are reproducible and consistent with sensor data.

Add approaches “simple enough to work”: whistleblower programs, interviews of personnel, and intelligence activities. Whistleblower programs could involve regular in-person contact—carefully set up so employees can anonymously reveal violations, but not much more.

We could have an arsenal of tried-and-tested methods to confidentially verify a US-China AI treaty. But at the current pace, in three years, we’ll just have a few speculative options. We need ML and hardware researchers, new RFPs by funders, and AI company pilot programs.

Jeffrey Ladish: Love seeing this kind of in-depth work on AI treaty verification. A key fact is verification doesn’t have to be bullet proof to be useful. We can ratchet up increasingly robust technical solutions while using other forms of HUMINT and SIGINT to provide some level of assurance.

Remember, the AI race is a mixed-motive conflict, per Schelling. Both sides have an incentive to seek an advantage, but also have an incentive to avoid mutually awful outcomes. Like with nuclear war, everyone loses if any side loses control of superhuman AI.

This makes coordination easier, because even if both sides don’t like or trust each other, they have an incentive to cooperate to avoid extremely bad outcomes.

It may turn out that even with real efforts there are not good technical solutions. But I think it is far more likely that we don’t find the technical solutions due to lack of trying, rather than that the problem is so hard that it cannot be done.
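The “off-chip” verification idea described above, checking whether a site’s declared compute use is consistent with tamper-evident sensor data, can be illustrated with a toy sketch. This is purely a hypothetical illustration of the consistency-check concept, not any real monitoring scheme from the RAND paper; all names, numbers, and tolerances here are made up.

```python
# Toy consistency check: does metered energy at a data center match the
# energy its declared training runs should have consumed? A large excess
# could indicate undeclared workloads. Everything here is illustrative.

from dataclasses import dataclass

@dataclass
class DeclaredRun:
    chips: int          # number of accelerators in the declared run
    hours: float        # duration of the run
    avg_watts: float    # declared average per-chip power draw

def expected_energy_kwh(runs):
    """Energy (kWh) the declared workloads should have consumed."""
    return sum(r.chips * r.hours * r.avg_watts for r in runs) / 1000.0

def flag_discrepancy(runs, metered_kwh, tolerance=0.10):
    """Flag a site if tamper-evident meters report materially more energy
    than the declared runs account for. Returns (flagged, excess_ratio)."""
    expected = expected_energy_kwh(runs)
    if expected == 0:
        return metered_kwh > 0, float("inf") if metered_kwh > 0 else 0.0
    excess = (metered_kwh - expected) / expected
    return excess > tolerance, excess

# Hypothetical example: 1,000 chips for a month at 700 W each
declared = [DeclaredRun(chips=1000, hours=720, avg_watts=700)]
flagged, excess = flag_discrepancy(declared, metered_kwh=600_000)
# 504,000 kWh declared vs 600,000 kWh metered -> ~19% unexplained, flagged
```

A real scheme would of course face the hard problems the paper lists: trusting the sensors, handling inference versus training loads, and keeping the declared data confidential.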

The reaction to the AI Action Plan was almost universally positive, including here from Nvidia and AMD. My own review, focused on the concrete proposals within, also reflected this. It far exceeded my expectations on essentially all fronts, so much so that I would be actively happy to see most of its proposals implemented rather than see nothing done.

I and others focused on the concrete policy, and especially concrete policy relative to expectations and what was possible in context, for which it gets high praise.

But a document like this might have a lot of its impact due to the rhetoric instead, even if it lacks legal force, or cause people to endorse the approach as ideal in absolute terms rather than being the best that could be done at the time.

So, for example, the actual proposals for open models were almost reasonable, but if the takeaway is lots more ‘yay open models’ rhetoric, as in this WSJ editorial, whose central theme is very clearly ‘we must beat China, nothing else matters, this plan helps beat China, so the plan is good,’ then that’s really bad.

Another important example: Nothing in the policy proposals here makes future international cooperation harder. The rhetoric? A completely different story.

The same WSJ article also noticed the same obvious contradictions with other Trump policies that I did: throttling renewable energy, high-skilled immigration and even visas is incompatible with our goals here, and the focus on ‘woke AI’ could have been much worse but remains a distraction. I would also add: what is up with massive cuts to STEM research if we are taking this seriously? If we are serious about winning and worry that one false move would ‘forfeit the race,’ then we need to act like it.

Of course, none of that is up to the people who were writing the AI Action Plan.

What the WSJ editorial board didn’t notice, or mention at all, is the possibility that there are other risks or downsides at play here, and it dismisses outright the possibility of any form of coordination or cooperation. That’s a very wrong, dangerous and harmful attitude, one it shares with many in or lobbying the government.

A worry I have on reflection, that I wasn’t focusing on at the time, is that officials and others might treat the endorsements of the good policy proposals here as an endorsement of the overall plan presented by the rhetoric, especially the rhetoric at the top of the plan, or of the plan’s sufficiency and that it is okay to ignore and not speak about what the plan ignores and does not speak about.

That rhetoric was alarmingly (but unsurprisingly) terrible, as it is the general administration plan of emphasizing whenever possible that we are in an ‘AI race’ that will likely go straight to AGI and superintelligence even if those words couldn’t themselves be used in the plan, where ‘winning’ is measured in the mostly irrelevant ‘market share.’

And indeed, the inability to mention AGI or superintelligence in the plan leads to exactly the standard David Sacks lines that toxically center the situation on ‘winning the race’ by ‘exporting the American tech stack.’

I will keep repeating, if necessary until I am blue in the face, that this is effectively a call (whose motivations I do not care to speculate about) to sacrifice the future and get us all killed in order to maximize Nvidia’s market share.

There is no ‘tech stack’ in the meaningful sense of necessary integration. You can run most any AI model on most any advanced chip, and switch on an hour’s notice.

It does not matter who built the chips. It matters who runs the chips and for whose benefit. Supply is constrained by manufacturing capacity, so every chip we sell is one less chip we have. The idea that failure to hand over large percentages of the top AI chips to various authoritarians, or even selling H20s directly to China as they currently plan to do, would ‘forfeit’ ‘the race’ is beyond absurd.

Indeed, both the rhetoric and actions discussed here do the exact opposite. They put pressure on others, especially China, to push harder in ‘the race,’ including the part that counts, the race to AGI, and also the race for diffusion and AI’s benefits. And the chips we sell arm China and others to do this important racing.

There is later talk acknowledging that ‘we do not intend to ignore the risks of this revolutionary technological power.’ But Sacks frames this as entirely about the risk that AI will be misused or stolen by malicious actors. Which is certainly a danger, but far from the primary thing to worry about.

That’s what happens when you are forced to pretend AGI, ASI, potential loss of control and all other existential risks do not exist as possibilities. The good news is that there are some steps in the actual concrete plan to start preparing for those problems, even if they are insufficient and it can’t be explained, but it’s a rough path trying to sustain even that level of responsibility under this kind of rhetorical oppression.

The vibes and rhetoric were accelerationist throughout, especially at the top, and completely ignored the risks and downsides of AI, and the dangers of embracing a rhetoric based on an ‘AI race’ that we ‘must win,’ and where that winning mostly means chip market share. Going down this path is quite likely to get us all killed.

I am happy to make the trade of allowing the rhetoric to be optimistic, and to present the Glorious Transhumanist Future as likely to be great even as we have no idea how to stay alive and in control while getting there, so long as we can still agree to take the actions we need to take in order to tackle that staying alive and in control bit – again, the actions are mostly the same even if you are highly optimistic that it will work out.

But if you dismiss the important dangers entirely, then your chances get much worse.

So I want to be very clear that I hate that rhetoric, I think it is no good, very bad rhetoric both in terms of what is present and what (often with good local reasons) is missing, while reiterating that the concrete particular policy proposals were as good as we could reasonably have hoped for on the margin, and the authors did as well as they could plausibly have done with people like Sacks acting as veto points.

That includes the actions on ‘preventing Woke AI,’ which have convinced even Sacks to frame this as preventing companies from intentionally building DEI into their models. That’s fine, I wouldn’t want that either.

Even outlets like Transformer weighed in positively, calling the plan ‘surprisingly okay’ and noting its ability to get consensus support, while ignoring the rhetoric. They correctly note the plan is very much not adequate. It was a missed opportunity to talk about or do something about various risks (although I understand why), and there was much that could have been done that wasn’t.

Seán Ó hÉigeartaigh: Crazy to reflect on the three global AI competitions going on right now:

– 1. US political leadership have made AI a prestige race, echoing the Space Race. It’s cool and important and strategic, and they’re going to Win.

– 2. For Chinese leadership AI is part of economic strength, soft power and influence. Technology is shared, developing economies will be built on Chinese fundamental tech, the Chinese economy and trade relations will grow. Weakening trust in a capricious US is an easy opportunity to take advantage of.

– 3. The AGI companies are racing something they think will out-think humans across the board, that they don’t yet know how to control, and think might literally kill everyone.

Scariest of all is that it’s not at all clear to decision-makers that these three things are happening in parallel. They think they’re playing the same game, but they’re not.

I would modify the US political leadership position. I think to a lot of them it’s literally about market share, primarily chip market share. I believe this because they keep saying, with great vigor, that it is literally about chip market share. But yes, they think this matters because of prestige, and because this is how you get power.

My guess is, mostly:

  1. The AGI companies understand these are three distinct things.

    1. They are using the confusions of political leadership for their own ends.

  2. The Chinese understand there are two distinct things, but not three.

    1. As in, they know what US leadership is doing, and they know what they are doing, and they know these are distinct things.

    2. They do not feel the AGI and understand its implications.

  3. The bulk of the American political class cannot differentiate between the US and Chinese strategies, or strategic positions, or chooses to pretend not to, cannot imagine things other than ordinary prestige, power and money, and cannot feel the AGI.

    1. There are those within the power structure who do feel the AGI, to varying extents, and are trying to sculpt actions (including the action plan) accordingly with mixed success.

    2. An increasing number of them, although still small, do feel the AGI to varying extents but have yet to cash that out into anything except ‘oh ’.

  4. There is of course a fourth race or competition, which is to figure out how to build it without everyone dying.

The actions one would take in each of these competitions are often very similar, especially the first three and often the fourth as well, but sometimes are very different. What frustrates me most is when there is an action that is wise on all levels, yet we still don’t do it.

Also, on the ‘preventing Woke AI’ question, the way the plan and order are worded seems designed to make compliance easy and not onerous, but given other signs from the Trump administration lately, I think we have reason to worry…

Fact Post: Trump’s FCC Chair says he will put a “bias monitor” in place who will “report directly” to Trump as part of the deal for Sky Dance to acquire CBS.

Ari Drennen: The term that the Soviet Union used for this job was “apparatchik” btw.

I was willing to believe that firing Colbert was primarily a business decision. This is very different. Imagine the headline in reverse: “Harris’s FCC Chair says she will put a ‘bias monitor’ in place who will ‘report directly’ to Harris as part of the deal for Sky Dance to acquire CBS.”

Now imagine it is 2029, and the headline is ‘AOC appoints new bias monitor for CBS.’ Now imagine it was FOX. Yeah. Maybe don’t go down this road?

Director Kratsios has now given us his view on the AI Action Plan. This is a chance to see how much it is viewed as terrible rhetoric versus its good policy details, and to what extent overall policy is going to be guided by good details versus terrible rhetoric.

Peter Wildeford offers his takeaway summary.

Peter Wildeford: Winning the Global AI Race

  1. The administration’s core philosophy is a direct repudiation of the previous one, which Kratsios claims was a “fear-driven” policy “manically obsessed” with hypothetical risks that stifled innovation.

  2. The plan is explicitly called an “Action Plan” to signal a focus on immediate execution and tangible results, not another government strategy document that just lists aspirational goals.

  3. The global AI race requires America to show the world a viable, pro-innovation path for AI development that serves as an alternative to the EU’s precautionary, regulation-first model.

He leads with hyperbolic slander, which is par for the course, but yes concrete action plans are highly useful and the EU can go too far in its regulations.

There are kind of two ways to go with this.

  1. You could label any attempt to do anything to ensure we don’t die as ‘fear-driven’ and ‘manically obsessed’ with ‘hypothetical’ risks that ‘stifle’ innovation, and thus you probably die.

  2. You could label the EU and Biden Administration as ‘fear-driven’ and ‘manically obsessed’ with ‘hypothetical’ risks that ‘stifle’ innovation, contrasting that with your superior approach, and then having paid this homage do reasonable things.

The AI Action Plan as written was the second one. But you have to do that on purpose, because the default outcome is to shift to the first one.

Executing the ‘American Stack’ Export Strategy

  1. The strategy is designed to prevent a scenario where the world runs on an adversary’s AI stack by proactively offering a superior, integrated American alternative.

  2. The plan aims to make it simple for foreign governments to buy American by promoting a “turnkey solution”—combining chips, cloud, models, and applications—to reduce complexity for the buyer.

  3. A key action is to reorient US development-finance institutions like the DFC and EXIM to prioritize financing for the export of the American AI stack, shifting their focus from traditional hard infrastructure.

The whole ‘export’ strategy is either nonsensical, or an attempt to control capital flow, because I heard a rumor that it is good to be the ones directing capital flow.

Once again, the ‘tech stack’ thing is not, as described here, what’s the word? Real.

The ‘adversary’ does not have a ‘tech stack’ to offer, they have open models people can run on the same chips. They don’t have meaningful chips to even run their own operations, let alone export. And the ‘tech’ does not ‘stack’ in a meaningful way.

Turnkey solutions and package marketing are real. I don’t see any reason for our government to be so utterly obsessed with them, or even involved at all. That’s called marketing and serving the customer. Capitalism solves this. Microsoft and Amazon and Google and OpenAI and Anthropic and so on can and do handle it.

Why do we suddenly think the government needs to be prioritizing financing this? Given that it includes chip exports, how is it different from ‘traditional hard infrastructure’? Why do we need financing for the rest of this illusory stack when it is actually software? Shouldn’t we still be focusing on ‘traditional hard infrastructure’ in the places we want it, and then whenever possible exporting the inference?

Refining National Security Controls

  1. Kratsios argues the biggest issue with export controls is not the rules themselves but the lack of resources for enforcement, which is why the plan calls for giving the Bureau of Industry and Security (BIS) the tools it needs.

  2. The strategy is to maintain strict controls on the most advanced chips and critical semiconductor-manufacturing components, while allowing sales of less-advanced chips under a strict licensing regime.

  3. The administration is less concerned with physical smuggling of hardware and more focused on preventing PRC front companies from using legally exported hardware for large-scale, easily flaggable training runs.

  4. Proposed safeguards against misuse are stringent “Know Your Customer” (KYC) requirements paired with active monitoring for the scale and scope of compute jobs.

It is great to see the emphasis on enforcement. It is great to hear that the export control rules are not the issue.

In which case, can we stop waiving them, such as with H20 sales to China? Thank you. There is of course a level at which chips can be safely sold even directly to China, but the experts all agree the H20 is past that level.

The lack of concern about smuggling turns a blind eye to overwhelming evidence that smuggling is widespread. I don’t much care whether they claim to be concerned; I care about actual enforcement, and we need enforcement. Yes, we should stop ‘easily flaggable’ PRC training runs and use KYC techniques, but this is saying we should look for our keys under the streetlight and then, if we don’t find them, assume we can start the car without them.

Championing ‘Light-Touch’ Domestic Regulation

  1. The administration rejects the idea of a single, overarching AI law, arguing that expert agencies like the FDA and DOT should regulate AI within their specific domains.

  2. The president’s position is that a “patchwork of regulations” across 50 states is unacceptable because the compliance burden disproportionately harms innovative startups.

  3. While using executive levers to discourage state-level rules, the administration acknowledges that a durable solution requires an act of Congress to create a uniform federal standard.

Yes, a ‘uniform federal standard’ would be great, except they have no intention of even pretending to meaningfully pursue one. They want each federal agency to do its thing in its own domain, as in a ‘use case’ based AI regime which when done on its own is the EU approach and doomed to failure.

I do acknowledge the step down from ‘kill state attempts to touch anything AI’ (aka the insane moratorium) to ‘discourage’ state-level rules using ‘executive levers,’ at which point we are talking price. One worries the price will get rather extreme.

Addressing AI’s Economic Impact at Home

  1. Kratsios highlights that the biggest immediate labor need is for roles like electricians to build data centers, prompting a plan to retrain Americans for high-paying infrastructure jobs.

  2. The technology is seen as a major productivity tool that provides critical leverage for small businesses to scale and overcome hiring challenges.

  3. The administration issued a specific executive order on K-12 AI education to ensure America’s students are prepared to wield these tools in their future careers.

Ahem, immigration, ahem, also these things rarely work, but okay, sure, fine.

Prioritizing Practical Infrastructure Over Hypothetical Risk

  1. Kratsios asserts that chip supply is no longer a major constraint; the key barriers to the AI build-out are shortages of skilled labor and regulatory delays in permitting.

  2. Success will be measured by reducing the time from permit application to “shovels in the ground” for new power plants and data centers.

  3. The former AI Safety Institute is being repurposed to focus on the hard science of metrology—developing technical standards for measuring and evaluating models, rather than vague notions of “safety.”

It is not the only constraint, but it is simply false to say that chip supply is no longer a major constraint.

Defining success in infrastructure in this way would, if taken seriously, lead to large distortions in the usual obvious Goodhart’s Law ways. I am going to give the benefit of the doubt and presume this ‘success’ definition is local, confined to infrastructure.

If the only thing America’s former AISI can now do is formal, measurable technical standards, then that is at least a useful thing it can hopefully do well, but it basically rules out, at the conceptual level, actually addressing the most important safety issues, by dismissing them as ‘vague.’

This goes beyond ‘that which is measured is managed’ to an open plan of ‘that which is not measured is not managed, it isn’t even real.’ Guess how that turns out.

Defining the Legislative Agenda

  1. While the executive branch has little power here, Kratsios identifies the use of copyrighted data in model training as a “quite controversial” area that Congress may need to address.

  2. The administration would welcome legislation that provides statutory cover for the reformed, standards-focused mission of the Center for AI Standards and Innovation (CAISI).

  3. Continued congressional action is needed for appropriations to fund critical AI-related R&D across agencies like the National Science Foundation.

TechCrunch: 20 national security experts urge Trump administration to restrict Nvidia H20 sales to China.

The letter says the H20 is a potent accelerator of China’s frontier AI capabilities and could be used to strengthen China’s military.

Americans for Responsible Innovation: The H20 and the AI models it supports will be deployed by China’s PLA. Under Beijing’s “Military-Civil Fusion” strategy, it’s a guarantee that H20 chips will be swiftly adapted for military purposes. This is not a question of trade. It is a question of national security.

It would be bad enough if this was about selling the existing stock of H20s, that Nvidia has taken a writedown on, even though it could easily sell them in the West instead. It is another thing entirely that Nvidia is using its capacity on TSMC machines to make more of them, choosing to create chips to sell directly to China instead of creating chips for us.

Ruby Scanlon: Nvidia placed orders for 300,000 H20 chipsets with contract manufacturer TSMC last week, two sources said, with one of them adding that strong Chinese demand had led the US firm to change its mind about just relying on its existing stockpile.

It sounds like we’re planning on feeding what would have been our AI chips to China. And then maybe you should start crying? Or better yet tell them they can’t do it?

I share Peter Wildeford’s bafflement here:

Peter Wildeford: “China is close to catching up to the US in AI so we should sell them Nvidia chips so they can catch up even faster.”

I never understand this argument from Nvidia.

The argument is also false, Nvidia is lying, but I don’t understand even if it were true.

There is only a 50% premium to buy Nvidia B200 systems within China, which suggests quite a lot of smuggling is going on.

Tao Burga: Nvidia still insists that there’s “no evidence of any AI chip diversion.” Laughable. All while lobbying against the data center chip location verification software that would provide the evidence. Tell me, where does the $1bn [in AI chips smuggled to China] go?

Rob Wiblin: Nvidia successfully campaigning to get its most powerful AI chips into China has such “the capitalists will sell us the rope with which we will hang them” energy.

Various people I follow keep emphasizing that China is smuggling really a lot of advanced AI chips, including B200s and such, and perhaps we should be trying to do something about it, because it seems rather important.

Chipmakers will always oppose any proposal to track chips or otherwise crack down on smuggling and call it ‘burdensome,’ where the ‘burden’ is ‘if you did this they would not be able to smuggle as many chips, and thus we would make less money.’

Reuters Business: Demand in China has begun surging for a business that, in theory, shouldn’t exist: the repair of advanced artificial intelligence chipsets that the US has banned the export of to its trade and tech rival.

Peter Wildeford: Nvidia position: “datacenters from smuggled products is a losing proposition […] Datacenters require service and support, which we provide only to authorized NVIDIA products.”

Reality: Nvidia AI chip repair industry booms in China for banned products.

Scott Bessent warns that TSMC’s $40 billion Arizona fab, which could meet 7% of American chip demand, keeps getting delayed, and he blames inspectors and red tape. There’s confusion in the headline, which reads as if he is warning it would ‘only’ meet 7% of demand, but 7% of demand would be amazing for one plant, and the article’s text reflects this.

Bessent criticized regulatory hurdles slowing construction of the $40 billion facility. “Evidently, these chip design plants are moving so quickly, you’re constantly calling an audible and you’ve got someone saying, ‘Well, you said the pipe was going to be there, not there. We’re shutting you down,’” he explained.

It does also mean that if we want to meet 100% or more of demand we will need a lot more plants, but we knew that.

Epoch reports that Chinese hardware is behind American hardware, and is ‘closing the gap’ but faces major obstacles in chip manufacturing capability.

Epoch: Even if we exclude joint ventures with U.S., Australian, or U.K. institutions (where the developers can access foreign silicon), the clear majority of homegrown models relied on NVIDIA GPUs. In fact, it took until January 2024 for the first large language model to reportedly be trained entirely on Chinese hardware, arguably years after the first large language models.

Probably the most important reason for the dominance of Western hardware is that China has been unable to manufacture these AI chips in adequate volumes. Whereas Huawei reportedly manufactured 200,000 Ascend 910B chips in 2024, estimates suggest that roughly one million NVIDIA GPUs were legally delivered to China in the same year.

That’s right. For every top level Huawei chip manufactured, Nvidia sold five to China. No, China is not about to export a ‘full Chinese tech stack’ for free the moment we turn our backs. They’re offering downloads of r1 and Kimi K2, to be run on our chips, and they use all their own chips internally because they still have a huge shortage.

Put bluntly, we don’t see China leaping ahead on compute within the next few years. Not only would China need to overcome major obstacles in chip manufacturing and software ecosystems, they would also need to surpass foreign companies making massive investments into hardware R&D and chip fabrication.

Unless export controls erode or Beijing solves multiple technological challenges in record time, we think that China will remain at least one generation behind in hardware. This doesn’t prevent Chinese developers from training and running frontier AI models, but it does make it much more costly.

Overall, we think these costs are large enough to put China at a substantial disadvantage in AI scaling for at least the rest of the decade.

Beating China may or may not be your number one priority. We do know that taking export controls seriously is the number one priority for ‘beating China.’

Intel will cancel 14A and following nodes, essentially abandoning the technological frontier, if it cannot win a major external customer.

The Week in AI Governance

china-claims-nvidia-built-backdoor-into-h20-chip-designed-for-chinese-market

China claims Nvidia built backdoor into H20 chip designed for Chinese market

The CAC did not specify which experts had found a back door in Nvidia’s products or whether any tests in China had uncovered the same results. Nvidia did not immediately respond to a request for comment.

Lawmakers in Washington have expressed concern about chip smuggling and introduced a bill that would require chipmakers such as Nvidia to embed location tracking into export-controlled hardware.

Beijing has issued informal guidance to major Chinese tech groups to increase purchases of domestic AI chips in order to reduce reliance on Nvidia and support the evolution of a rival domestic chip ecosystem.

Chinese tech giant Huawei and smaller groups including Biren and Cambricon have benefited from the push to localize chip supply chains.

Nvidia said it would take nine months from restarting manufacturing to shipping the H20 to clients. Industry insiders said there was considerable uncertainty among Chinese customers over whether they would be able to take delivery of any orders if the US reversed its decision to allow its sale.

The Trump administration has faced heavy criticism, including from security experts and former officials, who argue that the H20 sales would accelerate Chinese AI development and threaten US national security.

“There are strong factions on both sides of the Pacific that don’t like the idea of renewing H20 sales,” said Triolo. “In the US, the opposition is clear, but also in China voices are saying that it will slow transition to the alternative ecosystem.”

© 2025 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.

China claims Nvidia built backdoor into H20 chip designed for Chinese market Read More »

google-tool-misused-to-scrub-tech-ceo’s-shady-past-from-search

Google tool misused to scrub tech CEO’s shady past from search

Capital F for “Frustrating”

Upon investigating, FPF found that its article on Blackman was completely absent from Google results, even through a search with the exact title. Poulson later realized that two of his own Substack articles were similarly affected. The Foundation was led to the Refresh Outdated Content tool upon checking its search console.

Google’s tool doesn’t simply take a requester’s word for it when they ask for a search result to be removed. However, a bug made it an ideal vehicle for suppressing information. When submitting a URL, the tool allowed users to alter the capitalization of the URL slug. The Foundation’s article was titled “Anatomy of a censorship campaign: A tech exec’s crusade to stifle journalism,” but the requests logged in Google’s tool included variations like “AnAtomy” and “censorSHip.”

Because the Refresh Outdated Content tool was seemingly case-insensitive, the crawler would check the submitted URL, encounter a 404 error, and then de-index the working URL. Investigators determined this method was used by Blackman or someone with a suspicious interest in his online profile dozens of times between May and June 2025. Amusingly, since leaving Premise, Blackman has landed in the CEO role at online reputation management firm The Transparency Company.
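The reported failure mode can be sketched in a few lines of Python. This is purely illustrative: the function names, data structures, and matching behavior are assumptions based on FPF's description of the bug, not Google's actual code. The key mismatch is a case-sensitive web server paired with a case-insensitive index lookup.

```python
# Illustrative sketch of the reported bug (all names and logic are
# assumptions based on FPF's description, not Google's implementation).

def server_has(path, published_paths):
    """Web servers typically treat URL paths as case-sensitive."""
    return path in published_paths  # exact-match lookup only

def refresh_outdated_content(submitted_url, published_paths, index):
    """The removal tool reportedly compared URLs case-insensitively."""
    # The crawler fetches the submitted URL verbatim. A slug with
    # altered capitalization ("AnAtomy...") returns a 404.
    if not server_has(submitted_url, published_paths):
        # The index lookup ignores case, so the *working* URL is the
        # one that gets de-indexed.
        for indexed_url in list(index):
            if indexed_url.lower() == submitted_url.lower():
                index.discard(indexed_url)

published = {"/anatomy-of-a-censorship-campaign"}
index = {"/anatomy-of-a-censorship-campaign"}

# A request with mangled capitalization 404s, yet removes the real page:
refresh_outdated_content("/AnAtomy-of-a-censorSHip-campaign", published, index)
print(index)  # → set()
```

Comparing the indexed URL to the submitted URL case-sensitively would render the mangled-slug 404 harmless, which is presumably the shape of the fix Google issued.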

If you go looking for the Freedom of the Press Foundation article or Poulson’s own reporting, they should now appear normally in Google’s search results. The FPF contacted Google about the issue, and the company confirmed the bug. It issued a fix with unusual swiftness, telling the Foundation that the bug affected “a tiny fraction of websites.”

It is unclear whether Google was aware of the bug previously or if its exploitation was widespread. The Internet is vast, and those who seek to maliciously hide information are not prone to publicizing their methods. It’s somewhat unusual for Google to admit fault so readily, but at least it addressed the issue.

The Refresh Outdated Content tool doesn’t log who submits requests, but whoever was behind this disinformation campaign may want to look into the Streisand Effect.

Google tool misused to scrub tech CEO’s shady past from search Read More »

vpn-use-soars-in-uk-after-age-verification-laws-go-into-effect

VPN use soars in UK after age-verification laws go into effect

Also on Friday, the Windscribe VPN service posted a screenshot on X claiming to show a spike in new subscribers. The makers of the AdGuard VPN claimed that they have seen a 2.5X increase in install rates from the UK since Friday.

Nord Security, the company behind the NordVPN app, says it has seen a “1,000 percent increase in purchases” of subscriptions from the UK since the day before the new laws went into effect. “Such spikes in demand for VPNs are not unusual,” Laura Tyrylyte, Nord Security’s head of public relations, tells WIRED. She adds in a statement that “whenever a government announces an increase in surveillance, Internet restrictions, or other types of constraints, people turn to privacy tools.”

People living under repressive governments that impose extensive Internet censorship—like China, Russia, and Iran—have long relied on circumvention tools like VPNs and other technologies to maintain anonymity and access blocked content. But as countries that have long claimed to champion the open Internet and access to information, like the United States, begin considering or adopting age verification laws meant to protect children, the boundaries for protecting digital rights online quickly become extremely murky.

“There will be a large number of people who are using circumvention tech for a range of reasons” to get around age verification laws, the ACLU’s Kahn Gillmor says. “So then as a government you’re in a situation where either you’re obliging the websites to do this on everyone globally, that way legal jurisdiction isn’t what matters, or you’re encouraging people to use workarounds—which then ultimately puts you in the position of being opposed to censorship-circumvention tools.”

This story originally appeared on wired.com.

VPN use soars in UK after age-verification laws go into effect Read More »

tesla-picks-lges,-not-catl,-for-$4.3-billion-storage-battery-deal

Tesla picks LGES, not CATL, for $4.3 billion storage battery deal

Tesla has a new battery cell supplier. Although the automaker is vertically integrated to a degree not seen in the automotive industry for decades, when it comes to battery cells it’s mostly dependent upon suppliers. Panasonic cells can be found in many Teslas, with the cheaper, sturdier lithium iron phosphate (LFP) battery cells being supplied by CATL. Now Tesla has a new source of LFP cells thanks to a deal just signed with LG Energy Solution (LGES).

According to The Korea Economic Daily, the contract between Tesla and LGES is worth $4.3 billion. LGES will begin supplying Tesla with cells next August, with the contract running until at least the end of July 2030 and including provisions to extend it if necessary.

The LFP cells probably aren’t destined for life on the road, however. Instead, they’ll likely be used in Tesla’s energy storage products, which both Tesla and LGES hope will soak up demand now that EV sales prospects look so weak in North America.

The deal also reduces Tesla’s reliance on Chinese suppliers. LGES will produce the LFP cells at its factory in Michigan, says Reuters, and so they will not be subject to the Trump trade war tariffs, unlike Chinese-made cells from CATL.

Although Tesla CEO Elon Musk has boasted about the size of the energy storage market, its contribution to Tesla’s financials remains meagre, and actually shrank during the last quarter.

Tesla picks LGES, not CATL, for $4.3 billion storage battery deal Read More »

the-case-for-memes-as-a-new-form-of-comics

The case for memes as a new form of comics


Both comics and memes rely on the same interplay of visual and verbal elements for their humor.

Credit: Jennifer Ouellette via imgflip

It’s undeniable that the rise of the Internet had a profound impact on cartooning as a profession, giving cartoonists both new tools and a new publishing and/or distribution medium. Online culture also spawned the emergence of viral memes in the late 1990s. Michelle Ann Abate, an English professor at The Ohio State University, argues in a paper published in INKS: The Journal of the Comics Studies Society that memes—specifically, image macros—represent a new type of digital comic, right down to the cognitive and creative ways in which they operate.

“One of my areas of specialty has been graphic novels and comics,” Abate told Ars. “I’ve published multiple books on various aspects of comics history and various titles: everything from Charles Schulz’s Peanuts to The Far Side, to Little Lulu to Ziggy to The Family Circus. So I’ve been working on comics as part of the genres and texts and time periods that I look at for many years now.”

Her most recent book is 2024’s Singular Sensations: A Cultural History of One-Panel Comics in the United States, which Abate was researching when the COVID-19 pandemic hit in 2020. “I was reading a lot of single panel comics and sharing them with friends during the pandemic, and memes were something we were always sharing, too,” Abate said. “It occurred to me one day that there isn’t a whole lot of difference between the single panel comics I’m sharing and the memes. In terms of how they function, how they operate, the connection of the verbal and the visual, there’s more continuity than there is difference.”

So Abate decided to approach the question more systematically. Evolutionary biologist Richard Dawkins coined the word “meme” in his 1976 popular science book, The Selfish Gene, well before the advent of the Internet age. For Dawkins, it described a “unit of cultural transmission, or a unit of information”: ideas, catchphrases, catchy tunes, fashions, even arch building.

distraught woman pointing a finger and yelling, facing an image of a confused cat in front of a salad

Credit: Jennifer Ouellette via imgflip

In a 21st century context, “meme” refers to a piece of online content that spikes in popularity and gets passed from user to user, i.e., going viral. These can be single images remixed with tailored text, such as “Distracted Boyfriend,” “This Is Fine,” or “Batman Slapping Robin.” Or they can feature multiple panels, like “American Chopper.” Furthermore, “Memes can also be a gesture, they can be an activity, they can be a video like the Wednesday dance or the ice bucket challenge,” said Abate. “It’s become such a part of our lexicon that it’s hard to imagine a world without memes at this point.”

For Abate, Internet memes are clearly related to sequential art like comics, representing a new stage of evolution in the genre. In both cases, the visual and verbal elements work in tandem to produce the humor.

Granted, comic artists usually create both the image and the text, whereas memes adapt preexisting visuals with new text. Some might consider this poaching, but Abate points out that cartoonists like Charles Schulz have long used stencil templates (a static prefabricated element) to replicate images, a practice that is also used effectively in, say, Dinosaur Comics. And meme humor depends on people connecting the image to its origin rather than obscuring it. She compares the practice to sampling in music; the end result is still an original piece of art.

In fact, The New Yorker’s hugely popular cartoon caption contest—in which the magazine prints a single-panel drawing with no speech balloons or dialogue boxes and asks readers to supply their own verbal jokes—is basically a meme generator. “It’s seen more as a highbrow thing, crowdsourcing everybody’s wit,” said Abate. “But [the magazine supplies] the template image and then everybody puts in their own text or captions. They’re making memes. If they only published the winner, folks would be disappointed because the fun is seeing all the clever, funny things that people come up with.”

Memes both mirror and modify the comic genre. For instance, the online nature of memes can affect formatting. If there are multiple panels, those panels are usually arranged vertically rather than horizontally since memes are typically read by scrolling down one’s phone—like the “American Chopper” meme:

American Chopper meme with each frame representing a stage in the debate

Credit: Jennifer Ouellette via imgflip

Per Abate, this has the added advantage of forcing the reader to pause briefly to consider the argument and counter-argument, emphasizing that it’s an actual debate rather than two men simply yelling at one another. “If the panels were arranged horizontally and the guys were side by side in each other’s face, installments of ‘American Chopper’ would come across very differently,” she said.

A pad with infinite sheets

Scott McCloud is widely considered the leading theorist when it comes to the art of comics, and his hugely influential 2000 book, Reinventing Comics: The Evolution of an Art Form, explores the boundless potential for digital comics, freed from the constraints of a printed page. He calls this aspect the “infinite canvas,” because cartoonists can now create works of any size or shape, even as tall as a mountain. Memes have endless possibilities of a different kind, per Abate.

“[McCloud] thinks of it very expansively: a single panel could be the size of a city block,” said Abate. “You could never do that with a book because how could you print the book? How could you hold the book? How could you read the book? How could you download the book on your Kindle? But when you’ve got a digital world, it could be a city block and you can explore it with your mouse and your cursor and your track pad and, oh, all the possibilities for storytelling and for the medium that will open up with this infinite canvas. There have been many places and titles where this has played out with digital comics.

“Obviously with a meme, they’re not the size of a city block,” she continued. “So it occurred to me that they are infinite, but almost like you’re peeling sheets off a pad and the pad just has an endless number of sheets. You can just keep redoing it, redo, redo, redo. That’s memes. They get revised and repurposed and re-imagined and redone and recirculated over and over and over again. The template gets used inexhaustibly, which is what makes them fun, what makes them go viral.”

comic frame showing batman slapping robin

Credit: Jennifer Ouellette via imgflip

Just what makes a good meme image? Abate has some thoughts about that, too. “It has to be not just the image, but the ability for the image to be paired with a caption, a text,” she said. “It has to lend itself to some kind of verbal element as well. And it also has to have some elasticity of being specific enough that it’s recognizable, but also being malleable enough that it can be adapted to different forms.”

In other words, a really good meme must be generalizable if it is to last longer than a few weeks. The recent kiss-cam incident at a Coldplay concert is a case in point. When a married tech CEO was caught embracing his company’s “chief people officer,” they quickly realized they were on the Jumbotron, panicked, and hid their faces—which only made it worse. The moment went viral and spawned myriad memes. Even the Phillies mascots got into the spirit, re-enacting the moment at a recent baseball game. But that particular meme might not have long-term staying power.

“It became a meme very quickly and went viral very fast,” said Abate. “I may be proved wrong, but I don’t think the Coldplay moment will be a meme that will be around a year from now. It’s commenting on a particular incident in the culture, and then the clock will tick, and folks will move on. Whereas something like ‘Distracted Boyfriend’ or ‘This is Fine’ has more staying power because it’s not tied to a particular incident or a particular scandal but can be applied to all kinds of political topics, pop culture events, and cultural experiences.”

black man stroking his chin, mouth partly open in surprise

Credit: Sean Carroll via imgflip


Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.

The case for memes as a new form of comics Read More »

microsoft-is-revamping-windows-11’s-task-manager-so-its-numbers-make-more-sense

Microsoft is revamping Windows 11’s Task Manager so its numbers make more sense

Copilot+ features, and annoying “features”

Microsoft continues to roll out AI features, particularly to PCs that meet the qualifications for the company’s Copilot+ features. These betas enable “agent-powered search” for Intel and AMD Copilot+ PCs, which continue to get most of these features a few weeks or months later than Qualcomm Snapdragon-based Copilot+ PCs. This agent is Microsoft’s latest attempt to improve the dense, labyrinthine Settings app by enabling natural-language search that knows how to respond to queries like “my mouse pointer is too small” or “how to control my PC by voice” (Microsoft’s examples). Like other Copilot+ features, this relies on your PC’s neural processing unit (NPU) to perform all processing locally on-device. Microsoft has also added a tutorial for the “Click to Do” feature that suggests different actions you can perform based on images, text, and other content on your screen.

Finally, Microsoft is tweaking the so-called “Second Chance Out of Box Experience” window (also called “SCOOBE,” pronounced “scooby”), the setup screen that you’ll periodically see on a Windows 11 PC even if you’ve already been using it for months or years. This screen attempts to enroll your PC in Windows Backup, to switch your default browser to Microsoft Edge and its default search engine to Bing, and to import favorites and history into Edge from whatever browser you might have been trying to use before.

If you, like me, experience the SCOOBE screen primarily as a nuisance rather than something “helpful,” it is possible to make it go away. Per our guide to de-cluttering Windows 11, open Settings, go to System, then to Notifications, scroll down, expand the “additional settings” drop-down, and uncheck all three boxes here to get rid of the SCOOBE screen and other irritating reminders.

Most of these features are being released simultaneously to the Dev and Beta channels of the Windows Insider program (from least- to most-stable, the four channels are Canary, Dev, Beta, and Release Preview). Features in the Beta channel are usually not far from being released into the public versions of Windows, so non-Insiders can probably expect most of these things to appear on their PCs in the next few weeks. Microsoft is also gearing up to release the Windows 11 25H2 update, this year’s big annual update, which will enable a handful of features that the company is already quietly rolling out to PCs running version 24H2.

Microsoft is revamping Windows 11’s Task Manager so its numbers make more sense Read More »

trump-promised-a-drilling-boom,-but-us-energy-industry-hasn’t-been-interested

Trump promised a drilling boom, but US energy industry hasn’t been interested


Exec: “Liberation Day chaos and tariff antics have harmed the domestic energy industry.”

“We will drill, baby, drill,” President Donald Trump declared at his inauguration on January 20. Echoing the slogan that exemplified his energy policies during the campaign, he made his message clear: more oil and gas, lower prices, greater exports.

Six months into Trump’s second term, his administration has little to show on that score. Output is ticking up, but slower than it did under the Biden administration. Pump prices for gasoline have bobbed around where they were in inauguration week. And exports of crude oil in the four months through April trailed those in the same period last year.

The White House is discovering, perhaps the hard way, that energy markets aren’t easily managed from the Oval Office—even as it moves to roll back regulations on the oil and gas sector, offers up more public lands for drilling at reduced royalty rates, and axes Biden-era incentives for wind and solar.

“The industry is going to do what the industry is going to do,” said Jenny Rowland-Shea, director for public lands at the Center for American Progress, a progressive policy think tank.

That’s because the price of oil, the world’s most-traded commodity, is more responsive to global demand and supply dynamics than to domestic policy and posturing.

The market is flush with supplies at the moment, as the Saudi Arabia-led cartel of oil-producing nations known as OPEC+ allows more barrels to flow while China, the world’s top oil consumer, curbs its consumption. Within the US, a boom in energy demand driven by rapid electrification and AI-serving data centers is boosting power costs for homes and businesses, yet fossil fuel producers are not rushing to ramp up drilling.

There is one key indicator of drilling levels that the industry has watched closely for more than 80 years: a weekly census of active oil and gas rigs published by Baker Hughes. When Trump came into office January 20, the US rig count was 580. Last week, the most recent figure, it was down to 542—hovering just above a four-year low reached earlier in the month.

The most glaring factor behind this stagnant rig count is the current level of crude oil prices. Take the US benchmark grade: West Texas Intermediate crude. Its prices were near $66 a barrel on July 28, after hitting a four-year low of $62 in May. The break-even level for drilling new wells is somewhere close to $60 per barrel, according to oil and gas experts.

That’s before you account for the fallout of elevated tariffs on steel and other imports for the many companies that get their pipes and drilling equipment from overseas, said Robert Rapier, editor-in-chief of Shale Magazine, who has two decades of experience as a chemical engineer.

The Federal Reserve Bank of Dallas’ quarterly survey of over 130 oil and gas producers based in Texas, Louisiana, and New Mexico, conducted in June, suggests the industry’s outlook is pessimistic. Nearly half of the 38 firms that responded to this question saw their firms drilling fewer wells this year than they had earlier expected.

Survey participants could also submit comments. One executive from an exploration and production (E&P) company said, “It’s hard to imagine how much worse policies and DC rhetoric could have been for US E&P companies.” Another executive said, “The Liberation Day chaos and tariff antics have harmed the domestic energy industry. Drill, baby, drill will not happen with this level of volatility.”

Roughly one in three survey respondents chalked up the expectations for fewer wells to higher tariffs on steel imports. And three in four said tariffs raised the cost of drilling and completing new wells.

“They’re getting more places to drill and they’re getting some lower royalties, but they’re also getting these tariffs that they don’t want,” Rapier said. “And the bottom line is their profits are going to suffer.”

Earlier this month, ExxonMobil estimated that its profit in the April-June quarter would be roughly $1.5 billion lower than in the previous three months because of weaker oil and gas prices. And over in Europe, BP, Shell, and TotalEnergies issued similar warnings to investors about hits to their respective profits.

These warnings come even as Trump has installed friendly faces to regulate the oil and gas sector, including at the Department of Energy, the Environmental Protection Agency, and the Department of the Interior, the latter of which manages federal lands and is gearing up to auction more oil and gas leases on those lands.

“There’s a lot of enthusiasm for a window of opportunity to make investments. But there’s also a lot of caution about wanting to make sure that if there’s regulatory reforms, they’re going to stick,” said Kevin Book, managing director of research at ClearView Energy Partners, which produces analyses for energy companies and investors.

The recently enacted One Big Beautiful Bill Act contains provisions requiring four onshore and two offshore lease sales every year, lowering the minimum royalty rate to 12.5 percent from 16.67 percent, and bringing back speculative leasing—when lands that don’t invite enough bids are leased for less money—that was stopped in 2022.

“Pro-energy policies play a critical role in strengthening domestic production,” said a spokesperson for the American Petroleum Institute, the top US oil and gas industry group. “The new tax legislation unlocks opportunities for safe, responsible development in critical resource basins to deliver the affordable, reliable fuel Americans rely on.”

Because about half of the federal royalties end up with the states and localities where the drilling occurs, “budgets in these oil and gas communities are going to be hit hard,” Rowland-Shea of American Progress said. Meanwhile, she said, drilling on public lands can pollute the air, raise noise levels, cause spills or leaks, and restrict movement for both people and wildlife.

Earlier this year, Congress killed an EPA rule finalized in November that would have charged oil and gas companies for flaring excess methane from their operations.

“Folks in the Trump camp have long said that the Biden administration was killing drilling by enforcing these regulations on speculative leasing and reining in methane pollution,” said Rowland-Shea. “And yet under Biden, we saw the highest production of oil and gas in history.”

In fact, the top three fossil fuel producers collectively earned less during Trump’s first term than they did in either of President Barack Obama’s terms or under President Joe Biden. “It’s an irony that when Democrats are in there and they’re putting in policies to shift away from oil and gas, which causes the price to go up, that is more profitable for the oil and gas industry,” said Rapier.

That doesn’t mean, of course, that the Trump administration’s actions won’t have long-lasting climate implications. Even though six months may be a significant amount of time in political accounting, investment decisions in the energy sector are made over longer horizons, ClearView’s Book said. As long as the planned lease sales take place, oil companies can snap up and sit on public lands until they see more favorable conditions for drilling.


What could pad the demand for oil and gas is how the One Big Beautiful Bill Act will withdraw or dilute the Inflation Reduction Act’s tax incentives and subsidies for renewable energy sources. “With the kneecapping of wind and solar, that’s going to put a lot more pressure on fossil fuels to fill that gap,” Rowland-Shea said.

However, the economics of solar and wind are increasingly too attractive to ignore. With electricity demand exceeding expectations, Book said, “any president looking ahead at end-user prices and power supply might revisit or take a flexible position if they find themselves facing shortage.”

A recent United Nations report found that “solar and wind are now almost always the least expensive—and the fastest—option for new electricity generation.” That is why Texas, deemed the oil capital of the world, produces more wind power than any other state and also led the nation in new solar capacity in the last two years.

Renewables like wind and solar, said Rowland-Shea, are “a truly abundant and American source of energy.”

This story originally appeared on Inside Climate News.


Trump promised a drilling boom, but US energy industry hasn’t been interested Read More »

the-first-company-to-complete-a-fully-successful-lunar-landing-is-going-public

The first company to complete a fully successful lunar landing is going public

The financial services firm Charles Schwab reported last month that IPOs are on the comeback across multiple sectors of the market. “After a long dry spell, there are signs of life in the initial public offerings space,” Charles Schwab said in June. “An increase in offerings can sometimes suggest an improvement in overall market sentiment.”

Firefly Aerospace started as a propulsion company. This image released by Firefly earlier this year shows the company’s family of engines. From left to right: Miranda for the Eclipse rocket; Lightning and Reaver for the Alpha rocket; and Spectre for the Blue Ghost and Elytra spacecraft.

Firefly is eschewing a SPAC merger in favor of a traditional IPO. Another space company, Voyager Technologies, closed an IPO on June 11, raising nearly $383 million with a valuation peaking at $3.8 billion despite reporting a loss of $66 million in 2024. Voyager’s stock price has been in a precipitous decline since then.

Financial information disclosed by Firefly in a regulatory filing with the Securities and Exchange Commission reveals the company registered $60.8 million in revenue in 2024, a 10 percent increase from the prior year. But Firefly’s net loss widened from $135 million to $231 million, largely due to higher spending on research and development for the Eclipse rocket and Elytra spacecraft.

Rocket Lab, too, reported a net loss of $190 million in 2024 and another $60.6 million in the first quarter of this year. Despite this, Rocket Lab’s stock price has soared for most of 2025, further confirming that near-term profits aren’t everything for investors.

Chad Anderson, the founder and managing partner of Space Capital, offered a “gut check” to investors listening to his quarterly podcast last week.

“90 percent of IPOs that double on day one deliver negative returns over three years,” Anderson said. “And a few breakout companies become long-term winners… Rocket Lab being chief among them. But many fall short of expectations, even with some collapsing into bankruptcy, again, as we’ve seen over the last few years.

“There’s a lot of excitement about the space economy, and rightly so,” Anderson said. “This is a once-in-a-generation opportunity for investors, but unfortunately, I think this is going to be another example of why specialist expertise is required and the ability to read financial statements and understand the underlying business fundamentals, because that’s what’s really going to take companies through in the long term.”

The first company to complete a fully successful lunar landing is going public Read More »