Author name: Rejus Almole


xAI’s Grok suddenly can’t stop bringing up “white genocide” in South Africa

Where could Grok have gotten these ideas?

The treatment of white farmers in South Africa has been a hobbyhorse of South African X owner Elon Musk for quite a while. In 2023, he responded to a video purportedly showing crowds chanting “kill the Boer, kill the White Farmer” with a post accusing South African President Cyril Ramaphosa of remaining silent while people “openly [push] for genocide of white people in South Africa.” Musk was posting other responses focusing on the issue as recently as Wednesday.

They are openly pushing for genocide of white people in South Africa. @CyrilRamaphosa, why do you say nothing?

— gorklon rust (@elonmusk) July 31, 2023

President Trump has long shown an interest in this issue as well, saying in 2018 that he was directing then-Secretary of State Mike Pompeo to “closely study the South Africa land and farm seizures and expropriations and the large scale killing of farmers.” More recently, Trump granted “refugee” status to dozens of white Afrikaners, even as his administration ends protections for refugees from other countries.

Former American Ambassador to South Africa and Democratic politician Patrick Gaspard posted in 2018 that the idea of large-scale killings of white South African farmers is a “disproven racial myth.”

In launching the Grok 3 model in February, Musk said it was a “maximally truth-seeking AI, even if that truth is sometimes at odds with what is politically correct.” X’s “About Grok” page says that the model is undergoing constant improvement to “ensure Grok remains politically unbiased and provides balanced answers.”

But the recent turn toward unprompted discussions of alleged South African “genocide” has many questioning what kind of explicit adjustments Grok’s political opinions may be getting from human tinkering behind the curtain. “The algorithms for Musk products have been politically tampered with nearly beyond recognition,” journalist Seth Abramson wrote in one representative skeptical post. “They tweaked a dial on the sentence imitator machine and now everything is about white South Africans,” a user with the handle Guybrush Threepwood glibly theorized.

Representatives from xAI were not immediately available to respond to a request for comment from Ars Technica.



Google DeepMind creates super-advanced AI that can invent new algorithms

Google’s DeepMind research division claims its newest AI agent marks a significant step toward using the technology to tackle big problems in math and science. The system, known as AlphaEvolve, is based on the company’s Gemini large language models (LLMs), with the addition of an “evolutionary” approach that evaluates and improves algorithms across a range of use cases.

AlphaEvolve is essentially an AI coding agent, but it goes deeper than a standard Gemini chatbot. When you talk to Gemini, there is always a risk of hallucination, where the AI makes up details due to the non-deterministic nature of the underlying technology. AlphaEvolve uses an interesting approach to increase its accuracy when handling complex algorithmic problems.

According to DeepMind, this AI uses an automatic evaluation system. When a researcher interacts with AlphaEvolve, they input a problem along with possible solutions and avenues to explore. The model generates multiple possible solutions, using the efficient Gemini Flash and the more detail-oriented Gemini Pro, and then each solution is analyzed by the evaluator. An evolutionary framework allows AlphaEvolve to focus on the best solution and improve upon it.
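DeepMind has not released AlphaEvolve’s code, but the loop it describes is a classic generate-evaluate-select cycle. Here is a minimal, purely illustrative Python sketch of that pattern, with a random mutator standing in for the Gemini models and a simple distance-to-target score standing in for the automatic evaluator; every name and number below is an assumption for illustration, not from DeepMind.

```python
# Toy generate-evaluate-select loop in the spirit of DeepMind's description
# of AlphaEvolve. The LLM is stubbed out with random mutation; the evaluator
# scores candidates against a target, standing in for machine-checkable tests.
import random

TARGET = 3.14159  # toy "problem": evolve a number toward pi


def generate_variants(parent: float, n: int = 8) -> list[float]:
    """Stand-in for LLM-proposed solutions (Gemini Flash/Pro in AlphaEvolve):
    propose n perturbed copies of the current best candidate."""
    return [parent + random.gauss(0, 0.5) for _ in range(n)]


def evaluate(candidate: float) -> float:
    """Automatic evaluator: higher score means closer to the target."""
    return -abs(candidate - TARGET)


def evolve(generations: int = 50) -> float:
    best = random.uniform(0, 10)
    for _ in range(generations):
        population = generate_variants(best) + [best]  # keep the incumbent
        best = max(population, key=evaluate)           # select the fittest
    return best


if __name__ == "__main__":
    print(f"best candidate after evolution: {evolve():.5f}")
```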


Many of the company’s past AI systems, for example, the protein-folding AlphaFold, were trained extensively on a single domain of knowledge. AlphaEvolve, however, is more dynamic. DeepMind says AlphaEvolve is a general-purpose AI that can aid research in any programming or algorithmic problem. And Google has already started to deploy it across its sprawling business with positive results.



Netflix will show generative AI ads midway through streams in 2026

Netflix is joining its streaming rivals in testing the amount and types of advertisements its subscribers are willing to endure for lower prices.

Today, at its second annual upfront to advertisers, the streaming leader announced that it has created interactive mid-roll ads and pause ads that incorporate generative AI. Subscribers can expect to start seeing the new types of ads in 2026, Media Play News reported.

“[Netflix] members pay as much attention to midroll ads as they do to the shows and movies themselves,” Amy Reinhard, president of advertising at Netflix, said, per the publication.

Netflix started testing pause ads in July 2024, per The Verge.

Netflix launched its ad subscription tier in November 2022. Today, it said that the tier has 94 million subscribers, compared to the 300 million total subscribers it claimed in January. The current number of ad subscribers represents a 34 percent increase from November. Half of new Netflix subscribers opt for the $8 per month option rather than ad-free subscriptions, which start at $18 per month, the company says.
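(A quick back-of-the-envelope check on those figures: if 94 million represents a 34 percent increase, the ad tier stood at roughly 94 ÷ 1.34 ≈ 70 million subscribers in November.)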



Fighting Obvious Nonsense About AI Diffusion

Our government is determined to lose the AI race in the name of winning the AI race.

The least we can do, if prioritizing winning the race, is to try and actually win it.

It is one thing to prioritize ‘winning the AI race’ against China over ensuring that humanity survives, controls and can collectively steer our future. I disagree with that choice, but I understand it. This mistake is very human.

I also believe that more alignment and security efforts at anything like current margins not only do not slow our AI efforts, they would actively help us win the race against China, by enabling better diffusion and use of AI, and ensuring we can proceed with its development. So the current path is a mistake even if you do not worry about humanity dying or losing control over the future.

However, if you look at the idea of building smarter, faster, more capable, more competitive, freely copyable digital minds we don’t understand that can be given goals and think ‘oh that future will almost certainly stay under humanity’s control and not be a danger to us in any way’ (and when you put it like that, um, what are you thinking?) then I understand the second half of this mistake as well.

What is not an understandable mistake, what I struggle to find a charitable and patriotic explanation for, is to systematically cripple or give away many of America’s biggest and most important weapons in the AI race, in exchange for thirty pieces of silver and some temporary market share.

To continue alienating our most important and trustworthy allies with unnecessary rhetoric and putting up trade barriers with them. To attempt to put tariffs even on services like movies, where we already dominate, and otherwise give the most important markets, like the EU, every reason in their minds to put up barriers to our tech companies and question our reliability as an ally. And simultaneously, in the name of building alliances, to place the most valuable resources with unreliable partners like Malaysia, Saudi Arabia, Qatar and the UAE.

Indeed, we have now scrapped the old Biden ‘AI diffusion’ rule with no sign of its replacement, and where did David Sacks gloat about this? Saudi Arabia, of course. This is what ‘trusted partners’ means to them. Meanwhile, we are warning sternly against use of Huawei’s AI chips, ensuring China keeps all those chips itself. Our future depends on who has the compute, who ends up with the chips. We seem to instead think the future is determined by the revenue from chip manufacturing? Why would that be a priority? What do these people even think is going on?

To not only fail to robustly support and bring down regulatory and permitting barriers to the nuclear power we urgently need to support our data centers, but to actively wipe out the subsidies on which the nuclear industry depends, as the latest budget aims to do with remarkably little outcry via gutting the LPO and tax credits, while China of course ramps up its nuclear power plant construction efforts, no matter what the rhetoric on this might say. Then to use our inability to power the data centers as a reason to put our strategically vital data centers, again, in places like the UAE, because they can provide that power. What do you even call that?

To fail to let our AI companies have the ability to recruit the best and brightest, who want to come here and help make America great, instead throwing up more barriers and creating a climate of fear I’m hearing is turning many of the best people away.

And most of all, to say that the edge America must preserve, the ‘race’ that we must ‘win,’ is somehow the physical production of advanced AI chips. So, people say, in order to maintain our edge in chip production, we should give that edge entirely away right now, allowing those chips to be diverted to China, as would be inevitable in the places that are looking to buy where we seem most eager to enable sales. Nvidia even outright advocates that it should be allowed to sell to China openly, and no one in Washington seems to hold them accountable for this.

And we are doing all this while many perpetuate the myth that our AI efforts are not very solidly ahead of China in the places that matter most, or that China threatens to lock in the world’s customers, because DeepSeek exists (impressive, but still very clearly substantially behind our top labs), or because TikTok and Temu exist, while forgetting that the much bigger Amazon and Meta also exist.

Temu’s sales are less than a tenth of Amazon’s, and the rest of the world’s top four e-commerce websites are Shopify, Walmart.com and eBay. As worrisome as it is, TikTok is only the fourth largest social media app behind Facebook, YouTube and Instagram, and there aren’t signs of that changing. Imagine if that situation was reversed.

Earlier this week I did an extensive readthrough and analysis of the Senate AI Hearing.

Here, I will directly lay out my response to various claims by and cited by US AI Czar David Sacks about the AI Diffusion situation and the related topics discussed above.

  1. Some of What Is Being Incorrectly Claimed.

  2. Response to Eric Schmidt.

  3. China and the AI Missile Gap.

  4. To Preserve Your Tech Edge You Should Give Away Your Tech Edge.

  5. To Preserve Your Compute Edge You Should Sell Off Your Compute.

  6. Shouting From The Rooftops: The Central Points to Know.

  7. The Longer Explanations.

  8. The Least We Can Do.

There are multiple distinct forms of Obvious Nonsense to address, either as text or very directly implied, whoever you attribute the errors to:

David Sacks (US AI Czar): Writing in NYT, former Google CEO Eric Schmidt warns that “China Tech Is Starting to Pull Ahead”:

“China is at parity or pulling ahead of the United States in a variety of technologies, notably at the A.I. frontier. And it has developed a real edge in how it disseminates, commercializes and manufactures tech. History has shown us that those who adopt and diffuse a technology the fastest win.”

As he points out, diffusing a technology the fastest — and relatedly, I would add, building the largest partner ecosystem — are the keys to winning. Yet when Washington introduced an “AI Diffusion Rule”, it was almost 200 pages of regulation hindering adoption of American technology, even by close partners.

The Diffusion Rule is on its way out, but other regulations loom.

President Trump committed to rescind 10 regulations for every new regulation that is added.

If the U.S. doesn’t embrace this mentality with respect to AI, we will lose the AI race.

Sriram Krishnan: Something @DavidSacks and I and many others here have been emphasizing is the need to have broad partner ecosystems using American AI stack rather than onerous complicated regulations.

If the discussion was ‘a bunch of countries like Mexico, Poland and Portugal are in Tier 2 that should instead have been in Tier 1’ then I agree there are a number of countries that probably should have been Tier 1. And I agree that there might well be a simpler implementation waiting to be found.

And yet, why is it that in practice, these ‘broad partner ecosystems using American AI’ always seem to boil down to a handful of highly questionably allied and untrustworthy Gulf States with oil money trying to buy global influence, perhaps with a side of Malaysia and other places that are very obviously going to leak to China? David Sacks literally seems to think that if you do not literally put the data center in specifically China, then that keeps it in friendly hands and out of China’s grasp, and that we can count on our great friendships and permanent alliances with places like Saudi Arabia. Um, no. Why would you think that?

That Eric Schmidt editorial quoted above is a royal mess. For example, you have this complete non-sequitur.

Eric Schmidt and Selina Xu: History has shown us that those who adopt and diffuse a technology the fastest win.

So it’s no surprise that China has chosen to forcefully retaliate against America’s recent tariffs.

China forcefully retaliated against America’s tariffs for completely distinct reasons. The story Schmidt is trying to imply here doesn’t make any sense. His vibe reports are Just So Stories, not backed up at all by economic or other data.

‘By some benchmarks’ you can show pretty much anything, but I mean wow:

Eric Schmidt and Selina Xu: Yet, as with smartphones and electric vehicles, Silicon Valley failed to anticipate that China would find a way to swiftly develop a cheap yet state-of-the-art competitor. Today’s Chinese models are very close behind U.S. versions. In fact, DeepSeek’s March update to its V3 large language model is, by some benchmarks, the best nonreasoning model.

Look. No. Stop.

He then pivots to pointing out that there are other ‘tech’ areas where China is competitive, and goes into full scaremonger mode:

Apps for the Chinese online retailers Shein and Temu and the social media platforms RedNote and TikTok are already among the most downloaded globally. Combine this with the continuing popularity of China’s free open-source A.I. models, and it’s not hard to imagine teenagers worldwide hooked on Chinese apps and A.I. companions, with autonomous Chinese-made agents organizing our lives and businesses with services and products powered by Chinese models.

As I noted above, ‘American online retailers like Amazon and Shopify and the social media platforms Facebook and Instagram are not merely among the most downloaded globally, they are the most used.’

There is a stronger case one can make with physical manufacturing, when Eric then pivots to electric cars (and strangely focuses on Xiaomi over BYD) and industrial robotics.

Then, once again, he makes the insane ‘the person behind is giving away their inferior tech so we should give away our superior tech to them, that’ll show them’ argument:

We should learn from what China has done well. The United States needs to openly share more of its A.I. technologies and research, innovate even faster and double down on diffusing A.I. throughout the economy.

When you are ahead and you share your model, you give your rivals that model for free, killing your lead and your business for some sort of marketing win, and also you’re plausibly creating catastrophic risk. When you are behind, and you share it, sure, I mean why not.

In any case, he’s going to get his wish. OpenAI is going to release an open weight reasoning model, reducing America’s lead in order to send the clear message that yes we are ahead. Hope you all think it was worth it.

The good AI argument is that China is doing a better job in some ways of AI diffusion, of taking its AI capabilities and using them for mundane utility.

Similarly, I keep seeing forms of an argument that says:

  1. America’s export controls have given us an important advantage in compute.

  2. China’s companies have been slowed down by this, but have managed to stay only somewhat behind us in spite of it (largely because following is much easier).

  3. Therefore, we should lift the controls and give up our compute edge.

I’m sorry, what?

At lunch during Selina’s trip to China, when U.S. export controls were brought up, someone joked, “America should sanction our men’s soccer team, too, so they will do better.” So that they will do better.

It’s a hard truth to swallow, but Chinese tech has become better despite constraints, as Chinese entrepreneurs have found creative ways to do more with less. So it should be no surprise that the online response in China to American tariffs has been nationalistic and surprisingly optimistic: The public is hunkering down for a battle and thinks time is on Beijing’s side.

I don’t know why Eric keeps talking about the general tariffs or trade war with China here, or rather I do and it’s very obviously a conflation designed as a rhetorical trick. That’s a completely distinct issue, and I here take no position on that fight other than to note that our actions were not confined to China, and we very obviously shouldn’t be going after our trading partners and allies in these ways – including by Sacks’s logic.

The core proposal here is that, again:

  1. We gave China less to work with, put them at a disadvantage.

  2. They are managing to compete with us despite (his word) the disadvantage.

  3. Therefore we should take away their disadvantage.

It’s literal text. “America should sanction our men’s soccer team, too, so they will do better.” Should we also go break their legs? Would that help?

Then there’s a strange mix of ‘China is winning so we should become a centrally planned economy,’ mixed with ‘China is winning so we cannot afford to ever have any regulations on anything.’ Often both are coming from the same people. It’s weird.

So, shouting from the rooftops, once more with feeling for the people in the back:

  1. America is ahead of China in AI.

  2. Diffusion rules serve to protect America’s technological lead where it matters.

  3. UAE, Qatar and Saudi Arabia are not reliable American allies, nor are they important markets for our technology. We should not be handing them large shares of the world’s most valuable resource, compute.

  4. The exact diffusion rule is gone, but something similar must take its place; to do otherwise would be how America ‘loses the AI race.’

  5. Not having any meaningful regulations at all on AI, or ‘building machines that are smarter and more capable than humans,’ is not a good idea, nor would it mean America would ‘lose the AI race.’

  6. AI is currently virtually unregulated as a distinct entity, so ‘repeal 10 regulations for every one you add’ is to not regulate at all building machines that are soon likely to be smarter and more capable than humans, or anything else either.

  7. ‘Winning the AI race’ is about racing to superintelligence. It is not about who gets to build the GPU. The reason to ‘win’ the ‘race’ is not market share in selling big tech solutions. It is especially not about who gets to sell others the AI chips.

  8. If we care about American dominance in global markets, including tech markets, stop talking about how what we need to do is not regulate AI, and start talking about the things that will actually help us, or at least stop doing the things that actively hurt us and could actually make us lose.

  1. American AI chips dominate and will continue to dominate. Our access to compute dominates, and will dominate if we enact and enforce strong export controls. American models dominate, we are in 1st, 2nd and 3rd with (in some order) OpenAI, Google and Anthropic. We are at least many months ahead.

    1. There was this one time DeepSeek put out an excellent reasoning model called r1 and an app for it.

    2. Through a confluence of circumstances (including misinterpretation of its true training costs, its making a good clean app where it showed its chain of thought, Google being terrible at marketing, it beating several other releases by a few weeks, OpenAI’s best models being behind paywalls, China ‘missile gap’ background fears, comparing only in the realms where r1 was relevant, acting as if only open models count, etc), this caught fire for a bit.

    3. But after a while it became clear that while r1 was a great achievement and indicated DeepSeek was a serious competitor, it was still even at their highest point 4-6 months behind, fundamentally it was a ‘fast follow’ achievement which is very different from taking a lead or keeping pace, and as training costs are scaled up it will be very difficult for DeepSeek to keep pace.

    4. That doesn’t mean DeepSeek doesn’t matter. Without DeepSeek the company, China would be much further behind than this.

    5. In response to this, a lot of jingoism and fearmongering akin to Kennedy’s ‘missile gap’ happened, which continues to this day.

    6. There are of course other tech and non-tech areas where China is competitive, such as Temu and TikTok in tech. But that’s very different.

    7. China does have advantages, especially its access to energy, and if they were allowed to access large amounts of compute that would be worrisome.

  2. The diffusion rules serve to protect America’s technological lead where it matters.

    1. America makes the best AI chips.

    2. The reason this matters is that it lets us be the ones who have those chips.

    3. America’s lead has many causes but one main cause is that we have far more and better compute, due to superior access to the best AI chips.

  3. Biden’s Diffusion Rule placed some countries in Tier 2 that could reasonably have and probably should have (based on what I know) been placed in Tier 1, or at worst a kind of Tier 1.5 with only mildly harsher supervision.

    1. If you want to move places like the remaining NATO members into Tier 1, or do something with a similar effect? That seems reasonable to me.

    2. However this very clearly does not include the very countries that we keep talking about allowing to build massive data centers with American AI chips, like the UAE, Saudi Arabia and Qatar.

    3. When there is talk of robust American allies and who will build and use our technology, somehow the talk is almost always about gulf states and other unreliable allies that are trying to turn their wealth into world influence.

    4. I leave why this might be so as an exercise to the reader.

    5. Even if such states do stay on our side, you had better believe they will use the leverage this brings to extract various other concessions from us.

    6. There is also very real concern that placing these resources in such locations would cause them to be misused by bad actors, including for terrorism, including via CBRN risks. It is foolish not to realize this.

    7. There is a conflation of selling other countries American AI chips and having them build AI data centers, with those countries using America’s AIs and other American tech company products. We should care mostly about them using our software products. The main reason to build AI data centers in other countries that are not our closest most trustworthy allies is if we are unable to build those data centers in America or in our closest most trustworthy allies, which mostly comes down to issues of permitting and power supply, which we could do a lot more to solve.

    8. If you’re going to say ‘the two are closely related we don’t want to piss off our allies’ right about now, I am going to be rather speechless given what else we have been up to lately including in trade, you cannot be serious right now. Or, if you want to actually get serious about this across the board, good, let’s talk.

    9. Is this a sign we don’t fully trust some of these countries? Yes. Yes it is.

  4. The exact diffusion rule is going away, but something similar must and will take its place; to do otherwise would be how America ‘loses the AI race.’

    1. If China could effectively access the best AI chips, that would get rid of one of our biggest and most important advantages. Given their edge in energy, it could over time reverse that advantage.

    2. The point of trying to prevent China from improving its chip production is to prevent China from having the resulting compute. If we sell the chips to prevent this, then they already have the compute now. You lose.

    3. It is very clear that our exports to the ‘tier 2’ countries that look to buy what looks suspiciously like a lot of chips are often diverted to use by China, with the most obvious example being those sold to Malaysia.

    4. We should also worry about what happens to data centers built in places like Saudi Arabia or the UAE.

    5. I will believe the replacement rule will have the needed teeth when I see it.

    6. That doesn’t mean we can’t find a better, simpler implementation that protects American chips from falling into Chinese hands. But we need some diffusion rule that we can enforce, and that in practice actually prevents the Chinese from buying or getting access to our AI chips in quantity.

    7. Yes, if we sell our best AI chips to everyone freely, as Nvidia wants to do, or do it in ways that are effectively the same thing, then that helps protect Nvidia’s profits and market share, and by denying others markets we do gain some edge in the ability to maintain our dominance in making AI chips.

    8. But so what? All we do is make a little money on the AI chips, and China gets to catch up in actually having and using the AI chips, which is what matters. We’d be sacrificing the future on the altar of Nvidia’s stock price. This is the capitalist selling the rope with which to hang him. ‘Winning the race’ to an ordinary tech market is not what matters. If the only way to protect our lead for a little longer there is to give away the benefits of the lead, of what use was the lead?

    9. It also would make very little difference to either Nvidia or its Chinese competitors.

    10. Nvidia can still sell as many chips as it can produce, well above cost. All the chips Nvidia is not allowed to sell to China, even the crippled H20s, will happily be purchased in Western markets at profitable prices, if Nvidia allows it, giving America and its allies more compute and China less compute.

    11. I would be happy, if necessary, to have USG purchase any chips that Nvidia or AMD or anyone else is unable to sell due to diffusion rules. We would have many good uses for them, we can use them for public compute resources for universities and startups or whatever if the military doesn’t want them. The cost is peanuts relative to the stakes. (Disclosure, I am a shareholder of Nvidia, etc, but also I am writing this entire post).

    12. Demand in China for AI chips greatly outstrips supply. They have no need for export markets for their chips, and indeed we should be happy if they choose to export some of them rather than keeping them for domestic use.

    13. China already sees AI chip production as central to its future and national security. They are already pushing as hard as they dare.

  5. Not having any meaningful regulations at all on AI, or ‘building machines that are smarter and more capable than humans,’ is not a good idea, nor would it mean America would ‘lose the AI race.’

    1. This is not a strawman position. The House is trying to impose a 10-year moratorium on state and local enforcement of any laws whatsoever related to AI, even a potential law banning CSAM, without offering anything to replace that in any way, and Congress notoriously can’t pass laws these days. We also have the call to ‘repeal 10 regulations for every new one,’ which is again de facto a call for no regulations at all (see #6).

    2. Highly capable AI represents an existential risk to humanity.

    3. If we ‘win the race’ by simply going ahead as fast as possible, it’s not America that wins the future. The AIs win the future.

    4. I can’t go over all the arguments about that here, but seriously it should be utterly obvious that building more intelligent, capable, competitive, faster, cheaper minds and optimization engines, that can be freely copied and given whatever goals and tasks, is not a safe thing for humanity to do.

    5. I strongly believe it turns out it’s far more dangerous than I made it sound there, for many many reasons. I don’t have the space here to talk about why, but seriously, how do people claim this is a ‘safe’ action. What?

    6. Even if highly capable AI remains under our control, it is going to transform the world and all aspects of our civilization and way of life. The idea that we would not want to steer that at all seems rather crazy.

    7. Regulations do not need to ‘slow down’ AI in a meaningful way. Indeed, a total lack of meaningful regulations would slow down diffusion and practical use of AI, including for national security and core economic purposes, more than wise regulation, because no one is going to use AI they cannot trust.

    8. That goes to both people knowing that they can trust AI, and also to requiring the AIs be made trustworthy. Security is capability. We also need to protect our technology and intellectual property from theft if we want to keep a lead.

    9. A lack of such regulations would also mean falling back upon the unintended consequences of ordinary law as they then happen to apply to AI, which will often be extremely toxic for our ability to apply AI to the most valuable tasks.

    10. If we try to not regulate AI at all, the public will turn against AI. Americans already dislike AI, in a way the Chinese do not. We must build trust.

    11. China, like everyone else, already regulates AI. The idea that if we had a fraction of the regulations they do, or if we interfere with companies or the market a fraction of how much they constantly do so everywhere, that we suddenly ‘lose the race,’ is silly.

    12. We have a substantial lead in AI, despite many efforts to lose it that I discuss later. We are not in danger of ‘losing’ every time we breathe on the situation.

    13. Most of the regulations that are being pushed for are about transparency, often even transparency to the government, so we can know what the hell is going on, and so people can critique the safety and security plans of labs. They are about building state capacity to evaluate models, and using that, which actively benefits AI companies in various ways as discussed above.

    14. There are also real and important mundane harms to deal with now.

    15. Yes, if we were to impose highly onerous, no good, very bad regulations, in the style of the European Union, that would threaten our AI lead and be very bad. This is absolutely a real risk. But this type of accusation consistently gets levied against any bill attempting to do anything, anywhere, for any reason – or that someone is trying to ‘ban math’ or ‘kill AI’ or whatever. Usually this involves outright hallucinations about what is in the bill, or its consequences.

  6. AI is currently virtually unregulated as a distinct entity, so ‘repeal 10 regulations for every one you add’ is to not regulate at all building machines that are soon likely to be smarter and more capable than humans, or anything else either.

    1. There are many regulations that impact AI in various ways.

    2. Many of those regulations are worth repealing or reforming. For example, permitting reform on power plants and transmission lines. And there are various consequences of copyright, or of common law, that should be reconsidered for the AI age.

    3. What almost all these rules have in common is that they are not rules about AI. They are rules that are already in place in general, for other reasons. And again, I’d be happy to get rid of many of them, in general or for AI in particular.

    4. But yes, you are going to want to regulate AI, and not merely in the ‘light touch’ ways that are code words for doing nothing, or actively working to protect AI from existing laws.

    5. AI is soon going to be the central fact about the world. To suggest this level of non-intervention is not classical liberalism, it is anarchism.

    6. Anarchism does not tend to go well for the uncompetitive and disadvantaged, which in the future age of ASI would be the humans, and it fails to solve various important market failures, collective action and public goods problems and so on.

    7. The reason why a general hands-off approach has in the past tended to benefit humans, so long as you work to correct key market failures and solve particular collective action problems, is that humans are the most powerful optimization engines, and most intelligent and powerful minds, on the planet, and we have various helpful social dynamics and characteristics. All of that, and some other key underpinnings I could go into, often won’t apply to a future world with very powerful AI.

    8. If we don’t do sensible regulations now, while we can all navigate this calmly, it will get done after something goes wrong, and not calmly or wisely.

  7. ‘Winning the AI race’ is not about who gets to build the GPU. ‘Winning’ the ‘race’ is not important because of who gets market share in selling big tech solutions. It is especially not about who gets to sell others the AI chips. Winning the race is about the race to superintelligence.

    1. The major AI labs say we will likely reach AGI within Trump’s second term, with superintelligence (ASI) following soon thereafter. David Sacks himself endorses this view explicitly.

    2. ‘Winning the race’ to superintelligence is indeed very important. The way in which humanity reaches superintelligence (assuming we do reach it) will determine the future.

    3. That future might be anything from wonderful to worthless. It might or might not involve humanity surviving, or being in control over the future. It might or might not reflect different values, or be something we would find valuable.

    4. If we build a superintelligence before we know how to align it, meaning before we know how to get it to do what we want it to do, everyone dies, or at minimum we lose control over the future.

    5. If we build a superintelligence and know how to align it, but we don’t choose a good thing to align it to, meaning we don’t wisely choose how it will act, then the same thing happens. We die, or we lose control over the future.

    6. If we build a superintelligence and know how to align it, and align it in general to ‘whatever the local human tells it to do,’ even with restrictions on that, and give out copies, this results at best in gradual disempowerment of humanity and us losing control over the future and the future likely losing all value. This problem is hard.

    7. This is very different from a question like ‘who gets better market share for their AI products,’ whether that is hardware or software, and questions about things like commercial adaptation and lockin or tech stack usage or what not, as if AI was some ordinary technology.

    8. AI actually has remarkably little lock-in. You can mostly swap one model out for another at will if someone comes out with a better one. There’s no need to run a model that matches the particular AI chips you own, either. AI itself will be able to simplify the ‘migration’ process or any lock-in issues.

    9. It’s not that whose AI models people use doesn’t matter at all. But in a world in which we will soon reach superintelligence, it’s mostly about market share in the meantime to fund AI development.

    10. If we don’t soon reach superintelligence, then we’re dealing with a far more ‘ordinary’ technology, and yes we want market share, but it’s no longer an existentially important race, it won’t have dramatic lock-in effects, and getting to the better AI products first will still depend on us retaining our compute advantages as long as possible.

  8. If we care about American dominance in global markets, including tech markets, and especially if we care about winning the race to AGI and superintelligence and otherwise protecting American national security, stop talking about how what we need to do is not regulate AI, and start talking about the things that will actually help us, or at least stop doing the things that actively hurt us and could actually make us lose.

    1. Straight talk. While it’s not my primary focus because development of AGI and ASI is more important, I strongly agree that we want American tech, especially American software, being used as widely as possible, especially by allies, across as much of the tech stack as possible. Even more than that, I strongly want America to have the lead in frontier highly capable AI, including AGI and then ASI, in the ways that determine the future.

    2. If we want to do that, what is most important to accomplishing this?

    3. We need allies to work with us and use our tech. Everyone says this. That means we need to have allies! That means working with them, building trust. Make them want to build on our tech stacks, and buy our products.

    4. That also means not imposing tariffs on them, or making them lose trust in us and our technology. Various recent actions have made our allies lose trust, in ways that are causing them to be less trusting of American tech stacks. And when we go to trade wars with them, you know what our main exports are that they will go after? Things like AI.

    5. It also means focusing most on our most important and trustworthy allies that have the most important markets. That means places like our NATO allies, Japan, South Korea and Australia, not Saudi Arabia, Qatar and the UAE. Those latter markets don’t matter zero, but they are relatively tiny.

    6. Yes, avoiding hypothetical sufficiently onerous regulation on AI directly, and I will absolutely be keeping an eye out for this. Most of the regulatory and legal barriers that matter lie elsewhere.

    7. The key barriers are in the world of atoms, not the world of bits.

    8. Energy generation and transmission, permitting reform.

    9. High-skilled immigration, letting talent come to America.

    10. Education reform so AI helps teach rather than helping students cheat.

    11. Want reshoring? Repeal the Jones Act so we can transport the resulting goods. Automate the ports. Allow self-driving cars and trucks broadly. And so on.

    12. Regulations that prevent the application of AI to high value sectors, or otherwise hold back America. Broad versions of YIMBY for housing. Occupational licensing. FDA requirements. The list goes on. Unleash the abundance agenda, it mostly lines up with what AI needs. It’s time to build.

    13. Dealing with various implications of other laws that often were crazy already and definitely don’t make sense in an AI world.

    14. The list goes on.

Or, as Derek Thompson put it:

Derek Thompson: Trump’s new AI directive (quoted below from David Sacks) argues the US should take care to:

– respect our trading partners/allies rather than punish them with dumb rules that restrict trade

– respect “due process”

It’d be interesting to apply these values outside of AI!

Jordan Schneider: It’s an NVDA press release. Just absurd.

David Sacks continues to beat the drum that the diffusion rule ‘undermines the goal of winning the AI race,’ as if the AI race is about Nvidia’s market share. It isn’t.

If we want to avoid allocation of resources by governmental decision, overreach of our executive branch authorities to restrict trade, alienating US allies, and lack of due process, which are Sacks’s key points here? Yeah, those generally sound like good ideas.

To that end, yes, I do believe we can improve on Biden’s proposed diffusion rules, especially when it comes to US allies that we can trust. I like the idea that we should impose fewer trade restrictions on these friendly countries, so long as we can ensure that the chips don’t effectively fall into the wrong hands. We can certainly talk price.

Alas, in practice, it seems like the actual plans are to sell massive amounts of AI chips to places like UAE, Saudi Arabia and Malaysia. Those aren’t trustworthy American allies. Those are places with close China ties. We all know what those sales really mean, and where they could easily be going. And those are chips we could have kept in more trustworthy and friendly hands, that are eager to buy them, especially if they have help facilitating putting those chips to good use.

The policy conversations I would like to be having would focus not only on how to best supercharge American AI and the American economy, but also on how to retain humanity’s ability to steer the future and ensure AI doesn’t take control, kill everyone or otherwise wipe out all value. And ideally, to invest enough in AI alignment, security, transparency and reliability that there would start to be a meaningful tradeoff where going safer would also mean going slower.

Alas. We are massively underinvesting in reliability and alignment and security purely from a practical utility perspective, and we are not even having that discussion.

Instead we are having a discussion about how, even if your only goal is ‘America must beat China and let the rest handle itself,’ to stop shooting ourselves in the foot on that basis alone.

The very least we can do is not shoot ourselves in the foot, and not sell out our future for a little bit of corporate market share or some amount of oil money.




Microsoft shares its process (and discarded ideas) for redone Windows 11 Start menu

Microsoft put a lot of focus on Windows 11’s design when it released the operating system in 2021, making a clean break with the design language of Windows 10 (which had, itself, simply tweaked and adapted Windows 8’s design language from 2012). Since then, Microsoft has continued to modify the software’s design in bits and pieces, both for individual apps and for foundational UI elements like the Taskbar, system tray, and Windows Explorer.

Microsoft is currently testing a redesigned version of the Windows 11 Start menu, one that reuses most of the familiar elements from the current design but reorganizes them and gives users a few additional customization options. On its Microsoft Design blog today, the company walked through the new design and showed some of the ideas that were tried and discarded in the process.

This discarded Start menu design toyed with an almost Windows XP-ish left-hand sidebar, among other elements. Credit: Microsoft

Microsoft says it tested its menu designs with “over 300 Windows 11 fans” in unmoderated studies, “and dozens more” in “live co-creation calls.” These testers’ behavior and reactions informed what Microsoft kept and what it discarded.

Many of the discarded menu ideas include larger previews for recently opened files, more space given to calendar reminders, and recommended “For You” content areas; one has a “create” button that would presumably activate some generative AI feature. Looking at the discarded designs, it’s easier to appreciate that Microsoft went with a somewhat more restrained redesign of the Start menu that remixes existing elements rather than dramatically reimagining it.

Microsoft has also tweaked the side menu that’s available when you have a phone paired to your PC, making it toggleable via a button in the upper-right corner. That area is used to display recent texts and calls and other phone notifications, recent contacts, and battery information, among a couple other things.

Microsoft’s team wanted to make sure the new menu “felt like it belonged on both a [10.5-inch] Surface Go and a 49-inch ultrawide,” a nod to the variety of hardware Microsoft needs to consider when making any design changes to Windows. The menu the team landed on is essentially what has been visible in Windows Insider Preview builds for a month or so now: two rows of pinned icons, a “Recommended” section with recently installed apps, recently opened files, a (sigh) Windows Store app that Microsoft thinks you should try, and a few different ways to access all the apps on your PC. By default, these will be arranged by category, though you can also view a hierarchical alphabetized list like you can in the current Start menu; the big difference is that this view is at the top level of the Start menu in the new version, rather than being tucked away behind a button.

For more on the history of the Start menu from its inception in the early ’90s through the release of Windows 10, we’ve collected tons of screenshots and other reminiscences here.





FCC threatens EchoStar licenses for spectrum that SpaceX wants to use

“If SpaceX had done a basic search of public filings, it would know that EchoStar extensively utilizes the 2 GHz band and that the Commission itself has confirmed the coverage, utilization, and methodology for assessing the quality of EchoStar’s 5G network based on independent drive-tests,” EchoStar told the FCC. “EchoStar’s deployment already reaches over 80 percent of the United States population with over 23,000 5G sites deployed.”

There is also a pending petition filed by Vermont-based VTel Wireless, which asked the FCC to reconsider a 2024 decision to extend EchoStar construction deadlines for several spectrum bands. VTel was outbid by Dish in auctions for licenses to use AWS H Block and AWS-3 bands.

“In this case, teetering on the verge of bankruptcy, EchoStar found itself unable to meet the commitments previously made to the Commission in connection with its approval of T-Mobile’s merger with Sprint—an approval predicated on EchoStar constructing a fourth nationwide 5G broadband network by June 14, 2025,” VTel wrote in its October 2024 petition. “But with no notice to or input from the public, WTB [the FCC’s Wireless Telecommunications Bureau] apparently cut a deal with EchoStar to give it yet more time to complete that network and finally put its wireless licenses to use.”

FCC seeks public input

Carr’s letter said he asked FCC staff to investigate EchoStar’s compliance with construction deadlines and “to issue a public notice seeking comment on the scope and scale of MSS [mobile satellite service] utilization in the 2 GHz band that is currently licensed to EchoStar or its affiliates.” The AWS-4 band (2000-2020 MHz and 2180-2200 MHz) was originally designated for satellite service. The FCC decided to also allow terrestrial use of the frequencies in 2012 to expand mobile broadband access.

The FCC Space Bureau announced yesterday that it is seeking comment on EchoStar’s use of the 2 GHz spectrum, and the Wireless Telecommunications Bureau is seeking comment on VTel’s petition for reconsideration.

“In 2019, EchoStar’s predecessor, Dish, agreed to meet specific buildout obligations in connection with a number of spectrum licenses across several different bands,” Carr wrote. “In particular, the FCC agreed to relax some of EchoStar’s then-existing buildout obligations in exchange for EchoStar’s commitment to put its licensed spectrum to work deploying a nationwide 5G broadband network. EchoStar promised—among other things—that its network would cover, by June 14, 2025, at least 70 percent of the population within each of its licensed geographic areas for its AWS-4 and 700 MHz licenses, and at least 75 percent of the population within each of its licensed geographic areas for its H Block and 600 MHz licenses.”



Dutch scientists built a brainless soft robot that runs on air 

Most robots rely on complex control systems, AI-powered or otherwise, that govern their movement. These centralized electronic brains need time to react to changes in their environment and produce movements that are often awkwardly, well, robotic.

It doesn’t have to be that way. A team of Dutch scientists at the FOM Institute for Molecular and Atomic Physics (AMOLF) in Amsterdam built a new kind of robot that can run, go over obstacles, and even swim, all driven only by the flow of air. And it does all that with no brain at all.

Sky-dancing physics

“I was in a lab, working on another project, and had to bend a tube to stop air from going through it. The tube started oscillating at very high frequency, making a very loud noise,” says Alberto Comoretto, a roboticist at AMOLF and lead author of the study. To see what was going on with the tube, Comoretto set up a high-speed camera and recorded the movement. He found that the movement resulted from the interplay between the air pressure inside the tube and the state of the tube itself.

When there was a kink in the tube, the increasing pressure pushed that kink along the tube’s length. That caused the pressure to decrease, which enabled a new kink to appear and the cycle to repeat. “We were super excited because we saw this self-sustaining, periodic, asymmetric motion,” Comoretto told Ars.
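One way to see why this produces a self-sustaining cycle is as a relaxation oscillator. Below is a toy Python model of the dynamic described above. This is entirely my own simplification, not the AMOLF team’s model, and all thresholds and rates are made up for illustration: pressure builds while the tube is kinked, the kink is pushed out once pressure crosses a threshold, and a new kink forms once the pressure has vented.

```python
# Toy relaxation-oscillator model of the kinked-tube cycle described above.
# Illustrative simplification only; constants are invented for demonstration.
P_OPEN = 1.0    # pressure at which the kink is pushed along (tube opens)
P_CLOSE = 0.2   # pressure below which the tube re-kinks (tube closes)
INFLOW = 0.05   # pressure gain per step from the air supply
OUTFLOW = 0.15  # pressure loss per step while the tube is open

pressure, kinked = 0.0, True
for step in range(200):
    if kinked:
        pressure += INFLOW            # blocked tube: pressure builds
        if pressure >= P_OPEN:
            kinked = False            # kink is pushed along; air escapes
    else:
        pressure -= OUTFLOW           # open tube: pressure vents
        if pressure <= P_CLOSE:
            kinked = True             # pressure too low; a new kink forms
    if step % 10 == 0:
        print(f"step {step:3d}  pressure={pressure:.2f}  kinked={kinked}")
```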

The first reason for Comoretto’s excitement was that the flapping tube in his lab was driven by the kind of airflow physics that Peter Minshall, Doron Gazit, and Aireh Dranger harnessed to build their famous dancing “Fly Guys” for the Olympic Games in Atlanta in 1996. The second reason was that the asymmetry and periodicity he saw in the tube’s movement pattern were also present in the way all living things move, from single-celled organisms to humans.



New pope chose his name based on AI’s threats to “human dignity”

“Like any product of human creativity, AI can be directed toward positive or negative ends,” Francis said in January. “When used in ways that respect human dignity and promote the well-being of individuals and communities, it can contribute positively to the human vocation. Yet, as in all areas where humans are called to make decisions, the shadow of evil also looms here. Where human freedom allows for the possibility of choosing what is wrong, the moral evaluation of this technology will need to take into account how it is directed and used.”

History repeats with new technology

While Pope Francis led the call for respecting human dignity in the face of AI, it’s worth looking a little deeper into the historical inspiration for Leo XIV’s name choice.

In the 1891 encyclical Rerum Novarum, the earlier Leo XIII directly confronted the labor upheaval of the Industrial Revolution, which generated unprecedented wealth and productive capacity but came with severe human costs. At the time, factory conditions had created what the pope called “the misery and wretchedness pressing so unjustly on the majority of the working class.” Workers faced 16-hour days, child labor, dangerous machinery, and wages that barely sustained life.

The 1891 encyclical rejected both unchecked capitalism and socialism, instead proposing Catholic social doctrine that defended workers’ rights to form unions, earn living wages, and rest on Sundays. Leo XIII argued that labor possessed inherent dignity and that employers held moral obligations to their workers. The document shaped modern Catholic social teaching and influenced labor movements worldwide, establishing the church as an advocate for workers caught between industrial capital and revolutionary socialism.

Just as mechanization disrupted traditional labor in the 1890s, artificial intelligence now potentially threatens employment patterns and human dignity in ways that Pope Leo XIV believes demand similar moral leadership from the church.

“In our own day,” Leo XIV concluded in his formal address on Saturday, “the Church offers to everyone the treasury of her social teaching in response to another industrial revolution and to developments in the field of artificial intelligence that pose new challenges for the defense of human dignity, justice, and labor.”



A Live Look at the Senate AI Hearing

So is the plan then to have AI developers not vet their systems before rolling them out? Is OpenAI not planning to vet their systems before rolling them out? This ‘sensible regulation that does not slow us down’ seems to translate to no regulation at all, as per usual.

And that was indeed the theme of the Senate hearing, both from the Senators and from the witnesses. The Senate is also setting up to attempt full preemption of all state and local AI-related laws, without any attempt to replace them, or realistically without any attempt to later pass any laws whatsoever to replace the ones that are barred.

Most of you should probably skip the main section that paraphrases the Senate hearing. I do think it is enlightening, and at times highly amusing at least to me, but it is long and one must prioritize; I only managed to cut it down by ~80%. You can also pick and choose from the pull quotes at the end.

I will now give the shortened version of this hearing, gently paraphrased, in which we get bipartisan and corporate voices saying among other things rather alarmingly:

Needless to say, I do not agree with most of that.

It is rather grim out there. Almost everyone seems determined to not only go down the Missile Gap road into a pure Race, but also to use that as a reason to dismiss any other considerations out of hand, and indeed not to even acknowledge that there are any other worries out there to dismiss, beyond ‘the effect on jobs.’ This includes both the Senators and also Altman and company.

The discussion was almost entirely whether we should move to lock out all AI regulations and whether we should impose any standards of any kind on AI at all, except narrowly on deepfakes and such. There was no talk about even trying to make the government aware of what was happening. SB 1047 and other attempts at sensible rules were routinely completely mischaracterized.

There was no sign that anyone was treating this as anything other than a (very important) Normal Technology and Mere Tool.

If you think most of the Congress has any interest in not dying? Think again.

That could of course rapidly change once more. I expect it to. But not today.

The most glaring pattern was the final form of Altman’s pivot to jingoism and opposing all meaningful regulation, while acting as if AI poses no major downside risks, not even technological unemployment let alone catastrophic or existential risks or loss of human control over the future.

Peter then offers a summary of testimony, including how much Cruz is driving the conversation towards ‘lifting any finger anywhere dooms us to become the EU and to lose to China’ style rhetoric.

You also get to very quickly see which Senators are serious about policy and understanding the world, versus those here to create partisan soundbites or push talking points and hear themselves speak, versus those who want to talk about or seek special pork for their state. Which ones are curious and inquisitive versus which ones are hostile and mean. Which ones think they are clever and funny when they aren’t.

It is amazing how consistently and quickly the bad ones show themselves. Every time.

And to be clear, that has very little to do with who is in which party.

Indeed, the House Energy and Commerce Committee is explicitly trying for outright preemption, without replacement, by slipping it into their budget proposal (edit: I originally thought this was the Senate, not the House):

That is, of course, completely insane, in addition to presumably being completely illegal to put into a budget. Presumably the Byrd rule kills it for now.

It would be one thing to pass this supposed ‘light touch’ AI regulation, that presented a new legal regime to handle AI models at the federal level, and do so while preempting state action.

It is quite another to have your offer be nothing. Literally nothing. As in, we are Congress, we cannot pass laws, but we will prevent you from enforcing any laws, or fixing any laws, for ten years.

They did at least include a carveout for laws that actively facilitate AI, including power generation for AI, so that states aren’t prevented from addressing the patchwork of existing laws that might kneecap AI and definitely will prevent adaptation and diffusion in various ways.

Even with that, the mind boggles to think of even the mundane implications. You couldn’t pass laws against anything, including CSAM or deepfakes. That’s in addition to the inability of the states to do the kinds of things we need them to do that this law is explicitly trying to prevent, such as SB 1047’s requirements that if you want to train a frontier AI model, you have to tell us you are doing that and share your safety and security protocol, and take reasonable care against catastrophic (and existential) risks.

Again, this is with zero sign of any federal rule at all.

In the following, if quote marks are used they literally said it. If not, it’s a dramatization. I was maximizing truthiness, not aiming for a literal translation; read this as if it’s an extended SNL sketch, written largely for my own use and amusement.

Senator Ted Cruz (R-Texas): AI innovation. No rules. Have to beat China. Europe overregulates. Yay free internet. Biden had bad vibes and was woke. Not selling advanced AI chips to places that would let our competitors have them would cripple American tech companies. Trump. We need sandboxes and deregulation.

Senator Maria Cantwell (D-Washington): Pacific northwest. University of Washington got Chips act money. Microsoft contracted a fusion company in Washington. Expand electricity production. “Export controls are not a trade strategy.” Broad distribution of US-made AI chips. “American open AI systems” must be dominant across the globe.

Senator Cruz: Thank you. Our witnesses are OpenAI CEO Sam Altman, Lisa Su the CEO of AMD who is good because Texas, Michael Intrator the CEO of CoreWeave, and Brad Smith the Vice Chair and President of Microsoft.

Sam Altman (CEO OpenAI): Humility. Honor to be here. Scientists are now 2-3 times more productive and other neat stuff. America. Infrastructure. Texas. American innovation. Internet. America. Not Europe. America is Magic.

Dr. Lisa Su (CEO AMD): Honor. America. We make good chips. They do things. AI is transformative technology. Race. We could lose. Must race faster. Infrastructure. Open ecosystems. Marketplace of ideas. Domestic supply chain. Talent. Public-private partnership. Boo export controls, we need everyone to use our chips because otherwise they won’t use our technology and they might use other technology instead, this is the market we must dominate. AMD’s market. I had a Commodore 64 and an Apple II. They were innovative and American.

Senator Cruz: I also had an Apple II, but sadly I’m a politician now.

Michael Intrator (CEO of CoreWeave): I had a VIC-20. Honor. American infrastructure. Look at us grow. Global demand for AI infrastructure. American competitive edge. Productivity. Prosperity. Infrastructure. Need more compute. AI infrastructure will set the global economic agenda and shape human outcomes. China. AI race.

We need: Strategic investment stability. Stable, predictable policy frameworks, secure supply chains, regulatory environments that foster innovation. Energy infrastructure development. Permitting and regulatory reform. Market access. Trade agreements. Calibrated export controls. Public-private partnership. Innovation. Government and industry must work together.

Brad Smith (President of Microsoft): Chart with AI tech stack. All in this together. Infrastructure. Platform. Applications. Microsoft.

We need: Innovation. Infrastructure. Support from universities and government. Basic research. Faster adaptation, diffusion. Productivity. Economic growth. Investing in skilling and education. Right approach to export controls. Trust with the world. We need to not build machines that are better than people, only machines that make people better. Machines that give us jobs, and make our jobs better. We can do that, somehow.

Tech. People. Ambition. Education. Opportunity. The future.

Senator Tim Sheehy (R-MT, who actually said this verbatim): “Thank you for your testimony. Certainly makes me sleep better at night. Worried about Terminator and Skynet coming after us, knowing that you guys are behind the wheel, but in five words or less, start with you Mr. Smith. What are the five words you need to see from our government to make sure we win this AI race?”

Brad Smith: “More electricians. That’s two words. Broader AI education.”

Senator Sheehy: “And no using ChatGPT as a friend.”

Michael Intrator: “We need to focus on streamlining the ability to build large things.”

Dr. Lisa Su: “Policies to help us run faster in the innovation race.”

Sam Altman: “Allow supply chain-sensitive policy.”

Senator Sheehy: So we race. America wins races. Government support and staying out of your way are how America wins races. How do we incentivize companies so America wins the race? A non-state actor could win.

Sam Altman (a non-state actor currently winning the race): Stargate. Texas. We need electricity, permitting, supply chain. Investment. Domestic production. Talent recruitment. Legal clarity and clear rules. “Of course there will be guardrails” but please assure us you won’t impose any.

Dr. Lisa Su: Compute. AMD, I mean America, must build compute. Need domestic manufacturing. Need simple export rules.

Senator Sheehy: Are companies weighing doing AI business in America versus China?

Dr. Lisa Su: American tech is the best, but if it’s not available they’ll buy elsewhere.

Senator Sheehy: Infrastructure, electricians, universities, regulatory framework, can do. Innovation. Talent. Run faster. Those harder. Can’t manufacture talent, can’t make you run faster. Can only give you tools.

Senator Cantwell: Do we need NIST to set standards?

Sam Altman: “I don’t think we need it. It can be helpful.”

Michael Intrator: “Yes, yes.”

Senator Cantwell: Do we want NIST standards that let us move faster?

Brad Smith: “What I would say is this, first of all, NIST is where standards go to be adopted, but it’s not necessarily where they first go to be created.” “We will need industry standards, we will need American adoption of standards, and you are right. We will need US efforts to really ensure that the world buys into these standards.”

Michael Intrator: Standards need to be standardized.

Senator Cantwell: Standards let you move fast. Like HTTP or HTML. So, on exports, Malaysia. If we sell them chips can we ensure they don’t sell those chips to China?

Michael Intrator (‘well no, but…’): If we don’t sell them chips, someone else will.

Senator Cantwell: We wouldn’t want Huawei to get a better chip than us and put in a backdoor. Again. Don’t buy chips with backdoors. Also, did you notice this new tariff policy that also targets our allies is completely insane? We could use some allies.

Michael Intrator: Yes. Everyone demands AI.

Dr. Lisa Su: We need an export strategy. Our allies need access. Broad AI ecosystem.

Senator Bernie Moreno (R-Ohio): You all need moar power. TSMC making chips in America good. Will those semiconductor fabs use a lot of energy?

Dr. Lisa Su: Yes.

Senator Moreno: We need to make the highest-performing chips in America, right?

Dr. Lisa Su: Right.

Senator Moreno: Excuse me while I rant that 90% of new power generation lately has been wind and solar when we could use natural gas. And That’s Terrible.

Brad Smith: Broad based energy solutions.

Senator Moreno: Hey I’m ranting here! Renewables suck. But anyway… “Mr. Altman, thank you for first of all, creating your platform and an open basis and agreeing to stick to the principles of nonprofit status. I think that’s very important.” So, Altman, how do we protect our children from AI? From having their friends be AI bots?

Sam Altman: Iterative deployment. Learn from mistakes. Treat adults like adults. Restrict child access. Happy to work with you on that. We must beware AI and social relationships.

Senator Moreno: Thanks. Mr. Intrator, talk about stablecoins?

Michael Intrator: Um, okay. Potential. Synergy.

Senator Klobuchar (D-Minnesota): AI is exciting. Minnesota. Renewables good, actually. “I think David Brooks put it the best when he said, I found it incredibly hard to write about AI because it is literally unknowable whether this technology is leading us to heaven or hell; we want it to lead us to heaven. And I think we do that by making sure we have some rules of the road in place so it doesn’t get stymied or set backwards because of scams or because of use by people who want to do us harm.” Mr. Altman, do you agree that a risk-based approach to regulation is the best way to place necessary guardrails for AI without stifling innovation?

Sam Altman: “I do. That makes a lot of sense to me.”

Senator Klobuchar: “Okay, thanks. And did you figure that out in your attic?”

Sam Altman: “No, that was a more recent discovery.”

Senator Klobuchar: Do you agree that consumers need to be more educated?

Brad Smith: Yes.

Senator Klobuchar: Pivot. Altman, what evals do you use for hallucinations?

Sam Altman: Hallucinations are getting much better. Users are smart, they can handle it.

Senator Klobuchar: Uh huh. Pivot. What about my bill about sexploitation and deepfakes? Can we build models that can detect deepfakes?

Brad Smith: Working on it.

Senator Klobuchar: Pivot. What about compensating content creators and journalists?

Brad Smith: Rural newspapers good. Mumble, collective negotiations, collaborative action, Congress, courts. Balance. They can get paid but we want data access.

Senator Cruz: I am very intelligent. Who’s winning, America or China, how close is it and how can we win?

Sam Altman: American models are best, but not by a huge amount of time. America. Innovation. Entrepreneurship. America. “We just need to keep doing the things that have worked for so long and not make a silly mistake.”

Dr. Lisa Su: I only care about chips. America is ahead in chips, but even without the best chips you can get a lot done. They’re catching up. Spirit of innovation. Innovate.

Michael Intrator: On physical infrastructure it’s not going so great. Need power and speed.

Brad Smith: America in the lead but it is close. What matters is market share and adaptation. For America, that is. We need to win trust of other countries and win the world’s markets first.

Ted Cruz: Wouldn’t it be awful if we did something like SB 1047? That’s just like something the EU would do, it’s exactly the same thing. Totally awful, am I right?

Sam Altman: Totally, sure, pivot. We need algorithms and data and compute and the best products. Can’t stop, won’t stop. Need infrastructure, need to build chips in this country. It would be terrible if the government tried to set standards. Let us set our own standards.

Lisa Su: What he said.

Brad Smith (presumably referring to when SB 1047 said you would have had to tell us the model exists and is being trained and what your safety plan was): Yep, what he said. Especially important is no pre-approval requirements.

Michael Intrator: A patchwork of regulatory overlays would cause friction.

Senator Brian Schatz (D-Hawaii): You do know no one is proposing these EU-style laws, right? And the alternative proposal seems to be nothing? Does nothing work for you?

Sam Altman: “No, I think some policy is good. I think it is easy for it to go too far and as I’ve learned more about how the world works, I’m more afraid that it could go too far and have really bad consequences. But people want to use products that are generally safe. When you get on an airplane, you kind of don’t think about doing the safety testing yourself. You’re like, well, maybe this is a bad time to use the airplane example, but you kind of want to just trust that you can get on it.”

Senator Brian Schatz: Great example. We need to know what we’re racing for. American values. That said, should AI content be labeled as AI?

Brad Smith: Yes, working on it.

Senator Brian Schatz: “Data really is intellectual property. It is human innovation, human creativity.” You need to pay for it. Isn’t the tension that you want to pay as little as possible?

Brad Smith: No? And maybe we shouldn’t have to?

Brian Schatz: How can an AI agent deliver services and reduce pain points while interacting with government?

Sam Altman: The AI on your phone does the thing for you and answers your questions.

Brad Smith: That means no standing in line at the DMV. Abu Dhabi does this already.

Senator Ted Budd (R-North Carolina): Race. China. Energy. Permitting problems. They are command and control so they’re good at energy. I’m working on permit by rule. What are everyone’s experiences contracting power?

Michael Intrator: Yes, power. Need power to win race. Working on it. Regulation problems.

Brad Smith: We build a lot of power. We do a lot of permitting. Federal wetlands permit is our biggest issue, that takes 18-24 months, state and local is usually 6-9.

Senator Ted Budd: I’m worried people might build on Chinese open models like DeepSeek and the CCP might promote them. How important is American leadership in open and closed models? How can we help?

Sam Altman: “I think it’s quite important in both.” You can help with energy and infrastructure.

Senator Andy Kim (D-New Jersey): What is this race? You said it’s about adaptation?

Brad Smith: It’s about limiting chip export controls to tier 2 countries.

Senator Kim: Altman, is that the right framing of the race?

Sam Altman: It’s about the whole stack. We want them to use US chips and also ChatGPT.

Senator Kim (asking a good question): Does building on our chips mean they’ll use our products and applications?

Sam Altman: Marginally. Ideally they’ll use our entire stack.

Senator Kim: How’re YOU doin? On tools and applications.

Sam Altman: Really well. We’re #1, not close.

Senator Kim: How are we doing on talent?

Dr. Lisa Su: We have the smartest engineers and a great talent base but we need more. We need international students, high skilled immigration.

Senator Eric Schmitt (R-Missouri): St. Louis, Missouri. What can we learn from Europe’s overregulation and failures?

Sam Altman: We’d love to invest in St. Louis. Our EU releases take longer, that’s not good.

Senator Schmitt: How does vertical AI stack integration work? I’ve heard China is 2-6 months behind on LLMs. Does our chip edge give us an advantage here?

Sam Altman: People everywhere will make great models and chips. What’s important is to get users relying on us for their hardest daily tasks. But also chips, algorithm, infrastructure, data. Compound effects.

Senator Schmitt: EU censorship is bad. NIST mentioned misinformation, oh no. How do we not do all that here?

Michael Intrator: It makes Europe uninvestable but it’s not our focus area.

Sam Altman: Putting people in jail for speech is bad and un-American. Freedom.

Senator Hickenlooper (D-Colorado): How does Microsoft evaluate Copilot’s accuracy and performance? What are the independent reviews?

Brad Smith: Don’t look at me, those are OpenAI models, that’s their job, then we have a joint Deployment Safety Board (DSB). We evaluate using tools and ensure it passes tests.

Senator Hickenlooper: “Good. I like that.” Altman, have you considered using independent standard and safety evaluations?

Sam Altman: We do that.

Senator Hickenlooper: Chips act. What is the next frontier in chip technology in terms of energy efficiency? How can we work together to improve direct-to-chip cooling for high-performance computing?

Lisa Su: Innovation. Chips act. AI is accelerating chip improvements.

Senator John Curtis (R-Utah): Look at me, I had a TRS-80 made by Radio Shack with upgraded memory. So what makes a state, say Utah, attractive to Stargate?

Sam Altman: Power cooling, fast permitting, electricians, construction workers, a state that will partner to work quickly, you in?

Senator Curtis: I’d like to be, but for energy how do we protect rate payers?

Sam Altman: More power. If you permit it, they will build.

Brad Smith: Microsoft helps build capacity too.

Senator Curtis: Yay small business. Can ChatGPT help small business?

Sam Altman: Can it! It can run your small business, write your ads, review your legal docs, answer customer emails, you name it.

Senator Duckworth (D-Illinois): Lab-private partnerships. Illinois. National lab investments. Trump and Musk are cutting research. That’s bad. DOGE bad. Innovation. Don’t cut innovation. Help me out here with the national labs.

Sam Altman: We partner with national labs, we even shared model weights with them. We fing love science. AI is great for science, we’ll do ten years of science in a year.

Brad Smith: All hail national labs. We work with them too. Don’t take them for granted.

Dr. Lisa Su: Amen. We also support public-private partnerships with national labs.

Michael Intrator: Science is key to AI.

Senator Duckworth: Anyone want to come to a lab in Illinois? Everyone? Cool.

Senator Cruz: Race. Vital race. Beyond jobs and economic growth. National security. Economic security. Need partners and allies. Race is about market share of our AI models and solutions in other countries. American values. Not CCP values. Digital trade rules. So my question: If we don’t adopt standards via NIST or otherwise, won’t others pick standards without us?

Brad Smith: Yes, sir. Europe won privacy law. Need to Do Something. Lightweight. Can’t go too soon. Need a good model standard. Must harmonize.

Senator Lisa Blunt Rochester (D-Delaware): Future of work is fear. But never mind that, tell me about your decision to transition to a PBC and attempt to have the PBC govern your nonprofit.

Sam Altman: Oceania has always been at war with Eastasia, we’ve talked to lawyers and regulators about the best way to do that and we’re excited to move forward.

Senator Rochester: I have a bill, Promoting Resilient Supply Chains. Dr. Su, what specific policies would help you overcome supply chain issues?

Dr. Lisa Su: Semiconductors are critical to The Race. We need to think end-to-end.

Senator Rochester: Mr. Smith, how do you see interdependence of the AI stack sections creating vulnerabilities or opportunities in the AI supply chain?

Brad Smith: More opportunities than vulnerabilities, it’s great to work together.

Senator Moran (R-Kansas): Altman, given Congress is bad at its job and can’t pass laws, how can consumers control their own data?

Altman: You will happily give us all your data so we can create custom responses for you, mwahahaha! But yeah, super important, we’ll keep it safe, pinky swear.

Senator Moran: Thanks. I hear AI matters for cyberattacks, how can Congress spend money on this?

Brad Smith: AI is both offense and defense and faster than any human. Ukraine. “We have [to] recognize it’s ultimately the people who defend not just countries, but companies and governments” so we need to automate that, stat. America. China. You should fund government agencies, especially NSA.

Senator Moran: Kansas. Rural internet connectivity issues. On-device or low bandwidth AI?

Altman: No problem, most of the work is in the cloud anyway. But also shibboleth, rural connectivity is important.

Senator Ben Ray Lujan (D-New Mexico): Thanks to Altman and Smith for your involvement with NIST AISI, and Su and Altman for partnerships with national labs. Explain the lab thing again?

Altman: We give them our models. o3 is good at helping scientists. Game changer.

Dr. Lisa Su: What he said.

Senator Lujan: What investments in those labs are crucial to you?

Altman: Just please don’t standardize me, bro. Absolutely no standards until we choose them for ourselves first.

Dr. Lisa Su: Blue sky research. That’s your comparative advantage.

Senator Lujan: My bipartisan bill is the Test AI Act, to build state capacity for tests and evals. Seems important. Trump is killing basic research, and that’s bad, we need to fix that. America. NSF, NIH, DOE, OSTP. But my question for you is about how many engineers are working on optimizations for reduced energy use, and what is your plan to reduce water use by data centers?

Brad Smith: Okay, sure, I’ll have someone track that number down. The data centers don’t actually use much water, that’s misinformation, but we also have more than 90 water replenishment projects including in New Mexico.

Michael Intrator: Yeah I also have no idea how many engineers are working on those optimizations, but I assure you we’re working on it, it turns out compute efficiency is kind of a big deal these days.

Senator Lujan: Wonderful. Okay, a good use of time would be to ask “yes or no: Is it important to ensure that in order for AI to reach its full prominence that people across the country should be able to connect to fast affordable internet?”

Dr. Su: Yes.

Senator Lujan: My work is done here.

Senator Cynthia Lummis (R-Wyoming): AI is escalating quickly. America. Europe overregulates. GDPR limits AI. But “China appears to be fast-tracking AI development.” Energy. Outcompete America. Must win. State frameworks burdensome. Only 6 months ahead of China. Give your anti-state-regulation speech?

Sam Altman: Happy to. Hard to comply. Slow us down. Need Federal only. Light touch.

Michael Intrator: Preach. Infrastructure could be trapped with bad regulations.

Senator Lummis: Permitting process. Talk about how it slows you down?

Michael Intrator: It’s excruciating. Oh, the details I’m happy to give you.

Senator Lummis: Wyoming. Natural gas. Biden doesn’t like it. Trump loves it. How terrible would it be if another president didn’t love natural gas like Trump?

Brad Smith: Terrible. Bipartisan natural gas love. Wyoming. Natural gas.

Senator Lummis (here let me Google that for you): Are you exploring small modular nuclear? In Wyoming?

Brad Smith: Yes. Wyoming.

Senator Lummis: Altman, it’s great you’re releasing an open… whoops time is up.

Senator Jacky Rosen (D-Nevada): AI is exciting. We must promote its growth. But it could also be bad. I’ll start with DeepSeek. I want to ban it on government devices and for contractors. How should we deal with PRC-developed models? Could they co-opt AI to promote an ideology? Collect sensitive data? What are YOU doing to combat this threat?

Brad Smith: DeepSeek is both a model and an app. We don’t let employees use the app, we didn’t even put it in our app store, for data reasons. But the model is open, we can analyze and change it. Security first.

Senator Rosen: What are you doing about AI and antisemitism? I heard AI is perpetuating stereotypes. Will you collaborate with civil society on a benchmark?

Sam Altman (probably wondering if Rosen knows that Altman is Jewish): We do collaborate with civil society. We’re not here to be antisemitic.

Senator Rosen (except less coherently than this, somehow): AI using Water. Energy. We’re all concerned. Also data center security. I passed a bill on that, yay me. Got new chip news? Faster and cooler is better. How can we make data more secure? Talk about interoperability.

Dr. Lisa Su (wisely): I think for all our sakes I’ll just say all of that is important.

Senator Dan Sullivan (R-Alaska): Agree with Cruz, this is about economic security and national security. Race. China. Everyone agree? Good. Huge issue. Very important. Are we in the lead?

Sam Altman: We are leading and will keep leading. America. Right thing to do. But we need your help.

Senator Sullivan: Our help, you say. How can we help?

Sam Altman: Infrastructure. Supply chain. Everything in America. Infrastructure. Supply chain. Stargate. Full immunity from copyright for model training. “Reasonable, fair, light touch regulatory framework.” Ability to deploy quickly. Let us import talent.

Senator Sullivan (who I assume has not seen the energy production charts): Great. “One of our comparative advantages over China in my view has to be energy.” Alaska. Build your data centers here. It is a land. A cold land. With water. And gas.

Sam Altman: “That’s very compelling.”

Senator Sullivan: Alaska is colder than Texas. It has gas. I’m frustrated you can’t see that Alaska is cold. And has gas. And yet Americans invest in China. AI, quantum. Benchmark Capital invested $75 million in Chinese AI. How dare they.

Brad Smith: China can build power plants better than we can. Our advantage is the world’s best people plus venture capital. Need to keep bringing best people here. America.

Senator Sullivan: “American venture capital funds Chinese AI. Is that in our national interest?”

Brad Smith: Good question, good that you’re focusing on that but stop focusing on that, just let us do high-skilled immigration.

Senator Markey (D-Massachusetts): Environmental impact of AI. AI weather forecasting. Climate change. Electricity. Water. Backup diesel generators can cause respiratory and cardiovascular issues and cancer. Need more info. Do you agree more research is needed, like in the AIEIA bill?

Brad Smith: “Yes. One study was just completed last December.” Go ahead and convene your stakeholders and measure things.

Sam Altman: Yes, the Federal government should study and measure that. Use AI.

Senator Markey: AI could cure cancer, but it could also cause climate change. Equally true. Trump wants to destroy incentives for wind, solar, battery. “That’s something you have to weigh in on. Make sure he does not do that.” Now, AI’s impact on disadvantaged communities. Can algorithms be biased and cause discrimination?

Brad Smith: Yes. We test to avoid that outcome.

Senator Markey (clip presumably available now on TikTok): Altman doesn’t want privacy regulations. But AI can cause harm. To marginalized communities. Bias. Discrimination. Mortgage discrimination. Bias in hiring people with disabilities. Not giving women scholarships. Real harms. Happening now. So I wrote the AI Civil Rights Act to ensure you all eliminate bias and discrimination, and we hold you accountable. Black. Brown. LGBTQ. Virtual world needs same protections as real world. No question.

Senator Gary Peters (D-Michigan): America. Must be world leader in AI. Workforce. Need talent here. Various laws for AI scholarships, service, training. University of Michigan, providing AI tools to students. “Mr. Altman, when we met last year in my office and had a great [conversation], you said that upwards of 70% of jobs could be [eliminated] by AI.” Prepare for social disruption. Everyone must benefit. How can your industry mitigate job loss and social disruption?

Sam Altman: AI will come at you fast. But otherwise it’s like other tech, jobs change, we can handle change. Give people tools early. That’s why we do iterative development. We can make change faster by making it as fast as possible. But don’t worry, it’s just faster change, it’s just a transition.

Senator Peters: We need AI to enhance work, not displace work. Like last 100 years.

Sam Altman (why yes, I do mean the effect on jobs, senator): We can’t imagine the jobs on the other side of this. Look how much programming has changed.

Senator Peters: Open tech is good. Open hardware. What are the benefits of open standards and system interoperability at the hardware level? What are the supply chain implications?

Dr. Lisa Su: Big advantages. Open is the best solution. Also good for security.

Senator Peters: You’re open, Nvidia is closed. Why does that make you better?

Dr. Lisa Su: Anyone can innovate so we don’t have to. We can put that together.

Senator John Fetterman (D-Pennsylvania, I hope he’s okay): Yay energy. National security. Fossil. Nuclear. Important. Concern about Pennsylvania rate payers. Prices might rise 20%. Concerning. Plan to reopen Three Mile Island. But I had to grab my hamster and evacuate in 1979. I’m pro nuclear. Microsoft. But rate payers.

Brad Smith: Critical point. We invest to bring on as much power as we use, so it doesn’t raise prices. We’ll pay for grid upgrades. We create construction jobs.

Senator Fetterman: I’m a real senator which means I get to meet Altman, squee. But what about the singularity? Address that?

Sam Altman: Nice hoodies. I am excited by the rate of progress, also cautious. We don’t understand where this, the biggest technological revolution ever, is going to go. I am curious and interested, yet things will change. Humans adapt, things become normal. We’ll do extraordinary things with these tools. But they’ll do things we can’t wrap our heads around. And do recursive self-improvement. Some call that singularity, some call it takeoff. New era in humanity. Exciting that we get to live through that and make it a wonderful thing. But we’ve got to approach it with humility and some caution, always twirling twirling towards freedom.

Senator Klobuchar (oh no not again): There’s a new pope. I work really hard on a bill. YouTube, RIAA, SAG, MPAA all support it. 75k songs with unauthorized deepfakes. Minnesota has a Grammy-nominated artist. Real concern. What about unauthorized use of people’s image? How do we protect them? If you don’t violate people’s rights someone else will.

Brad Smith: Genuflection. Concern. Deepfakes are bad. AI can identify fakes. We apply voluntary guardrails.

Senator Klobuchar: And will you both read my bill? I worked so hard.

Brad Smith: Absolutely.

Sam Altman: Happy to. Deepfakes. Big issue. Can’t stop them. If you allow open models you can’t stop them from doing it. Guardrails. Have people on lookout.

Senator Klobuchar: “It’s coming, but there’s got to be some ways to protect people. We should do everything privacy. And you’ve got to have some way to either enforce it, damages, whatever, there’s just not going to be any consequences.”

Sam Altman: Absolutely. Bad actors don’t always follow laws.

Senator Cruz: Sorry about tweeting out that AI picture of Senator Fetterman as the Pope of Greenland.

Senator Klobuchar: Whoa, parody is allowed.

Senator Cruz: Yeah, I got him good. Anyway, Altman, what’s the most surprising use of ChatGPT you’ve seen?

Sam Altman: Me figuring out how to take care of my newborn baby.

Senator Cruz: My teenage daughter sent me this long detailed emotional text, and it turns out ChatGPT wrote it.

Sam Altman (who lacks that option): “I have complicated feelings about that.”

Senator Cruz: “Well use the app and then tell me what your thoughts. Okay, Google just revealed that their search traffic on Safari declined for the first time ever. They didn’t send me a Christmas card. Will ChatGPT replace Google as the primary search engine? And if so when?”

Sam Altman: Probably not, Google’s good, Gemini will replace it instead.

Senator Cruz: How big a deal was DeepSeek? A major seismic shocking development? Not that big a deal? Somewhere in between? What’s coming next?

Sam Altman: “Not a huge deal.” They made a good open-source model and they made a highly downloaded consumer app. Other companies are going to put out good models. If it was going to beat ChatGPT that would be bad, but it’s not.

Dr. Lisa Su: Somewhere in between. Different ways of doing things. Innovation. America. We’re #1 in models. Being open was impactful. But America #1.

Michael Intrator (saying the EMH is false and it was only news because of lack of situational awareness): It raised the specter of China’s AI capability. People became aware. Financial market implications. “China is not theoretically in the race for AI dominance, but actually is very much a formidable competitor.” Starting gun for broader population and consciousness that we have to work. America.

Brad Smith: Somewhere in between. Wasn’t shocking. We knew. All 200 of their employees are 4 or fewer years out of college.

Ted Cruz: I’m glad the AI diffusion rule was rescinded. Bad rule. Too complex. Unfair to our trading partners. “That doesn’t necessarily mean there should be no restrictions and there are a variety of views on what the rules should be concerning AI diffusion.” Nvidia wants no rules. What should be the rule?

Sam Altman: I’m also glad that was rescinded. Need some constraints. We need to win diffusion not stop diffusion. America. Best data centers in America. Other data centers elsewhere. Need them to use ChatGPT not DeepSeek. Want them using US chips and US data center technology and Microsoft. Model creation needs to happen here.

Dr. Lisa Su: Happy they were rescinded. Need some restrictions. National security. Simplify. Need widespread adoption of our tech and ecosystem. Simple rules, protect our tech also give out our tech. Devil’s in details. Balance. Broader hat. Stability.

Michael Intrator: Copy all that. National security. Work with regulators. Rule didn’t let us participate enough.

Brad Smith: Eliminate tier 2 restrictions to ensure confidence and access to American tech, even most advanced GPUs. Trusted providers only. Security standards. Protection against diversion or certain use cases. Keep China out, guard against catastrophic misuse like CBRN risks.

Ted Cruz: Would you support a 10-year ban on state AI laws, if we call it a ‘learning period’ and we can revisit after the singularity?

Sam Altman: Don’t meaningfully regulate us, and call it whatever you need to.

Dr. Lisa Su: “Aligned federal approach with really thoughtful regulation would be very, very much appreciated.”

Michael Intrator: Agreed.

Brad Smith: Agreed.

Ted Cruz: We’re done, thanks everyone, and live from New York it’s Saturday night.

This exchange was pretty wild, the one time someone asked about this, and Altman dodged it.

For the time being, we are not hoping the Congress will help us not die, or help the economy and society deal with the transformations that are coming, or even that it can help with the mess that is existing law.

We are instead hoping the Congress will not actively make things worse.

Congress has fully abdicated its job of considering the downside risks of AI, including catastrophic and existential risks. It is fully immersed in jingoism and various false premises, and under the sway of certain corporations. It is attempting to actively prevent states from being able to do literally anything on AI, even to fix existing non-AI laws that have unintended implications no one wants.

We have no real expectation that Congress will be able to step up and pass the laws it is preventing the states from passing. In many cases, it can’t, because fixes are needed for existing state laws. At most, we can expect Congress to manage things like laws against deepfakes, but that doesn’t address the central issues.

On the sacrificial altars of ‘competitiveness’ and ‘innovation’ and ‘market share’ they are going to attempt to sacrifice even the export controls that keep the most important technology, models, and compute out of Chinese hands, accomplishing the opposite. They are vastly underinvesting in state capacity and in various forms of model safety and security, even for straight commercial and mundane utility purposes, out of spite and an ‘if you touch anything you kill innovation’ madness.

Meanwhile, they seem little interested in addressing that the main actions of the Federal government in 2025 have been to accomplish the opposite of these goals. Where we need talent, we drive it away. Where we need trade and allies, we alienate our allies and restrict trade.

There is talk of permitting reform and helping with energy. That’s great, as far as it goes. It would at least be one good thing. But I don’t see any substantive action.

They’re not only not coming to save us. They’re determined to get in the way.

That doesn’t mean give up. It especially doesn’t mean give up on the fight over the new diffusion rules, where things are very much up in the air, and these Senators, if they are good actors, have clearly been snookered.

These people can be reached and convinced. It is remarkably easy to influence them. Even under their paradigm of America, Innovation, Race, we are making severe mistakes, and it would go a long way to at least correct those. We are far from the Pareto frontier here, even ignoring the fact that we are all probably going to die from AI if we don’t do something about that.

Later, events will overtake this consensus, the same way the vibes have previously shifted, at a rate of at least once a year. We need to be ready for that, and also to stop them from doing something crazy when the next crisis happens and the public or others demand action.

A Live Look at the Senate AI Hearing Read More »

a-soviet-era-spacecraft-built-to-land-on-venus-is-falling-to-earth-instead

A Soviet-era spacecraft built to land on Venus is falling to Earth instead

Kosmos 482, a Soviet-era spacecraft shrouded in Cold War secrecy, will reenter the Earth’s atmosphere in the next few days after misfiring on a journey to Venus more than 50 years ago.

On average, a piece of space junk the size of Kosmos 482, with a mass of about a half-ton, falls into the atmosphere about once per week. What’s different this time is that Kosmos 482 was designed to land on Venus, with a titanium heat shield built to withstand scorching temperatures, and structures engineered to survive atmospheric pressures nearly 100 times higher than Earth’s.

So, there’s a good chance the spacecraft will survive the extreme forces it encounters during its plunge through the atmosphere. Typically, space debris breaks apart and burns up during reentry, with only a small fraction of material reaching the Earth’s surface. The European Space Agency, one of several institutions that track space debris, says Kosmos 482 is “highly likely” to reach Earth’s surface in one piece.

Fickle forecasts

The Kosmos 482 spacecraft launched from the Baikonur Cosmodrome, now part of Kazakhstan, aboard a Molniya rocket on March 31, 1972. A short time later, the rocket’s upper stage was supposed to propel the probe out of Earth orbit on an interplanetary journey toward Venus, where it would have become the third mission to land on the second planet from the Sun.

But the rocket failed, rendering it unable to escape the gravitational grip of Earth. The spacecraft separated into several pieces, and Russian engineers gave up on the mission. The main section of the Venus probe reentered the atmosphere in 1981, but for 53 years, the 3.3-foot-diameter (1-meter) segment of the spacecraft that was supposed to land on Venus remained in orbit around the Earth, its trajectory influenced only by the tenuous uppermost layers of the atmosphere.
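Given the figures in the article (a mass of about a half-ton and a descent module roughly 1 meter across), a back-of-envelope terminal velocity is easy to sketch. The drag coefficient and sea-level air density below are assumed values, so treat this as a rough order-of-magnitude estimate rather than a prediction of the actual impact speed:

```python
import math

# Rough terminal-velocity estimate for the Kosmos 482 descent module.
# Mass and diameter come from the article; drag coefficient and air
# density are assumptions, so this is only an order-of-magnitude sketch.
m = 500.0       # kg, "about a half-ton" per the article
diameter = 1.0  # m, per the article
rho = 1.225     # kg/m^3, sea-level air density (assumed)
cd = 1.0        # drag coefficient for a blunt, roughly spherical body (assumed)
g = 9.81        # m/s^2

area = math.pi * (diameter / 2) ** 2              # frontal area, ~0.785 m^2
v_terminal = math.sqrt(2 * m * g / (rho * cd * area))
print(f"~{v_terminal:.0f} m/s (~{v_terminal * 3.6:.0f} km/h)")  # ~101 m/s, ~363 km/h
```

Under these assumptions the module would come down at something on the order of 100 m/s, which is why an intact, heat-shielded lander is a different proposition from the usual shower of burned-up fragments.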

The mission was part of the Soviet Union’s Venera program, which achieved the first soft landing of a spacecraft on another planet with the Venera 7 mission in 1970, and followed up with another successful landing with Venera 8 in 1972. Because this mission failed, Soviet officials gave it a nondescript name, Kosmos 482, rather than the Venera 9 designation it would otherwise have received.

A Soviet-era spacecraft built to land on Venus is falling to Earth instead Read More »

ai-use-damages-professional-reputation,-study-suggests

AI use damages professional reputation, study suggests

Using AI can be a double-edged sword, according to new research from Duke University. While generative AI tools may boost productivity for some, they might also secretly damage your professional reputation.

On Thursday, the Proceedings of the National Academy of Sciences (PNAS) published a study showing that employees who use AI tools like ChatGPT, Claude, and Gemini at work face negative judgments about their competence and motivation from colleagues and managers.

“Our findings reveal a dilemma for people considering adopting AI tools: Although AI can enhance productivity, its use carries social costs,” write researchers Jessica A. Reif, Richard P. Larrick, and Jack B. Soll of Duke’s Fuqua School of Business.

The Duke team conducted four experiments with over 4,400 participants to examine both anticipated and actual evaluations of AI tool users. Their findings, presented in a paper titled “Evidence of a social evaluation penalty for using AI,” reveal a consistent pattern of bias against those who receive help from AI.

What made this penalty particularly concerning for the researchers was its consistency across demographics. They found that the social stigma against AI use wasn’t limited to specific groups.

Fig. 1 from the paper “Evidence of a social evaluation penalty for using AI”: effect sizes for differences in expected perceptions and disclosure to others (Study 1). Positive d values indicate higher values in the AI Tool condition; negative d values indicate lower values. N = 497; error bars represent 95% CI; correlations among variables range from |r| = 0.53 to 0.88. Credit: Reif et al.

“Testing a broad range of stimuli enabled us to examine whether the target’s age, gender, or occupation qualifies the effect of receiving help from AI on these evaluations,” the authors wrote in the paper. “We found that none of these target demographic attributes influences the effect of receiving AI help on perceptions of laziness, diligence, competence, independence, or self-assuredness. This suggests that the social stigmatization of AI use is not limited to its use among particular demographic groups. The result appears to be a general one.”

The hidden social cost of AI adoption

In the first experiment conducted by the team from Duke, participants imagined using either an AI tool or a dashboard creation tool at work. It revealed that those in the AI group expected to be judged as lazier, less competent, less diligent, and more replaceable than those using conventional technology. They also reported less willingness to disclose their AI use to colleagues and managers.

The second experiment confirmed these fears were justified. When evaluating descriptions of employees, participants consistently rated those receiving AI help as lazier, less competent, less diligent, less independent, and less self-assured than those receiving similar help from non-AI sources or no help at all.
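For context on the effect sizes reported in Fig. 1, Cohen’s d is just the difference between two group means divided by their pooled standard deviation. Here is a minimal sketch of that computation with made-up ratings; the numbers, sample sizes, and rating scale are invented for illustration and are not the study’s data:

```python
import numpy as np

# Hypothetical 7-point competence ratings for two between-subjects
# conditions (invented numbers, not the study's data).
ai_help = np.array([3, 4, 2, 4, 3, 3, 4, 2, 3, 4], dtype=float)
no_help = np.array([5, 4, 5, 6, 4, 5, 5, 6, 4, 5], dtype=float)

def cohens_d(a: np.ndarray, b: np.ndarray) -> float:
    """Cohen's d using the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

d = cohens_d(ai_help, no_help)
print(f"d = {d:.2f}")  # negative here: the AI-help condition is rated lower
```

Under the sign convention in the paper’s Fig. 1, a negative d on a competence measure means the AI Tool condition scored lower.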

AI use damages professional reputation, study suggests Read More »

fidji-simo-joins-openai-as-new-ceo-of-applications

Fidji Simo joins OpenAI as new CEO of Applications

In the message, Altman described Simo as bringing “a rare blend of leadership, product and operational expertise” and expressed that her addition to the team makes him “even more optimistic about our future as we continue advancing toward becoming the superintelligence company.”

Simo becomes the newest high-profile female executive at OpenAI following the departure of Chief Technology Officer Mira Murati in September. Murati, who had been with the company since 2018 and helped launch ChatGPT, left alongside two other senior leaders and founded Thinking Machines Lab in February.

OpenAI’s evolving structure

The leadership addition comes as OpenAI continues to evolve beyond its origins as a research lab. In his announcement, Altman described how the company now operates in three distinct areas: as a research lab focused on artificial general intelligence (AGI), as a “global product company serving hundreds of millions of users,” and as an “infrastructure company” building systems that advance research and deliver AI tools “at unprecedented scale.”

Altman mentioned that as CEO of OpenAI, he will “continue to directly oversee success across all pillars,” including Research, Compute, and Applications, while staying “closely involved with key company decisions.”

The announcement follows recent news that OpenAI abandoned its original plan to cede control of its nonprofit branch to a for-profit entity. The company began as a nonprofit research lab in 2015 before creating a for-profit subsidiary in 2019, maintaining its original mission “to ensure artificial general intelligence benefits everyone.”

Fidji Simo joins OpenAI as new CEO of Applications Read More »