Author name: Rejus Almole


A Live Look at the Senate AI Hearing

So is the plan then to have AI developers not vet their systems before rolling them out? Is OpenAI not planning to vet their systems before rolling them out? This ‘sensible regulation that does not slow us down’ seems to translate to no regulation at all, as per usual.

And that was indeed the theme of the Senate hearing, both from the Senators and from the witnesses. Congress is also setting up to attempt full preemption of all state and local AI-related laws, without any attempt to replace them, and realistically without any intention of later passing any laws whatsoever to replace the ones that are barred.

Most of you should probably skip the main section that paraphrases the Senate hearing. I do think it is enlightening, and at times highly amusing at least to me, but it is long and one must prioritize; I only managed to cut it down by ~80%. You can also pick and choose from the pull quotes at the end.

I will now give the shortened version of this hearing, gently paraphrased, in which we get bipartisan and corporate voices saying, among other things, some rather alarming things:

Needless to say, I do not agree with most of that.

It is rather grim out there. Almost everyone seems determined to not only go down the Missile Gap road into a pure Race, but also to use that as a reason to dismiss any other considerations out of hand, and indeed not to even acknowledge that there are any other worries out there to dismiss, beyond ‘the effect on jobs.’ This includes both the Senators and also Altman and company.

The discussion was almost entirely whether we should move to lock out all AI regulations and whether we should impose any standards of any kind on AI at all, except narrowly on deepfakes and such. There was no talk about even trying to make the government aware of what was happening. SB 1047 and other attempts at sensible rules were routinely completely mischaracterized.

There was no sign that anyone was treating this as anything other than a (very important) Normal Technology and Mere Tool.

If you think most of Congress has any interest in not dying? Think again.

That could of course rapidly change once more. I expect it to. But not today.

The most glaring pattern was the final form of Altman’s pivot to jingoism and opposing all meaningful regulation, while acting as if AI poses no major downside risks, not even technological unemployment let alone catastrophic or existential risks or loss of human control over the future.

Peter then offers a summary of testimony, including how much Cruz is driving the conversation towards ‘lifting any finger anywhere dooms us to become the EU and to lose to China’ style rhetoric.

You also get to very quickly see which Senators are serious about policy and understanding the world, versus those here to create partisan soundbites or push talking points and hear themselves speak, versus those who want to talk about or seek special pork for their state. Which ones are curious and inquisitive versus which ones are hostile and mean. Which ones think they are clever and funny when they aren’t.

It is amazing how consistently and quickly the bad ones show themselves. Every time.

And to be clear, that has very little to do with who is in which party.

Indeed, the House Energy and Commerce Committee is explicitly trying for outright preemption, without replacement, trying to slip it into their budget proposal (edit: I originally thought this was the Senate, not the House):

That is, of course, completely insane, in addition to presumably being completely illegal to put into a budget. Presumably the Byrd rule kills it for now.

It would be one thing to pass this supposed ‘light touch’ AI regulation, that presented a new legal regime to handle AI models at the federal level, and do so while preempting state action.

It is quite another to have your offer be nothing. Literally nothing. As in, we are Congress, we cannot pass laws, but we will prevent you from enforcing any laws, or fixing any laws, for ten years.

They did at least include a carveout for laws that actively facilitate AI, including power generation for AI, so that states aren’t prevented from addressing the patchwork of existing laws that might kneecap AI and definitely will prevent adaptation and diffusion in various ways.

Even with that, the mind boggles to think of even the mundane implications. You couldn’t pass laws against anything, including CSAM or deepfakes. That’s in addition to the inability of the states to do the kinds of things we need them to do that this law is explicitly trying to prevent, such as SB 1047’s requirements that if you want to train a frontier AI model, you have to tell us you are doing that and share your safety and security protocol, and take reasonable care against catastrophic (and existential) risks.

Again, this is with zero sign of any federal rule at all.

In the following, if quote marks are used they literally said it. If not, it’s a dramatization. I was maximizing truthiness, not aiming for a literal translation; read this as if it’s an extended SNL sketch, written largely for my own use and amusement.

Senator Ted Cruz (R-Texas): AI innovation. No rules. Have to beat China. Europe overregulates. Yay free internet. Biden had bad vibes and was woke. Not selling advanced AI chips to places that would let our competitors have them would cripple American tech companies. Trump. We need sandboxes and deregulation.

Senator Maria Cantwell (D-Washington): Pacific Northwest. University of Washington got CHIPS Act money. Microsoft contracted a fusion company in Washington. Expand electricity production. “Export controls are not a trade strategy.” Broad distribution of US-made AI chips. “American open AI systems” must be dominant across the globe.

Senator Cruz: Thank you. Our witnesses are OpenAI CEO Sam Altman, Lisa Su the CEO of AMD who is good because Texas, Michael Intrator the CEO of CoreWeave, and Brad Smith the Vice Chair and President of Microsoft.

Sam Altman (CEO OpenAI): Humility. Honor to be here. Scientists are now 2-3 times more productive and other neat stuff. America. Infrastructure. Texas. American innovation. Internet. America. Not Europe. America is Magic.

Dr. Lisa Su (CEO AMD): Honor. America. We make good chips. They do things. AI is transformative technology. Race. We could lose. Must race faster. Infrastructure. Open ecosystems. Marketplace of ideas. Domestic supply chain. Talent. Public-private partnership. Boo export controls, we need everyone to use our chips because otherwise they won’t use our technology and they might use other technology instead, this is the market we must dominate. AMD’s market. I had a Commodore 64 and an Apple II. They were innovative and American.

Senator Cruz: I also had an Apple II, but sadly I’m a politician now.

Michael Intrator (CEO CoreWeave): I had a VIC-20. Honor. American infrastructure. Look at us grow. Global demand for AI infrastructure. American competitive edge. Productivity. Prosperity. Infrastructure. Need more compute. AI infrastructure will set the global economic agenda and shape human outcomes. China. AI race.

We need: Strategic investment stability. Stable, predictable policy frameworks, secure supply chains, regulatory environments that foster innovation. Energy infrastructure development. Permitting and regulatory reform. Market access. Trade agreements. Calibrated export controls. Public-private partnership. Innovation. Government and industry must work together.

Brad Smith (President of Microsoft): Chart with AI tech stack. All in this together. Infrastructure. Platform. Applications. Microsoft.

We need: Innovation. Infrastructure. Support from universities and government. Basic research. Faster adaptation, diffusion. Productivity. Economic growth. Investing in skilling and education. Right approach to export controls. Trust with the world. We need to not build machines that are better than people, only machines that make people better. Machines that give us jobs, and make our jobs better. We can do that, somehow.

Tech. People. Ambition. Education. Opportunity. The future.

Senator Tim Sheehy (R-Montana, who actually said this verbatim): “Thank you for your testimony. Certainly makes me sleep better at night. Worried about Terminator and Skynet coming after us, knowing that you guys are behind the wheel, but in five words or less, start with you Mr. Smith. What are the five words you need to see from our government to make sure we win this AI race?”

Brad Smith: “More electricians. That’s two words. Broader AI education.”

Senator Sheehy: “And no using ChatGPT as a friend.”

Michael Intrator: “We need to focus on streamlining the ability to build large things.”

Dr. Lisa Su: “Policies to help us run faster in the innovation race.”

Sam Altman: “Allow supply chain-sensitive policy.”

Senator Sheehy: So we race. America wins races. Government support and staying out of your way are how America wins races. How do we incentivize companies so America wins the race? A non-state actor could win.

Sam Altman (a non-state actor currently winning the race): Stargate. Texas. We need electricity, permitting, supply chain. Investment. Domestic production. Talent recruitment. Legal clarity and clear rules. “Of course there will be guardrails” but please assure us you won’t impose any.

Dr. Lisa Su: Compute. AMD, I mean America, must build compute. Need domestic manufacturing. Need simple export rules.

Senator Sheehy: Are companies weighing doing AI business in America versus China?

Dr. Lisa Su: American tech is the best, but if it’s not available they’ll buy elsewhere.

Senator Sheehy: Infrastructure, electricians, universities, regulatory framework, can do. Innovation. Talent. Run faster. Those harder. Can’t manufacture talent, can’t make you run faster. Can only give you tools.

Senator Cantwell: Do we need NIST to set standards?

Sam Altman: “I don’t think we need it. It can be helpful.”

Michael Intrator: “Yes, yes.”

Senator Cantwell: Do we want NIST standards that let us move faster?

Brad Smith: “What I would say is this, first of all, NIST is where standards go to be adopted, but it’s not necessarily where they first go to be created.” “We will need industry standards, we will need American adoption of standards, and you are right. We will need US efforts to really ensure that the world buys into these standards.”

Michael Intrator: Standards need to be standardized.

Senator Cantwell: Standards let you move fast. Like HTTP or HTML. So, on exports, Malaysia. If we sell them chips can we ensure they don’t sell those chips to China?

Michael Intrator (‘well no, but…’): If we don’t sell them chips, someone else will.

Senator Cantwell: We wouldn’t want Huawei to get a better chip than us and put in a backdoor. Again. Don’t buy chips with backdoors. Also, did you notice this new tariff policy that also targets our allies is completely insane? We could use some allies.

Michael Intrator: Yes. Everyone demands AI.

Dr. Lisa Su: We need an export strategy. Our allies need access. Broad AI ecosystem.

Senator Bernie Moreno (R-Ohio): You all need moar power. TSMC making chips in America good. Will those semiconductor fabs use a lot of energy?

Dr. Lisa Su: Yes.

Senator Moreno: We need to make the highest-performing chips in America, right?

Dr. Lisa Su: Right.

Senator Moreno: Excuse me while I rant that 90% of new power generation lately has been wind and solar when we could use natural gas. And That’s Terrible.

Brad Smith: Broad based energy solutions.

Senator Moreno: Hey I’m ranting here! Renewables suck. But anyway… “Mr. Altman, thank you for first of all, creating your platform on an open basis and agreeing to stick to the principles of nonprofit status. I think that’s very important.” So, Altman, how do we protect our children from AI? From having their friends be AI bots?

Sam Altman: Iterative deployment. Learn from mistakes. Treat adults like adults. Restrict child access. Happy to work with you on that. We must beware AI and social relationships.

Senator Moreno: Thanks. Mr. Intrator, talk about stablecoins?

Michael Intrator: Um, okay. Potential. Synergy.

Senator Klobuchar (D-Minnesota): AI is exciting. Minnesota. Renewables good, actually. “I think David Brooks put it the best when he said, I found it incredibly hard to write about AI because it is literally unknowable whether this technology is leading us to heaven or hell; we want it to lead us to heaven. And I think we do that by making sure we have some rules of the road in place so it doesn’t get stymied or set backwards because of scams or because of use by people who want to do us harm.” Mr. Altman, do you agree that a risk-based approach to regulation is the best way to place necessary guardrails for AI without stifling innovation?

Sam Altman: “I do. That makes a lot of sense to me.”

Senator Klobuchar: “Okay, thanks. And did you figure that out in your attic?”

Sam Altman: “No, that was a more recent discovery.”

Senator Klobuchar: Do you agree that consumers need to be more educated?

Brad Smith: Yes.

Senator Klobuchar: Pivot. Altman, what evals do you use for hallucinations?

Sam Altman: Hallucinations are getting much better. Users are smart, they can handle it.

Senator Klobuchar: Uh huh. Pivot. What about my bill about sexploitation and deepfakes? Can we build models that can detect deepfakes?

Brad Smith: Working on it.

Senator Klobuchar: Pivot. What about compensating content creators and journalists?

Brad Smith: Rural newspapers good. Mumble, collective negotiations, collaborative action, Congress, courts. Balance. They can get paid but we want data access.

Senator Cruz: I am very intelligent. Who’s winning, America or China, how close is it and how can we win?

Sam Altman: American models are best, but not by a huge amount of time. America. Innovation. Entrepreneurship. America. “We just need to keep doing the things that have worked for so long and not make a silly mistake.”

Dr. Lisa Su: I only care about chips. America is ahead in chips, but even without the best chips you can get a lot done. They’re catching up. Spirit of innovation. Innovate.

Michael Intrator: On physical infrastructure it’s not going so great. Need power and speed.

Brad Smith: America in the lead but it is close. What matters is market share and adaptation. For America, that is. We need to win trust of other countries and win the world’s markets first.

Senator Cruz: Wouldn’t it be awful if we did something like SB 1047? That’s just like something the EU would do, it’s exactly the same thing. Totally awful, am I right?

Sam Altman: Totally, sure, pivot. We need algorithms and data and compute and the best products. Can’t stop, won’t stop. Need infrastructure, need to build chips in this country. It would be terrible if the government tried to set standards. Let us set our own standards.

Lisa Su: What he said.

Brad Smith (presumably referring to when SB 1047 said you would have had to tell us the model exists and is being trained and what your safety plan was): Yep, what he said. Especially important is no pre-approval requirements.

Michael Intrator: A patchwork of regulatory overlays would cause friction.

Senator Brian Schatz (D-Hawaii): You do know no one is proposing these EU-style laws, right? And the alternative proposal seems to be nothing? Does nothing work for you?

Sam Altman: “No, I think some policy is good. I think it is easy for it to go too far and as I’ve learned more about how the world works, I’m more afraid that it could go too far and have really bad consequences. But people want to use products that are generally safe. When you get on an airplane, you kind of don’t think about doing the safety testing yourself. You’re like, well, maybe this is a bad time to use the airplane example, but you kind of want to just trust that you can get on it.”

Senator Brian Schatz: Great example. We need to know what we’re racing for. American values. That said, should AI content be labeled as AI?

Brad Smith: Yes, working on it.

Senator Brian Schatz: “Data really is intellectual property. It is human innovation, human creativity.” You need to pay for it. Isn’t the tension that you want to pay as little as possible?

Brad Smith: No? And maybe we shouldn’t have to?

Senator Schatz: How can an AI agent deliver services and reduce pain points while interacting with government?

Sam Altman: The AI on your phone does the thing for you and answers your questions.

Brad Smith: That means no standing in line at the DMV. Abu Dhabi does this already.

Senator Ted Budd (R-North Carolina): Race. China. Energy. Permitting problems. They are command and control so they’re good at energy. I’m working on permit by rule. What are everyone’s experiences contracting power?

Michael Intrator: Yes, power. Need power to win race. Working on it. Regulation problems.

Brad Smith: We build a lot of power. We do a lot of permitting. Federal wetlands permit is our biggest issue, that takes 18-24 months, state and local is usually 6-9.

Senator Ted Budd: I’m worried people might build on Chinese open models like DeepSeek and the CCP might promote them. How important is American leadership in open and closed models? How can we help?

Sam Altman: “I think it’s quite important in both.” You can help with energy and infrastructure.

Senator Andy Kim (D-New Jersey): What is this race? You said it’s about adaptation?

Brad Smith: It’s about limiting chip export controls to tier 2 countries.

Senator Kim: Altman, is that the right framing of the race?

Sam Altman: It’s about the whole stack. We want them to use US chips and also ChatGPT.

Senator Kim (asking a good question): Does building on our chips mean they’ll use our products and applications?

Sam Altman: Marginally. Ideally they’ll use our entire stack.

Senator Kim: How’re YOU doin? On tools and applications.

Sam Altman: Really well. We’re #1, not close.

Senator Kim: How are we doing on talent?

Dr. Lisa Su: We have the smartest engineers and a great talent base but we need more. We need international students, high skilled immigration.

Senator Eric Schmitt (R-Missouri): St. Louis, Missouri. What can we learn from Europe’s overregulation and failures?

Sam Altman: We’d love to invest in St. Louis. Our EU releases take longer, that’s not good.

Senator Schmitt: How does vertical AI stack integration work? I’ve heard China is 2-6 months behind on LLMs. Does our chip edge give us an advantage here?

Sam Altman: People everywhere will make great models and chips. What’s important is to get users relying on us for their hardest daily tasks. But also chips, algorithm, infrastructure, data. Compound effects.

Senator Schmitt: EU censorship is bad. NIST mentioned misinformation, oh no. How do we not do all that here?

Michael Intrator: It makes Europe uninvestable but it’s not our focus area.

Sam Altman: Putting people in jail for speech is bad and un-American. Freedom.

Senator Hickenlooper (D-Colorado): How does Microsoft evaluate Copilot’s accuracy and performance? What are the independent reviews?

Brad Smith: Don’t look at me, those are OpenAI models, that’s their job, then we have a joint Deployment Safety Board (DSB). We evaluate using tools and ensure it passes tests.

Senator Hickenlooper: “Good. I like that.” Altman, have you considered using independent standard and safety evaluations?

Sam Altman: We do that.

Senator Hickenlooper: CHIPS Act. What is the next frontier in chip technology in terms of energy efficiency? How can we work together to improve direct-to-chip cooling for high-performance computing?

Lisa Su: Innovation. CHIPS Act. AI is accelerating chip improvements.

Senator John Curtis (R-Utah): Look at me, I had a TRS-80 made by Radio Shack with upgraded memory. So what makes a state, say Utah, attractive to Stargate?

Sam Altman: Power cooling, fast permitting, electricians, construction workers, a state that will partner to work quickly, you in?

Senator Curtis: I’d like to be, but for energy how do we protect rate payers?

Sam Altman: More power. If you permit it, they will build.

Brad Smith: Microsoft helps build capacity too.

Senator Curtis: Yay small business. Can ChatGPT help small business?

Sam Altman: Can it! It can run your small business, write your ads, review your legal docs, answer customer emails, you name it.

Senator Duckworth (D-Illinois): Lab-private partnerships. Illinois. National lab investments. Trump and Musk are cutting research. That’s bad. Doge bad. Innovation. Don’t cut innovation. Help me out here with the national labs.

Sam Altman: We partner with national labs, we even shared model weights with them. We fing love science. AI is great for science, we’ll do ten years of science in a year.

Brad Smith: All hail national labs. We work with them too. Don’t take them for granted.

Dr. Lisa Su: Amen. We also support public-private partnerships with national labs.

Michael Intrator: Science is key to AI.

Senator Duckworth: Anyone want to come to a lab in Illinois? Everyone? Cool.

Senator Cruz: Race. Vital race. Beyond jobs and economic growth. National security. Economic security. Need partners and allies. Race is about market share of our AI models and solutions in other countries. American values. Not CCP values. Digital trade rules. So my question: If we don’t adopt standards via NIST or otherwise, won’t others pick standards without us?

Brad Smith: Yes, sir. Europe won privacy law. Need to Do Something. Lightweight. Can’t go too soon. Need a good model standard. Must harmonize.

Senator Lisa Blunt Rochester (D-Delaware): Future of work is fear. But never mind that, tell me about your decision to transition to a PBC and attempt to have the PBC govern your nonprofit.

Sam Altman: Oceania has always been at war with Eastasia, we’ve talked to lawyers and regulators about the best way to do that and we’re excited to move forward.

Senator Rochester: I have a bill, Promoting Resilient Supply Chains. Dr. Su, what specific policies would help you overcome supply chain issues?

Dr. Lisa Su: Semiconductors are critical to The Race. We need to think end-to-end.

Senator Rochester: Mr. Smith, how do you see interdependence of the AI stack sections creating vulnerabilities or opportunities in the AI supply chain?

Brad Smith: More opportunities than vulnerabilities, it’s great to work together.

Senator Moran (R-Kansas): Altman, given Congress is bad at its job and can’t pass laws, how can consumers control their own data?

Altman: You will happily give us all your data so we can create custom responses for you, mwahahaha! But yeah, super important, we’ll keep it safe, pinky swear.

Senator Moran: Thanks. I hear AI matters for cyberattacks, how can Congress spend money on this?

Brad Smith: AI is both offense and defense and faster than any human. Ukraine. “We have to recognize it’s ultimately the people who defend not just countries, but companies and governments” so we need to automate that, stat. America. China. You should fund government agencies, especially NSA.

Senator Moran: Kansas. Rural internet connectivity issues. On-device or low bandwidth AI?

Altman: No problem, most of the work is in the cloud anyway. But also shibboleth, rural connectivity is important.

Senator Ben Ray Lujan (D-New Mexico): Thanks to Altman and Smith for your involvement with NIST AISI, and Su and Altman for partnerships with national labs. Explain the lab thing again?

Altman: We give them our models. o3 is good at helping scientists. Game changer.

Dr. Lisa Su: What he said.

Senator Lujan: What investments in those labs are crucial to you?

Altman: Just please don’t standardize me, bro. Absolutely no standards until we choose them for ourselves first.

Dr. Lisa Su: Blue sky research. That’s your comparative advantage.

Senator Lujan: My bipartisan bill is the Test AI Act, to build state capacity for tests and evals. Seems important. Trump is killing basic research, and that’s bad, we need to fix that. America. NSF, NIH, DOE, OSTP. But my question for you is about how many engineers are working on optimizations for reduced energy use, and what is your plan to reduce water use by data centers?

Brad Smith: Okay, sure, I’ll have someone track that number down. The data centers don’t actually use much water, that’s misinformation, but we also have more than 90 water replenishment projects including in New Mexico.

Michael Intrator: Yeah I also have no idea how many engineers are working on those optimizations, but I assure you we’re working on it, it turns out compute efficiency is kind of a big deal these days.

Senator Lujan: Wonderful. Okay, a good use of time would be to ask “yes or no: Is it important to ensure that in order for AI to reach its full prominence that people across the country should be able to connect to fast affordable internet?”

Dr. Su: Yes.

Senator Lujan: My work is done here.

Senator Cynthia Lummis (R-Wyoming): AI is escalating quickly. America. Europe overregulates. GDPR limits AI. But “China appears to be fast-tracking AI development.” Energy. Outcompete America. Must win. State frameworks burdensome. Only 6 months ahead of China. Give your anti-state-regulation speech?

Sam Altman: Happy to. Hard to comply. Slow us down. Need Federal only. Light touch.

Michael Intrator: Preach. Infrastructure could be trapped with bad regulations.

Senator Lummis: Permitting process. Talk about how it slows you down?

Michael Intrator: It’s excruciating. Oh, the details I’m happy to give you.

Senator Lummis: Wyoming. Natural gas. Biden doesn’t like it. Trump loves it. How terrible would it be if another president didn’t love natural gas like Trump?

Brad Smith: Terrible. Bipartisan natural gas love. Wyoming. Natural gas.

Senator Lummis (here let me Google that for you): Are you exploring small modular nuclear? In Wyoming?

Brad Smith: Yes. Wyoming.

Senator Lummis: Altman, it’s great you’re releasing an open… whoops time is up.

Senator Jacky Rosen (D-Nevada): AI is exciting. We must promote its growth. But it could also be bad. I’ll start with DeepSeek. I want to ban it on government devices and for contractors. How should we deal with PRC-developed models? Could they co-opt AI to promote an ideology? Collect sensitive data? What are YOU doing to combat this threat?

Brad Smith: DeepSeek is both a model and an app. We don’t let employees use the app, we didn’t even put it in our app store, for data reasons. But the model is open, we can analyze and change it. Security first.

Senator Rosen: What are you doing about AI and antisemitism? I heard AI is perpetuating stereotypes. Will you collaborate with civil society on a benchmark?

Sam Altman (probably wondering if Rosen knows that Altman is Jewish): We do collaborate with civil society. We’re not here to be antisemitic.

Senator Rosen (except less coherently than this, somehow): AI using Water. Energy. We’re all concerned. Also data center security. I passed a bill on that, yay me. Got new chip news? Faster and cooler is better. How can we make data more secure? Talk about interoperability.

Dr. Lisa Su (wisely): I think for all our sakes I’ll just say all of that is important.

Senator Dan Sullivan (R-Alaska): Agree with Cruz, national economic and national security. Race. China. Everyone agree? Good. Huge issue. Very important. Are we in the lead?

Sam Altman: We are leading and will keep leading. America. Right thing to do. But we need your help.

Senator Sullivan: Our help, you say. How can we help?

Sam Altman: Infrastructure. Supply chain. Everything in America. Infrastructure. Supply chain. Stargate. Full immunity from copyright for model training. “Reasonable, light touch regulatory framework.” Ability to deploy quickly. Let us import talent.

Senator Sullivan (who I assume has not seen the energy production charts): Great. “One of our comparative advantages over China in my view has to be energy.” Alaska. Build your data centers here. It is a land. A cold land. With water. And gas.

Sam Altman: “That’s very compelling.”

Senator Sullivan: Alaska is colder than Texas. It has gas. I’m frustrated you can’t see that Alaska is cold. And has gas. And yet Americans invest in China. AI, quantum. Benchmark Capital invested $75 million in Chinese AI. How dare they.

Brad Smith: China can build power plants better than we can. Our advantage is the world’s best people plus venture capital. Need to keep bringing best people here. America.

Senator Sullivan: “American venture capital funds Chinese AI. Is that in our national interest?”

Brad Smith: Good question, good that you’re focusing on that but stop focusing on that, just let us do high-skilled immigration.

Senator Markey (D-Massachusetts): Environmental impact of AI. AI weather forecasting. Climate change. Electricity. Water. Backup diesel generators can cause respiratory and cardiovascular issues and cancer. Need more info. Do you agree more research is needed, like in the AIEIA bill?

Brad Smith: “Yes. One study was just completed last December.” Go ahead and convene your stakeholders and measure things.

Sam Altman: Yes, the Federal government should study and measure that. Use AI.

Senator Markey: AI could cure cancer, but it could also cause climate change. Equally true. Trump wants to destroy incentives for wind, solar, battery. “That’s something you have to weigh in on. Make sure he does not do that.” Now, AI’s impact on disadvantaged communities. Can algorithms be biased and cause discrimination?

Brad Smith: Yes. We test to avoid that outcome.

Senator Markey (clip presumably available now on TikTok): Altman doesn’t want privacy regulations. But AI can cause harm. To marginalized communities. Bias. Discrimination. Mortgage discrimination. Bias in hiring people with disabilities. Not giving women scholarships. Real harms. Happening now. So I wrote the AI Civil Rights Act to ensure you all eliminate bias and discrimination, and we hold you accountable. Black. Brown. LGBTQ. Virtual world needs same protections as real world. No question.

Senator Gary Peters (D-Michigan): America. Must be world leader in AI. Workforce. Need talent here. Various laws for AI scholarships, service, training. University of Michigan, providing AI tools to students. “Mr. Altman, when we met last year in my office and had a great conversation, you said that upwards of 70% of jobs could be eliminated by AI.” Prepare for social disruption. Everyone must benefit. How can your industry mitigate job loss and social disruption?

Sam Altman: AI will come at you fast. But otherwise it’s like other tech, jobs change, we can handle change. Give people tools early. That’s why we do iterative development. We can make change faster by making it as fast as possible. But don’t worry, it’s just faster change, it’s just a transition.

Senator Peters: We need AI to enhance work, not displace work. Like last 100 years.

Sam Altman (why yes, I do mean the effect on jobs, senator): We can’t imagine the jobs on the other side of this. Look how much programming has changed.

Senator Peters: Open tech is good. Open hardware. What are the benefits of open standards and system interoperability at the hardware level? What are the supply chain implications?

Dr. Lisa Su: Big advantages. Open is the best solution. Also good for security.

Senator Peters: You’re open, Nvidia is closed. Why does that make you better?

Dr. Lisa Su: Anyone can innovate so we don’t have to. We can put that together.

Senator John Fetterman (D-Pennsylvania, I hope he’s okay): Yay energy. National security. Fossil. Nuclear. Important. Concern about Pennsylvania rate payers. Prices might rise 20%. Concerning. Plan to reopen Three Mile Island. But I had to grab my hamster and evacuate in 1979. I’m pro nuclear. Microsoft. But rate payers.

Brad Smith: Critical point. We invest to bring on as much power as we use, so it doesn’t raise prices. We’ll pay for grid upgrades. We create construction jobs.

Senator Fetterman: I’m a real senator which means I get to meet Altman, squee. But what about the singularity? Address that?

Sam Altman: Nice hoodies. I am excited by the rate of progress, also cautious. We don’t understand where this, the biggest technological revolution ever, is going to go. I am curious and interested, yet things will change. Humans adapt, things become normal. We’ll do extraordinary things with these tools. But they’ll do things we can’t wrap our heads around. And do recursive self-improvement. Some call that singularity, some call it takeoff. New era in humanity. Exciting that we get to live through that and make it a wonderful thing. But we’ve got to approach it with humility and some caution, always twirling twirling towards freedom.

Senator Klobuchar (oh no not again): There’s a new pope. I work really hard on a bill. YouTube, RIAA, SAG, MPAA all support it. 75k songs with unauthorized deepfakes. Minnesota has a Grammy-nominated artist. Real concern. What about unauthorized use of people’s image? How do we protect them? If you don’t violate people’s rights someone else will.

Brad Smith: Genuflection. Concern. Deepfakes are bad. AI can identify fakes. We apply voluntary guardrails.

Senator Klobuchar: And will you both read my bill? I worked so hard.

Brad Smith: Absolutely.

Sam Altman: Happy to. Deepfakes. Big issue. Can’t stop them. If you allow open models you can’t stop them from doing it. Guardrails. Have people on lookout.

Senator Klobuchar: “It’s coming, but there’s got to be some ways to protect people. We should do everything privacy. And you’ve got to have some way to either enforce it, damages, whatever, there’s just not going to be any consequences.”

Sam Altman: Absolutely. Bad actors don’t always follow laws.

Senator Cruz: Sorry about tweeting out that AI picture of Senator Fetterman as the Pope of Greenland.

Senator Klobuchar: Whoa, parody is allowed.

Senator Cruz: Yeah, I got him good. Anyway, Altman, what’s the most surprising use of ChatGPT things you’ve seen?

Sam Altman: Me figuring out how to take care of my newborn baby.

Senator Cruz: My teenage daughter sent me this long detailed emotional text, and it turns out ChatGPT wrote it.

Sam Altman (who lacks that option): “I have complicated feelings about that.”

Senator Cruz: “Well use the app and then tell me what your thoughts. Okay, Google just revealed that their search traffic on Safari declined for the first time ever. They didn’t send me a Christmas card. Will chat GPT replace Google as the primary search engine? And if so when?”

Sam Altman: Probably not, Google’s good, Gemini will replace it instead.

Senator Cruz: How big a deal was DeepSeek? A major seismic shocking development? Not that big a deal? Somewhere in between? What’s coming next?

Sam Altman: “Not a huge deal.” They made a good open-source model and they made a highly downloaded consumer app. Other companies are going to put out good models. If it was going to beat ChatGPT that would be bad, but it’s not.

Dr. Lisa Su: Somewhere in between. Different ways of doing things. Innovation. America. We’re #1 in models. Being open was impactful. But America #1.

Michael Intrator (saying the EMH is false and it was only news because of lack of situational awareness): It raised the specter of China’s AI capability. People became aware. Financial market implications. “China is not theoretically in the race for AI dominance, but actually is very much a formidable competitor.” Starting gun for broader population and consciousness that we have to work. America.

Brad Smith: Somewhere in between. Wasn’t shocking. We knew. All their 200 employees are four or fewer years out of college.

Ted Cruz: I’m glad the AI diffusion rule was rescinded. Bad rule. Too complex. Unfair to our trading partners. “That doesn’t necessarily mean there should be no restrictions and there are a variety of views on what the rules should be concerning AI diffusion.” Nvidia wants no rules. What should be the rule?

Sam Altman: I’m also glad that was rescinded. Need some constraints. We need to win diffusion not stop diffusion. America. Best data centers in America. Other data centers elsewhere. Need them to use ChatGPT not DeepSeek. Want them using US chips and US data center technology and Microsoft. Model creation needs to happen here.

Dr. Lisa Su: Happy they were rescinded. Need some restrictions. National security. Simplify. Need widespread adoption of our tech and ecosystem. Simple rules, protect our tech also give out our tech. Devil’s in details. Balance. Broader hat. Stability.

Michael Intrator: Copy all that. National security. Work with regulators. Rule didn’t let us participate enough.

Brad Smith: Eliminate tier 2 restrictions to ensure confidence and access to American tech, even most advanced GPUs. Trusted providers only. Security standards. Protection against diversion or certain use cases. Keep China out, guard against catastrophic misuse like CBRN risks.

Ted Cruz: Would you support a 10-year ban on state AI laws, if we call it a ‘learning period’ and we can revisit after the singularity?

Sam Altman: Don’t meaningfully regulate us, and call it whatever you need to.

Dr. Lisa Su: “Aligned federal approach with really thoughtful regulation would be very, very much appreciated.”

Michael Intrator: Agreed.

Brad Smith: Agreed.

Ted Cruz: We’re done, thanks everyone, and live from New York it’s Saturday night.

This exchange was pretty wild: the one time someone asked about this, Altman dodged it.

For the time being, we are not hoping the Congress will help us not die, or help the economy and society deal with the transformations that are coming, or even that it can help with the mess that is existing law.

We are instead hoping the Congress will not actively make things worse.

Congress has fully abdicated its responsibility to consider the downside risks of AI, including catastrophic and existential risks. It is fully immersed in jingoism and various false premises, and under the sway of certain corporations. It is attempting to actively prevent states from doing literally anything on AI, even fixing existing non-AI laws that have unintended implications no one wants.

We have no real expectation that Congress will be able to step up and pass the laws it is preventing the states from passing. In many cases, it can’t, because fixes are needed for existing state laws. At most, we can expect Congress to manage things like laws against deepfakes, but that doesn’t address the central issues.

On the sacrificial altars of ‘competitiveness’ and ‘innovation’ and ‘market share’ they are going to attempt to sacrifice even the export controls that keep the most important technology, models and compute out of Chinese hands, accomplishing the opposite. They are vastly underinvesting in state capacity and in various forms of model safety and security, even for straight commercial and mundane utility purposes, out of spite and a ‘if you touch anything you kill innovation’ madness.

Meanwhile, they seem little interested in addressing that the main actions of the Federal government in 2025 have been to accomplish the opposite of these goals. Where we need talent, we drive it away. Where we need trade and allies, we alienate our allies and restrict trade.

There is talk of permitting reform and helping with energy. That’s great, as far as it goes. It would at least be one good thing. But I don’t see any substantive action.

They’re not only not coming to save us. They’re determined to get in the way.

That doesn’t mean give up. It especially doesn’t mean give up on the fight over the new diffusion rules, where things are very much up in the air, and these Senators, if they are good actors, have clearly been snookered.

These people can be reached and convinced. It is remarkably easy to influence them. Even under their paradigm of America, Innovation, Race, we are making severe mistakes, and it would go a long way to at least correct those. We are far from the Pareto frontier here, even ignoring the fact that we are all probably going to die from AI if we don’t do something about that.

Later, events will overtake this consensus, the same way the vibes have previously shifted at the rate of at least once a year. We need to be ready for that, and also to stop them from doing something crazy when the next crisis happens and the public or others demand action.

A Live Look at the Senate AI Hearing Read More »

a-soviet-era-spacecraft-built-to-land-on-venus-is-falling-to-earth-instead

A Soviet-era spacecraft built to land on Venus is falling to Earth instead

Kosmos 482, a Soviet-era spacecraft shrouded in Cold War secrecy, will reenter the Earth’s atmosphere in the next few days after misfiring on a journey to Venus more than 50 years ago.

On average, a piece of space junk the size of Kosmos 482, with a mass of about a half-ton, falls into the atmosphere about once per week. What’s different this time is that Kosmos 482 was designed to land on Venus, with a titanium heat shield built to withstand scorching temperatures, and structures engineered to survive atmospheric pressures nearly 100 times higher than Earth’s.

So, there’s a good chance the spacecraft will survive the extreme forces it encounters during its plunge through the atmosphere. Typically, space debris breaks apart and burns up during reentry, with only a small fraction of material reaching the Earth’s surface. The European Space Agency, one of several institutions that track space debris, says Kosmos 482 is “highly likely” to reach Earth’s surface in one piece.

Fickle forecasts

The Kosmos 482 spacecraft launched from the Baikonur Cosmodrome, now part of Kazakhstan, aboard a Molniya rocket on March 31, 1972. A short time later, the rocket’s upper stage was supposed to propel the probe out of Earth orbit on an interplanetary journey toward Venus, where it would have become the third mission to land on the second planet from the Sun.

But the rocket failed, rendering it unable to escape the gravitational grip of Earth. The spacecraft separated into several pieces, and Russian engineers gave up on the mission. The main section of the Venus probe reentered the atmosphere in 1981, but for 53 years, the 3.3-foot-diameter (1-meter) segment of the spacecraft that was supposed to land on Venus remained in orbit around the Earth, its trajectory influenced only by the tenuous uppermost layers of the atmosphere.

The mission was part of the Soviet Union’s Venera program, which achieved the first soft landing of a spacecraft on another planet with the Venera 7 mission in 1970 and followed up with another successful landing with Venera 8 in 1972. Because it failed, Soviet officials gave the mission, which would otherwise have been named Venera 9, a nondescript designation: Kosmos 482.

A Soviet-era spacecraft built to land on Venus is falling to Earth instead Read More »

ai-use-damages-professional-reputation,-study-suggests

AI use damages professional reputation, study suggests

Using AI can be a double-edged sword, according to new research from Duke University. While generative AI tools may boost productivity for some, they might also secretly damage your professional reputation.

On Thursday, the Proceedings of the National Academy of Sciences (PNAS) published a study showing that employees who use AI tools like ChatGPT, Claude, and Gemini at work face negative judgments about their competence and motivation from colleagues and managers.

“Our findings reveal a dilemma for people considering adopting AI tools: Although AI can enhance productivity, its use carries social costs,” write researchers Jessica A. Reif, Richard P. Larrick, and Jack B. Soll of Duke’s Fuqua School of Business.

The Duke team conducted four experiments with over 4,400 participants to examine both anticipated and actual evaluations of AI tool users. Their findings, presented in a paper titled “Evidence of a social evaluation penalty for using AI,” reveal a consistent pattern of bias against those who receive help from AI.

What made this penalty particularly concerning for the researchers was its consistency across demographics. They found that the social stigma against AI use wasn’t limited to specific groups.

Fig. 1. Effect sizes for differences in expected perceptions and disclosure to others (Study 1). Note: Positive d values indicate higher values in the AI Tool condition, while negative d values indicate lower values in the AI Tool condition. N = 497. Error bars represent 95% CI. Correlations among variables range from |r| = 0.53 to 0.88.

Fig. 1 from the paper “Evidence of a social evaluation penalty for using AI.” Credit: Reif et al.

“Testing a broad range of stimuli enabled us to examine whether the target’s age, gender, or occupation qualifies the effect of receiving help from AI on these evaluations,” the authors wrote in the paper. “We found that none of these target demographic attributes influences the effect of receiving AI help on perceptions of laziness, diligence, competence, independence, or self-assuredness. This suggests that the social stigmatization of AI use is not limited to its use among particular demographic groups. The result appears to be a general one.”

The hidden social cost of AI adoption

In the first experiment conducted by the team from Duke, participants imagined using either an AI tool or a dashboard creation tool at work. It revealed that those in the AI group expected to be judged as lazier, less competent, less diligent, and more replaceable than those using conventional technology. They also reported less willingness to disclose their AI use to colleagues and managers.

The second experiment confirmed these fears were justified. When evaluating descriptions of employees, participants consistently rated those receiving AI help as lazier, less competent, less diligent, less independent, and less self-assured than those receiving similar help from non-AI sources or no help at all.

AI use damages professional reputation, study suggests Read More »

fidji-simo-joins-openai-as-new-ceo-of-applications

Fidji Simo joins OpenAI as new CEO of Applications

In the message, Altman described Simo as bringing “a rare blend of leadership, product and operational expertise” and expressed that her addition to the team makes him “even more optimistic about our future as we continue advancing toward becoming the superintelligence company.”

Simo becomes the newest high-profile female executive at OpenAI following the departure of Chief Technology Officer Mira Murati in September. Murati, who had been with the company since 2018 and helped launch ChatGPT, left alongside two other senior leaders and founded Thinking Machines Lab in February.

OpenAI’s evolving structure

The leadership addition comes as OpenAI continues to evolve beyond its origins as a research lab. In his announcement, Altman described how the company now operates in three distinct areas: as a research lab focused on artificial general intelligence (AGI), as a “global product company serving hundreds of millions of users,” and as an “infrastructure company” building systems that advance research and deliver AI tools “at unprecedented scale.”

Altman mentioned that as CEO of OpenAI, he will “continue to directly oversee success across all pillars,” including Research, Compute, and Applications, while staying “closely involved with key company decisions.”

The announcement follows recent news that OpenAI abandoned its original plan to cede control of its nonprofit branch to a for-profit entity. The company began as a nonprofit research lab in 2015 before creating a for-profit subsidiary in 2019, maintaining its original mission “to ensure artificial general intelligence benefits everyone.”

Fidji Simo joins OpenAI as new CEO of Applications Read More »

doge-software-engineer’s-computer-infected-by-info-stealing-malware

DOGE software engineer’s computer infected by info-stealing malware

Login credentials belonging to an employee at both the Cybersecurity and Infrastructure Security Agency and the Department of Government Efficiency have appeared in multiple public leaks from info-stealer malware, a strong indication that devices belonging to him have been hacked in recent years.

Kyle Schutt is a 30-something-year-old software engineer who, according to Dropsite News, gained access in February to a “core financial management system” belonging to the Federal Emergency Management Agency. As an employee of DOGE, Schutt accessed FEMA’s proprietary software for managing both disaster and non-disaster funding grants. Under his role at CISA, he likely is privy to sensitive information regarding the security of civilian federal government networks and critical infrastructure throughout the US.

A steady stream of published credentials

According to journalist Micah Lee, user names and passwords for logging in to various accounts belonging to Schutt have been published at least four times since 2023 in logs from stealer malware. Stealer malware typically infects devices through trojanized apps, phishing, or software exploits. Besides pilfering login credentials, stealers can also log all keystrokes and capture or record screen output. The data is then sent to the attacker and, occasionally after that, can make its way into public credential dumps.

“I have no way of knowing exactly when Schutt’s computer was hacked, or how many times,” Lee wrote. “I don’t know nearly enough about the origins of these stealer log datasets. He might have gotten hacked years ago and the stealer log datasets were just published recently. But he also might have gotten hacked within the last few months.”

Lee went on to say that credentials belonging to a Gmail account known to belong to Schutt have appeared in 51 data breaches and five pastes tracked by breach notification service Have I Been Pwned. Among the breaches that supplied the credentials are one from 2013 that pilfered password data for 3 million Adobe account holders, a 2016 breach that stole credentials for 164 million LinkedIn users, a 2020 breach affecting 167 million users of Gravatar, and a breach last year of the conservative news site The Post Millennial.

DOGE software engineer’s computer infected by info-stealing malware Read More »

nasa-scrambles-to-cut-iss-activity-due-to-budget-issues

NASA scrambles to cut ISS activity due to budget issues

Canceling the tracker layer upgrade to the spectrometer would also not be catastrophic. The addition of a silicon tracker layer on top of the detector would increase the amount of data from the $2 billion physics experiment over the next five years by a factor of three. However, the experiment has been in operation since 2011, so it has had ample time to collect information about dark matter and other fundamental physics in the universe.

Cutting crews down to size

The real eye-catching proposal in NASA’s options is reducing the crew size from four to three.

Typically, Crew Dragon missions carry two NASA astronauts, one Roscosmos cosmonaut, and an international partner astronaut. Therefore, although it appears that NASA would only be cutting its crew size by 25 percent, in reality, it would be cutting the number of NASA astronauts on Crew Dragon missions by 50 percent. Overall, this would lead to an approximately one-third decline in science conducted by the space station. (This is because there are usually three NASA astronauts on station: two from Dragon and one on each Soyuz flight.)
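The percentages above can be checked with back-of-the-envelope arithmetic; this sketch assumes the typical staffing described in the article (four Dragon seats, two of them NASA’s, plus one NASA astronaut on Soyuz):

```python
# Back-of-the-envelope check of the crew-cut arithmetic described above.
dragon_seats = 4                              # typical Crew Dragon complement
dragon_nasa_before, dragon_nasa_after = 2, 1  # NASA seats on Dragon, before/after the cut
soyuz_nasa = 1                                # one NASA astronaut typically flies on Soyuz

print(1 - 3 / dragon_seats)                        # 0.25 -> a 25% cut to total Dragon seats
print(1 - dragon_nasa_after / dragon_nasa_before)  # 0.5  -> a 50% cut to NASA's Dragon seats

nasa_before = dragon_nasa_before + soyuz_nasa      # usually three NASA astronauts aboard
nasa_after = dragon_nasa_after + soyuz_nasa
print(1 - nasa_after / nasa_before)                # ~0.33 -> roughly one-third less crew time
```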

It’s difficult to see how this would result in enormous cost savings. Yes, NASA would need to send marginally fewer cargo missions to keep fewer astronauts supplied. And there would be some reduction in training costs. But it seems kind of nuts to spend decades and more than $100 billion building an orbital laboratory, putting all of this effort into developing commercial vehicles to supply the station and enlarge its crew, establishing a rigorous training program to ensure maximum science is done, and then to say, ‘Well, actually, we don’t want to use it.’

NASA has not publicly announced the astronauts who will fly on Crew-12 next year, but according to sources, it has already assigned veteran astronaut Jessica Meir and newcomer Jack Hathaway, a former US Navy fighter pilot who joined NASA’s astronaut corps in 2021. If these changes go through, presumably one of these two would be removed from the mission.

NASA scrambles to cut ISS activity due to budget issues Read More »

matter-update-may-finally-take-the-tedium-out-of-setting-up-your-smart-home

Matter update may finally take the tedium out of setting up your smart home

There is no product category that better embodies the XKCD take on standards than smart home. With an ocean of connectivity options and incompatible standards, taming this mess has been challenging, but Matter could finally have a shot at making things a little less frustrating. The latest version of the standard has launched, offering multiple ways to streamline the usually aggravating setup process.

The first public release of Matter was in late 2022, but compatible systems didn’t get support until the following year. Now, there are Matter-certified devices like smart bulbs and sensors that will talk to Apple, Google, Amazon, and other smart home platforms. Matter 1.4.1 includes support for multi-device QR codes, NFC connection, and integrated terms and conditions—all of these have the potential to eliminate some very real smart home headaches.

It’s common for retailers to offer multi-packs of devices like light bulbs or smart plugs. That can save you some money, but setting up all those devices is tedious. With Matter 1.4.1, it might be much easier thanks to multi-device QR codes. Manufacturers can now include a QR code in the package that will pair all the included devices with your smart home system when scanned.

QR codes will still appear on individual devices for pairing, but it might not always be a QR code going forward. The new Matter also gives manufacturers the option of embedding NFC tags inside smart home gadgets. So all you have to do to add them to your system is tap your phone. That will be nice if you need to pair a device after it has been installed somewhere that obscures the visible code.

Matter update may finally take the tedium out of setting up your smart home Read More »

the-company-with-the-world’s-largest-aircraft-now-has-a-hypersonic-rocket-plane

The company with the world’s largest aircraft now has a hypersonic rocket plane

“Demonstrating the reuse of fully recoverable hypersonic test vehicles is an important milestone for MACH-TB,” said George Rumford, director of the Test Resource Management Center, in a statement. “Lessons learned from this test campaign will help us reduce vehicle turnaround time from months down to weeks.”

Krevor said Talon-A carried multiple experiments on each mission but did not offer any details about the nature of the payloads, citing proprietary reasons and customer agreements.

“We cannot disclose the nature of those payloads, other than to say typical materials, instrumentation, sensors, etc.,” he said. “The customers were thrilled with their ability to recover the payloads shortly after landing.”

Stratolaunch completed the first powered flight of a Talon-A vehicle last year, when the rocket plane launched over the Pacific Ocean and fired its liquid-fueled Hadley engine—produced by Ursa Major—for about 200 seconds. The Talon-A1 vehicle accelerated to just shy of hypersonic speed, then fell into the sea as planned and was not recovered.

That set the stage for Talon-A2’s first flight in December.

Military officials previously stated they set up the MACH-TB program to enable more frequent flight testing of hypersonic weapon technologies, including communication, navigation, guidance, sensors, and seekers. Stratolaunch aims for monthly flights of the Talon-A rocket plane by the end of the year and eventually wants to ramp up to weekly flights.

“These flights are setting the stage now to increase the cadence of hypersonic flight testing in this country,” Krevor said. “The ability to have a fully reusable hypersonic flight architecture enables a very high cadence of flight along with a lot of responsiveness. The DoD can call Stratolaunch if there’s a priority program, and we can have a hypersonic flight next week, assuming the readiness of all the other technologies and payloads.”

Pentagon officials in 2022 set a goal of growing US capacity for hypersonic testing from 12 to 50 flight tests per year. Krevor believes Stratolaunch will play a key part in making that happen.

Catching up

So, why is hypersonic flight testing important?

The Pentagon seeks to close what it views as a technological gap with China, which US officials acknowledge has become the world’s leader in hypersonic missile development. Hypersonic weapons are more difficult than conventional missiles for aerial defense systems to detect, track, and destroy. Unlike ballistic missiles, hypersonic weapons ride at the top of the atmosphere, enhancing their maneuverability and ability to evade interceptors.

Hypersonic flight is an unforgiving environment. Temperatures outside the Talon-A vehicle can reach up to 2,000° Fahrenheit (1,100° Celsius) as the plane plows through air molecules, Krevor said. He declined to disclose the duration, top speed, and maximum altitude of the December and March test flights but said the rocket plane performed a series of “high-G” maneuvers on the journey from its drop location to Vandenberg.

The company with the world’s largest aircraft now has a hypersonic rocket plane Read More »

trump-admin-picks-covid-critic-to-be-top-fda-vaccine-regulator

Trump admin picks COVID critic to be top FDA vaccine regulator

Oncologist Vinay Prasad, a divisive critic of COVID-19 responses, will be the next top vaccine regulator at the Food and Drug Administration, agency Commissioner Martin Makary announced on social media Tuesday.

Prasad will head the FDA’s Center for Biologics Evaluation and Research (CBER), which is in charge of approving and regulating vaccines and other biologics products, such as gene therapies and blood products.

“Dr. Prasad brings the kind of scientific rigor, independence, and transparency we need at CBER—a significant step forward,” Makary wrote on social media.

Prasad, a professor in the department of epidemiology and biostatistics at the University of California, San Francisco, is perhaps best known for his combative social media postings and criticism of the mainstream medical community. He gained notoriety amid the COVID-19 pandemic for assailing public health responses, such as masking and vaccine mandates.

In an October 2021 newsletter, titled “How Democracy Ends,” Prasad compared the country’s pandemic responses to the rise of Adolf Hitler’s Third Reich. The post led New York University bioethicist Arthur Caplan to rebuke Prasad, writing in The Cancer Letter that the comparison is “ludicrous, dangerous, and offensive,” before adding “imbecilic.”

Prasad has also criticized the FDA for approving COVID-19 booster vaccines. Last year, he accused his predecessor as the head of the CBER, Peter Marks, of being “either incompetent or corrupt” for allowing the approvals.

“Absurd”

More recently, Prasad has heaped praise on new FDA Commissioner Makary, while continuing to criticize Marks. In early March, Prasad called Makary “smart, thoughtful, and disciplined” and “exactly what we need at the FDA.” Later in the month, he continued to take shots at Marks, writing: “You could replace Peter Marks with a bobblehead doll that just stamps approval and you would have the same outcome at FDA with lower administrative fees. Maybe something DOGE should consider.”

Trump admin picks COVID critic to be top FDA vaccine regulator Read More »

apps-like-kindle-are-already-taking-advantage-of-court-mandated-ios-app-store-changes

Apps like Kindle are already taking advantage of court-mandated iOS App Store changes

As of an update released today, the iOS app still doesn’t allow books to be purchased directly in the app, but you can search Amazon’s virtual bookstore inside the app and tap a new “Get Book” button that automatically pops you over to Amazon.com in your phone or tablet’s default browser. This is not as convenient for users as allowing them to purchase digital goods or services directly in the app, but it does make things a lot more friendly for users of apps whose developers don’t want to pay Apple a cut.

For the first time ever, the Kindle app on iOS can automatically direct book buyers to Amazon’s site to complete a purchase. Credit: Andrew Cunningham

Apple’s position on its App Store commissions has generally been, to write a high-level summary, that these third-party app developers benefit from the size and reach of Apple’s platform, the work Apple does to maintain the App Store and to make apps discoverable, and Apple’s payment processing services, among other benefits.

Even when it complied with a court order to allow third-party developers to use alternate payment processors in their apps, Apple still insisted on a 12 to 27 percent cut (rather than the usual 15 to 30 percent) to cover these other less-tangible benefits of offering apps and services on Apple’s devices. (Apple’s method of complying with that ruling, including onerous filing requirements for developers who used third-party payment services, was one of many things Judge Gonzalez Rogers criticized Apple for in last week’s ruling.)

A new headache for Apple

Apple is appealing last week’s ruling, and it may well succeed in the end, giving the company the ability to roll back these rule changes and once again force developers to either use Apple’s in-app payments or force users to buy goods and services externally. But even if this change is only temporary, it still creates new potential PR headaches for Apple.

Apps like Kindle are already taking advantage of court-mandated iOS App Store changes Read More »

how-long-will-switch-2’s-game-key-cards-keep-working?

How long will Switch 2’s Game Key Cards keep working?

You could even argue that Nintendo is more likely to offer longer-term support for Game Key Card downloads since backward compatibility seems to be a priority for the Switch hardware line. If we presume that future Switch systems will remain backward compatible, we can probably also presume that Nintendo will want players on new hardware to still have access to their old Game Key Card purchases (or to be able to use Game Key Cards purchased on the secondhand market).

A pile of physical games that will never require a download server to work.

Credit: Aurich Lawson


There are no guarantees in life, of course, and nothing lasts forever. Nintendo will one day go out of business, at which point it seems unlikely that a Game Key Card will be able to download much of anything. Short of that, Nintendo could suffer a financial malady that makes download servers for legacy systems seem like an indulgence, or it could come under new management that doesn’t see value in supporting decades-old purchases made for ancient consoles.

As of this writing, though, Nintendo has kept its Wii game download servers active for 6,743 days and counting. If the Switch 2 Game Key Card servers last as long, that means those cards will still be fully functional through at least October 2043.
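The date math checks out; here is a quick sketch, assuming the Wii’s North American launch date of November 19, 2006, and a Switch 2 launch date of June 5, 2025:

```python
from datetime import date, timedelta

DAYS_ACTIVE = 6_743  # Wii download-server uptime cited above

# Assumed dates for illustration.
wii_launch = date(2006, 11, 19)      # Wii's North American launch
switch2_launch = date(2025, 6, 5)    # Switch 2's launch

# Counting forward from the Wii launch lands in early May 2025,
# consistent with "as of this writing."
print(wii_launch + timedelta(days=DAYS_ACTIVE))

# A matching lifespan for Switch 2 Game Key Card servers runs into late 2043,
# past the "at least October 2043" mark.
print(switch2_launch + timedelta(days=DAYS_ACTIVE))
```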

I don’t know what I will be doing with my life in 2043, but it’s comforting and extremely plausible to imagine that the “eighty dollar rental” I made of a Switch 2 Game Key Card back in 2025 will still work as intended.

Or, to put it another way, I think it’s highly likely that I will become “e-waste” long before any Switch 2 Game Key Cards.

How long will Switch 2’s Game Key Cards keep working? Read More »


Zuckerberg’s Dystopian AI Vision

You think it’s bad now? Oh, you have no idea. In his talks with Ben Thompson and Dwarkesh Patel, Zuckerberg lays out his vision for our AI future.

I thank him for his candor. I’m still kind of boggled that he said all of it out loud.

We will start with the situation now. How are things going on Facebook in the AI era?

Oh, right.

Sakib: Again, it happened again. Opened Facebook and I saw this. I looked at the comments and they’re just unsuspecting boomers congratulating the fake AI gen couple😂

Deepfates: You think those are real boomers in the comments?

This continues to be 100% Zuckerberg’s fault, and 100% an intentional decision.

The algorithm knows full well what kind of post this is. It still floods people with them, especially if you click even once. If they wanted to stop it, they easily could.

There’s also the rather insane and deeply embarrassing AI bot accounts they have tried out on Facebook and Instagram.

Compared to his vision of the future? You ain’t seen nothing yet.

Ben Thompson interviewed Mark Zuckerberg, centering on business models.

It was like if you took a left wing caricature of why Zuckerberg is evil, combined it with a left wing caricature about why AI is evil, and then fused them into their final form. Except it’s coming directly from Zuckerberg, as explicit text, on purpose.

It’s understandable that many leave such interviews and related stories saying this:

Ewan Morrison: Big tech atomises you, isolates you, makes you lonely and depressed – then it rents you an AI friend, an AI therapist, an AI lover.

Big tech are parasites who pretend they are here to help you.

When asked what he wants to use AI for, Zuckerberg’s primary answer is advertising, in particular an ‘ultimate black box’ where you ask for a business outcome and the AI does what it takes to make that outcome happen. I leave all the ‘do not want’ and ‘misalignment maximalist goal out of what you are literally calling a black box, film at 11 if you need to watch it again’ and ‘general dystopian nightmare’ details as an exercise for the reader. He anticipates that advertising will then grow from the current 1%-2% of GDP to something more, and Thompson is ‘there with’ him, ‘everyone should embrace the black box.’

His number two use is ‘growing engagement on the customer surfaces and recommendations.’ As in, advertising by another name, and using AI in predatory fashion to maximize user engagement and drive addictive behavior.

In case you were wondering if it stops being this dystopian after that? Oh, hell no.

Mark Zuckerberg: You can think about our products as there have been two major epochs so far.

The first was you had your friends and you basically shared with them and you got content from them and now, we’re in an epoch where we’ve basically layered over this whole zone of creator content.

So the stuff from your friends and followers and all the people that you follow hasn’t gone away, but we added on this whole other corpus around all this content that creators have that we are recommending.

Well, the third epoch is I think that there’s going to be all this AI-generated content…

So I think that these feed type services, like these channels where people are getting their content, are going to become more of what people spend their time on, and the better that AI can both help create and recommend the content, I think that that’s going to be a huge thing. So that’s kind of the second category.

The third big AI revenue opportunity is going to be business messaging.

And the way that I think that’s going to happen, we see the early glimpses of this because business messaging is actually already a huge thing in countries like Thailand and Vietnam.

So what will unlock that for the rest of the world? It’s like, it’s AI making it so that you can have a low cost of labor version of that everywhere else.

Also he thinks everyone should have an AI therapist, and that people want more friends so AI can fill in for the missing humans there. Yay.

PoliMath: I don’t really have words for how much I hate this

But I also don’t have a solution for how to combat the genuine isolation and loneliness that people suffer from

AI friends are, imo, just a drug that lessens the immediate pain but will probably cause far greater suffering

Well, I guess the fourth one is the normal ‘everyone use AI now,’ at least?

And then, the fourth is all the more novel, just AI first thing, so like Meta AI.

He also blames Llama-4’s terrible reception on user error in setup, and says they now offer an API so people have a baseline implementation to point to, and says essentially ‘well of course we built a version of Llama-4 specifically to score well on Arena, that only shows off how easy it is to steer it, it’s good actually.’ Neither of them, of course, even bothers to mention any downside risks or costs of open models.

The killer app of Meta AI is that it will know all about all your activity on Facebook and Instagram and use it for (or against) you, and also let you essentially ‘talk to the algorithm,’ which I do admit is kind of interesting, but I notice Zuckerberg didn’t mention an option to tell it to alter the algorithm, and Thompson didn’t ask.

There is one area where I like where his head is at:

I think one of the things that I’m really focused on is how can you make it so AI can help you be a better friend to your friends, and there’s a lot of stuff about the people who I care about that I don’t remember, I could be more thoughtful.

There are all these issues where it’s like, “I don’t make plans until the last minute”, and then it’s like, “I don’t know who’s around and I don’t want to bug people”, or whatever. An AI that has good context about what’s going on with the people you care about, is going to be able to help you out with this.

That is… not how I would implement this kind of feature, and indeed the more details you read the more Zuckerberg seems determined to do even the right thing in the most dystopian way possible, but as long as it’s fully opt-in (if not, wowie moment of the week) then at least we’re trying at all.

Also interviewing Mark Zuckerberg is Dwarkesh Patel. There was good content here, Zuckerberg in many ways continues to be remarkably candid. But it wasn’t as dense or hard hitting as many of Patel’s other interviews.

One key difference between the interviews is that when Zuckerberg lays out his dystopian vision, you get the sense that Thompson is for it, whereas Patel is trying to express that maybe we should be concerned. Another is that Patel notices that there might be more important things going on, whereas to Thompson nothing could be more important than enhancing ad markets.

  1. When asked what changed since Llama 3, Zuckerberg leads off with the ‘personalization loop.’

  2. Zuckerberg still claims Llama 4 Scout and Maverick are top notch. Okie dokie.

  3. He doubles down on ‘open source will become most used this year’ and that this year has been Great News For Open Models. Okie dokie.

  4. His heart’s clearly not in claiming it’s a good model, sir. His heart is in it being a good model for Meta’s particular commercial purposes and ‘product value’ as per people’s ‘revealed preferences.’ Those are the modes he talked about with Thompson.

  5. He’s very explicit about this. OpenAI and Anthropic are going for AGI and a world of abundance, with Anthropic focused on coding and OpenAI towards reasoning. Meta wants fast, cheap, personalized, easy to interact with all day, and (if you add what he said to Thompson) to optimize feeds and recommendations for engagement, and to sell ads. It’s all for their own purposes.

  6. He says Meta is specifically creating AI tools to write their own code for internal use, but I don’t understand what makes that different from a general AI coder? Or why they think their version is going to be better than using Claude or Gemini? This feels like some combination of paranoia and bluff.

  7. Thus, Meta seems to at this point be using the open model approach as a recruiting or marketing tactic? I don’t know what else it’s actually doing for them.

  8. As Dwarkesh notes, Zuckerberg is basically buying the case for superintelligence and the intelligence explosion, then ignoring it to form an ordinary business plan, and of course to continue to have their safety plan be ‘lol we’re Meta’ and release all their weights.

  9. I notice I am confused why their tests need hundreds of thousands or millions of people to be statistically significant? Impacts must be very small, and the statistical techniques they’re using don’t seem great. But also, it is telling that his first thought of experiments to run with AI are being run on his users.

  10. In general, Zuckerberg seems to be thinking he’s running an ordinary dystopian tech company doing ordinary dystopian things (except he thinks they’re not dystopian, which is why he talks about them so plainly and clearly) while other companies do other ordinary things, and has put all the intelligence explosion related high weirdness totally out of his mind or minimized it to specific use cases, even though he intellectually knows that isn’t right.

  11. He, CEO of Meta, says people use what is valuable to them and people are smart and know what is valuable in their lives, and when you think otherwise you’re usually wrong. Cue the laugh track.

  12. First named use case is talking through difficult conversations they need to have. I do think that’s actually a good use case candidate, but also easy to pervert.

  13. (29:40) The friend quote: The average American only has three friends ‘but has demand for meaningfully more, something like 15… They want more connection than they have.’ His core prediction is that AI connection will be a complement to human connection rather than a substitute.

    1. I tentatively agree with Zuckerberg, if and only if the AIs in question are engineered (by the developer, user or both, depending on context) to be complements rather than substitutes. You can make it one way.

    2. However, when I see Meta’s plans, it seems they are steering it the other way.

  14. Zuckerberg is making a fully general defense of adversarial capitalism and attention predation – if people are choosing to do something, then later we will see why it turned out to be valuable for them and why it adds value to their lives, including virtual therapists and virtual girlfriends.

    1. But this proves (or implies) far too much as a general argument. It suggests full anarchism and zero consumer protections. It applies to heroin or joining cults or being in abusive relationships or marching off to war and so on. We all know plenty of examples of self-destructive behaviors. Yes, the great classical liberal insight is that mostly you are better off if you let people do what they want, and getting in the way usually backfires.

    2. If you add AI into the mix, especially AI that moves beyond a ‘mere tool,’ and you consider highly persuasive AIs and algorithms, asserting ‘whatever the people choose to do must be benefiting them’ is Obvious Nonsense.

    3. I do think virtual therapists have a lot of promise as value adds, if done well. And also great danger to do harm, if done poorly or maliciously.

  15. Dwarkesh points out the danger of technology reward hacking us, and again Zuckerberg just triples down on ‘people know what they want.’ People wouldn’t let there be things constantly competing for their attention, so the future won’t be like that, he says. Is this a joke?

  16. I do get that the right way to design AI-AR glasses is as great glasses that also serve as other things when you need them and don’t flood your vision, and that the wise consumer will pay extra to ensure it works that way. But where is this trust in consumers coming from? Has Zuckerberg seen the internet? Has he seen how people use their smartphones? Oh, right, he’s largely directly responsible.

    1. Frankly, the reason I haven’t tried Meta’s glasses is that Meta makes them. They do sound like a nifty product otherwise, if execution is good.

  17. Zuckerberg is a fan of various industrial policies, praising the export controls and calling on America to help build new data centers and related power sources.

  18. Zuckerberg asks, would others be doing open models if Meta wasn’t doing it? Aren’t they doing this because otherwise ‘they’re going to lose?’

    1. Do not flatter yourself, sir. They’re responding to DeepSeek, not you. And in particular, they’re doing it to squash the idea that r1 means DeepSeek or China is ‘winning.’ Meta’s got nothing to do with it, and you’re not pushing things in the open direction in a meaningful way at this point.

  19. His case for why the open models need to be American is because our models embody an America view of the world in a way that Chinese models don’t. Even if you agree that is true, it doesn’t answer Dwarkesh’s point that everyone can easily switch models whenever they want. Zuckerberg then does mention the potential for backdoors, which is a real thing since ‘open model’ only means open weights, they’re not actually open source so you can’t rule out a backdoor.

  20. Zuckerberg says the point of Llama Behemoth will be the ability to distill it. So making that an open model is specifically so that the work can be distilled. But that’s something we don’t want the Chinese to do, asks Padme?

  21. And then we have a section on ‘monetizing AGI’ where Zuckerberg indeed goes right to ads and arguing that ads done well add value. Which they must, since consumers choose to watch them, I suppose, per his previous arguments?
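On the earlier aside about experiments needing hundreds of thousands or millions of users: small effect sizes really do demand enormous samples. A standard two-proportion sample-size approximation (a generic textbook formula, not anything Meta has described) makes the point:

```python
from math import ceil

def n_per_arm(p, delta, z_alpha=1.96, z_beta=0.84):
    """Approximate users needed per arm of a two-proportion A/B test.

    p: baseline conversion rate
    delta: absolute lift you want to detect
    z_alpha, z_beta: 5% two-sided significance and 80% power
    """
    return ceil(2 * p * (1 - p) * (z_alpha + z_beta) ** 2 / delta ** 2)

# Detecting a 0.1 percentage point lift on a 5% base rate:
print(n_per_arm(0.05, 0.001))  # 744800, i.e. ~745k users per arm
```

So a tweak that moves a 5% engagement metric by a tenth of a point needs roughly 1.5 million users across both arms, which is consistent with the sample sizes Zuckerberg describes; the required n scales with 1/delta², so halving the detectable effect quadruples the sample.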

To be fair, yes, it is hard out there. We all need a friend and our options are limited.

Roman Helmet Guy (reprise from last week): Zuckerberg explaining how Meta is creating personalized AI friends to supplement your real ones: “The average American has 3 friends, but has demand for 15.”

Daniel Eth: This sounds like something said by an alien from an antisocial species that has come to earth and is trying to report back to his kind what “friends” are.

Sam Ro: imagine having 15 friends.

Modest Proposal (quoting Chris Rock): “The Trenchcoat Mafia. No one would play with us. We had no friends. The Trenchcoat Mafia. Hey I saw the yearbook picture it was six of them. I ain’t have six friends in high school. I don’t got six friends now.”

Kevin Roose: The Meta vision of AI — hologram Reelslop and AI friends keeping you company while you eat breakfast alone — is so bleak I almost can’t believe they’re saying it out loud.

Exactly how dystopian are these ‘AI friends’ going to be?

GFodor.id (being modestly unfair): What he’s not saying is those “friends” will seem like real people. Your years-long friendship will culminate when they convince you to buy a specific truck. Suddenly, they’ll blink out of existence, having delivered a conversion to the company who spent $3.47 to fund their life.

Soible_VR: not your weights, not your friend.

Why would they then blink out of existence? There’s still so much more that ‘friend’ can do to convert sales, and also you want to ensure they stay happy with the truck and give it great reviews and so on, and also you don’t want the target to realize that was all you wanted, and so on. The true ‘AI ad buddy’ plays the long game, and is happy to stick around to monetize that bond – or maybe to get you to pay to keep them around, plus some profit margin.

The good ‘AI friend’ world is, again, one in which the AI friends are complements, or are only substituting while you can’t find better alternatives, and actively work to help you get and deepen ‘real’ friendships. Which is totally something they can do.

Then again, what happens when the AIs really are above human level, and can be as good ‘friends’ as a person? Is it so impossible to imagine this being fine? Suppose the AI was set up to perfectly imitate a real (remote) person who would actually be a good friend, including reacting as they would to the passage of time and them sometimes reaching out to you, and also that they’d introduce you to their friends which included other humans, and so on. What exactly is the problem?

And if you then give that AI ‘enhancements,’ such as happening to be more interested in whatever you’re interested in, having better information recall, watching out for you first more than most people would, etc, at what point do you have a problem? We need to be thinking about these questions now.

I do get that, in his own way, the man is trying. You wouldn’t talk about these plans in this way if you realized how the vision would sound to others. I get that he’s also talking to investors, but he has full control of Meta and isn’t raising capital, although Thompson thinks that Zuckerberg needs to go on a ‘trust me’ tour.

In some ways this is a microcosm of key parts of the alignment problem. I can see the problems Zuckerberg thinks he is solving, the value he thinks or claims he is providing. I can think of versions of these approaches that would indeed be ‘friendly’ to actual humans, and make their lives better, and which could actually get built.

Instead, on top of the commercial incentives, all the thinking feels alien. The optimization targets are subtly wrong. There is the assumption that the map corresponds to the territory, that people will know what is good for them so any ‘choices’ you convince them to make must be good for them, no matter how distorted you make the landscape, without worrying about addiction to Skinner boxes or myopia or other forms of predation. That the collective social dynamics of adding AI into the mix in these ways won’t get twisted in ways that make everyone worse off.

And of course, there’s the continuing to model the future world as similar and ignoring the actual implications of the level of machine intelligence we should expect.

I do think there are ways to do AI therapists, AI ‘friends,’ AI curation of feeds and AI coordination of social worlds, and so on, that contribute to human flourishing, that would be great, and that could totally be done by Meta. I do not expect it to be at all similar to the one Meta actually builds.


Zuckerberg’s Dystopian AI Vision Read More »