Author name: Kris Guyer

Medical groups warn Senate budget bill will create dystopian health care system

Medical organizations are blasting the Senate’s budget bill in the wake of its narrow passage Tuesday, warning of the dystopian health care system that will arise from the $1.1 trillion in cuts to Medicaid and other federal health programs if it is passed into law. The bill has moved back to the House for a vote on the Senate’s changes.

Over the weekend, an analysis from the Congressional Budget Office estimated that 11.8 million people would lose their health insurance over the next decade due to the cuts to Medicaid and other programs. Those cuts, which are deeper than those in the House's version of the bill, were maintained in the Senate's final version after amendments, with few concessions.

Organizations representing physicians, pediatricians, medical schools, and hospitals were quick to highlight the damage the proposal could cause.

The president of the American Academy of Pediatrics, Susan Kressly, released a stark statement saying the legislation “will harm the health of children, families, and communities.” The cuts to Medicaid and the Supplemental Nutrition Assistance Program (SNAP) will mean that “many children will not have healthy food to eat. When they are sick, they will not have health insurance to cover their medical bills—which means some children will simply forgo essential health care.” And the cuts are so deep that they will also have “devastating consequences that reach far beyond even those who rely on the program,” Kressly added.

Congress Asks Better Questions

Back in May I did a dramatization of a key and highly painful Senate hearing. Now, we are back for a House committee meeting. It was entitled ‘Authoritarians and Algorithms: Why U.S. AI Must Lead’ and indeed a majority of talk was very much about that, with constant invocations of the glory of democratic AI and the need to win.

The majority of talk was this orchestrated rhetoric that assumes the conclusion that what matters is ‘democracy versus authoritarianism’ and whether we ‘win,’ often (but not always) translating that as market share without any actual mechanistic model of any of it.

However, there were also some very good signs, some excellent questions, signs that there is an awareness setting in. As far as Congressional discussions of real AGI issues go, this was in part one of them. That’s unusual.

(And as always there were a few on random other high horses, that’s how this works.)

Partly because I was working from YouTube rather than a transcript, instead of doing a dramatization I will first be highlighting some other coverage of the events to skip to some of the best quotes, then doing a more general summary and commentary.

Most of you should likely read the first section or two, and then stop. I did find it enlightening to go through the whole thing, but most people don’t need to do that.

Here is the full video of last week’s congressional hearing, here is a write-up by Shakeel Hashim with some quotes.

Also from the hearing, here’s Congressman Nathaniel Moran (R-Texas) asking a good question about strategic surprise arising from automated R&D and getting a real answer. Still way too much obsession with ‘beat China’, but this is at least progress. And here’s Tokuda (D-HI):

Peter Wildeford: Ranking Member Raja Krishnamoorthi (D-IL) opened by literally playing a clip from The Matrix, warning about a “rogue AI army that has broken loose from human control.”

Not Matrix as a loose metaphor, but speaking of a ‘machine uprising’ as a literal thing that could happen and is worth taking seriously by Congress.

The hearing was entitled “Algorithms and Authoritarians: Why U.S. AI Must Lead”. But what was supposed to be a routine House hearing about US-China competition became the most AGI-serious Congressional discussion in history.

Rep. Neal Dunn (R-FL) asked about an Anthropic paper where Claude “attempted to blackmail the chief engineer” in a test scenario and another paper about AI “sleeper agents” that could act normally for months before activating. While Jack Clark, a witness and Head of Policy at Anthropic, attempted to reassure by saying safety testing might mitigate the risks, Dunn’s response was perfect — “I’m not sure I feel a lot better, but thank you for your answer.”

Rep. Nathaniel Moran (R-TX) got to the heart of what makes modern AI different:

Instead of a programmer writing each rule a system will follow, the system itself effectively writes the rules […] AI systems will soon have the capability to conduct their own research and development.

That was a good illustration of both sides of what we saw.

This was also a central case of why Anthropic and Jack Clark are so frustrating.

Anthropic should indeed be emphasizing the need for testing, and Clark does this, but we shouldn’t be ‘attempting to reassure’ anyone based on that. Anthropic knows it is worse than you know, and hides this information thinking this is a good strategic move.

Throughout the hearing, Jack Clark said many very helpful things, and often said them quite well. He also constantly pulled back from the brink and declined various opportunities to inform people of important things, and emphasized lesser concerns and otherwise played it quiet.

Peter Wildeford:

The hearing revealed we face three interlocking challenges:

  1. Commercial competition: The traditional great power race with China for economic and military advantage through AI

  2. Existential safety: The risk that any nation developing superintelligence could lose control — what Beall calls a race of “humanity against time”

  3. Social disruption: Mass technological unemployment as AI makes humans “not just unemployed, but unemployable”

I can accept that framing. The full talk about humans being unemployable comes at the very end. Until then, there is talk several times about jobs and societal disruption, but it tries to live in the Sam Altman style fantasy where not much changes. Finally, at the end, Mark Beall gets an opportunity to actually Say The Thing. He doesn’t miss.

It is a good thing I knew there was better ahead, because oh boy did things start out filled with despair.

As our first speaker, after urging us to ban AI therapist bots because one sort of encouraged a kid to murder his parents ‘so they could be together,’ Representative Krishnamoorthi goes on to show a clip of Chinese robot dogs, then to say we must ban Chinese and Russian AI models so we don’t send them our data (no one tell him about self-hosting), and then plays ‘a clip from The Matrix’ that is not even from The Matrix, claiming that the army of Mr. Smiths is ‘a rogue AI army that has broken loose from human control.’

I could not even. Congress often lives in the ultimate cringe random half-right associative Gell-Mann Amnesia world. But that still can get you to realize some rather obvious true things, and luckily that was indeed the worst of it, even from Krishnamoorthi; this kind of thinking can indeed point towards important things.

Mr. Krishnamoorthi: OpenAI’s chief scientist wanted to, quote unquote, ‘build a bunker before we release AGI,’ as you can see on the visual here. Rather than building bunkers, however, we should be building safer AI. Whether it’s American AI or Chinese AI, it should not be released until we know it’s safe. That’s why I’m working on a new bill, the AGI Safety Act, that will require AGI to be aligned with human values and require it to comply with laws that apply to humans. That is just common sense.

I mean yes that is common sense. Yes, rhetoric from minutes prior (and after) aside, we should be building ‘safer AGI’ and if we can’t do that we shouldn’t be building AGI at all.

It’s a real shame that no one has any idea how to ensure that AGI is aligned with human values, or how to get it to comply with laws that apply to humans. Maybe we should get to work on that.

And then we get another excellent point.

Mr. Krishnamoorthi: I’d like to conclude with something else that’s common sense: not shooting ourselves in the foot. 70% of America’s AI researchers are foreign born or foreign educated. Jack Clark, our eminent witness today, is an immigrant. We cannot be deporting the people we depend on to build AI. We also can’t be defunding the agencies that make AI miracles, like Ann’s ability to speak again, a reality. Federal grants from agencies like NSF are what allow scientists across America to make miracles happen. AI is the defining technology of our lifetimes. To do AI right and prevent nightmares, we need…

Yes, at a bare minimum not deporting our existing AI researchers and cutting off existing related research programs does seem like the least you could do? I’d also like to welcome a lot more talent, but somehow this is where we are.

We then get Dr. Mahnken’s opening statement, which emphasizes that we are in a battle for technical dominance, America is good and free and our AI will be empowering and innovative whereas China is bad and low trust and a fast follower. He also emphasizes the need for diffusion in key areas.

Of course, if you are facing a fast follower, you should think about what does and doesn’t help them follow, and also you can’t panic every time they fast follow you and respond with ‘go faster or they’ll take the lead!’ as they then fast follow your new faster pace. Nor would you want to hand out your top technology for free.

Next up is Mr. Beall. He frames the situation as two races. I like this a lot. First, we have the traditional battle for economic, military and geopolitical advantage in mundane terms played with new pieces.

Many only see this game, or pretend only this game exists. This is a classic, well-understood type of game. You absolutely want to fight for profits and military strength and economic growth and so on in mundane terms. We all agree on things like the need to greatly expand American energy production (although the BBB does not seem to share this opinion) and speed adaptation in government.

I still think that even under this framework the obsession with ‘market share’ especially of chip sales (instead of chip ownership and utilization) makes absolutely no sense and would make no sense even if that question was in play, as does the obsession with the number of tokens models serve as opposed to looking at productivity, revenue and profits. There’s so much rhetoric behind metrics that don’t matter.

The second race is the race to artificial superintelligence (ASI) or to AGI. This is the race that counts, and even if we get there first (and even more likely if China gets there first) the default result is that everyone loses.

He asks for the ‘three Ps’: protect our capabilities, promote American technology abroad, and prepare by getting it into the hands of those that need it and gathering the necessary information. He buys into this new centrality of the ‘American AI tech stack’ line that’s going around, despite the emphasis on superintelligence, but he does warn that AGI may come soon and we need to urgently gather information about that so we can make informed choices, and even suggests narrow dialogue with China on potential mitigations of certain risks and verification measures, while continuing to compete with China otherwise.

Third up we have Jack Clark of Anthropic, who opens like this.

Jack Clark: America can win the race to build powerful AI, and winning the race is a necessary but not sufficient achievement. We have to get safety right.

When I discuss powerful AI, I’m talking about AI systems that represent a major advancement beyond today’s capabilities. A useful conceptual framework is to think of this as like a country of geniuses in a data center, and I believe that technology could be buildable by late 2026 or early 2027.

America is well positioned to build this technology, but we need to deal with its risks.

He then goes on to talk about how American AI will be democratic and Chinese AI will be authoritarian and America must prevail, as we are now required to say by law, Shibboleth. He talks about misuse risk and CBRN risks and notes DeepSeek poses these as well, and then mentions the blackmail findings, and calls for tighter export controls and stronger federal ability to test AI models, and broader deployment within government.

I get what Clark is trying to do here, and the dilemma he is facing. I appreciate talking about safety up front, and warning about the future pace of progress, but I still feel like he is holding back key information that needs to be shared if you want people to understand the real situation.

Instead, we still have 100 minutes that touch on this in places but mostly are about mundane economic or national security questions, plus some model misbehavior.

Now we return to Representative Krishnamoorthi, true master of screen time, who shows Claude refusing to write a blog post promoting eating disorders, then DeepSeek being happy to help straight up and gets Clark to agree that DeepSeek does not do safety interventions beyond CCP protocols and that this is unacceptable, then reiterates his bill to not let the government use DeepSeek, citing that they store data on Chinese servers. I mean yes obviously don’t use their hosted version for government purposes, but does he not know how open source works, I wonder?

He pivots to chip smuggling and the risk of DeepSeek using our chips. Clark is happy to once again violently agree. I wonder if this is a waste or good use of time, since none of it is new, but yes, obviously what matters is who is using the chip, not who made it, and selling our chips to China (at least at current market prices) is foolish. Krishnamoorthi points out Nvidia’s sales are growing like gangbusters despite export controls, and Clark points out that every AI company keeps using more compute than expected.

Then there’s a cool question, essentially asking about truesight and ability to infer missing information when given context, before finishing by asking about recent misalignment results:

Representative Krishnamoorthi: If someone enters their diary into Claude for a year and then asks Claude to guess what they did not write down, Claude is able to accurately predict what they left out, isn’t that right?

Jack Clark: Sometimes that’s accurate, yes. These systems are increasingly advanced and are able to make subtle predictions like this, which is why we need to ensure that our own US intelligence services use this technology and know how to get the most out of it.

Representative Moolenaar then starts with a focus on chip smuggling and diffusion, getting Beall to affirm smuggling is a big deal, then asking Clark about how this is potentially preventing American technological infrastructure diffusion elsewhere. There is an obvious direct conflict: you need to ensure the compute is not diverted or misused at scale. Comparisons are made to nuclear materials.

Then he asks Clark, as an immigrant, about how to welcome immigrants especially from authoritarian states to help our AI work, and what safeguards we would need. Great question. Clark suggests starting with university-level STEM immigration, the earlier the better. I agree, but it would be good to have a more complete answer here about containing information risks. It is a real issue.

Representative Carson is up next and asks about information warfare. Clark affirms AI can do this and says we need tools to fight against it.

Representative Lahood asks about the moratorium that was recently removed from the BBB, warning about the ‘patchwork of states.’ Clark says we need a federal framework, but that without one, powerful AI is coming soon and you’d just be creating a vacuum, which would be flooded if something went wrong. Later Clark, in response to another question, emphasizes that the timeline is short and we need to be open to options.

Representative Dunn asks about the blackmail findings and asks if he should be worried about AIs using his bank information against him. Clark says no, because we publish the research and we should encourage more of this and also closely study Chinese models, and I agree with that call but it doesn’t actually explain why you shouldn’t worry (for now, anyway). Dunn then asks about the finding that you can put a sleeper agent into an AI, Clark says testing for such things likely would take them a month.

Dunn then asks Mahnken what would be the major strategic missteps Congress might make in an AGI world. He splits his answer into insufficient export controls and overregulation; it seems he thinks there are no other things to worry about when it comes to AGI.

Here’s one that isn’t being noticed enough:

Mr. Moulton (56:50): The concern is China, and so we have to somehow get to an international framework, a Geneva Conventions-like agreement, that has a chance at least at limiting what our adversaries might do with AI at the extremes.

He then asks Beall what should be included in that. Beall starts off with strategic missile-related systems and directive 3000.09 on lethal autonomous systems. Then he moves to superintelligence, but time runs out before he can explain what he wants.

Representative Johnson notes the members are scared and that ‘losing this race’ could ‘trigger a global crisis,’ and asks about dangers of data centers outside America, which Beall notes of course are that we won’t ultimately own the chips or AI, so we should redouble our efforts to build domestically even if we have to accept some overseas buildout for energy reasons.

Johnson asks about the tradeoff between safety and speed, seeing them in conflict. Jack points out that, at current margins, they’re not.

Jack Clark: We all buy cars because we know that if they get dinged we’re not going to suffer in them, because they have airbags and they have seat belts. You’ve grown the size of the car market by innovating on safety technology, and American firms compete on safety technology to sell to consumers.

The same will be true of AI. So far, we do not see there being a trade-off here; we see that making more reliable, trustworthy technology ultimately helps you grow the size of the market and grows the attractiveness of American platforms vis-à-vis China. So I would constructively sort of push back on this, and put it to you that there’s an amazing opportunity here to use safety as a way to grow the existing American dominance in the market.

Those who set up the ‘slow down’ and safety versus speed framework must of course take the L on how that (in hindsight inevitably) went down. Certainly there are still sometimes tradeoffs here on some margins, on some questions, especially when you are the ‘fun police’ towards your users, or you delay releases for verification. Later down the road, there will be far more real tradeoffs that occur at various points.

But also, yes, for now the tradeoffs are a lot like those in cars, in that improving the safety and security of the models helps them be a lot more useful, something you can trust and that businesses especially will want to use. At this point, Anthropic’s security focus is a strategic advantage.

Johnson wants to believe Clark, but is skeptical and asks Mahnken, who says too much emphasis on safety could indeed slow us down (which, as phrased, is obviously true), that he’s worried we won’t go fast enough, and that there’s no parallel conversation in the PRC.

Representative Torres asks Clark how close China is to matching ASML and TSMC. Clark says they are multiple years behind. Torres then goes full poisoned banana race:

Torres: The first country to reach ASI will likely emerge as the superpower of the 21st century, the superpower who will set the rules for the rest of the world. Mr. Clark, what do you make of the Manhattan Project framing?

Clark says yes in terms of doing it here but no because it’s from private actors and they agree we desperately need more energy.

Hinson says Chinese labs aren’t doing healthy competition, they’re stealing our tech, then praises the relaxation of the Biden diffusion rules that prevent China from stealing our tech, and asks about what requirements we should attach to diffusion deals, and everyone talks arms race and market share. Sigh.

In case you were wondering where that was coming from, well, here we go:

Hinson: Members of your key team at Anthropic have held very influential roles in this space, both at Open Philanthropy and in the previous administration, with the Biden administration as well.

Can you speak to how you manage, you know, obviously we’ve got a lot of viewpoints, but how you manage potential areas of conflict of interest in advancing this tech, and ensuring that everybody’s really on that same page with helping to shape this national AI policy that we’re talking about, the competition on the global stage for this technology.

You see, if you’re trying to not die that’s a conflict of interest and your role must have been super important, never mind all that lobbying by major tech corporations. Whereas if you want American policy to focus on your own market share, that’s good old fashioned patriotism, that must be it.

Jack Clark: Thank you for the question. We have a simple goal: win the race and make technology that can be relied on. All of the work that we do at our company starts from looking at that and then just trying to work out the best way to get there. We work with people from a variety of backgrounds and skills, and our goal is to just have the best, most substantive answer that we can bring to hearings.

No, ma’am, we too are only trying to win the race and maximize corporate profits and keep our fellow patriots informed, it is fine. Anthropic doesn’t care about everyone not dying or anything, that would be terrible. Again, I get the strategic bind here, but I continue to find this deeply disappointing, and I don’t think it is a good play.

She then asks Beall about DeepSeek’s ability to quickly copy our tech and potential future espionage threats, and Beall reminds her that export controls work with a lag and notes DeepSeek was a wakeup call (although one that I once again note was blown out of proportion for various reasons, but we’re stuck with it). Beall recommends the Remote Access Security Act, and then he says we have to ‘grapple with the open source issue.’ Which is that if you open the model they can copy it. Well, there is that.

Representative Brown pulls out They Took Our Jobs, asking about ensuring people (like those in her district, Ohio’s 11th) don’t get left behind by automation and benefit instead, calling for investing in the American workforce. So Clark goes into those speeches, encouraging diffusion and adjusting regulation, and acts as if Dario hadn’t predicted the automation of half of white-collar entry-level jobs within five years.

Representative Nunn notes (along with various other race-related things) the commissioning of executives from four top AI teams as lieutenant colonels, which I and Patrick McKenzie both noticed but has gotten little attention. He then brings up a Chinese startup called Zhipu (currently valued around $20 billion) as some sort of global threat.

Nunn: A new AI group out of Beijing called Zhipu is an AI anomaly that is now facing off against the likes of OpenAI, and their entire intent is to lock in Chinese systems and standards into emerging markets before the West. So this is clearly a large-scale attempt by the Chinese to box the United States out. Now, as a counterintelligence officer who was on the front line in fighting against Huawei’s takeover of the United States through something called Huawei America…

That is indeed how a number of Congress people talk these days, including this sudden paranoia about some mysterious ‘lock in’ mechanism for API calls or self-hosted open models that no one has ever been able to explain to me. He does then ask an actual good question:

Nunn: Is the US currently prepared for an AI-accelerated cyberattack, a zero-day attack, or a larger threat that faces us today?

Mahnken does some China bad, US good and worries the Chinese will be deluded into thinking AI will let them do things they can’t do and they might start a war? Which is such a bizarre thing to worry about and also not an answer? Are we prepared? I assume mostly no.

Nunn then pushes his HR 2152 for government AI diffusion.

He asks Clark how government and business can cooperate. Clark points to the deployment side and the development of safety standards as a way to establish trust and sell globally.

Representative Tokuda starts out complaining about us gutting our institutions, Clark of course endorses investing more in NIST and other such institutions. Tokuda asks about industry responsibility, including for investment in related infrastructure, Clark basically says he works on that and for broader impact questions get back to him in 3-4 years to talk more.

Then she gives us the remarkable quote above about superintelligence (at 1:30:20); the full quote is even stronger, but she doesn’t leave enough time for an answer.

I am very grateful for the statement, even with no time left to respond. There is something so weird about asking two other questions first, then getting to ASI.

Representative Moran asks Clark, what’s the most important thing to win this race? Clark chooses power, followed by compute and then government infrastructure, and suggests working backwards from the goal of 50 GW in 2027. Mahnken is asked next and suggests trying to slow down the Chinese.

Moran notices that AI is not like older programming, that it effectively will write its own rules and programming and will soon do its own research and asks what’s up with that. Clark says more research is urgently needed, and points out you wouldn’t want an AI that can blackmail you designing its successor. I’m torn on whether that cuts to the heart of the question in a useful way or not here.

Moran then asks, what is the ‘red line’ on AI that the Chinese cannot be allowed to cross? Beall confirms AI systems are grown, not built, that it is alchemy, and that automated R&D is the red line and a really big deal; we need to be up to speed on that.

Representative Conner notes NIST’s safety testing is voluntary and asks if there should be some minimum third party verification required, if only to verify the company’s own standards. All right, Clark, he served it up for you, here’s the ball, what have you got?

Clark: This question illustrates the challenge we have about weighing safety versus, you know, moving ahead as quickly as possible. We need to first figure out what we want to hold to that standard of testing.

Today the voluntary agreements rest on CBRN testing and some forms of cyberattack testing. Once we have standards that we’re confident of, I think you can take a look at the question of whether voluntary is sufficient or you need something else.

But my sense is it’s too early; we first need to design those tests and really agree on them before figuring out what the next step would be. And who would design those tests? Is it the AI institute, or is it the private sector who comes up with what those tests should be? Today these tests are done highly collaboratively between the US private sector, which you mentioned, and parts of the US government, including those in the intelligence and defense community. I think bringing those people together, so that we have the nation’s best experts on this, and standards and tests that we all agree on, is the first step that we can take to get us to everything else. [Asked by when that needs to be done:] It would be ideal to have this within a year. The timelines that I’ve spoken about in this hearing are that powerful AI arrives at the end of 2026 or early 2027. Before then, we would ideally have standard tests for the national security properties that we deeply care about.

I’m sorry, I think the word you were looking for was ‘yes’? What the hell? This is super frustrating. I mean as worded how is this even a question? You don’t need to know the final exact testing requirements before you start to move towards such a regime. There are so many different ways this answer is a missed opportunity.

The last question goes back to They Took Our Jobs, and Clark basically can only say we can gather data, and there are areas that won’t be impacted soon by AI, again pretending his CEO Dario Amodei hadn’t warned of a jobs ‘bloodbath.’ Beall steps up and says the actual damn thing (within the jobs context), which is that we face a potential future where humans are not only unemployed but unemployable, and we have to have those conversations in advance.

And we end on this not so reassuring note:

Mark Beall: When I hear folks in industry claim things about universal basic income and this sort of digital utopia, I, you know, I study history. I worry that that sort of leads to one place, and that place is the gulag.

That is quite the bold warning, and an excellent place to end the hearing. It is not the way I would have put it, but yes, the idea of most or all of humanity being entirely disempowered and unproductive except for our little status games, existing off of gifted resources, property rights and rule of law and some form of goodwill, and hoping all of this holds up, does not seem like a plan that is likely to end well. At least, not for those humans. No, having ‘solved the alignment problem’ does not on its own get you out of this in any way; solving the alignment problem is the price to try at all.

And that is indeed one kind of thing we need to think about now.

Is this where I wanted the conversation to be in 2025? Oh, hell no.

It’s a start.

Nothing Phone 3 arrives July 15 with a tiny dot matrix rear display

Nothing, a startup from OnePlus co-founder Carl Pei, has announced its first flagship phone since 2023. The company bills its new Nothing Phone 3 as a “true flagship” device, but it doesn’t have the absolute best hardware you can get in a mobile device. Neither does it have the highest price, clocking in at a mere $799. That’s shaping up to be a good value, but it’s also the highest price yet for a Nothing phone.

A few weeks back, Nothing teased the end of its trademark Glyph interface. Indeed, the Nothing Phone 3 doesn’t have the illuminated panels of the company’s previous phones. Instead, it has a small dot matrix “Glyph Matrix” LED screen. It’s on the back in the upper right corner, opposite the camera modules. Nothing has a few quirky games and notification icons that will flash on the screen, and it can be used as a very low-fi selfie mirror. Nothing is committed to the new Glyph screen, going so far as to add a button on the back to control it.

The rest of the design maintains the Nothing aesthetic, featuring a clear glass panel with a visible mid-frame and screws. The phone will come in either black or white—Nothing isn’t really into colors. However, the company does promise the Phone 3 will be a little more compact than the 2023 Phone 2. The new device is 18 percent thinner and has symmetrical 1.87-millimeter bezels around the 6.67-inch OLED screen. That panel supports 120 Hz refresh and has a peak brightness of 4,500 nits, which is competitive with the likes of Samsung and OnePlus.

Nothing’s first “true flagship”?

Nothing has continued releasing budget phones during its break from high-end devices. While the Nothing Phone 3 is supposedly five times faster than the Phone 3a, it probably won’t be able to keep up with the fastest phones on the market today. Rather than the top-of-the-line Snapdragon 8 Elite, the Nothing Phone 3 has a Snapdragon 8s Gen 4, which was released earlier this year to expand premium silicon features to slightly cheaper phones. It doesn’t have Qualcomm’s new Oryon CPU cores, and the GPU is a bit slower. However, it’ll be faster than most devices in its price range. The specs are rounded out with 12GB or 16GB of RAM and 256GB or 512GB of storage.

AI Moratorium Stripped From BBB

The insane attempted AI moratorium has been stripped from the BBB. That doesn’t mean they won’t try again, but we are good for now. We should use this victory as an opportunity to learn. Here’s what happened.

Senator Ted Cruz and others attempted to push hard for a 10-year moratorium on enforcement of all AI-specific regulations at the state and local level, and attempted to ram this into the giant BBB despite it being obviously not about the budget.

This was an extremely aggressive move, which most did not expect to survive the Byrd rule, likely serving as a form of reconnaissance-in-force for a future attempt.

It looked for a while like it might work and get passed outright, with it even surviving the Byrd rule, but opposition steadily grew.

We’d previously seen a remarkable group notice that this moratorium was rather insane. R Street offered an actually solid analysis of the implications, which I discussed in AI #119. In AI #120, we saw Joe Rogan and Jesse Michels react to the proposal with ‘WHAT’? Marjorie Taylor Greene outright said she would straight vote no on the combined bill if the provision wasn’t stripped out, and got retweeted by Elizabeth Warren, and Thomas Massie called it ‘worse than you think.’ Steve Bannon raised an alarm, and the number of senators opposed rose to four. In #122, as the provision was modified to survive the Byrd rule, Amazon, Google, Meta and Microsoft were all backing the moratorium.

As the weekend began, various Republican officials kept their eyes on the insane AI moratorium, and resistance intensified, including a letter from 16 Republican governors calling for it to be removed from the bill. Charlie Bullock notes that this kind of attempted moratorium is completely unprecedented. Gabriel Weinberg is the latest to point out that Congress likely won’t meaningfully regulate AI, which is what makes it insane to prevent states from doing it. The pressure was mounting, despite a lot of other things in the bill fighting for the Senate’s attention.

Then on Sunday night, Blackburn and Cruz ‘reached an AI pause deal.’ That’s right, Axios, they agreed to an AI pause… on state governments doing anything about AI. The good news was that it was down from 10 years to 5, making it more plausible that it would expire before our fate is sealed. The purported goal of the deal was to allow ‘protecting kids online,’ and it let tech companies sue if they claimed an obligation ‘overly burdens’ them. You can guess how that would have gone, even for the intended target, and it still outright banned anything that would help where it counts.

Except then Blackburn backed off the deal, saying that while she appreciated Cruz’s attempts to find acceptable language, the current language was not acceptable.

Senator Blackburn (R-Tennessee): This provision could allow Big Tech to continue to exploit kids, creators and conservatives. Until Congress passes federally preemptive legislation like the Kids Online Safety Act and an online privacy framework, we can’t block states from making laws that protect their citizens.

As always Blackburn is focusing on the wrong threats and concerns about AI, such as child safety, but she’s right on the money about the logic of a moratorium. If you can’t do the job, you shouldn’t stop others from doing it.

I’d also note that there is some sort of strange idea that if a state passes AI regulations that are premature or unwise, then there is nothing Congress could do about it, we’d be stuck, so we need to stop them in advance. But the moratorium would have been retroactive. It would have prevented enforcement of existing rules.

So why couldn’t you take that same ‘wait and see’ attitude here, and then preempt if and when state laws actually threaten to cause trouble in practice? Or do so for each given type of AI regulation when you were ready with a preemptive federal bill to do the job?

So Blackburn moved to strip the moratorium from the bill. Grace Chong claims that Bannon’s War Room played a key role in convincing Blackburn to abandon the deal, ensuring we weren’t ‘duped by the tech bros.’

At first Ted Cruz believed he could still head this off.

Diego Areas Munhoz: “The night is young,” Sen Ted Cruz tells me in response to Blackburn walking back on their deal. There are 3 likely GOP nos on moratorium. He’ll need to ensure there are no other detractors or find a Democratic partner.

Instead, support utterly collapsed, leaving Cruz completely on his own. 99-1, and the one was Tillis voting against all amendments on principle.

Mike Davis: There are more than 3. The dam is breaking.

Thank you for your attention to this matter.

The larger bill, including stripping quite a bit of funding from American energy production despite our obviously large future needs, among many other things that are beyond scope here, did still pass the Senate 51-50.

The opposition that ultimately killed the moratorium seems to have had essentially nothing to do with the things I worry most about. It did not appear to be driven by worries about existential or catastrophic risk, and those worries were almost never expressed aloud (with the fun exception of Joe Rogan). That does not mean that such concerns weren’t operating in the background; I presume they did have a large impact in that way, but it wasn’t voiced.

All the important opposition came from the Republican side, including some very MAGA sources, and those very MAGA sources proved crucial. Opposition from those sources was vocally motivated by fear of big tech, and a few specific mundane policy concerns: privacy, protecting children, copyright and protecting creatives, and potential bias against conservatives.

This was a pleasantly surprising break from the usual tribalism, where a lot of people seem to think that anything that makes us less safe is therefore a good idea on principle (they would say it as ‘the doomers are against it so it must be good,’ which is secretly an even more perverse filter than that; consider what those same people oppose in other contexts). Now we have a different kind of tribalism, which does not seem like it is going to be better in some ways? But I do think the concerns are coming largely from ultimately good places, even if so far not in sophisticated ways, similar to the way the public has good instincts here.

I am happy the moratorium did not pass, but this was a terrible bit of discourse. It does not bode well for the future. No one on any side of this, based on everything I have heard, raised any actual issues of AI long term governance, or offered any plan on what to do. One side tried to nuke all regulations of any kind from orbit, and the other thought that nuke might have some unfortunate side effects on copyright. The whole thing got twisted up in knots to fit it into a budget bill.

How does this relate to the question of which arguments to make and emphasize about AI going forward? My guess is that a lot of this has to do with the fact that this fight was about voting down a terrible bill rather than trying to pass a good bill.

If you’re trying to pass a good bill, you need to state and emphasize the good reasons you want to pass that bill, and what actually matters, as Nate Soares explained recently on LessWrong. You can and should also offer reasons for those with other concerns to support the bill, and help address those concerns. As we saw here, a lot of politicians care largely about different narrow specific concerns.

However, if you are in opposition to a terrible bill, that’s a different situation. Then you can and should point out all the problems and reasons to oppose the bill, even if they are not your primary concerns, and there is nothing incongruent about that.

It also requires a very different coalition. The key here was to peel off a few Republicans willing to take a stand. Passing a bill is a different story in that way, too.

The other thing to notice is the final vote was 99-1, and the 1 had nothing to do with the content of the amendment. As in, no one, not even Ted Cruz, wanted to be on record as voting for this once it was clear it was going to fail. Or alternatively, everyone agreed to give everyone else cover.

That second explanation is what Neil Chilson says happened: this wasn’t a real vote, but was instead meant as a way to save face. I saw this claim only after publication, so this ending has been edited to reflect the new information. I disagree with Neil on many things, but I see no reason not to believe him here.

Neil Chilson: This vote was not a “preference cascade.” This was a procedural effort by leadership to reassemble the republican conference in prep for the final vote on the whole BBB after Blackburn’s reneging threw it into chaos.

The intent was to vote 100-0 in support of the repeal, to both unify the group and still send the signal that this wasn’t the real count. I think Cruz actually moved the vote for the amendment. But apparently Tillis (who hadn’t really been involved in the whole thing) voted *against* the repeal. Hence the 99-1.

This was a really close win for opponents of the moratorium, so adjust your expectations accordingly.

Exactly. A 100-0 vote is what leadership does when it knows the issue is lost and they don’t want to make everyone own the L. It’s not a reflection of where the vote actually would have landed.

This still involved a preference cascade, although this new scenario is more complex. Rather than ‘everyone knew this was crazy and a bad look so once they had cover they ran for the exits’ it is more ‘once there was somewhat of a preference cascade the leadership made a strategic decision to have a fake unified vote instead’ and felt getting people’s positions on record was net negative.

This is all now common knowledge. Everyone knows what happened, and that the big tech anti-regulation coalition overplayed its hand and wants to outright have a free hand to do whatever they want (while pretending, often, that they are ‘little tech’).

I would also caution against being too attached to the vibes this time around, or any other time around. The vibes change very quickly. If the CCP committee meeting and this vote were any indication, they are going to change again soon. Every few months it feels like everything is different. It will happen again, for better and for worse. Best be ready.

Analyst: M5 Vision Pro, Vision Air, and smart glasses coming in 2026–2028

Apple is also reportedly planning a “Vision Air” product, with production expected to start in Q3 2027. Kuo says it will be more than 40 percent lighter than the first-generation Vision Pro, and that it will include Apple’s flagship iPhone processor instead of the more robust Mac processor found in the Vision Pro—all at a “significantly lower price than Vision Pro.” The big weight reduction is “achieved through glass-to-plastic replacement, extensive magnesium alloy use (titanium alloy deemed too expensive), and reduced sensor count.”

True smart glasses in 2027

The Vision Pro (along with the planned Vision Air) is a fully immersive VR headset that supports augmented reality by displaying the wearer’s surroundings on the internal screens based on what’s captured by 3D cameras on the outside of the device. That allows for some neat applications, but it also means the device is bulky and impractical to wear in public.

The real dream for many is smart glasses that are almost indistinguishable from normal glasses, but which display some of the same AR content as the Vision Pro on transparent lenses instead of via a camera-to-screen pipeline.

Apple is also planning to roll that out, Kuo says. But first, mass production of display-free “Ray-Ban-like” glasses is scheduled for Q2 2027, and Kuo claims Apple plans to ship between 3 million and 5 million units through 2027, suggesting the company expects this form factor to make a much bigger impact than the Vision Pro’s VR-like HMD approach.

The glasses would have a “voice control and gesture recognition user interface” but no display functionality at all. Instead, “core features include: audio playback, camera, video recording, and AI environmental sensing.”

The actual AR glasses would come later, in 2028.

Drug cartel hacked FBI official’s phone to track and kill informants, report says

The Sinaloa drug cartel in Mexico hacked the phone of an FBI official investigating kingpin Joaquín “El Chapo” Guzmán as part of a surveillance campaign “to intimidate and/or kill potential sources or cooperating witnesses,” according to a recently published report by the Justice Department.

The report, which cited an “individual connected to the cartel,” said a hacker hired by its top brass “offered a menu of services related to exploiting mobile phones and other electronic devices.” The hired hacker observed “’people of interest’ for the cartel, including the FBI Assistant Legal Attache, and then was able to use the [attache’s] mobile phone number to obtain calls made and received, as well as geolocation data, associated with the [attache’s] phone.”

“According to the FBI, the hacker also used Mexico City’s camera system to follow the [attache] through the city and identify people the [attache] met with,” the heavily redacted report stated. “According to the case agent, the cartel used that information to intimidate and, in some instances, kill potential sources or cooperating witnesses.”

The report didn’t explain what technical means the hacker used.

Existential threat

The report said the 2018 incident was one of many examples of “ubiquitous technical surveillance” threats the FBI has faced in recent decades. UTS, as the term is abbreviated, is defined as the “widespread collection of data and application of analytic methodologies for the purpose of connecting people to things, events, or locations.” The report identified five UTS vectors: visual and physical, electronic signals, financial, travel, and online.

While the UTS threat has been longstanding, the report authors said, recent advances in commercially available hacking and surveillance tools are making such surveillance easier for less sophisticated nations and criminal enterprises. Sources within the FBI and CIA have called the threat “existential,” the report authors said.

A second example of UTS threatening FBI investigations occurred when the leader of an organized crime family suspected an employee of being an informant. In an attempt to confirm the suspicion, the leader searched call logs of the suspected employee’s cell phone for phone numbers that might be connected to law enforcement.

Substack and Other Blog Recommendations

Substack recommendations are remarkably important, and the actual best reason to write here instead of elsewhere.

As in, even though I have never made an active attempt to seek recommendations, approximately half of my subscribers come from recommendations from other blogs. And for every two subscribers I have, my recommendations have generated approximately one subscription elsewhere. I am very thankful to all those who have recommended this blog, either through Substack or otherwise.

As the blog has grown, I’ve gotten a number of offers for reciprocal recommendations. So far I have turned all of these down, because I have yet to feel any are both sufficiently high quality and a good match for me and my readers.

Instead, I’m going to do the following:

  1. This post will go through the 16 blogs I do currently recommend, and explain what I think is awesome about each of them.

  2. I will also go over the top other non-substack blogs and sources I read regularly.

  3. Then I will briefly review the 10 other substacks that sent me the most readers.

  4. I will plan on doing something similar periodically in the future, each time looking at at least 10 previously uncovered substacks, and each time evaluating which blogs I should add or remove from the recommendation list. In the future I will skip any inactive substacks.

  5. The comments section will be an opportunity to pitch your blog, or pitch someone else’s blog. Please aggressively like comments to vote for blogs.

  1. Note on Paywalls.

  2. The Substacks I Recommend (Not Centrally AI).

  3. Astral Codex Ten (Scott Alexander).

  4. Overcoming Bias (Robin Hanson).

  5. Silver Bulletin (Nate Silver).

  6. Rough Diamonds (Sarah Constantin).

  7. Construction Physics (Brian Potter).

  8. In My Tribe (Arnold Kling).

  9. Dominic Cummings Substack (Dominic Cummings).

  10. Bet On It (Bryan Caplan).

  11. Slow Boring (Matthew Yglesias).

  12. Useful Fictions (Cate Hall).

  13. The Substacks I Recommend (Centrally AI).

  14. AI Futures Project (Group Blog, Including Scott Alexander).

  15. The Power Law (Peter Wildeford).

  16. China Talk (Jordan Schneider).

  17. Musings On the Alignment Problem (Jan Leike).

  18. Gwern Newsletter (Gwern).

  19. Some Additional Substacks That Are Good And Aren’t Otherwise Covered But That I Am Not Ready to Recommend At This Time.

  20. The 10 Top Other Substack Subscription Blog Sources.

  21. Other Blogs And News Sources In Heavy Rotation.

  22. Twitter.

  23. Wrapping Up.

I have a very high bar for being willing to go behind a paywall, the same way I have decided not to have one of my own. If something is behind a unique paywall, then almost all of my readers can’t read it, and the post is effectively outside of the broader conversation. So not only do I have to be excited enough about the content to initiate a subscription, I have to be excited enough despite it being unavailable to others.

I am of course happy to accept and respond to gift subscriptions for various content, including other blogs and newspapers, which substantially increase the chance I read, discuss or ultimately recommend the content in question.

Going over this list, I notice that I highly value a unique focus and perspective, a way of thinking and a set of tools that I want to have available, that is compatible with the rationalist-style perspective of thinking with gears and figuring out what will actually work versus not work. I want to be offered a part of that elephant I would otherwise miss, a different reality-tunnel to incorporate into my own.

What I do not need to do is agree with the person about most things. I have profound and increasingly important disagreements with most of the people on this list. When I recommend a substack, I am absolutely not endorsing the opinions or worldview contained therein.

I also reminded myself, doing this, that there is a lot of great content I simply don’t have the time to check out properly, especially recently with several things creating big temporary additional time commitments that will continue for a few more weeks.

Scott Alexander is one of the great ones. If you haven’t dived into the Slate Star Codex archives, you should do that sometime. In My Culture, many of those posts are foundational, and I link back to a number of them periodically.

There was a period of years during the SSC era when Scott Alexander had very obviously the best blog by a wide margin, and everyone I knew would drop what they were doing and read posts whenever they came out.

I do think that golden age has passed for now, and things have slipped somewhat. I can get frustrated, especially when he focuses on Effective Altruist content. I usually skip the guest posts unless something catches my eye.

That still leaves this as my top Substack or personal blog.

Robin Hanson is unique. I hope he never changes, and never stops Robin Hansoning.

Most of the time, when I read one of his posts, I disagree with it. But it is almost always interesting, and valuable to think about exactly what I disagree with and why. The questions are always big. I could happily write an expanded response to most of his posts, as I have done a number of times in the past. I would do this more if AI wasn’t taking up so much attention.

I find Hanson frustrating on AI in particular, especially on Twitter where he discusses this more often and more freely. We strongly disagree on all aspects of AI, including over its economic value and pace of progress, and whether we should welcome it causing human extinction. That part I actually like, that’s him Robin Hansoning.

What I do not like, but also find valuable in its own way, is that he often seems willing, especially on Twitter, to amplify essentially anything related to AI that makes one of his points, highlight pull quotes supporting those points, and otherwise act in a way that I see as compromising his otherwise very strong epistemic standards.

The reason I find this valuable is that this then acts as a forcing function to ensure I consider and am aware of opposing rhetoric and arguments, and retain balance.

Before we first met up so he could interview me for his book On the Edge, I knew I had a lot in common with Nate Silver. I’ve been following him since Baseball Prospectus. Only when we talked more, and when I read the final book, did I realize quite how much we matched up in interests and ways of thinking. I hope to do a collaboration at some point if the time can be found.

His politics model will probably always be what we most remember him for, but Silver Bulletin is Nate Silver’s best non-modeling work, and it also comes with the modern version of the politics model that remains the best game in town when properly understood in context. He offers a unique form of grounding on political, sports and other issues, and a way of bridging my kinds of ideas into mainstream discourse. I almost never regret reading.

Sarah is one of my closest friends, and at one point was one of my best employees.

In her blog she covers opportunities in science and technology, and also in society, and offers a mix of doing the research and explaining it in ways few others can, pointing our attention in places it would not otherwise go but that are often worth going to, and offering her unique perspectives on many questions.

If these topics are relevant to your interests, you should absolutely jump on this. If there’s one blog that deserves more of my attention than it gets, if only I had more time to actually dig deep, this would be it.

No one drills down into the practical, detail level in an accessible way as well as Brian Potter. One needs grounding in these kinds of physical details. When AI was going less crazy, and I was spending a lot more time on diverse topics, especially housing and energy, I was always excited to dive into one of his posts. They very much help distinguish what is and is not important, and where to put your focus, helping you build models made of gears.

I noticed compiling this list that I haven’t been reading these posts for a while due to lack of time, but the moment I went there I was excited to dive back in, and the most recent post is definitely going directly into the queue. My current plan is to do a full catching up before my next housing roundup.

There is something highly refreshing about the way Kling offers his own consistent old school economic libertarian perspective, takes his time responding to anything, and generally tries to understand everything including developments in AI from that perspective. He knows a few important things and it is good to get reminders of them. He often offers links, the curation of which is good enough that they are worth considering.

Perhaps never has a man more simultaneously given both all and none of the fs.

Dominic writes with a combination of urgency, deep intellectual curiosity and desire to explain things across a huge variety of topics, and utter contempt for all those idiots trying to run our civilization and especially its governments, or for what anyone else thinks, or for the organizing principles of writing.

He screams into the void, warning that so many things are going wrong, that all of politics is incompetent to either achieve outcomes or even win elections, and for us to all wake up and stop acting like utter idiots. What he cares about is what actually works and would work, and what actually happened or will happen.

His posts are collections of snippets even more than mine, jumping from one thing to another, trying to weave them together over time to explain various important things about today and from history that people almost never notice.

I find a dose of these perspectives highly valuable. One needs to learn from people, even when you disagree with them on quite a lot of things, and to be clear I disagree with Dominic on many big things. I feel I understand the world substantially better than I would have without Dominic. One does not need to in any way approve of his politics or preferences, starting of course with Brexit, to benefit from these insights.

Similar to Dominic Cummings and Robin Hanson, I think most people who read this would benefit from a healthy dose, at least once, of Bryan Caplan and the way he views the world, offering us a different reality tunnel where things you don’t consider are emphasized and things you often emphasize are dismissed rather than considered.

Bryan expresses himself as if he is supremely confident he is right about everything, that everyone else is simply obviously wrong, and that the reasoning is very simple, if only you would listen. You want this voice as part of the chorus in your head.

I found The Case Against Education and Selfish Reasons To Have More Kids both extremely valuable.

However, I do notice that I feel like I already have enough of this voice on the issues he emphasizes most often, especially calls for free markets including open borders. So I notice that I am increasingly skipping more of his posts on these subjects.

This blog is of course primarily about politics, in particular how much better he thinks everything would be if normie Democrats were moderate, supporting policies that are both good and popular, doing the things that win elections and improve outcomes, and how to go about doing that. He takes a very proto-rationalist, practical approach to this, which I very much appreciate, complete with yearly predictions. The writing is consistently enjoyable and clear.

It is important to think carefully about exactly how much politics one wants to engage with, and in which ways, and with what balance of incoming perspectives. Many of us should minimize such exposure most of the time.

Cate Hall is a super awesome person and her blog is all about taking that and then saying ‘and so can you.’ She shares various tricks and tactics she has used to be more agentic and happier and make her life better. They definitely won’t all work for you or be good ideas for you, but the rate of return on considering them, why they might work and what you might want to do with them is fantastic, and it’s a joy to read about it.

I have to be extremely stingy recommending AI blogs, because the whole point of my AI blog is that you shouldn’t need to read either AI Twitter or the other AI blogs, because I hope to consistently flag such things.

There are still a few that made the list.

This is the blog for AI Futures Project which includes AI 2027. A lot of the posts explain the thinking behind how they model the AI future, aimed at a broader audience. Others include advocacy, consistently for modest common sense proposals. All of it is written with a much lower barrier to entry than my work, and I find the writing consistently strong and interesting.

This is very much a case of someone largely doing a subset of what I am doing, finding, summarizing and responding to key news events with an emphasis on AI. So I would hope that if you read my AI posts, you don’t also need to directly read Peter. But he has a remarkably high hit rate of finding things I would have otherwise overlooked or offering useful new perspective. If you are looking for something shorter and sweeter, and easier to process, or to compare multiple sources, this is one of the best choices.

China Talk focuses on China, as the name suggests, and also largely on Chinese AI and other Chinese tech. When this is good and on point, it is very, very good, and provides essential perspective I couldn’t have found elsewhere, especially when doing key interviews.

It is highly plausible that I should be focusing way more on these topics.

He hasn’t posted since January, but I hope he gets back to it. We need more musings, especially musings I strongly disagree with so I can think about and explain why I disagree with them. I still would like to give better arguments in response to his thoughts.

Once a month you used to get a collection of notes and links, without my emphasis on the new hotness. The curation level was consistently excellent.

I save these in case I ever have need of more good content. But at this point it’s been four years.

Without loss of generality, all of these would be highly reasonable to include in my recommendation list; I might well do so in the future after more consideration, and I would be happy to offer reciprocity:

The Pursuit of Happiness (Scott Sumner) about things Sumner, including economics, culture and movies.

Knowingness (Aella) about Aella things, often sexual.

Second Person (Jacob Falkovich) about dating.

Derek Thompson (Derek Thompson) about things in general.

Works In Progress (Various) about progress studies.

The Grumpy Economist (John Cochrane) about free markets and economics.

Dwarkesh Podcast (Dwarkesh Patel) mostly hosts the podcast itself. I highly recommend the podcast, which I watch on YouTube.

Rising Tide (Helen Toner) about AI policy.

Understanding AI (Timothy Lee) about AI and tech.

One Useful Thing (Ethan Mollick) about AI.

Import AI (Jack Clark) about AI.

My review process for each blog by default is to look at two to four posts, depending on how it is going, with a mix of best-of posts and whatever catches my eye, plus at least one recent post to avoid cherry-picking.

  1. ControlAI advocates for, well, controlling AI, making the case for why we need to avoid extinction risks from AI, how we might go about doing that, and covering related news items. They are out there in the field making real efforts, so they often report back useful information about that. This is very much an advocacy blog rather than an explanation blog; if that is what you want, then it is pretty good.

  2. Applied Psychology = Communication appears to be a dead blog at this point, with the tagline ‘psychology that is immensely useful to your everyday life.’ These appear to be writeups of very simple points where You Should Know This Already, but there is a decent chance you don’t, and the benefits of finding one you didn’t know are plausibly high, so it is not crazy to take a quick look, though I failed to learn anything new.

  3. Get Down and Shruti by Shruti Rajagopalan is about Indian culture and politics and hasn’t been updated since January. I very much appreciate the recommendation, since our interests and worlds seem so disjoint. I found her posts interesting and full of facts I did not know. The issue is that this is mostly not relevant to my core interests, but if you have the interest or bandwidth, and don’t mind that it’s a bit dry, go for it, there’s lots of good stuff here.

  4. Meaningness by David Chapman is a philosophy blog. I’ve extensively read Chapman via the previous online incarnation of his work, a book in slightly odd, hyperlinked form that you can jump around in, and that is still in progress. That book is refreshingly accessible as such things go, if you don’t mind a tone of superiority and authority, and claims of understanding it all, that appear throughout and pass through to the blog. I suspect what it takes to benefit from such things is the ability to look at a system that is refreshingly made of gears, figure out which gears seem right for your situation and are doing the relevant work, which ones don’t work or are overreaching, and take the parts that you need. It is an excellent sign that the most recent post is one I could easily write quite a lot about in response, if I had more time. I’m glad I looked here.

  5. The Roots of Progress by Jason Crawford is what you would expect from the name, if you are familiar with progress studies. The problem here is that Jason is writing an excellent book, but it is a book trying to present a case that most people reading this will already take for granted. Which is a good thing, but also means most of you don’t need to read the blog. The important progress questions are soon largely going to be about AI, where Jason is better at taking the differences seriously than most similar others, although there is still much work to do there, and again that is not the topic here. The question is, who is going to both sit through a progress message of this length and also need it? For those people this should be a great fit.

  6. Homo Economicus by Nicholas Decker. Decker is certainly not afraid to touch third rails, I will give him that, including his most popular post (way more popular than anything I’ve posted, life is like that) entitled ‘When Must We Kill Them?’ and subtitled ‘evil has come to America.’ He has a post asking, Should We Take Everything From The Old To Give To The Young? He seems at his best when making brief particular points in ways others wouldn’t dare, especially on Twitter where Icarus has some real bangers, whereas the longform writing, in my brief survey, needs work.

  7. Marketing BS with Edward Nevraumont, dormant since 2023. Not my cup of tea, and the posts are very of-the-moment, with much less value now 18 months later.

  8. Doom Debates by Liron Shapira, essentially a YouTube channel as a Substack. Debates about AI Doom, I tell you, doom! Recent debate partners for Liron include Richard Hanania, Emmett Shear and Scott Sumner. It would benefit from transcripts; these are strong guests, but my barrier to consuming audio AI content is high, and for debates it is even higher. I do love that he is doing this.

  9. Donkeyspace by Frank Lantz. There’s an interesting core to these posts, and a lot of the writing is fun, but there are definitely ‘could have been a Tweet’ vibes to a lot of it. As in, there’s a core point that Lantz is making, and it’s a good point, but we don’t need this much explanation to get to it. I don’t feel like I’m thinking along with him, so much as I wish he’d just share the one sentence already. Yes, I know who is typing this.

  10. State of the Future by Lawrence Lundy-Bryan, examining questions about AI, mundane AI impacts and They Took Our Jobs. I noticed it failing to hold my interest, despite the topic being something I often focus on.

It is remarkable how few non-Substack blogs are left that I still want to read.

Most of the remaining alpha in blogs is in Marginal Revolution and LessWrong. Without either of those this blog would be a lot worse.

  1. LessWrong is a community blog, which obviously one would not want to read in full. That’s what the karma system, curation and other filters are for, including remembering which authors are worthwhile. Karma is very far from a perfect system and will sometimes miss very good posts, but I find it to be very good at positive selection, and the curated post selection is also very good. For recent posts, if you set a threshold of 200 and check out those plus the curated posts, and then filter by whether you are interested in the topic, you should do very well. For quick takes and comments, a threshold around 50-75 should be similar. There is also always tons of alpha in the archives, especially the sequences. (If you would rather automate that skim, see the sketch just after this list.)

    1. Note that my posts are cross-posted there, if you want to check out additional comments, or prefer the way they do formatting.

    2. Reading the comments is often not a mistake on LessWrong.

  2. Marginal Revolution (Tyler Cowen and Alex Tabarrok). A large percentage of interesting links and ideas I find come from Marginal Revolution, many of which I would have otherwise missed, by far the richest vein if you don’t count Twitter as a source. Alex often makes good points very cleanly. Tyler is constantly bringing unique and spicy takes. I have great fun sparring with Tyler Cowen on a variety of topics, and his support in growing the blog via links and Emergent Ventures has been invaluable; he really makes an effort to assist thinkers and talent. Tyler in particular often plays his actual views very close to his chest, only gesturing at what one might think about or how to frame looking at something, or offering declarations without explaining. In many cases this is great, since it helps you think for yourself. You can tell that Tyler has a purpose behind every word he writes, and he should be read as such.

    1. Alas, I have become increasingly frustrated with Tyler recently, especially in the AI realm, and not only because I don’t understand how he can hold the views he expresses given what he gets correct elsewhere. He seems to be placing his rhetorical thumb on the scale increasingly aggressively, in ways I would not have expected from him previously. He also seems willing to amplify voices and points where I assume he must know better. It reads as if a strategic decision has been made. This also seems to be happening in other ways with respect to the current administration. So one must adjust, but MR is still a highly valuable and irreplaceable source of ideas and information.

  3. Shtetl-Optimized (Scott Aaronson). It rarely ends up being relevant to my work but I find the perspective valuable, and it’s good to keep up with different forms of science like quantum computing and get Scott’s unique perspective on various other events as well.

  4. Saturday Morning Breakfast Cereal. Yes, this is an online comic, but it is full of actual ideas often enough I’m going to list it here anyway, also xkcd of course.

  5. Meteuphoric (Katja Grace) is a series of simple points put bluntly and well, that I find worth considering.

  6. Luke Muehlhauser is mostly tracking his media consumption. I do find this a worthwhile data point to periodically scan.

  7. Stratechery (Ben Thompson). It is mostly paywalled. There was sufficient relevance that I paid up. Ben deals with AI and tech from a pure business perspective of profit maximization, and believes that this perspective is the important one and should carry the day, and is dismissive of downside concerns. Within that framework, he gets a lot of things right, and provides a lot of strong information and insight that I would otherwise miss. But this does cause me to strongly disagree with a lot of the points he emphasizes most often.

  8. Bloomberg. This is my most expensive subscription, and it is money well spent. If I had to choose one mainstream news source to rely upon, this would be it. It is very much not perfect, but I find I can trust Bloomberg’s bounded distrust principles far more than I trust those of other mainstream sources, and also for them to focus on what matters more often and in more depth.

  9. Wall Street Journal. This is my go to newspaper, and they often have articles that are important coverage of events in AI.

  10. Washington Post. This also often has good unique coverage, and offers a counterweight to the Wall Street Journal. For conventional news stories I’ll often expand to a number of additional news websites.
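Returning to the LessWrong thresholds in item 1 above: for those who would rather automate the skim, here is a minimal sketch that pulls recent posts over LessWrong’s public GraphQL endpoint and keeps those at or above a karma threshold. The endpoint exists, but the exact query shape and field names used here (posts, baseScore, pageUrl) are assumptions from memory that may have drifted from the current schema, so verify before relying on it.

```typescript
// Minimal sketch: fetch recent LessWrong posts and keep those at or above a
// karma threshold. Query shape and field names are assumptions; check the
// live schema before depending on them.
const LW_GRAPHQL = "https://www.lesswrong.com/graphql";
const KARMA_THRESHOLD = 200; // the post threshold suggested above

interface LWPost {
  title: string;
  pageUrl: string;
  baseScore: number; // LessWrong's internal name for a post's karma
}

async function highKarmaPosts(): Promise<LWPost[]> {
  const query = `{
    posts(input: { terms: { view: "new", limit: 100 } }) {
      results { title pageUrl baseScore }
    }
  }`;
  const res = await fetch(LW_GRAPHQL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query }),
  });
  const json = await res.json();
  const posts: LWPost[] = json.data.posts.results;
  return posts.filter((p) => p.baseScore >= KARMA_THRESHOLD);
}

highKarmaPosts().then((posts) =>
  posts.forEach((p) => console.log(`${p.baseScore}  ${p.title}  ${p.pageUrl}`)),
);
```

The same filter with a threshold around 50-75 would presumably cover quick takes and comments, though those likely live behind different queries.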

And of course, there is always Twitter.

I always check all posts from the accounts I follow and from the Rationalist and AI lists.

In total that is on the order of 500 unique accounts.

This effectively is the majority of my consumption, if you include things I find via Twitter.

I wrote this guide back in March 2022 on how to best use Twitter to follow events in real time. Using Twitter to follow AI is somewhat different since it is less about real time developments and more about catching all the important stories, but most of the advice there still applies today.

If I had to make one change to the ‘beware’ list, it would be that I did not emphasize enough the need to aggressively block people whose posts make your life worse, especially in the sense of making you emotionally worse off without compensation, or that draw you into topics you want to avoid, or that indicate general toxicity. A block is now, even more than before, at most a small mistake, since blocked users can still see your posts. If someone has not provided value before, a hair trigger is appropriate. This includes blocking people whose posts are shown to you via someone’s retweet.

The other note is that I take care to cultivate a mix of perspectives. I keep a number of accounts around so that I know what the other half is thinking, in various ways, especially legacy accounts that I was already following for another reason, many of which pivoted to AI in one way or another. I also count on them to ‘bubble up’ the key posts from the accounts that are truly obnoxious, too much so to put on the lists. One has to protect one’s own mental health. The rule is then that I can’t simply dismiss what those sources have to say out of hand, although of course sometimes what they say is not newsworthy.

I consume writing and write about it as a full time job. Most people of course cannot do this, plus you hopefully want to read this blog too, so you’ll have to be a lot more picky than this. If I was primarily working on something else, I’d be consuming vastly less content than I am now, and even now I don’t fully read a lot of these.

What am I potentially missing here, and should consider including? I encourage sharing in the comments, especially in the Substack comments. You are free to pitch your own work, but do say you are doing so.


Substack and Other Blog Recommendations Read More »

after-27-years,-engineer-discovers-how-to-display-secret-photo-in-power-mac-rom

After 27 years, engineer discovers how to display secret photo in Power Mac ROM

“If you double-click the file, SimpleText will open it,” Brown explains on his blog just before displaying the hidden team photo that emerges after following the steps.

The discovery represents one of the last undocumented Easter eggs from the pre-Steve Jobs return era at Apple. The Easter egg works through Mac OS 9.0.4 but appears to have been disabled by version 9.1, Brown notes. The timing aligns with Jobs’ reported ban on Easter eggs when he returned to Apple in 1997, though Brown wonders whether Jobs ever knew about this particular secret.


The ungainly G3 All-in-One set the stage for the smaller and much bluer iMac soon after. Credit: Jonathan Zufi

In his post, Brown expressed hope that he might connect with the Apple employees featured in the photo—a hope that was quickly fulfilled. In the comments, a man named Bill Saperstein identified himself as the leader of the G3 team (pictured fourth from left in the second row) in the hidden image.

“We all knew about the Easter egg, but as you mention; the technique to extract it changed from previous Macs (although the location was the same),” Saperstein wrote in the comment. “This resulted from an Easter egg in the original PowerMac that contained Paula Abdul (without permissions, of course). So the G3 team wanted to still have our pictures in the ROM, but we had to keep it very secret.”

He also shared behind-the-scenes details in another comment, noting that his “bunch of ragtag engineers” developed the successful G3 line as a skunk works project, with hardware that Jobs later turned into the groundbreaking iMac series of computers. “The team was really a group of talented people (both hw and sw) that were believers in the architecture I presented,” Saperstein wrote, “and executed the design behind the scenes for a year until Jon Rubenstein got wind of it and presented it to Steve and the rest is ‘history.'”

After 27 years, engineer discovers how to display secret photo in Power Mac ROM Read More »

supreme-court-overturns-5th-circuit-ruling-that-upended-universal-service-fund

Supreme Court overturns 5th Circuit ruling that upended Universal Service Fund

Finally, the Consumers’ Research position produces absurd results, divorced from any reasonable understanding of constitutional values. Under its view, a revenue-raising statute containing non-numeric, qualitative standards can never pass muster, no matter how tight the constraints they impose. But a revenue-raising statute with a numeric limit will always pass muster, even if it effectively leaves an agency with boundless power. In precluding the former and approving the latter, the Consumers’ Research approach does nothing to vindicate the nondelegation doctrine or the separation of powers.

The Gorsuch dissent said the “combination” question isn’t the deciding factor. He said the only question that needs to be answered is whether Congress violated the Constitution by delegating the power to tax to the FCC.

“As I see it, this case begins and ends with the first question. Section 254 [of the Communications Act] impermissibly delegates Congress’s taxing power to the FCC, and knowing that is enough to know the Fifth Circuit’s judgment should be affirmed,” Gorsuch said.

“Green light” for FCC to support Internet access

In the Gorsuch view, it doesn’t matter whether the FCC exceeded its authority by delegating Universal Service management to a private administrative company. “As far as I can tell, and as far as petitioners have informed us, this Court has never approved legislation allowing an executive agency to tax domestically unless Congress itself has prescribed the tax rate,” Gorsuch wrote.

The FCC and Department of Justice asked the Supreme Court to reverse the 5th Circuit decision. The court also received challenges to the 5th Circuit ruling from broadband-focused advocacy groups and several lobby groups representing ISPs.

“Today is a great day,” said Andrew Jay Schwartzman, counsel for the Benton Institute for Broadband & Society; the National Digital Inclusion Alliance; and the Center for Media Justice. “We will need some time to sort through the details of today’s decision, but what matters most is that the Supreme Court has given the green light to the FCC to continue to support Internet access to the tens of millions of Americans and the thousands of schools, libraries and rural hospitals that rely on the Universal Service Fund.”

FCC Chairman Brendan Carr praised the ruling but said he plans to make changes to Universal Service. “I am glad to see the court’s decision today and welcome it as an opportunity to turn the FCC’s focus towards the types of reforms necessary to ensure that all Americans have a fair shot at next-generation connectivity,” Carr said.

Supreme Court overturns 5th Circuit ruling that upended Universal Service Fund Read More »

android-phones-could-soon-warn-you-of-“stingrays”-snooping-on-your-communications

Android phones could soon warn you of “Stingrays” snooping on your communications

Smartphones contain a treasure trove of personal data, which makes them a worthwhile target for hackers. However, law enforcement is not above snooping on cell phones, and their tactics are usually much harder to detect. Cell site simulators, often called Stingrays, can trick your phone into revealing private communications, but a change in Android 16 could allow phones to detect this spying.

Law enforcement organizations have massively expanded the use of Stingray devices because almost every person of interest today uses a cell phone at some point. These devices essentially trick phones into connecting to them like a normal cell tower, allowing the operator to track that device’s location. The fake towers can also shift a phone to less secure wireless technology to intercept calls and messages. There’s no indication this is happening on the suspect’s end, which is another reason these machines have become so popular with police.

However, while surveilling a target, Stingrays can collect data from other nearby phones. It’s not unreasonable to expect a modicum of privacy even if you happen to be in the same general area as a surveillance target, but sometimes police use Stingrays simply because they can. There’s also evidence that cell simulators have been deployed by mysterious groups outside law enforcement. In short, it’s a problem. Google has had plans to address this security issue for more than a year, but a lack of hardware support has slowed progress. Finally, in the coming months, we will see the first phones capable of detecting this malicious activity, and Android 16 is ready for it.
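To make that downgrade signature concrete, here is a toy sketch, in no way a real Android API, of the heuristic a detector might apply: treat an abrupt fall from a modern radio generation to 2G, or a silent loss of ciphering, as suspicious, since 2G lacks mutual authentication and its encryption can be switched off without the user noticing.

```typescript
// Toy model of the downgrade heuristic described above. Not a real Android
// API; it only illustrates the attack signature a detector would look for.
type RadioGen = "5G" | "4G" | "3G" | "2G";

const GEN_RANK: Record<RadioGen, number> = { "5G": 4, "4G": 3, "3G": 2, "2G": 1 };

interface Observation {
  timestampMs: number;
  gen: RadioGen;
  cipheringEnabled: boolean; // whether the radio link claims to be encrypted
}

function suspiciousDowngrade(prev: Observation, curr: Observation): boolean {
  // A multi-generation drop straight to 2G, or encryption silently vanishing,
  // matches how cell site simulators typically behave.
  const bigDropTo2G =
    GEN_RANK[prev.gen] - GEN_RANK[curr.gen] >= 2 && curr.gen === "2G";
  const cipherLost = prev.cipheringEnabled && !curr.cipheringEnabled;
  return bigDropTo2G || cipherLost;
}

// Example: a phone on LTE abruptly lands on unencrypted 2G five seconds later.
const before: Observation = { timestampMs: 0, gen: "4G", cipheringEnabled: true };
const after: Observation = { timestampMs: 5_000, gen: "2G", cipheringEnabled: false };
console.log(suspiciousDowngrade(before, after)); // true
```

The hardware dependency the article mentions is the catch: logic like this only helps if the modem reports ciphering changes and similar events up to the operating system at all, which is presumably the support that has been lacking until now.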

Android phones could soon warn you of “Stingrays” snooping on your communications Read More »

microsoft-changes-windows-in-attempt-to-prevent-next-crowdstrike-style-catastrophe

Microsoft changes Windows in attempt to prevent next CrowdStrike-style catastrophe

Working with third-party companies to define these standards and address those companies’ concerns seems to be Microsoft’s way of trying to avoid that kind of controversy this time around.

“We will continue to collaborate deeply with our MVI partners throughout the private preview,” wrote Weston.

Death comes for the blue screen

Microsoft is changing the “b” in BSoD, but that’s less interesting than the under-the-hood changes. Credit: Microsoft

Microsoft’s post outlines a handful of other security-related Windows tweaks, including some that take alternate routes to preventing more CrowdStrike-esque outages.

Multiple changes are coming for the “unexpected restart screen,” the less-derogatory official name for what many Windows users know colloquially as the “blue screen of death.” For starters, the screen will now be black instead of blue, a change that Microsoft briefly attempted to make in the early days of Windows 11 but subsequently rolled back.

The unexpected restart screen has been “simplified” in a way that “improves readability and aligns better with Windows 11 design principles, while preserving the technical information on the screen for when it is needed.”

But the more meaningful change is under the hood, in the form of a new feature called “quick machine recovery” (QMR).

If a Windows PC has multiple unexpected restarts or gets into a boot loop—as happened to many systems affected by the CrowdStrike bug—the PC will try to boot into Windows RE, a stripped-down recovery environment that offers a handful of diagnostic options and can be used to enter Safe Mode or open the PC’s UEFI firmware. QMR will allow Microsoft to “broadly deploy targeted remediations to affected devices via Windows RE,” making it possible for some problems to be fixed even if the PCs can’t be booted into standard Windows, “quickly getting users to a productive state without requiring complex manual intervention from IT.”

QMR will be enabled by default on Windows 11 Home, while the Pro and Enterprise versions will be configurable by IT administrators. The QMR functionality and the black version of the blue screen of death will both be added to Windows 11 24H2 later this summer. Microsoft plans to add additional customization options for QMR “later this year.”

Microsoft changes Windows in attempt to prevent next CrowdStrike-style catastrophe Read More »

anthropic-summons-the-spirit-of-flash-games-for-the-ai-age

Anthropic summons the spirit of Flash games for the AI age

For those who missed the Flash era, these in-browser apps feel somewhat like the vintage apps that defined a generation of Internet culture from the late 1990s through the 2000s when it first became possible to create complex in-browser experiences. Adobe Flash (originally Macromedia Flash) began as animation software for designers but quickly became the backbone of interactive web content when it gained its own programming language, ActionScript, in 2000.

But unlike Flash games, where hosting costs fell on portal operators, Anthropic has crafted a system where users pay for their own fun through their existing Claude subscriptions. “When someone uses your Claude-powered app, they authenticate with their existing Claude account,” Anthropic explained in its announcement. “Their API usage counts against their subscription, not yours. You pay nothing for their usage.”
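To make that billing model concrete, here is a minimal sketch of what such an artifact’s code might look like. It assumes the window.claude.complete(prompt) function Anthropic’s announcement describes for Claude-powered artifacts; treat the exact signature as an assumption and check the current docs before building on it.

```typescript
// Minimal sketch of a Claude-powered artifact. The window.claude.complete
// signature is an assumption based on Anthropic's announcement; verify it
// against the current artifact documentation.
declare global {
  interface Window {
    claude: { complete: (prompt: string) => Promise<string> };
  }
}

async function riddleMe(topic: string): Promise<string> {
  // This runs in the viewer's browser session under the viewer's Claude
  // account, so the usage is metered against their subscription, not the
  // app creator's.
  return window.claude.complete(`Write a two-line riddle about ${topic}.`);
}

riddleMe("Flash games").then((riddle) => {
  document.body.textContent = riddle;
});

export {};
```

The design choice doing the work is that the completion call happens client-side in the viewer’s authenticated session, so metering attaches to the viewer automatically, with no API key for the creator to provision or pay for.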

A view of the Anthropic Artifacts gallery in the “Play a Game” section. Benj Edwards / Anthropic

Like the Flash games of yesteryear, any Claude-powered apps you build run in the browser and can be shared with anyone who has a Claude account. They’re interactive experiences shared with a simple link, no installation required, created by other people for the sake of creating, except now they’re powered by JavaScript instead of ActionScript.

While you can share these apps with others individually, right now Anthropic’s Artifact gallery only shows examples made by Anthropic and your own personal Artifacts. (If Anthropic expands it in the future, it might end up feeling a bit like Scratch meets Newgrounds, but with AI doing the coding.) Ultimately, humans are still behind the wheel, describing what kinds of apps they want the AI model to build and guiding the process when it inevitably makes mistakes.

Speaking of mistakes, don’t expect perfect results at first. Usually, building an app with Claude is an interactive experience that requires some guidance to achieve your desired results. But with a little patience and a lot of tokens, you’ll be vibe coding in no time.

Anthropic summons the spirit of Flash games for the AI age Read More »