Author name: Tim Belzer


Nobel laureate David Baltimore dead at 87

Nobel Prize-winning molecular biologist and former Caltech president David Baltimore—who found himself at the center of controversial allegations of fraud against a co-author—has died at 87 from cancer complications. He shared the 1975 Nobel Prize in Physiology or Medicine for his work upending the then-consensus that cellular information flowed in only one direction. Baltimore is survived by his wife of 57 years, biologist Alice Huang, as well as a daughter and granddaughter.

“David Baltimore’s contributions as a virologist, discerning fundamental mechanisms and applying those insights to immunology, to cancer, to AIDS, have transformed biology and medicine,” current Caltech President Thomas F. Rosenbaum said in a statement. “David’s profound influence as a mentor to generations of students and postdocs, his generosity as a colleague, his leadership of great scientific institutions, and his deep involvement in international efforts to define ethical boundaries for biological advances fill out an extraordinary intellectual life.”

Baltimore was born in New York City in 1938. His father worked in the garment industry, and his mother later became a psychologist at the New School and Sarah Lawrence. Young David was academically precocious and decided he wanted to be a scientist after spending a high school summer learning about mouse genetics at the Jackson Laboratory in Maine. He graduated from Swarthmore College and earned his PhD in biology from Rockefeller University in 1964 with a thesis on the study of viruses in animal cells. He joined the Salk Institute in San Diego, married Huang, and moved to MIT, where in 1982 he founded the Whitehead Institute.

Baltimore initially studied viruses like polio and mengovirus, which replicate by making RNA copies of their RNA genomes, but later turned his attention to retroviruses, which carry an enzyme that makes DNA copies of viral RNA. He made a major breakthrough when he proved the existence of that viral enzyme, now known as reverse transcriptase. Previously, scientists had thought that the flow of information went only from DNA to RNA to protein synthesis. Baltimore showed that the process could be reversed, ultimately enabling researchers to use disabled retroviruses to insert genes into human DNA to correct genetic diseases.



Yes, AI Continues To Make Rapid Progress, Including Towards AGI

That does not mean AI will successfully make it all the way to AGI and superintelligence, or that it will make it there soon or on any given time frame.

It does mean that AI progress, while it could easily have been even faster, has still been historically lightning fast. It has exceeded almost all expectations from more than a few years ago. And it means we cannot disregard the possibility of High Weirdness and profound transformation happening within a few years.

GPT-5 had a botched rollout and was only an incremental improvement over o3, o3-Pro and other existing OpenAI models, but was very much on trend and a very large improvement over the original GPT-4. Nor would one disappointing model from one lab have meant that major further progress must be years away.

Imminent AGI (in the central senses in which the term AGI is used, where imminent means years rather than decades) remains a very real possibility.

Part of this post covers in full Gary Marcus’s latest editorial in The New York Times, since that is the paper of record read by many in government. I felt that piece was in many places highly misleading to the typical Times reader.

Imagine if someone said ‘you told me in 1906 that there was increasing imminent risk of a great power conflict, and now it’s 1911 and there has been no war, so your fever dream of a war to end all wars is finally fading.’ Or saying that you were warned in November 2019 that Covid was likely coming, and now it’s February 2020 and no one you know has it, so it was a false alarm. That’s what these claims sound like to me.

I have to keep emphasizing this because it now seems to be an official White House position, with prominent White House official Sriram Krishnan going so far as to say on Twitter that AGI any time soon has been ‘disproven,’ and David Sacks spending his time ranting and repeating Nvidia talking points almost verbatim.

When pressed, there is often a remarkably narrow window in which ‘imminent’ AGI is dismissed as ‘proven wrong.’ But this is still used as a reason to structure public policy and one’s other decisions in life as if AGI definitely won’t happen for decades, which is Obvious Nonsense.

Sriram Krishnan: I’ll write about this separately but think this notion of imminent AGI has been a distraction and harmful and now effectively proven wrong.

Prinz: “Imminent AGI” was apparently “proven wrong” because OpenAI chose to name a cheap/fast model “GPT-5” instead of o3 (could have been done 4 months earlier) or the general reasoning model that won gold on both the IMO and the IOI (could have been done 4 months later).

Rob Miles: I’m a bit confused by all the argument about GPT-5, the truth seems pretty mundane: It was over-hyped, they kind of messed up the launch, and the model is good, a reasonable improvement, basically in line with the projected trend of performance over time.

Not much of an update.

To clarify a little, the projected trend GPT-5 fits with is pretty nuts, and the world is on track to be radically transformed if it continues to hold. Probably we’re going to have a really wild time over the next few years, and GPT-5 doesn’t update that much in either direction.

Rob Miles is correct here as far as I can tell.

If imminent means ‘within the next six months,’ or maybe up to a year, I think Sriram’s perspective is reasonable, because of what GPT-5 tells us about what OpenAI is cooking. For sensible values of imminent that are more relevant to policy and action, Sriram Krishnan is wrong, in a ‘I sincerely hope he is engaging in rhetoric rather than being genuinely confused about this, or that his imminent only means within the next year or at most two’ way.

I am confused how he could be sincerely mistaken given how deep he is into these issues. I hope he shares his reasons so we can quickly clear this up, because this is a crazy thing to actually believe. I do look forward to Sriram providing a full explanation as to why he believes this. So far we have only heard ‘GPT-5.’

Not only is imminent AGI not disproven, there are continuing important claims that it is likely. Here is some clarity on Anthropic’s continued position, as of August 31.

Prinz: Jack, I assume no changes to Anthropic’s view that transformative AI will arrive by the end of next year?

Jack Clark: I continue to think things are pretty well on track for the sort of powerful AI system defined in machines of loving grace – buildable end of 2026, running many copies 2027. Of course, there are many reasons this could not occur, but lots of progress so far.

Anthropic’s valuation has certainly been on a rocket ship exponential.

Do I agree that we are on track to meet that timeline? No. I do not. I would be very surprised to see it go down that fast, and I am surprised that Jack Clark has not updated based on, if nothing else, previous projections by Anthropic CEO Dario Amodei falling short. I do think it cannot be ruled out. If it does happen, I do not think you have any right to be outraged at the universe for it.

It is certainly true that Dario Amodei’s early prediction of AI writing most of the code, as in 90% of all code within 3-6 months of March 11, has been proven definitively false. This was not a good prediction, because the previous generation definitely wasn’t ready, and even if it had been, that’s not how diffusion works. Instead it’s more like 40% of all code generated by AI, and 20%-25% of what goes into production.

Which is still a lot, but a lot less than 90%.

Here’s what I said at the time about Dario’s prediction:

Zvi Mowshowitz (AI #107): Dario Amodei says AI will be writing 90% of the code in 6 months and almost all the code in 12 months. I am with Arthur B here, I expect a lot of progress and change very soon but I would still take the other side of that bet. The catch is: I don’t see the benefit to Anthropic of running the hype machine in overdrive on this, at this time, unless Dario actually believed it.

I continue to be confused why he said it, it’s highly unstrategic to hype this way. I can only assume on reflection this was an error about diffusion speed more than it was an error about capabilities? On reflection yes I was correctly betting ‘no’ but that was an easy call. I dock myself more points on net here, for hedging too much and not expressing the proper level of skepticism. So yes, this should push you towards putting less weight on Anthropic’s projections, although primarily on the diffusion front.

As always, remember that projections of future progress include the possibility, nay the inevitability, of discovering new methods. We are not projecting ‘what if the AI labs all keep ramming their heads against the same wall whether or not it works.’

Ethan Mollick: 60 years of exponential growth in chip density was achieved not through one breakthrough or technology, but a series of problems solved and new paradigms explored as old ones hit limits.

I don’t think current AI has hit a wall, but even if it does, there are many paths forward now.

Paul Graham: One of the things that strikes me when talking to AI insiders is how they believe both that they need several new discoveries to get to AGI, and also that such discoveries will be forthcoming, based on the past rate.

My talks with AI insiders also say we will need new discoveries, and we definitely will need new major discoveries in alignment. But it’s not clear how big those new discoveries need to be in order to get there.

I agree with Ryan Greenblatt that precise timelines for AGI don’t matter that much in terms of actionable information, but big jumps in the chance of things going crazy within a few years can matter a lot more. This is similar to questions of p(doom), where as long as you are in the Leike Zone of a 10%-90% chance of disaster, you mostly want to react in the same ways, but outside that range you start to see big changes in what makes sense.

Ryan Greenblatt: Pretty short timelines (<10 years) seem likely enough to warrant strong action and it's hard to very confidently rule out things going crazy in <3 years.

While I do spend some time discussing AGI timelines (and I’ve written some posts about it recently), I don’t think moderate quantitative differences in AGI timelines matter that much for deciding what to do. For instance, having a 15-year median rather than a 6-year median doesn’t make that big of a difference. That said, I do think that moderate differences in the chance of very short timelines (i.e., less than 3 years) matter more: going from a 20% chance to a 50% chance of full AI R&D automation within 3 years should potentially make a substantial difference to strategy.

Additionally, my guess is that the most productive way to engage with discussion around timelines is mostly to not care much about resolving disagreements, but then when there appears to be a large chance that timelines are very short (e.g., >25% in <2 years) it's worthwhile to try hard to argue for this. I think takeoff speeds are much more important to argue about when making the case for AI risk.

I do think that having somewhat precise views is helpful for some people in doing relatively precise prioritization within people already working on safety, but this seems pretty niche.

Given that I don’t think timelines are that important, why have I been writing about this topic? This is due to a mixture of: I find it relatively quick and easy to write about timelines, my commentary is relevant to the probability of very short timelines (which I do think is important as discussed above), a bunch of people seem interested in timelines regardless, and I do think timelines matter some.

Consider reflecting on whether you’re overly fixated on details of timelines.

Jason Calacanis of the All-In Podcast (where he is alongside AI Czar David Sacks) has a bold prediction, if you believe that his words have or are intended to have meaning. Which is an open question.

Jason: Before 2030 you’re going to see Amazon, which has massively invested in [AI], replace all factory workers and all drivers … It will be 100% robotic, which means all of those workers are going away. Every Amazon worker. UPS, gone. FedEx, gone.

Aaron Slodov: hi @Jason how much money can i bet you to take the other side of the factory worker prediction?

Jason (responding to video of himself saying the above): In 2035 this will not be controversial take — it will be reality.

Hard, soul-crushing labor is going away over the next decade. We will be deep in that transition in 2030, when humanoid robots are as common as bicycles.

Notice the goalpost move of ‘deep in that transition’ in 2030 versus saying full replacement by 2030, without seeming to understand there is any contradiction.

These are two very different predictions. The original ‘by 2030’ prediction is Obvious Nonsense unless you expect superintelligence and a singularity, probably involving us all dying. There’s almost zero chance otherwise. Technology does not diffuse that fast.

Plugging 2035 into the 2030 prediction is also absurd, if we take the prediction literally. No, you’re not going to have zero workers at Amazon, UPS and FedEx within ten years unless we’ve not only solved robotics and AGI, we’ve also diffused those technologies at full scale. In which case, again, that’s a singularity.

I am curious what his co-podcaster David Sacks or Sriram Krishnan would say here. Would they dismiss Jason’s confident prediction as already proven false? If not, how can one be confident that AGI is far? Very obviously you can’t have one without the other.

GPT-5 is not a good reason to dismiss AGI, and to be safe I will once again go into why, and why we are making rapid progress towards AGI.

GPT-5 and GPT-4 were both major leaps in benchmarks from the previous generation.

The differences are dramatic, and the time frame between releases was similar.

The actual big difference? That there was only one incremental release between GPT-3 and GPT-4, GPT-3.5, with little outside competition. Whereas between GPT-4 and GPT-5 we saw many updates. At OpenAI alone we saw GPT-4o, and o1, and o3, plus updates that didn’t involve number changes, and at various points Anthropic’s Claude and Google’s Gemini were plausibly on top. Our frog got boiled slowly.

Epoch AI: However, one major difference between these generations is release cadence. OpenAI released relatively few major updates between GPT-3 and GPT-4 (most notably GPT-3.5). By contrast, frontier AI labs released many intermediate models between GPT-4 and 5. This may have muted the sense of a single dramatic leap by spreading capability gains over many releases.

Benchmarks can be misleading, especially as we saturate essentially all of them often well ahead of predicted schedules, but the overall picture is not. The mundane utility and user experience jumps across all use cases are similarly dramatic. The original GPT-4 was a modest aid to coding, GPT-5 and Opus 4.1 transform how it is done. Most of the queries I make with GPT-5-Thinking or GPT-5-Pro would not be worth bothering to give to the original GPT-4, or providing the context would not even be possible. So many different features have been improved or added.

This idea, frequently pushed by David Sacks among others, that everyone’s models are about the same and aren’t improving? It simply is not true. Observant regular users are not about to be locked into one model or ecosystem.

Everyone’s models are constantly improving. No one would seriously consider using models from the start of the year for anything but highly esoteric purposes.

The competition is closer than one would have expected. There are three major labs, OpenAI, Anthropic and Google, that each have unique advantages and disadvantages. At various times each have had the best model, and yes currently it is wise to mix up your usage depending on your particular use case.

Those paying attention are always ready to switch models. I’ve switched primary models several times this year alone, usually to a model from a different lab, and tested many others as well. And we must switch models often either way, as everyone’s models are expected to change on the order of every few months, in ways that break the same things that would break if you swapped GPT-5 for Opus or Gemini or vice versa. All of these, one notes, typically run on three distinct sets of chips (Nvidia for GPT-5, Amazon Trainium for Anthropic, Google TPUs for Gemini), yet we barely notice.

Most people notice AI progress much better when it impacts their use cases.

If you are not coding, and not doing interesting math, and instead asking simple things that do not require that much intelligence to answer correctly, then upgrading the AI’s intelligence is not going to improve your satisfaction levels much.

Jack Clark: Five years ago the frontier of LLM math/science capabilities was 3 digit multiplication for GPT-3. Now, frontier LLM math/science capabilities are evaluated through condensed matter physics questions. Anyone who thinks AI is slowing down is fatally miscalibrated.

David Shapiro: As I’ve said before, AI is “slowing down” insofar as most people are not smart enough to benefit from the gains from here on out.

Once you see this framing, you see the contrast everywhere.

Patrick McKenzie: I think a lot of gap between people who “get” LLMs and people who don’t is that some people understand current capabilities to be a floor and some people understand them to be either a ceiling or close enough to a ceiling.

And even if you explain “Look this is *obviously* a floor” some people in group two will deploy folk reasoning about technology to say “I mean technology decays in effectiveness all the time.” (This is not considered an insane POV in all circles.)

And there are some arguments which are persuasive to… people who rate social pressure higher than received evidence of their senses… that technology does actually frequently regress.

For example, “Remember how fast websites were 20 years ago before programmers crufted them up with ads and JavaScript? Now your much more powerful chip can barely keep up. Therefore, technological stagnation and backwards decay is quite common.”

Some people would rate that as a powerful argument. Look, it came directly from someone who knew a related shibboleth, like “JavaScript”, and it gestures in the direction of at least one truth in the observable universe.

Oh the joys of being occasionally called in as the Geek Whisperer for credentialed institutions where group two is high status, and having to titrate how truthful I am about their worldview to get the message across.

As in, it’s basically this graph but for AI:

Here’s another variant of this foolishness, note the correlation to ‘hitting a wall’:

Prem Kumar Aparanji: It’s not merely the DL “hitting a wall” (as @GaryMarcus put it & everybody’s latched on) now as predicted, even the #AI data centres required for all the training, fine-tuning, inferencing of these #GenAI models are also now predicted to be hitting a wall soon.

Quotes from Futurism: For context, Kupperman notes that Netflix brings in just $39 billion in annual revenue from its 300 million subscribers. If AI companies charged Netflix prices for their software, they’d need to field over 3.69 billion paying customers to make a standard profit on data center spending alone — almost half the people on the planet.

“Simply put, at the current trajectory, we’re going to hit a wall, and soon,” he fretted. “There just isn’t enough revenue and there never can be enough revenue. The world just doesn’t have the ability to pay for this much AI.”

Prinz: Let’s assume that AI labs can charge as much as Netflix per month (they currently charge more) and that they’ll never have any enterprise revenue (they already do) and that they won’t be able to get commissions from LLM product recommendations (will happen this year) and that they aren’t investing in biotech companies powered by AI that will soon have drugs in human trial (they already have). How will they ever possibly be profitable?
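As a quick sanity check on the quoted arithmetic (the roughly $480 billion annual revenue figure below is my own back-calculation from the numbers in the quote, not a figure from the article):

```python
# Back-of-envelope check of the quoted Futurism arithmetic.
netflix_revenue = 39e9   # Netflix annual revenue, per the quote
netflix_subs = 300e6     # Netflix subscribers, per the quote
arpu = netflix_revenue / netflix_subs  # revenue per subscriber per year

customers_needed = 3.69e9  # subscribers Kupperman says would be needed
implied_revenue = customers_needed * arpu

print(f"Revenue per subscriber: ${arpu:.0f}/year")
print(f"Implied revenue target: ${implied_revenue / 1e9:.0f}B/year")
```

So the claim amounts to saying AI data center spending implies a revenue need on the order of $480 billion per year at Netflix-level pricing, which is the figure the ‘3.69 billion customers’ framing dramatizes.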

Gary Marcus wrote a guest opinion essay. Things didn’t go great.

That starts with the false title (as always, not entirely up to the author, and it looks like it started out as a better one), dripping with unearned condescension, ‘The Fever Dream of Imminent ‘Superintelligence’ Is Finally Breaking,’ and the opening paragraph in which he claims Altman implied GPT-5 would be AGI.

Here is the lead:

GPT-5, OpenAI’s latest artificial intelligence system, was supposed to be a game changer, the culmination of billions of dollars of investment and nearly three years of work. Sam Altman, the company’s chief executive, implied that GPT-5 could be tantamount to artificial general intelligence, or A.G.I. — A.I. that is as smart and as flexible as any human expert.

Instead, as I have written, the model fell short. Within hours of its release, critics found all kinds of baffling errors: It failed some simple math questions, couldn’t count reliably and sometimes provided absurd answers to old riddles. Like its predecessors, the A.I. model still hallucinates (though at a lower rate) and is plagued by questions around its reliability. Although some people have been impressed, few saw it as a quantum leap, and nobody believed it was A.G.I. Many users asked for the old model back.

GPT-5 is a step forward but nowhere near the A.I. revolution many had expected. That is bad news for the companies and investors who placed substantial bets on the technology.

Did you notice the stock market move in AI stocks, as those bets fell down to Earth when GPT-5 was revealed? No? Neither did I.

The argument above is highly misleading on many fronts.

  1. GPT-5 is not AGI, but this was entirely unsurprising – expectations were set too high, but nothing like that high. Yes, Altman teased that it was possible AGI could arrive relatively soon, but at no point did Altman claim that GPT-5 would be AGI, or that AGI would arrive in 2025. Approximately zero people had median estimates of AGI in 2025 or earlier, although there are some that have estimated the end of 2026, in particular Anthropic (they via Jack Clark continue to say ‘powerful’ AI buildable by end of 2026, not AGI arriving 2026).

  2. The claim that it ‘couldn’t count reliably’ is especially misleading. Of course GPT-5 can count reliably. The evidence here is a single adversarial example. For all practical purposes, if you ask GPT-5 to count something, it will count that thing.

  3. Old riddles is highly misleading. If you give it an actual old riddle it will nail it. What GPT-5 and other models get wrong are, again, adversarial examples that do not exist ‘in the wild’ but are crafted to pattern match well-known other riddles while having a different answer. Why should we care?

  4. GPT-5 still is not fully reliable but this is framed as it being still highly unreliable, when in most circumstances this is not the case. Yes, if you need many 9s of reliability LLMs are not yet for you, but neither are humans.

  5. AI valuations and stocks continue to rise, not fall.

  6. Yes, the fact that OpenAI chose to have GPT-5 not be a scaled up model does tell us that directly scaling up model size alone has ‘lost steam’ in relative terms due to the associated costs, but this is not news; o1 and o3 (and GPT-4.5) told us this as well. We are now working primarily on scaling and improving in other ways, though there are very much still plans to scale up more in the future. In the context of all the other quoted facts about scaled-up models, it seems misleading not to mention to readers that GPT-5 is not scaled up.

  7. Claims here are about failures of GPT-5-Auto or GPT-5-Base, whereas the ‘scaled up’ version of GPT-5 is GPT-5-Pro or at least GPT-5-Thinking.

  8. Gary Marcus clarifies that his actual position is on the order of 8-15 years to AGI, with 2029 being ‘awfully unlikely.’ Which is a highly reasonable timeline, but that seems pretty imminent. That’s crazy soon. That’s something I would want to be betting on heavily, and preparing for at great cost. AGI that soon seems like the most important thing happening in the world right now, if likely true.

    1. The article does not give any particular timeline, and does not imply we will never get to AGI, but I very much doubt those reading the post would come away with the impression that things strictly smarter than people are only about 10 years away. I mean, yowsers, right?

The fact about ‘many users asked for the old model back’ is true, but lacking the important context that what users wanted was the old personality, so it risks giving an uninformed user the wrong impression.

To Gary’s credit, he then does hedge, as I included in the quote, acknowledging GPT-5 is indeed a good model representing a step forward. Except then:

And it demands a rethink of government policies and investments that were built on wildly overinflated expectations.

Um, no? No it doesn’t. That’s silly.

The current strategy of merely making A.I. bigger is deeply flawed — scientifically, economically and politically. Many things, from regulation to research strategy, must be rethought.

As many now see, GPT-5 shows decisively that scaling has lost steam.

Again, no? That’s not the strategy. Not ‘merely’ doing that. Indeed, a lot of the reason GPT-5 was so relatively unimpressive was GPT-5 was not scaled up so much. It was instead optimized for compute efficiency. There is no reason to have to rethink much of anything in response to a model that, as explained above, was pretty much exactly on the relevant trend lines.

I do appreciate this:

Gary Marcus: However, as I warned in a 2022 essay, “Deep Learning Is Hitting a Wall,” so-called scaling laws aren’t physical laws of the universe like gravity but hypotheses based on historical trends.

As in, the ‘hitting the wall’ claim was back in 2022. How did that turn out? Look at GPT-5, look at what we had available in 2022, and tell me we ‘hit a wall.’

What does ‘imminent’ superintelligence mean in this context?

Gary Marcus (NYT): The chances of A.G.I.’s arrival by 2027 now seem remote.

Notice the subtle goalpost move, as AGI ‘by 2027’ means AGI 2026. These people are gloating, in advance, that someone predicted a possibility of privately developed AGI in 2027 (with a median in 2028, in the AI 2027 scenario OpenBrain tells the government but does not release its AGI right away to the public) and then AGI will have not arrived, to the public, in 2026.

According to my sources (Opus 4.1 and GPT-5 Thinking) even ‘remote’ still means on the order of a 2% chance in the next 16 months, implying an 8%-25% chance in 5 years. I don’t agree, but even if one did, that’s hardly something one can safely rule out.
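One way to see where the low end of such a range could come from (my own constant-hazard illustration, not a calculation from the post):

```python
# If 'remote' means roughly a 2% chance per 16-month window, extrapolate
# to 5 years (60 months) assuming a constant hazard rate per window.
p_per_window = 0.02
windows = 60 / 16  # 3.75 windows of 16 months in 5 years

# Probability it happens in at least one window.
p_5yr = 1 - (1 - p_per_window) ** windows
print(f"{p_5yr:.1%}")
```

This lands around 7%, near the low end of the quoted 8%-25% range; the higher end corresponds to assuming the hazard rate rises over time rather than staying flat.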

But then, there’s this interaction on Twitter that clarifies what Gary Marcus meant:

Gary Marcus: Anyone who thinks AGI is impossible: wrong.

Anyone who thinks AGI is imminent: just as wrong.

It’s not that complicated.

Peter Wildeford: what if I think AGI is 4-15 years away?

Gary Marcus: 8-15 and we might reach an agreement. 4 still seems awfully unlikely to me. Too many core cognitive problems aren’t really being addressed, and solutions may take a while to roll out once we find the basic insights we are lacking.

But it’s a fair question.

That’s a highly reasonable position one can take. Awfully unlikely (but thus possible) in four years, likely in 8-15, median timeline of 2036 or so.

Notice that on the timescale of history, 8-15 years until likely AGI, the most important development in the history of history if and when it happens, seems actually kind of imminent and important? That should demand an aggressive policy response focused on what we are going to do when we get there, not be treated as a reason to dismiss the issue.

Imagine saying, in 2015, ‘I think AGI is far away, we’re talking 18-25 years’ and anticipating the looks you would get.

The rest of the essay is a mix of policy suggestions and research direction suggestions. If indeed he is right about research directions, of which I am skeptical, we would still expect to see rapid progress soon as the labs realize this and pivot.

A common tactic among LLM doubters, which was one of the strategies used in the NYT editorial, is to show a counterexample, where a model fails a particular query, and say ‘model can’t do [X]’ or the classic Colin Fraser line of ‘yep it’s dumb.’

Here’s a chef’s kiss example I saw on Monday morning:

I mean, that’s very funny, but it is rather obvious how it happened with the strawberries thing all over Twitter and thus the training data, and it tells us very little about overall performance.

In such situations, we have to differentiate between different procedures, the same as in any other scientific experiment. As in:

Did you try to make it fail, or try to set it up to succeed? Did you choose an adversarial or a typical example? Did you get this the first time you tried it or did you go looking for a failure? Are you saying it ‘can’t [X]’ because it can’t ever do [X], because it can’t ever do [X] out of the box, it can’t reliably do [X], or it can’t perfectly do [X], etc?

If you conflate ‘I can elicit wrong answers on [X] if I try’ with ‘it can’t do [X]’ then the typical reader will have a very poor picture.

Daniel Litt (responding to NYT article by Gary Marcus that says ‘[GPT-5] failed some simple math questions, couldn’t count reliably’): While it’s true one can elicit poor performance on basic math question from frontier models like GPT-5, IMO this kind of thing (in NYTimes) is likely to mislead readers about their math capabilities.

Derya Unutmaz: AI misinformation at the NYT is at its peak. What a piece of crap “newspaper” it has become. It’s not even worth mentioning the author of this article-but y’all can guess. Meanwhile, just last night I posted a biological method invented by GPT-5 Pro, & I have so much more coming!

Ethan Mollick: This is disappointing. Purposefully underselling what models can do is a really bad idea. It is possible to point out that AI is flawed without saying it can’t do math or count – it just isn’t true.

People need to be realistic about capabilities of models to make good decisions.

I think the urge to criticize companies for hype blends into a desire to deeply undersell what models are capable of. Cherry-picking errors is a good way of showing odd limitations to an overenthusiastic Twitter crowd, but not a good way of making people aware that AI is a real factor.

Shakeel: The NYT have published a long piece by Gary Marcus on why GPT-5 shows scaling doesn’t work anymore. At no point does the piece mention that GPT-5 is not a scaled up model.

[He highlights the line from the post, ‘As many now see, GPT-5 shows decisively that scaling has lost steam.’]

Tracing Woods: Gary Marcus is a great demonstration of the power of finding a niche and sticking to it

He had the foresight to set himself up as an “AI is faltering” guy well in advance of the technology advancing faster than virtually anyone predicted, and now he’s the go-to

The thing I find most impressive about Gary Marcus is the way he accurately predicted AI would scale up to an IMO gold performance and then hit a wall (upcoming).

Gary Marcus was not happy about these responses, and doubled down on ‘but you implied it would be scaled up, no takesies backsies.’

Gary Marcus (replying to Shakeel directly): this is intellectually dishonest, at BEST it is at least as big as 4.5 which was intended as 5 which was significantly larger than 4 it is surely scaled up compared to 4 which is what i compared it to.

Shakeel: we know categorically that it is not an OOM scale up vs. GPT-4, so … no. And there’s a ton of evidence that it’s smaller than 4.5.

Gary Marcus (QTing Shakeel): intellectually dishonest reply to my nytimes article.

openai implied repeatedly that GPT-5 was a scaled up model. it is surely scaled up relative to GPT-4.

it is possible – openAI has been closed mouth – that it is same size as 4.5 but 4.5 itself was surely scaled relative to 4, which is what i was comparing with.

amazing that after years of discussion of scaling the new reply is to claim 5 wasn’t scaled at all.

Note that if it wasn’t, contra all the PR, that’s even more reason to think that OpenAI knows damn well that it is time for leaning on (neuro)symbolic tools and that scaling has reached diminishing returns.

JB: It can’t really be the same in parameter count as GPT-4.5; they really struggled serving that, and it was much more expensive to use on the API.

Gary Marcus: so a company valued at $300b that’s raised tens of billions didn’t have the money to scale anymore even though their whole business plan was scaling? what does that tell you?

I am confused how one can claim Shakeel is being intellectually dishonest. His statement is flat out true. Yes, of course the decision not to scale tells us something.

It tells me that they want to scale how much they serve the model and how much they do reasoning at inference time, and that this was the most economical solution for them at the time. JB is right that very, very obviously GPT-4.5 is a bigger model than GPT-5 and it is crazy to not realize this.

A post like this would be incomplete if I failed to address superforecasters.

I’ve been over this several times before, where superforecasters reliably have crazy slow projections for progress and even crazier predictions that when we do make minds smarter than ourselves that is almost certainly not an existential risk.

My coverage of this started way back in AI #14 and AI #9 regarding existential risk estimates, including Tetlock’s response to AI 2027. One common theme in such timeline projections is predicting Nothing Ever Happens even when this particular something has already happened.

Now that the dust has settled on models getting IMO Gold in 2025, it is a good time to look back on the fact that domain experts expected less progress in math than we got, and superforecasters expected a lot less, across the board.

Forecasting Research Institute: Respondents—especially superforecasters—underestimated AI progress.

Participants predicted the state-of-the-art accuracy of ML models on the MATH, MMLU, and QuALITY benchmarks by June 2025.

Domain experts assigned probabilities of 21.4%, 25%, and 43.5% to the achieved outcomes. Superforecasters assigned even lower probabilities: just 9.3%, 7.2%, and 20.1% respectively.

The International Mathematical Olympiad results were even more surprising. AI systems achieved gold-level performance at the IMO in July 2025. Superforecasters assigned this outcome just a 2.3% probability. Domain experts put it at 8.6%.
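One way to put a number on the size of that miss is log loss per question, using the FRI figures quoted above and treating each as a yes/no forecast on an outcome that resolved yes. This is a back-of-the-envelope sketch, not FRI's scoring method:

```python
import math

# Probabilities assigned to outcomes that actually happened (FRI numbers above).
superforecasters = {"MATH": 0.093, "MMLU": 0.072, "QuALITY": 0.201, "IMO gold": 0.023}
domain_experts = {"MATH": 0.214, "MMLU": 0.25, "QuALITY": 0.435, "IMO gold": 0.086}

def avg_log_loss(preds):
    # For an event that resolved "yes", log loss is -ln(p); average across questions.
    return sum(-math.log(p) for p in preds.values()) / len(preds)

print(f"superforecasters: {avg_log_loss(superforecasters):.2f}")  # higher = worse
print(f"domain experts:   {avg_log_loss(domain_experts):.2f}")
```

By this crude measure the superforecasters score materially worse than the domain experts on every one of the four questions, not just the IMO one.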

Garrison Lovely: This makes Yudkowsky and Paul Christiano’s predictions of IMO gold by 2025 look even more prescient (they also predicted it a ~year before this survey was conducted).

Note that even Yudkowsky and Christiano had only modest probability that the IMO would fall as early as 2025.

Andrew Critch: Yeah sorry forecasting fam, ya gotta learn some AI if you wanna forecast anything, because AI affects everything and if ya don’t understand it ya forecast it wrong.

Or, as I put it back in the unrelated-to-AI post Rock is Strong:

Everybody wants a rock. It’s easy to see why. If all you want is an almost always right answer, there are places where they almost always work.

The security guard has an easy to interpret rock because all it has to do is say “NO ROBBERY.” The doctor’s rock is easy too, “YOU’RE FINE, GO HOME.” This one is different, and doesn’t win the competitions even if we agree it’s cheating on tail risks. It’s not a coherent world model.

Still, on the desk of the best superforecaster is a rock that says “NOTHING EVER CHANGES OR IS INTERESTING” as a reminder not to get overexcited, and to not assign super high probabilities to weird things that seem right to them.

Thus:

Daniel Eth: In 2022, superforecasters gave only a 2.3% chance of an AI system achieving an IMO gold by 2025. Yet this wound up happening. AI progress keeps being underestimated by superforecasters.

I feel like superforecasters are underperforming in AI (in this case even compared to domain experts) because two reference classes are clashing:

• steady ~exponential increase in AI

• nothing ever happens.

And for some reason, superforecasters are reaching for the second.

Hindsight is hindsight, and yes you will get a 98th percentile result 2% of the time. But I think at 2.3% for 2025 IMO Gold, you are not serious people.

That doesn’t mean that being serious people was the wise play here. The incentives might well have been to follow the ‘nothing ever happens’ rock. We still have to realize this, as we can indeed smell what the rock is cooking.

A wide range of potential paths of AI progress are possible. There are a lot of data points that should impact the distribution of outcomes, and one must not overreact to any one development. One should especially not overreact to not being blown away by progress for a span of a few months. Consider your baseline that’s causing that.

My timelines for hitting various milestones, including various definitions of AGI, involve a lot of uncertainty. I think not having a lot of uncertainty is a mistake.

I especially think saying either ‘AGI almost certainly won’t happen within 5 years’ or ‘AGI almost certainly will happen within 15 years,’ would be a large mistake. There are so many different unknowns involved.

I can see treating full AGI in 2026 as effectively a Can’t Happen. I don’t think you can extend that even to 2027, although I would lay large odds against it hitting that early.

A wide range of medians seem reasonable to me. I can see defending a median as early as 2028, or one that extends to 2040 or beyond if you think it is likely that anything remotely like current approaches cannot get there. I have not put a lot of effort into picking my own number since the exact value currently lacks high value of information. If you put a gun to my head for a typical AGI definition I’d pick 2031, but with no ‘right to be surprised’ if it showed up in 2028 or didn’t show up for a while. Consider the 2031 number loosely held.

To close out, consider once again: Even if we agreed with Gary Marcus and said 8-15 years, with median 2036? Take a step back and realize how soon and how crazy that is.


Supreme Court Chief Justice lets Trump fire FTC Democrat, at least for now

1935 Supreme Court ruling is key precedent

The key precedent in the case is Humphrey’s Executor v. United States, a 1935 ruling in which the Supreme Court unanimously held that the president can only remove FTC commissioners for inefficiency, neglect of duty, or malfeasance in office. Trump’s termination notices to Slaughter and Bedoya said they were being fired simply because their presence on the commission “is inconsistent with my Administration’s priorities.”

The Trump administration argues that Humphrey’s Executor shouldn’t apply to the current version of the FTC because it exercises significant executive power. But the appeals court, in a 2-1 ruling, said “the present-day Commission exercises the same powers that the Court understood it to have in 1935 when Humphrey’s Executor was decided.”

“The government has no likelihood of success on appeal given controlling and directly on point Supreme Court precedent,” the panel majority said.

But while the government was found to have no likelihood of success in the DC Circuit appeals court, its chances are presumably much better in the Supreme Court. The Supreme Court previously stayed District Court decisions in cases involving Trump’s removal of Democrats from the National Labor Relations Board, the Merit Systems Protection Board, and the Consumer Product Safety Commission.

In a 2020 decision involving the Consumer Financial Protection Bureau, the court said in a footnote that its 1935 “conclusion that the FTC did not exercise executive power has not withstood the test of time.” If the Supreme Court ultimately rules in favor of Trump, it could throw out the Humphrey’s Executor ruling or clarify it in a way that makes it inapplicable to the FTC.

But Humphrey’s Executor is still a binding precedent, Slaughter’s opposition to the administrative stay said. “This Court should not grant an administrative stay where the court below simply ‘follow[ed] the case which directly controls,’ as it was required to do,” the Slaughter filing said.


F1 in Italy: Look what happens when the downforce comes off

That was enough to allow Piastri past. However, the team instructed the championship leader to slow down and relinquish the position to Norris. It was a team mistake, not a driver mistake, and McLaren is doing everything in its power to ensure the eventual champion gets there because of their driving and not some external factor. Piastri didn’t sound exactly happy on the radio. But F1 is a team sport, and racing drivers are employees—when your boss gives you an order, it’s wise to do what they ask and argue about it after the fact, if continued employment is one of your goals.

Oscar Piastri (L) and Lando Norris (R) have a very 21st century relationship. Jakub Porzycki/NurPhoto via Getty Images

For many, a slow pit stop is just one of those things bestowed by the racing gods, and even Verstappen pointed that out when informed by his engineer of the change in positions behind him. After the race, Norris seemed a little embarrassed to have been given the place back, but the emerging consensus from former drivers was that, since Norris had been asked about pit stop priority, and had been undercut anyway, that was sufficient to excuse the request.

McLaren’s approach to handling its drivers is markedly different from the all-out war we saw when Lewis Hamilton and Fernando Alonso raced for it in 2007. Then, neither went home with the big trophy at the end of the year—their infighting allowed Kimi Raikkonen to take the title for Ferrari instead.

That won’t happen this year; either Norris or Piastri will be crowned at the end of the year, with the other having to wait at least another year. The pair have even been asked how they want the team to celebrate in the event the other driver wins—a sensitivity that feels refreshingly new for Formula 1.

Formula 1 heads to Azerbaijan in two weeks for another low-downforce race. Can we expect another Verstappen victory?


What to expect (and not expect) from yet another September Apple event


An all-new iPhone variant, plus a long list of useful (if predictable) upgrades.

Apple’s next product announcement is coming soon. Credit: Apple

Apple’s next product event is happening on September 9, and while the company hasn’t technically dropped any hints about what’s coming, anyone with a working memory and a sense of object permanence can tell you that an Apple event in the month of September means next-generation iPhones.

Apple’s flagship phones have changed in mostly subtle ways since 2022’s iPhone 14 Pro added the Dynamic Island and 2023’s refreshes switched from Lightning to USB-C. Chips get gradually faster, cameras get gradually better, but Apple hasn’t done a seismic iPhone X-style rethinking of its phones since, well, 2017’s iPhone X.

The rumor mill thinks that Apple is working on a foldable iPhone—and such a device would certainly benefit from years of investment in the iPad—but if it’s coming, it probably won’t be this year. That doesn’t mean Apple is totally done iterating on the iPhone X-style design, though. Let’s run down what the most reliable rumors have said we’re getting.

The iPhone 17

Last year’s iPhone 16 Pro bumped the screen sizes from 6.1 and 6.7 inches to 6.3 and 6.9 inches. This year’s iPhone 17 will allegedly get a 6.3-inch screen with a high-refresh-rate ProMotion panel, but the iPhone Plus is said to be going away. Credit: Apple

Apple’s vanilla one-size-fits-most iPhone is always the centerpiece of the lineup, and this year’s iteration is expected to bring the typical batch of gradual iterative upgrades.

The screen will supposedly be the biggest beneficiary, upgrading from 6.1 inches to 6.3 inches (the same size as the current iPhone 16 Pro) and adding a high-refresh-rate ProMotion screen that has typically been reserved for the Pro phones. Apple is always careful not to add too many “Pro”-level features to the entry-level iPhones, but this one is probably overdue—even less-expensive Android phones like the Pixel 9a often ship with 90 Hz or 120 Hz screens at this point. It’s not clear whether that will also enable the always-on display feature that has also historically been exclusive to the iPhone Pro, but the fluidity upgrade will be nice regardless.

Aside from that, there aren’t many specific improvements we’ve seen reported on, but there are plenty we can comfortably guess at. Improved front- and rear-facing cameras and a new Apple A19-series chip with at least the 8GB of RAM needed to support Apple Intelligence are both pretty safe bets.

But there’s one thing we supposedly won’t get, which is a new large-sized iPhone Plus. That brings us to our next rumor.

The “iPhone Air”

For the last few years, every new iPhone launch has actually brought us four iPhones—a regular iPhone in two different sizes and an iPhone Pro with a better camera, better screen, faster chip, and other improvements in a regular size and a large size.

It’s the second size of the regular iPhone that has apparently given Apple some trouble. It made a couple of generations of “iPhone mini,” an attempt to address a small-but-vocal contingent of Phones Are Just Too Big These Days people that apparently didn’t sell well enough to continue making. That was replaced by the iPhone Plus, aimed at people who wanted a bigger screen but who weren’t ready to pay for an iPhone Pro Max.

The Plus phones at least gave the iPhone lineup a nice symmetry—two tiers of phone, with a regular one and a big one at each tier—but rumors suggest that the Plus phone is also going away this year. Like the iPhone mini before it, it apparently just wasn’t selling well enough to be worth the continued effort.

That brings us to this year’s fourth iPhone: Apple is supposedly planning to release an “iPhone Air,” which will weigh less than the regular iPhone and is said to be 5.5 or 6 mm thick, depending on who you ask (the iPhone 16 is 7.8 mm).

A 6.3-inch ProMotion display and A19-series chip are also expected to be a part of the iPhone Air, but rather than try to squeeze every feature of the iPhone 17 into a thinner phone, it sounds like the iPhone 17 Air will cater to people who are willing to give a few things up in the interest of getting a thinner and lighter device. It will reportedly have worse battery life than the regular iPhone and just a single-lens camera setup (though the 48 MP sensors Apple has switched to in recent iPhones do make it easier than it used to be to “fake” optical zoom features).

We don’t know anything about the pricing for any of these phones, but Bloomberg’s Mark Gurman suggests that the iPhone Air will be positioned between the regular iPhone and the iPhone Pro—more like the iPad lineup, where the Air is the mid-tier choice, and less like the Mac, where the Air is the entry-level laptop.

iPhone 17 Pro

Apple’s Pro iPhones are generally “the regular iPhone, but more,” and sometimes they’re “what all iPhones will look like in a couple of years, but available right now for people who will pay more for it.” The new ones seem set to continue in that vein.

The most radical change will apparently be on the back—Apple is said to be switching to an even larger camera array that stretches across the entire top-rear section of the phone, an arrangement you’ll occasionally see in some high-end Android phones (Google’s Pixel 10 is one). That larger camera bump will likely enable a few upgrades, including a switch from a 12 MP sensor for the telephoto zoom lens to a 48 MP sensor. And it will also be part of a more comprehensive metal-and-glass body that’s more of a departure from the glass-backed-slab design Apple has been using since the iPhone 12.

A 48MP telephoto sensor could increase the amount of pseudo-optical zoom that the iPhone can offer. The main iPhones will condense a 48 MP photo down to 12 MP when you’re in the regular shooting mode, binning pixels to improve image quality. For zoomed-in photos, it can just take a 12 MP section out of the middle of the 48 MP image—you lose the benefit of pixel binning, but you’re still getting a “native resolution” photo without blurry digital zoom. With a better sensor, Apple could do exactly the same thing with the telephoto lens.
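The two modes described above can be sketched in a few lines of NumPy. This is an illustration of the general binning-vs-crop idea, not Apple's actual imaging pipeline, and the array is a small stand-in for a real 48 MP frame:

```python
import numpy as np

# Small stand-in for a 48 MP RGB sensor readout (rows, cols, channels).
sensor = np.random.rand(400, 300, 3)

def bin_2x2(img):
    """Average each 2x2 pixel block: full-res frame -> quarter-res, better noise."""
    h, w, c = img.shape
    return img[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

def center_crop_2x(img):
    """Take the central quarter of the frame: a 'native resolution' 2x zoom, no upscaling."""
    h, w, _ = img.shape
    return img[h // 4:3 * h // 4, w // 4:3 * w // 4]

binned = bin_2x2(sensor)         # same framing, one quarter the pixels (48 MP -> 12 MP)
zoomed = center_crop_2x(sensor)  # 2x zoom at the same quarter-pixel output size
```

Both outputs have the same resolution; binning trades pixels for image quality, while the crop trades the binning benefit for zoom without blurry digital upscaling.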

Apple reportedly isn’t planning any changes to screen size this year—still 6.3 inches for the regular Pro and 6.9 inches for the Max. The Pro phones are said to be getting new “A19 Pro” series chips that are superior to the regular A19 processors (though in what way, exactly, we don’t yet know), and Apple could also shrink the amount of screen space dedicated to the Dynamic Island.

New Apple Watches

The Apple Watch Series 10 from 2024. Credit: Apple

New iPhone announcements are usually paired with new Apple Watch announcements, though if anything, the Watch has changed even less than the iPhone has over the last few years.

The Apple Watch Series 11 won’t be getting a screen size increase—the Series 10 bumped things up a smidge just last year, from 41 and 45 mm to 42 and 46 mm. But the screen will apparently have a higher maximum brightness—always useful for outdoor visibility—and there will be a modestly improved Apple S11 chip on the inside.

The entry-level Apple Watch SE is also apparently due for an upgrade. The current second-generation SE still uses an Apple S8 chip, and Apple Watch Series 4-era 40 and 44 mm screens that don’t support always-on operation. In other words, there’s plenty that Apple could upgrade here without cannibalizing sales of the mainstream Series 11 watch.

Finally, after missing out on an update last year, Apple also reportedly plans to deliver a new Apple Watch Ultra, with the larger 46 mm screen from the Series 10/11 watches and the same updated S11 chip as the regular Apple Watch. The current Apple Watch Ultra 2 already has a brighter screen than the Series 10—3,000 nits, up from 2,000—so it’s not clear whether the Apple Watch Ultra 3’s screen would also get brighter or if the Series 11’s screen is just getting a brightness boost to match what the Ultra can do.

Smart home, TV, and audio

Though iPhones and Apple Watches are usually a lock for a September event, other products and accessory updates are also possible.

Of these, the most high-profile is probably a refresh for the Apple TV 4K streaming box, which would be its first update in three years. Rumors suggest that the main upgrade for a new model would be an Apple A17 Pro chip, introduced for the iPhone 15 Pro and also used in the iPad mini 7. The A17 Pro is paired with 8GB of RAM, which makes it Apple’s smallest and cheapest chip that’s capable of Apple Intelligence. Apple hasn’t done anything with Apple Intelligence on the Apple TV directly, but that has partly been because none of the existing hardware is capable of running it.

Also in the “possible but not guaranteed” column: new high-end AirPods Pro, the first-ever internal update to 2020’s HomePod Mini speaker, a new AirTag location tracker, and a straightforward internals-only refresh of the Vision Pro headset. Any, all, or none of these could break cover at the event next week, but Gurman claims they’re all “coming soon.”

New software updates

Devices running Apple’s latest beta operating systems. Credit: Apple

We know most of what there is to know about iOS 26, iPadOS 26, macOS 26, and Apple’s other software updates this year, thanks to a three-month-old WWDC presentation and months of public beta testing. There might be a feature or two exclusive to the newest iPhones, but that sort of thing is usually camera-related and usually pretty minor.

The main thing to expect will be release dates for the final versions of all of the updates. Apple usually releases a near-final release candidate build on the day of the presentation, gives developers a week or so to finalize and submit their updated apps for App Review, and then releases the updates after that. Expect to see them rolled out to everyone sometime the week of September 15th (though an earlier release is always a possibility).

What’s probably not happening

We’d be surprised to see anything related to the Mac or the iPad at the event next week, even though several models are in a window where the timing is about right for an Apple M5 refresh.

Macs and iPads have shared the stage with the iPhone before, but in more recent years, Apple has held these refreshes back for another, smaller event later in October or November. If Apple has new MacBook Pro or iPad Pro models slated for 2025, we’d expect to see them in a month or two.

Andrew is a Senior Technology Reporter at Ars Technica, with a focus on consumer tech including computer hardware and in-depth reviews of operating systems like Windows and macOS. Andrew lives in Philadelphia and co-hosts a weekly book podcast called Overdue.


Ignoring Trump threats, Europe hits Google with 2.95B euro fine for adtech monopoly

Google may have escaped the most serious consequences in its most recent antitrust fight with the US Department of Justice (DOJ), but the European Union is still gunning for the search giant. After a brief delay, the European Commission has announced a substantial 2.95 billion euro ($3.45 billion) fine relating to Google’s anti-competitive advertising practices. This is not Google’s first big fine in the EU, and it probably won’t be the last, but it’s the first time European leaders could face blowback from the US government for going after Big Tech.

The case stems from a complaint made by the European Publishers Council in 2021. The ensuing EU investigation determined that Google illegally preferenced its own ad display services, which made its Google Ad Exchange (AdX) marketplace more important in the European ad space. As a result, the Commission says Google was able to charge higher fees for its service, standing in the way of fair competition since at least 2014.

A $3.45 billion fine would be a staggering amount for most firms, but Google’s earnings have never been higher. In Q2 2025, Google had net earnings of over $28 billion on almost $100 billion in revenue. The European Commission isn’t stopping with financial penalties, though. Google has also been ordered to end its anti-competitive advertising practices and submit a plan for doing so within 60 days.

“Google must now come forward with a serious remedy to address its conflicts of interest, and if it fails to do so, we will not hesitate to impose strong remedies,” said European Commission Executive Vice President Teresa Ribera. “Digital markets exist to serve people and must be grounded in trust and fairness. And when markets fail, public institutions must act to prevent dominant players from abusing their power.”

Europe alleges Google’s control of AdX allowed it to overcharge and stymie competition.

Credit: European Commission

Google will not accept the ruling as it currently stands—company leadership believes that the commission’s decision is wrong, and they plan to appeal. “[The decision] imposes an unjustified fine and requires changes that will hurt thousands of European businesses by making it harder for them to make money,” said Google’s head of regulatory affairs, Lee-Anne Mulholland.

Harsh rhetoric from US

Since returning to the presidency, Donald Trump has taken a renewed interest in defending Big Tech, likely spurred by political support from heavyweights in AI and cryptocurrency. The administration has imposed hefty tariffs on Europe, and Trump recently admonished the EU for plans to place limits on the conduct of US technology firms. That hasn’t stopped the administration from putting US tech through the wringer at home, though. After publicly lambasting Intel’s CEO and threatening to withhold CHIPS and Science Act funding, the company granted the US government a 10 percent ownership stake.


Warner Bros. sues Midjourney to stop AI knockoffs of Batman, Scooby-Doo


AI would’ve gotten away with it too…

Warner Bros. case builds on arguments raised in a Disney/Universal lawsuit.

DVD art for the animated movie Scooby-Doo & Batman: The Brave and the Bold. Credit: Warner Bros. Discovery

Warner Bros. hit Midjourney with a lawsuit Thursday, crafting a complaint that strives to shoot down defenses that the AI company has already raised in a similar lawsuit filed by Disney and Universal Studios earlier this year.

The big film studios have alleged that Midjourney profits off image generation models trained to produce outputs of popular characters. For Disney and Universal, intellectual property rights to pop icons like Darth Vader and the Simpsons were allegedly infringed. And now, the WB complaint defends rights over comic characters like Superman, Wonder Woman, and Batman, as well as characters considered “pillars of pop culture with a lasting impact on generations,” like Scooby-Doo and Bugs Bunny, and modern cartoon characters like Rick and Morty.

“Midjourney brazenly dispenses Warner Bros. Discovery’s intellectual property as if it were its own,” the WB complaint said, accusing Midjourney of allowing subscribers to “pick iconic” copyrighted characters and generate them in “every imaginable scene.”

Planning to seize Midjourney’s profits from allegedly using beloved characters to promote its service, Warner Bros. described Midjourney as “defiant and undeterred” by the Disney/Universal lawsuit. Despite that litigation, WB claimed that Midjourney has recently removed copyright protections in its supposedly shameful ongoing bid for profits. Nothing but a permanent injunction will end Midjourney’s outputs of allegedly “countless infringing images,” WB argued, branding Midjourney’s alleged infringements as “vast, intentional, and unrelenting.”

Examples of closely matching outputs include prompts for “screencaps” showing specific movie frames, a search term that at least one artist, Reid Southen, had optimistically predicted Midjourney would block last year, but it apparently did not.

Here are some examples included in WB’s complaint:

Midjourney’s output for the prompt, “Superman, classic cartoon character, DC comics.”

Midjourney could face devastating financial consequences in a loss. At trial, WB is hoping discovery will show the true extent of Midjourney’s alleged infringement, asking the court for maximum statutory damages, at $150,000 per infringing output. Just 2,000 infringing outputs unearthed could cost Midjourney more than its total revenue for 2024, which was approximately $300 million, the WB complaint said.

Warner Bros. hopes to hobble Midjourney’s best defense

For Midjourney, the WB complaint could potentially hit harder than the Disney/Universal lawsuit. WB’s complaint shows how closely studios are monitoring AI copyright litigation, likely choosing ideal moments to strike when studios feel they can better defend their property. So, while much of WB’s complaint echoes Disney and Universal’s arguments—which Midjourney has already begun defending against—IP attorney Randy McCarthy suggested in statements provided to Ars that WB also looked for seemingly smart ways to potentially overcome some of Midjourney’s best defenses when filing its complaint.

WB likely took note when Midjourney filed its response to the Disney/Universal lawsuit last month, arguing that its system is “trained on billions of publicly available images” and generates images not by retrieving a copy of an image in its database but based on “complex statistical relationships between visual features and words in the text-image pairs are encoded within the model.”

This defense could allow Midjourney to avoid claims that it copied WB images and distributes copies through its models. But hoping to dodge this defense, WB didn’t argue that Midjourney retains copies of its images. Rather, the entertainment giant raised a more nuanced argument that:

Midjourney used software, servers, and other technology to store and fix data associated with Warner Bros. Discovery’s Copyrighted Works in such a manner that those works are thereby embodied in the model, from which Midjourney is then able to generate, reproduce, publicly display, and distribute unlimited “copies” and “derivative works” of Warner Bros. Discovery’s works as defined by the Copyright Act.

McCarthy noted that WB’s argument pushes the court to at least consider that even though “Midjourney does not store copies of the works in its model,” its system “nonetheless accesses the data relating to the works that are stored by Midjourney’s system.”

“This seems to be a very clever way to counter MJ’s ‘statistical pattern analysis’ arguments,” McCarthy said.

If it’s a winning argument, that could give WB a path to wipe Midjourney’s models. WB argued that each time Midjourney provides a “substantially new” version of its image generator, it “repeats this process.” And that ongoing activity—due to Midjourney’s initial allegedly “massive copying” of WB works—allows Midjourney to “further reproduce, publicly display, publicly perform, and distribute image and video outputs that are identical or virtually identical to Warner Bros. Discovery’s Copyrighted Works in response to simple prompts from subscribers.”

Perhaps further strengthening WB’s argument, the lawsuit noted that Midjourney promotes allegedly infringing outputs on its 24/7 YouTube channel and appears to have plans to compete with traditional TV and streaming services. Asking the court to block Midjourney’s outputs instead, WB claims it’s already been “substantially and irreparably harmed” and risks further damages if the AI image generator is left unchecked.

As alleged proof that the AI company knows its tool is being used to infringe WB property, WB pointed to Midjourney’s own Discord server and subreddit, where users post outputs depicting WB characters and share tips to help others do the same. They also called out Midjourney’s “Explore” page, which allows users to drop a WB-referencing output into the prompt field to generate similar images.

“It is hard to imagine copyright infringement that is any more willful than what Midjourney is doing here,” the WB complaint said.

WB and Midjourney did not immediately respond to Ars’ request to comment.

Midjourney slammed for promising “fewer blocked jobs”

McCarthy noted that WB’s legal strategy differs in other ways from the arguments Midjourney has already made in the Disney/Universal lawsuit.

The WB complaint also anticipates Midjourney’s likely defense that users are generating infringing outputs, not Midjourney, which could invalidate any charges of direct copyright infringement.

In the Disney/Universal lawsuit, Midjourney argued that courts have recently found that an AI tool’s referencing of copyrighted works is “a quintessentially transformative fair use,” accusing studios of trying to censor “an instrument for user expression.” The company claims that it cannot know about infringing outputs unless studios use its DMCA process, while noting that subscribers have “any number of legitimate, noninfringing grounds to create images incorporating characters from popular culture,” including “non-commercial fan art, experimentation and ideation, and social commentary and criticism.”

To avoid losing on that front, the WB complaint doesn’t depend on a ruling that Midjourney directly infringed copyrights. Instead, the complaint “more fully” emphasizes how Midjourney may be “secondarily liable for infringement via contributory, inducement and/or vicarious liability by inducing its users to directly infringe,” McCarthy suggested.

Additionally, WB’s complaint “seems to be emphasizing” that Midjourney “allegedly has the technical means to prevent its system from accepting prompts that directly reference copyrighted characters,” and “that would prevent infringing outputs from being displayed,” McCarthy said.

The complaint noted that Midjourney is in full control of what outputs can be generated. Noting that Midjourney temporarily refused to “animate” outputs of WB characters after launching video generation, the lawsuit appears to have been filed in response to Midjourney “deliberately” removing those protections and then announcing that subscribers would experience “fewer blocked jobs.”

Together, these arguments “appear to be intended to lead to the inference that Midjourney is willfully enticing its users to infringe,” McCarthy said.

WB’s complaint details simple user prompts that generate allegedly infringing outputs without any need to manipulate the system. The ease of generating popular characters seems to make Midjourney a destination for users frustrated by other AI image generators that make it harder to generate infringing outputs, WB alleged.

On top of that, Midjourney also infringes copyrights by generating WB characters, “even in response to generic prompts like ‘classic comic book superhero battle.’” And while Midjourney has seemingly taken steps to block WB characters from appearing on its “Explore” page, where users can find inspiration for prompts, these guardrails aren’t perfect, but rather “spotty and suspicious,” WB alleged. Supposedly, searches for correctly spelled character names like “Batman” are blocked, but any user who accidentally or intentionally misspells a character’s name, like “Batma,” can learn an easy way to work around that block.

Additionally, WB alleged, “the outputs often contain extensive nuance and detail, background elements, costumes, and accessories beyond what was specified in the prompt.” And every time that Midjourney outputs an allegedly infringing image, it “also trains on the outputs it has generated,” the lawsuit noted, creating a never-ending cycle of continually enhanced AI fakes of pop icons.

Midjourney could slow down the cycle and “minimize” these allegedly infringing outputs, if it cannot automatically block them all, WB suggested. But instead, “Midjourney has made a calculated and profit-driven decision to offer zero protection for copyright owners even though Midjourney knows about the breathtaking scope of its piracy and copyright infringement,” WB alleged.

Fearing a supposed scheme to replace WB in the market by stealing its best-known characters, WB accused Midjourney of willfully allowing WB characters to be generated in order to “generate more money for Midjourney” to potentially compete in streaming markets.

Midjourney will remove protections “on a whim”

As Midjourney’s efforts to expand its features escalate, WB claimed, trust erodes. Even if Midjourney takes steps to address rightsholders’ concerns, WB argued, studios must remain watchful of every upgrade, since apparently, “Midjourney can and will remove copyright protection measures on a whim.”

The complaint noted that Midjourney just this week announced “plans to continue deploying new versions” of its image generator, promising to make it easier to search for and save popular artists’ styles—updating a feature that many artists loathe.

Without an injunction, Midjourney’s alleged infringement could interfere with WB’s licensing opportunities for its content, while “illegally and unfairly” diverting customers who buy WB products like posters, wall art, prints, and coloring books, the complaint said.

Perhaps Midjourney’s strongest defense could be efforts to prove that WB benefits from its image generator. In the Disney/Universal lawsuit, Midjourney pointed out that studios “benefit from generative AI models,” claiming that “many dozens of Midjourney subscribers are associated with” Disney and Universal corporate email addresses. If WB corporate email addresses are found among subscribers, Midjourney could claim that WB is trying to “have it both ways” by “seeking to profit” from AI tools while preventing Midjourney and its subscribers from doing the same.

McCarthy suggested it’s too soon to say how the WB battle will play out, but Midjourney’s response will reveal how it intends to shift tactics to avoid courts potentially picking apart its defense of its training data, while keeping any blame for copyright-infringing outputs squarely on users.

“As with the Disney/Universal lawsuit, we need to wait to see how Midjourney answers these latest allegations,” McCarthy said. “It is definitely an interesting development that will have widespread implications for many sectors of our society.”


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

Warner Bros. sues Midjourney to stop AI knockoffs of Batman, Scooby-Doo


OpenAI links up with Broadcom to produce its own AI chips

OpenAI is set to produce its own artificial intelligence chip for the first time next year, as the ChatGPT maker attempts to address insatiable demand for computing power and reduce its reliance on chip giant Nvidia.

The chip, co-designed with US semiconductor giant Broadcom, would ship next year, according to multiple people familiar with the partnership.

Broadcom’s chief executive Hock Tan on Thursday referred to a mystery new customer committing to $10 billion in orders.

OpenAI’s move follows the strategy of tech giants such as Google, Amazon and Meta, which have designed their own specialised chips to run AI workloads. The industry has seen huge demand for the computing power to train and run AI models.

OpenAI planned to put the chip to use internally, according to one person close to the project, rather than make it available to external customers.

Last year it began an initial collaboration with Broadcom, according to reports at the time, but the timeline for mass production of a successful chip design had previously been unclear.

On a call with analysts, Tan announced that Broadcom had secured a fourth major customer for its custom AI chip business, as it reported earnings that topped Wall Street estimates.

Broadcom does not disclose the names of these customers, but people familiar with the matter confirmed OpenAI was the new client. Broadcom and OpenAI declined to comment.



Rocket Report: Neutron’s pad opens for business; SpaceX gets Falcon 9 green light

All the news that’s fit to lift

“Nobody’s waving the white flag here until the last hour of the last day.”

Image of a Starlink launch on Falcon 9 this week. Credit: SpaceX

Welcome to Edition 8.09 of the Rocket Report! The biggest news of the week happened inside the Beltway rather than on a launch pad somewhere. In Washington, DC, Congress has pushed back on the Trump administration’s plan to stop flying the Space Launch System rocket after Artemis III. Congress made it clear that it wants to keep the booster in business for a long time. The big question now is whether the Trump White House will blink.

As always, we welcome reader submissions, and if you don’t want to miss an issue, please subscribe using the box below (the form will not appear on AMP-enabled versions of the site). Each report will include information on small-, medium-, and heavy-lift rockets as well as a quick look ahead at the next three launches on the calendar.

Israel launches SAR satellite. The Israel Ministry of Defense, Israel Defense Forces, and Israel Aerospace Industries successfully launched the Ofek 19 satellite on Tuesday from the Palmachim Airbase. The launch was carried out by the country’s solid-propellant Shavit 2 rocket. Ofek 19 is a synthetic aperture radar observation satellite with enhanced capabilities, Israel National News reports.

A unique launch posture … This was the seventh launch of the Shavit-2 vehicle, which made its debut in June 2007. The most recent launch prior to this week occurred in March 2023. Because of its geographic location and difficult relations with surrounding countries, Israel launches its rockets to the west, over the Mediterranean Sea. (submitted by MarkW98)

Canadian launch firm invests in launch site. Earlier this summer, Reaction Dynamics, an Ontario-based launch company, closed a Series A funding round worth $10 million. This will support the next phase of development of the Canadian company’s hybrid propulsion system, with an initial suborbital demonstration flight planned for this winter. Now the company has taken some of this funding and invested in a launch site in Nova Scotia, SpaceQ reports.

Getting in on the ground floor … In a transaction worth $1.2 million, Reaction Dynamics is investing in Maritime Launch Services, which is developing the Spaceport Nova Scotia facility. Reaction Dynamics intends to launch its Aurora-8 rocket from the Canadian launch site. Bachar Elzein, the CEO of Reaction Dynamics, said the move made sense for two reasons. The first is that it secures “a spot to launch our very first orbital rocket,” with Elzein adding, “we believe in their vision,” and thus wanted to invest. The second is all the heavy lifting MLS has done to date to build a spaceport from the ground up. (submitted by JoeyS)


MaiaSpace completes tank tests. French rocket builder and ArianeGroup subsidiary MaiaSpace announced the completion of a monthslong test campaign that subjected several subscale prototypes of its propellant tanks to high-pressure burst tests, European Spaceflight reports. Over the course of six months, the company conducted 15 “burst” tests of subscale propellant tanks. Burst tests push tanks to failure to assess their structural limits and ensure they can safely withstand pressures well beyond normal operating conditions.

Working toward space … The data collected will be used to validate mechanical models that will inform the final design of the full-scale propellant tanks. The tests come as MaiaSpace continues to work toward the debut flight of its Maia rocket, which could take place in 2027 from French Guiana. At present, the company intends the rocket to have a lift capacity of 1.5 metric tons to low-Earth orbit.

Orienspace secures B+ round funding. Chinese commercial rocket company Orienspace has raised tens of millions of dollars in Series B+ financing as it moves towards a key test flight, Space News reports. Orienspace secured funding of between $27 million and $124 million, according to the Chinese-language Taibo Network. The capital will be used mainly for the follow-up development and mass production of the Gravity-2 medium-lift liquid launch vehicle.

Not a small rocket … The company will soon begin comprehensive ground verification tests for the Gravity-2 and is scheduled to carry out its first flight test by the end of this year. In July, Orienspace successfully conducted a hot fire test of a Gravity-2 kerosene-liquid oxygen first-stage engine, including gimbal and valve system evaluations. Gravity-2 is expected to lift on the order of 20 metric tons to low-Earth orbit.

Rocket Lab unveils Neutron launch complex. As Rocket Lab prepares to roll out its new Neutron, the firm recently unveiled the launch complex from which the vehicle will fly, DefenseNews reports. Located within the Virginia Space Authority’s Mid-Atlantic Regional Spaceport in Wallops Island, the facility, dubbed Launch Complex 3, will support testing, launch, and return missions for the reusable rocket. Rocket Lab sees Neutron as a contender to help ease the bottleneck in demand from both commercial and military customers for a ride to space. Today, that demand is largely being met by a single provider in the medium-lift market, SpaceX.

A launch this year? … It sounds unlikely. During the event, Rocket Lab founder Peter Beck said that although he believes the company’s plan to launch this year is within reach, the schedule is aggressive with no margin for error. Speaking with reporters at the launch site, Beck said the company has some key testing in the coming months to qualify key stages of the rocket, which will give it a better idea of whether it can meet that 2025 timeline. “Nobody’s waving the white flag here until the last hour of the last day,” he said. This one is unlikely to break Berger’s Law, however.

SpaceX obtains approval to ramp up Falcon 9 cadence. The Federal Aviation Administration issued a record of decision on Wednesday approving SpaceX’s plan to more than double the number of Falcon 9 launches from Space Launch Complex-40 (SLC-40), the busiest of the company’s four operational launch pads. The FAA concluded that the proposed launch rate “would not significantly impact the quality of the human environment,” Ars reports.

Reaching ludicrous speed … The environmental review paves the way for SpaceX to launch up to 120 Falcon 9 rockets per year from SLC-40, an increase from 50 launches covered in a previous FAA review in 2020. Since then, the FAA has issued SpaceX temporary approval to go beyond 50 launches from SLC-40. For example, SpaceX launched 62 of its overall 132 Falcon 9 flights last year from SLC-40. SpaceX’s goal for this year is 170 Falcon 9 launches, and the company is on pace to come close to this target.

NASA sets date for science mission. NASA said Thursday that a trio of spacecraft to study the Sun will launch no earlier than September 23, on a Falcon 9 rocket. The missions include NASA’s IMAP (Interstellar Mapping and Acceleration Probe), NASA’s Carruthers Geocorona Observatory, and NOAA’s SWFO-L1 (Space Weather Follow On-Lagrange 1) spacecraft. After launching from Kennedy Space Center, the spacecraft will travel together to their destination at the first Earth-Sun Lagrange point (L1), around 1 million miles from Earth toward the Sun.

Fun in the Sun … The missions will each focus on different effects of the solar wind and space weather, from their origins at the Sun to their farthest reaches billions of miles away at the edge of our Solar System. Research and observations from the missions will help us better understand the Sun’s influence on Earth’s habitability, map our home in space, and protect satellites and voyaging astronauts and airline crews from space weather impacts.

Starship’s heat shield shows promise. One of the key issues ahead of last week’s test of SpaceX’s Starship vehicle was the performance of the upper stage heat shield, Ars reports. When the vehicle landed in the Indian Ocean, it had a decidedly orange tint. So what gives? SpaceX founder Elon Musk provided some clarity after the flight, saying, “Worth noting that the heat shield tiles almost entirely stayed attached, so the latest upgrades are looking good! The red color is from some metallic test tiles that oxidized and the white is from insulation of areas where we deliberately removed tiles.”

A step toward the goal … The successful test and additional information from Musk suggest that SpaceX is making progress on developing a heat shield for Starship. This really is the key technology to make an upper stage rapidly reusable—NASA’s space shuttle orbiters were reusable but required a standing army to refurbish the vehicle between flights. To unlock Starship’s potential, SpaceX wants to be able to refly Starships within 24 hours.

Ted Cruz emerges as key SLS defender. All of the original US senators who created and sustained NASA’s Space Launch System rocket over the last 15 years—Bill Nelson, Kay Bailey Hutchison, and Richard Shelby—have either retired or failed to win reelection. However, Ars reports that a new champion has emerged to continue the fight: Texas Republican Ted Cruz. As part of its fiscal year 2026 budget, the White House sought to end funding for the Space Launch System rocket after the Artemis III mission, and also cancel the Lunar Gateway, an orbital space station that provides a destination for the rocket.

Money for future missions … However, Cruz subsequently crafted a NASA provision tacked onto President Trump’s “One Big, Beautiful Bill.” The Cruz addendum provided $6.7 billion in funding for two additional SLS missions, Artemis IV and Artemis V, and to continue Gateway construction. In several hearings this year, Cruz has made it clear that his priorities for human spaceflight are to beat China back to the Moon and maintain a presence there. However, it is now increasingly clear that he views this as only being possible through continued use of NASA’s SLS rocket.

SpaceX seeks to solve Starship prop demands. If SpaceX is going to fly Starships as often as it wants to, it’s going to take more than rockets and launch pads. Tanker trucks have traditionally delivered rocket propellant to launch pads at America’s busiest spaceports in Florida and California. SpaceX has used the same method of bringing propellant for the first several years of operations at Starbase. But a reusable Starship’s scale dwarfs that of other rockets. It stands more than 400 feet tall, with a capacity for more than a million gallons of super-cold liquid methane and liquid oxygen propellants.

That’s a lot of gas … SpaceX also uses large quantities of liquid nitrogen to chill and purge the propellant loading system for Starship. It takes more than 200 tanker trucks traveling from distant refineries to deliver all of the methane, liquid oxygen, and liquid nitrogen for a Starship launch. SpaceX officials recognize this is not an efficient means of conveying these commodities to the launch pad. It takes time, emits pollution, and clogs roadways. SpaceX’s solution to some of these problems is to build its own plants to generate cryogenic fluids. In a new report, Ars explains how the company plans to do this.

Next three launches

September 5: Falcon 9 | Starlink 10-57 | Kennedy Space Center, Florida | 11:29 UTC

September 5: Ceres 1 | Unknown payload | Jiuquan Satellite Launch Center, China | 11:35 UTC

September 6: Falcon 9 | Starlink 17-9 | Vandenberg Space Force Base, California | 15:45 UTC


Eric Berger is the senior space editor at Ars Technica, covering everything from astronomy to private space to NASA policy, and author of two books: Liftoff, about the rise of SpaceX; and Reentry, on the development of the Falcon 9 rocket and Dragon. A certified meteorologist, Eric lives in Houston.



Philips introduces budget-friendly Hue bulbs as part of major lineup overhaul

The standard Hue bulbs are also getting an upgrade. The new models can dim to as low as 0.2 percent (compared to 2 percent for the Essential bulbs), and Philips claims they can produce the full range of visible color temperatures (from 1,000 to 20,000 K, compared to the 2,200 to 6,500 K range of the Essential bulbs).

The next-most-important addition to the ecosystem is probably the new Hue Bridge Pro, the Hue ecosystem’s first new bridge accessory in a decade. In addition to being faster, the new $99 bridge supports “150+ lights” and “50+ accessories,” at least triple the number supported by the previous Bridge (which is still sticking around at $66 as a lower-cost accessory for smaller setups). Users with multiple Hue bridges will eventually be able to replace them all with a single Bridge Pro, but Hueblog reports that this capability won’t be available until “later this year.”

Other new Hue products include indoor and outdoor strip lights, including cheaper models under the new Essential brand umbrella; Festavia-branded outdoor string lights; and a $170 Hue Secure video doorbell. The new hub and most of the new bulbs are being launched in North America starting today, but the doorbell won’t launch until October, and many of the strip and string lights will be released in November or December. Signify’s press release has specific pricing and availability information for all the accessories.



Lull in Falcon Heavy missions opens window for SpaceX to build new landing pads

SpaceX’s goal for this year is 170 Falcon 9 launches, and the company is on pace to come close to this target. Most Falcon 9 launches carry SpaceX’s own Starlink broadband satellites into orbit. The FAA’s environmental approval opens the door for more flights from SpaceX’s busiest launch pad.

But launch pad availability is not the only hurdle limiting how many Falcon 9 flights can take off in a year. There’s also the rate of production for Falcon 9 upper stages, which are new on each flight, and the time it takes for each vessel in SpaceX’s fleet of drone ships (one in California, two in Florida) to return to port with a recovered booster and redeploy back to sea again for the next mission. SpaceX lands Falcon 9 boosters on offshore drone ships after most of its launches and only brings the rocket back to an onshore landing on missions carrying lighter payloads to orbit.

When a Falcon 9 booster does return to landing on land, it targets one of SpaceX’s recovery zones at military-run spaceports in Florida and California. SpaceX’s landing zone at Vandenberg Space Force Base in California is close to the Falcon 9 launch pad there.

The Space Force wants SpaceX, and potentially other future reusable rocket companies, to replicate the side-by-side launch and landing pads at Cape Canaveral.

To do that, the FAA also gave the green light Wednesday for SpaceX to construct and operate a new rocket landing zone at SLC-40 and conduct up to 34 first-stage booster landings there each year. The landing zone will consist of a 280-foot diameter concrete pad surrounded by a 60-foot-wide gravel apron. The landing zone’s broadest diameter, including the apron, will measure 400 feet.

The location of SpaceX’s new rocket landing pad is shown with the red circle, approximately 1,000 feet northeast of the Falcon 9 rocket’s launch pad at Space Launch Complex-40. Credit: Google Maps/Ars Technica

SpaceX is in an earlier phase of planning for a Falcon landing pad at historic Launch Complex-39A at NASA’s Kennedy Space Center, just a few miles north of SLC-40. SpaceX uses LC-39A as a launch pad for most Falcon 9 crew launches, all Falcon Heavy missions, and, in the future, flights of the company’s gigantic next-generation rocket, Starship. SpaceX foresees Starship as a replacement for Falcon 9 and Falcon Heavy, but the company’s continuing investment in Falcon-related infrastructure shows the workhorse rocket will stick around for a while.



FCC chair teams up with Ted Cruz to block Wi-Fi hotspots for schoolkids

“Chairman Carr’s moves today are very unfortunate as they further signal that the Commission is no longer prioritizing closing the digital divide,” Schwartzman said. “In the 21st Century, education doesn’t stop when a student leaves school and today’s actions could lead to many students having a tougher time completing homework assignments because their families lack Internet access.”

Biden FCC expanded school and library program

Under then-Chairwoman Jessica Rosenworcel, the FCC expanded its E-Rate program in 2024 to let schools and libraries use Universal Service funding to lend out Wi-Fi hotspots and services that could be used off-premises. The FCC previously distributed Wi-Fi hotspots and other Internet access technology under pandemic-related spending authorized by Congress in 2021, but that program ended. The new hotspot lending program was supposed to begin this year.

Carr argues that when the Congressionally approved program ended, the FCC lost its authority to fund Wi-Fi hotspots for use outside of schools and libraries. “I dissented from both decisions at the time, and I am now pleased to circulate these two items, which will end the FCC’s illegal funding [of] unsupervised screen time for young kids,” he said.

Under Rosenworcel, the FCC said the Communications Act gives it “broad and flexible authority to establish rules governing the equipment and services that will be supported for eligible schools and libraries, as well as to design the specific mechanisms of support.”

The E-Rate program can continue providing telecom services to schools and libraries despite the hotspot component being axed. E-Rate disbursed about $1.75 billion in 2024, but could spend more based on demand because it has a funding cap of about $5 billion per year. E-Rate and other Universal Service programs are paid for through fees imposed on phone companies, which typically pass the cost on to consumers.
