Author name: Mike M.


150 million-year-old pterosaur cold case has finally been solved

Smyth thinks that so few adults show up in the fossil record in this region not only because they were more likely to survive, but also because those that couldn’t were not buried as quickly. Carcasses would float on the water anywhere from days to weeks. As they decomposed, parts would fall to the lagoon bottom. Juveniles were small enough to be swept under and buried quickly by sediments that would preserve them.

Cause of death

The humerus fractures found in Lucky I and Lucky II were especially significant because forelimb injuries are the most common among existing flying vertebrates. The humerus attaches the wing to the body and bears most flight stress, which makes it more prone to trauma. Most humerus fractures happen in flight as opposed to being the result of a sudden impact with a tree or cliff. And these fractures were the only skeletal trauma seen in any of the juvenile pterosaur specimens from Solnhofen.

Evidence suggesting the injuries to the two fledgling pterosaurs happened before death includes the displacement of bones while the animals were still in flight (a pattern recognizable from storm deaths of extant birds and bats) and the smooth edges of the breaks, which occur in life, as opposed to the jagged edges of postmortem breaks. There were also no visible signs of healing.

Storms disproportionately affected flying creatures at Solnhofen, which were often taken down by intense winds. Many of Solnhofen’s fossilized vertebrates were pterosaurs and other winged species such as the bird ancestor Archaeopteryx. Flying invertebrates were also doomed.

Even marine invertebrates and fish were threatened by storm conditions, which churned the lagoons and brought deep waters with higher salt levels and low oxygen to the surface. Anything that sank to the bottom was exceptionally preserved because of these same conditions, which were too harsh for scavengers and paused decomposition. Mud kicked up by the storms also helped with the fossilization process by quickly covering these organisms and providing further protection from the elements.

“The same storm events responsible for the burial of these individuals also transported the pterosaurs into the lagoonal basins and were likely the primary cause of their injury and death,” Smyth concluded.

Although Lucky I and Lucky II were decidedly unlucky, the exquisite preservation of their skeletons, which shows how they died, has finally allowed researchers to solve a case that went cold 150 million years ago.

Current Biology, 2025. DOI: 10.1016/j.cub.2025.08.006



The current war on science, and who’s behind it


A vaccine developer and a climate scientist walk into a bar and write a book.

Fighting anti-science misinformation can feel like fighting a climate-driven wildfire. Credit: Anadolu

We’re about a quarter of the way through the 21st century.

Summers across the global north are now defined by flash floods, droughts, heat waves, uncontainable wildfires, and intensifying named storms, exactly as predicted by Exxon scientists back in the 1970s. The United States secretary of health and human services advocates against using the most effective tool we have to fight the infectious diseases that have ravaged humanity for millennia. People are eagerly lapping up the misinformation spewed and disseminated by AI chatbots, which are only just getting started.

It is against this backdrop that a climate scientist and a vaccine developer teamed up to write Science Under Siege. It is about as grim as you’d expect.

Michael Mann is a climate scientist at the University of Pennsylvania who, in 1998, developed the famous hockey stick graph, which demonstrated that global surface temperatures were roughly flat until around the year 1900, when they started rising precipitously (and have not stopped). Peter Hotez is a microbiologist and pediatrician at Baylor College of Medicine whose group developed a low-cost, patent-free COVID-19 vaccine using public funds (i.e., not from a pharmaceutical company) and distributed it to almost a hundred million people in India and Indonesia.

Unlikely crusaders

Neither of them anticipated becoming crusaders for their respective fields—and neither probably anticipated that their respective fields would ever actually need crusaders. But they each have taken on the challenge, and they’ve been rewarded for their trouble with condemnation and harassment from Congress and death threats from the public they are trying to serve. In this book, they hope to take what they’ve learned as scientists and science communicators in our current world and parlay that into a call to arms.

Mann and Hotez have more in common than being pilloried all over the internet. Although they trained in disparate disciplines, their fields are now converging (as if they weren’t each threatening enough on their own). Climate change is altering the habitats, migrations, and reproductive patterns of pathogen-bearing wildlife like bats, mosquitoes, and other insects. It is causing the migration of humans as well. Our increasing proximity to these species in both space and time can increase the opportunities for us to catch diseases from them.

Yet Mann and Hotez insist that a third scourge is even more dangerous than these two combined. In their words:

It is currently impossible for global leaders to take the urgent actions necessary to respond to the climate crisis and pandemic threats because they are thwarted by a common enemy—antiscience—that is politically and ideologically motivated opposition to any science that threatens powerful special interests and their political agendas. Unless we find a way to overcome antiscience, humankind will face its gravest threat yet—the collapse of civilization as we know it.

And they point to an obvious culprit: “There is, unquestionably, a coordinated, concerted attack on science by today’s Republican Party.”

They’ve helpfully sorted “the five principal forces of antiscience” into alliterative groups: (1) plutocrats and their political action committees, (2) petrostates and their politicians and polluters, (3) fake and venal professionals—physicians and professors, (4) propagandists, especially those with podcasts, and (5) the press. The general tactic is that (1) and (2) hire (3) to generate deceitful and inflammatory talking points, which are then disseminated by all-too-willing members of (4) and (5).

There is obviously a lot of overlap among these categories; Elon Musk, Vladimir Putin, Rupert Murdoch, and Donald Trump can all jump between a number of these bins. As such, the ideas and arguments presented in the book are somewhat redundant, as are the words used. Far too many things are deemed “ironic” (e.g., the same people who deny and dismiss the notion of human-caused climate change claimed that Democrats generated hurricanes Helene and Milton to target red states in October 2024) or “risible” (see Robert F. Kennedy Jr.’s claim that Dr. Peter Hotez sought to make it a felony to criticize Anthony Fauci).

A long history

Antiscience propaganda has been used by authoritarians for over a century. Stalin imprisoned physicists and attacked geneticists while famously enacting the nonsensical agricultural ideas of Trofim Lysenko, who thought genes were a “bourgeois invention.” This led to the starvation of millions of people in the Soviet Union and China.

Why go after science? The scientific method is the best means we have of discovering how our Universe works, and it has been used to reveal otherwise unimaginable facets of reality. Scientists are generally thought of as authorities possessing high levels of knowledge, integrity, and impartiality. Discrediting science and scientists is thus an essential first step for authoritarian regimes to then discredit any other types of learning and truth and destabilize their societies.

The authors trace the antiscience messaging on COVID, which followed precisely the same arc as that on climate change except condensed into a matter of months instead of decades. The trajectory started by maintaining that the threat was not real. When that was no longer tenable, it quickly morphed into “OK, this is happening, and it may actually get pretty bad for some subset of people, but we should definitely not take collective action to address it because that would be bad for the economy.”

It finally culminated in preying upon people’s understandable fears in these very scary times by claiming that this is all the fault of scientists who are trying to take away your freedom, be that bodily autonomy and the ability to hang out with your loved ones (COVID) or your plastic straws, hamburgers, and SUVs (climate change).

This mis- and disinformation has prevented us from dealing with either catastrophe by misleading people about the seriousness, or even existence, of the threats and/or harping on their hopeless nature, sapping us of the will to do anything to counter them. These tactics also sow division among people, practically ensuring that we won’t band together to take the kind of collective action essential to addressing enormous, complex problems. It is all quite effective. Mann and Hotez conclude that “the future of humankind and the health of our planet now depend on surmounting the dark forces of antiscience.”

Why, you might wonder, would the plutocrats, polluters, and politicians of the Republican Party be so intent on undermining science and scientists, lying to the public, fearmongering, and stoking hatred among their constituents? The same reason as always: to hold onto their money and power. The means to that end is thwarting regulations. Yes, it’s nefarious, but also so disappointingly… banal.

The authors are definitely preaching exclusively to the converted. They are understandably angry at what has been done to them and somewhat mocking of those who don’t see things their way. They end by trying to galvanize their followers into taking action to reverse the current course.

They advise that the best—really, the only—thing we can do now to effect change is to vote and hope for favorable legislation. “Only political change, including massive turnout to support politicians who favor people over plutocrats, can ultimately solve this larger systemic problem,” they write. But since our president and vice president don’t even believe in or acknowledge “systemic problems,” the future is not looking too bright.



Sinclair gets nothing it asked for, puts Jimmy Kimmel back on anyway

Conservative broadcaster Sinclair is putting Jimmy Kimmel Live! back on the air. In a statement today, Sinclair said it will end its preemption of the show on its ABC affiliates starting tonight, even though ABC and owner Disney haven’t accepted its request for an ombudsman and other changes.

Facing the threat of lost advertising dollars, Sinclair said it “received thoughtful feedback from viewers, advertisers, and community leaders representing a wide range of perspectives.” Nexstar separately announced an end to its blackout of Kimmel shortly after this article was published.

Sinclair said its decision to preempt Kimmel “was independent of any government interaction or influence.” Sinclair’s preemption of Kimmel last week came just as Federal Communications Commission Chairman Brendan Carr said TV station owners that didn’t preempt the show could lose their FCC licenses.

Sinclair last week said it wouldn’t air Kimmel on its stations “until formal discussions are held with ABC regarding the network’s commitment to professionalism and accountability.” Sinclair at the time praised Carr for his stance against Kimmel and urged the FCC to “take immediate regulatory action to address control held over local broadcasters by the big national networks.”

Sinclair also announced it would air a special in remembrance of Charlie Kirk in Kimmel’s time slot, but then decided to put it on YouTube instead.

Ombudsman and other requests “not yet adopted”

Sinclair said it didn’t get anything it asked for in its discussions with ABC. The company’s statement today said:

In our ongoing and constructive discussions with ABC, Sinclair proposed measures to strengthen accountability, viewer feedback, and community dialogue, including a network-wide independent ombudsman. These proposals were suggested as collaborative efforts between the ABC affiliates and the ABC network. While ABC and Disney have not yet adopted these measures, and Sinclair respects their right to make those decisions under our network affiliate agreements, we believe such measures could strengthen trust and accountability.

Our decision to preempt this program was independent of any government interaction or influence. Free speech provides broadcasters with the right to exercise judgment as to the content on their local stations. While we understand that not everyone will agree with our decisions about programming, it is simply inconsistent to champion free speech while demanding that broadcasters air specific content.

Sinclair’s request for an ombudsman is reminiscent of Carr requiring an ombudsman at CBS in exchange for a merger approval. Carr described the CBS ombudsman as a “bias monitor.”



Apple demands EU repeal the Digital Markets Act

In June, the company announced changes to its app store policy in an attempt to avoid being further penalized by Brussels.

Apple argues the bloc’s digital rules have made it harder to do business in Europe and worsened consumers’ experience.

In a post on Thursday, the company said the DMA was leaving European consumers with fewer choices and creating an unfair competitive landscape—contrary to the law’s own goals.

For example, Apple said it had had to delay certain features, such as live translation via its AirPods, to make sure they complied with the DMA’s requirement for “interoperability.” The EU rules specify that apps and devices made by one company need to work with those made by competitors.

“Despite our concerns with the DMA, teams across Apple are spending thousands of hours to bring new features to the European Union while meeting the law’s requirements. But it’s become clear that we can’t solve every problem the DMA creates,” the company said.

A European Commission spokesperson said it was normal that companies sometimes “need more time to make their products compliant” and that the commission was helping companies to do so.

The spokesperson also said that “DMA compliance is not optional, it’s an obligation.”

© 2025 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.



Astra’s Chris Kemp woke up one recent morning and chose violence

SpaceX

Kemp generally praises SpaceX for leading the way with iterative design and founder Elon Musk’s willingness to fail publicly in order to move fast. However, in seeking to appeal to interns, he suggested that Astra offered a better working environment than SpaceX’s Starbase factory in South Texas.

“It’s more fun than SpaceX, because we’re not on the border of Mexico where they’ll chop your head off if you accidentally take a left turn,” he said. “And you don’t have to live in a trailer. And we don’t make you work six and a half days a week, 12 hours a day. It’s appreciated if you do, but not required.”

For the record, no SpaceX interns have been beheaded. And honestly, Chris, that is just a really crass thing to say.

Rocket Lab

Kemp’s oldest rival in the launch industry is Rocket Lab and its founder, Peter Beck. This was especially apparent in a recent documentary that covered the rise of both Astra and Rocket Lab, called Wild Wild West. Kemp did not take any direct shots at Beck during his Berkeley speech.

However, in the late 2010s, both Astra and Rocket Lab were racing to develop small-lift rockets capable of lifting dozens to a few hundred kilograms to orbit: Rocket 3 and Electron, respectively. In hindsight, Kemp said, these rockets were not large enough to serve the market for satellites. There just were not enough CubeSats to go around.

“That little rocket is too small,” Kemp said in Berkeley about Rocket 3. “And so is Electron.”

A size comparison between Rocket 3, right, and Rocket 4. Credit: Astra

Electron may be small, but it has launched more than 70 times. It could generate as much as $200 million in revenue for Rocket Lab this year. And it has provided an excellent test bed for Rocket Lab as it seeks to build the much larger Neutron vehicle, with a reusable first stage.

Overall, Kemp’s talk is insightful, offering thoughtful commentary on Astra’s history and vision for the future. The company is a startup again, now focusing on building a mobile, tactical rocket that could serve national defense interests. Instead of focusing on reuse, the company wants to build a lot of rockets cheaply. It has built a large factory in California to accomplish this.

Also, after nine years in the launch industry, Kemp seems to have finally learned an important lesson about rockets: reliability matters.

“Rocket 3 was the cowboy rocket,” he said, noting the company has worked hard to improve its practices and manufacturing to build vehicles that won’t fail anymore. “The big idea was, you can’t get to scale without reliability.”



Anti-vaccine allies cheer as Trump claims shots have “too much liquid”


Why babies don’t pop like water balloons when they get vaccines—and other info for Trump.

President Donald Trump, flanked by senior health officials, speaks during a news conference on September 22, 2025 inside the Roosevelt Room at The White House in Washington. Credit: Getty | Tom Brenner

When the bar is set at suggesting that people inject bleach into their veins, it’s hard to reach a new low. But in a deranged press event on autism Monday evening, President Trump seemed to go for it—sharing “rumors” and his “strong feelings” not just on Tylenol but also his bonkers views on childhood vaccines.

Trump was there with his health secretary, anti-vaccine activist Robert F. Kennedy Jr., to link autism to the use of Tylenol (acetaminophen) during pregnancy. While medical experts condemn the claim as unproven and dangerous (which it is), Kennedy’s anti-vaccine followers decried it as a distraction from their favored false and dangerous explanation—that vaccines cause autism (which they don’t).

Pinning the blame on Tylenol instead of vaccines enraged Kennedy’s own anti-vaccine organization, Children’s Health Defense. In the run-up to the event Monday evening, CHD retweeted an all-caps defense of Tylenol, and CHD President Mary Holland called the announcement a “sideshow” in an interview with Steve Bannon.

But fear not. The rift was short-lived, as their big feelings were soothed mere minutes into Monday’s event. After smearing Tylenol, the president’s unscripted remarks quickly veered into an incoherent rant linking vaccines to autism as well.

At one point in his comments, he rattled off a list of anti-vaccine activists’ most vilified vaccine components (mercury and aluminum). But his attack largely ignored the content of vaccines and instead surprisingly focused on volume. Overall, his comments were incoherent, but again and again, he seemed to swirl back to this bizarre concern.

Wut?

If you piece together Trump’s sentence- and thought-fragments, his comments created a horrifying picture of what he thinks childhood vaccinations look like:

They pump so much stuff into those beautiful little babies. It’s a disgrace. I don’t see it. I think it is very bad. They’re pumping. It looks like they’re pumping into a horse. You have a little child, little fragile child, and you get a vat of 80 different vaccines, I guess, 80 different blends and they pump it in.

It seemed that Trump’s personal solution to this imagined problem is to space out and delay vaccines so they are not given at one time:

Break it up because it’s too much liquid. Too many different things are going into that baby at too big a number. The size of this thing, when you look at it, it’s like 80 different vaccines and beyond vaccines and 80. Then you give that to a little kid.

From Trump’s loony descriptions, you might be imagining an evil cartoon doctor wielding a bazooka-sized syringe and cackling maniacally while injecting a baby with a vat’s worth of 80 different vaccines until it inflates like a water balloon ready to burst.

But this cuckoo take is not how childhood vaccinations go in routine well-baby doctor’s visits. First, most vaccines have a volume of 0.5 milliliters, which is about a tenth of a teaspoon. And babies and children do not get 80 different vaccines ever, let alone at one time. In fact, no recommendations would see anyone get 80 different types of vaccines cumulatively.

By age 18, it’s recommended that people get vaccinated against 17 diseases, including seasonal flu and COVID-19. And some vaccines are combination shots, knocking out three or four diseases with one injection, such as the measles, mumps, and rubella (MMR) vaccine or the diphtheria, tetanus, and acellular pertussis (DTaP) vaccine. And again, even those combination shots are 0.5 mL total.

Modern vaccines

Trump’s claim of 80 vaccines doesn’t even stand up when you count vaccine doses rather than different vaccines. Some childhood vaccines require multiple doses—MMR is given in two doses, and DTaP is a five-dose series, for example. According to current recommendations, by age 18, kids should have 36 vaccine doses against childhood diseases. If you add in a flu shot every year, that’s 54 doses. If you add in a COVID-19 vaccine every year, that’s 72.
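The running totals above follow from simple addition. A quick sketch using the article's numbers (the dose counts are those cited here, not an official schedule):

```python
# Tallying recommended vaccine doses by age 18, using the figures
# cited in the article. Illustrative arithmetic, not medical guidance.
routine_doses = 36   # multi-dose series against childhood diseases (e.g., 2x MMR, 5x DTaP)
annual_flu = 18      # one flu shot per year from birth through age 18
annual_covid = 18    # one COVID-19 shot per year over the same span

with_flu = routine_doses + annual_flu
with_flu_and_covid = with_flu + annual_covid
print(with_flu)            # 54
print(with_flu_and_covid)  # 72
```

Even the largest total works out to an average of four shots per year, most of them 0.5 mL each, which is the point the article makes next.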

While 72 might seem like a big number, again, that’s spread out over 18 years and includes seasonal shots. And medical experts point to another key fact—the vaccines that children get today are much more streamlined and efficient than vaccines of yore. A helpful myth-busting info sheet from experts with Yale’s School of Public Health points out that in the mid-1980s, children under age 2 were vaccinated against seven diseases, but those old-school vaccines included more than 3,000 germ components that can spur immune responses (aka antigens). Today, children under age 2 get vaccinated against 15 diseases, but today’s more sophisticated vaccine designs include just 180 antigens, making the protection more targeted and reducing the risk of errant immune responses.

In all, the facts should dash any worries of nefarious doctors inflating children with vast volumes of noxious concoctions. But for those who may hew closely to the precautionary principle, Trump’s “space the shots out” plan may still seem reasonable. It’s not.

At most, children might get five or six vaccines at one time. But again, the number of antigens in those shots is far lower than those in vaccines children received decades ago. And the number of antigens in those vaccines is just a fraction of the number kids are exposed to every day just from their environments. If you’ve ever watched a kindergartener touch every surface and object in a classroom and then shove their fingers in their nose and mouth, you understand the point.

Vaccinations don’t overwhelm children’s immune systems. And there’s no evidence that spacing them out avoids any of the very small risks they pose.

Data against dogma

After Trump shared his personal feelings about vaccines, the American Academy of Pediatrics rushed to release a statement, first refuting any link between vaccines and autism and then warning against spacing out vaccine doses.

“Pediatricians know firsthand that children’s immune systems perform better after vaccination against serious, contagious diseases like polio, measles, whooping cough, and hepatitis B,” the AAP said. “Spacing out or delaying vaccines means children will not have immunity against these diseases at times when they are most at risk.”

Such messages make no impact on the impervious dogma of anti-vaccine activists, of course. While medical experts and organizations like AAP scrambled to combat the misinformation and assure pregnant people and parents that Tylenol was still safe and vaccines don’t cause autism, anti-vaccine activists cheered Trump’s comments.

“We knew today was going to be about acetaminophen,” CHD President Mary Holland said, speaking on Bannon’s podcast again after the event. “We didn’t know if he’d touch on vaccines—and he was all over it. It was an amazing, amazing speech.

“I’m happy to say he basically gave parents permission not to vaccinate their kids—and definitely not to take Tylenol.”

In a new pop-up message on Tylenol’s website, the maker of the common pain reliever and fever reducer pushed back on Trump’s feelings.

Tylenol is one of the most studied medications in history–and is safe when used as directed by expecting mothers, infants, and children.

The facts remain unchanged: over a decade of rigorous research, endorsed by leading medical professionals, confirm there is no credible evidence linking acetaminophen to autism.

The same is true for vaccines.


Beth is Ars Technica’s Senior Health Reporter. Beth has a Ph.D. in microbiology from the University of North Carolina at Chapel Hill and attended the Science Communication program at the University of California, Santa Cruz. She specializes in covering infectious diseases, public health, and microbes.



More Reactions to If Anyone Builds It, Everyone Dies

Previously I shared various reactions to If Anyone Builds It, Everyone Dies, along with my own highly positive review.

Reactions continued to pour in, including several impactful ones. Here’s more.

Any further reactions will face a higher bar and be included in weekly posts, unless one is very high quality and raises important new arguments.

IABIED gets to #8 in Apple Books nonfiction and #2 in the UK. It has 4.8 on Amazon and 4.3 on Goodreads. The New York Times bestseller list lags by two weeks, so we don’t know the results there yet, but it is expected to make it.

David Karsten suggests you read the book, while noting he is biased. He reminds us that, as with any other book, most conversations you have about it will be with people who did not read it.

David Manheim: “An invaluable primer on one side of the critical and underappreciated current debate between those dedicated to building this new technology, and those who have long warned against it. If you haven’t already bought the book, you really should.”

Kelsey Piper offers a (gated) review of IABIED, noting that the correct focus is on whether the title’s claim is true.

Steven Byrnes reviews the book and says he agrees with ~90% of it, disagreeing with some of the reasoning steps but emphasizing that the conclusions are overdetermined.

Michael Nielsen gives the book five stars on Goodreads and recommends it, as it offers a large and important set of largely correct arguments, with the caveat that he sees significant holes in the central argument. His actual top objection is that even if we do manage to get a controlled and compliant ASI, that is still extremely destabilizing at best and fatal at worst. I agree with him this is not a minor quibble, and I worry about that scenario a lot, whereas the book’s authors seem to consider it a happy problem they’d love to have. If anything it makes the book’s objections to ‘building it’ even stronger. He notes he does not have a good solution.

Romy Mireo thought the book was great for those thinking the problem through, but would have preferred the book offer a more streamlined way for a regular person to help, in a Beware Trivial Inconveniences kind of way. The book attempts to provide this in its online resources.

Nomads Vagabonds (in effect) endorses the first two parts of the book, but strongly opposes the policy asks in part 3 as unreasonable, arguing that if you are alarmed you need to make compromises that ‘help you win,’ that if you think the apocalypse is coming you should only propose things inside the Overton Window, and that ‘even a 1% reduction in risk is worth pursuing.’

That makes sense if and only if solutions inside the Overton Window substantially improve your chances or outlook, and thus the opportunity outweighs the opportunity cost. The reason (as Nomads quotes) Ezra Klein says Democrats should compromise on various issues if facing what they see as a crisis, is that this greatly raises the value of ‘win the election with a compromise candidate’ relative to losing. There’s tons of value to protect even if you give up ground on some issues.

Whereas the perspective of Yudkowsky and Soares is that measures ‘within the Overton Window’ matter very little in terms of prospective outcomes. So you’d much rather take a chance on convincing people to do what would actually work. It’s a math problem, including the possibility that busting the Overton Window helps the world achieve other actions you aren’t yourself advocating for.

The central metaphor Nomads uses is protecting against dragons, where the watchman insists upon only very strong measures and rejects cheaper ones entirely. Well, that depends on if the cheaper solutions actually protect you from dragons.

If you believe – as I do! – that lesser changes can make a bigger difference, then you should say so, and try to achieve those lesser changes while also noting you would support the larger ask. If you believe – as the authors do – that the lesser changes can’t make much difference, then you say that instead.

I would also note that yes, there are indeed many central historical examples of asking for things outside the Overton Window being the ultimately successful approach to creating massive social change. This ranges from the very good (such as abolishing slavery) to the very bad (such as imposing Communism or Fascism).

People disagree a lot on whether the book is well or poorly written, in various ways, at various different points in the book. Kelsey Piper thinks the writing in the first half is weak and the second half is stronger, whereas Timothy Lee (a skeptic of the book’s central arguments) thought the book was surprisingly well-written and the first few chapters were his favorites.

Peter Wildeford goes over the book’s central claims, finding them overconfident and warning about the downsides and costs of shutting AI development down. He’s a lot ‘less doomy’ than the book on many fronts. I’d consider this his central takeaway:

Peter Wildeford: Personally, I’m very optimistic about AGI/ASI and the future we can create with it, but if you’re not at least a little ‘doomer’ about this, you’re not getting it. You need profound optimism to build a future, but also a healthy dose of paranoia to make sure we survive it. I’m worried we haven’t built enough of this paranoia yet, and while Yudkowsky’s and Soares’s book is very depressing, I find it to be a much-needed missing piece.

This seems like a reasonable take.

Than Ruthenis finds the book weaker than he’d hoped: he finds section 3 strong but the first two weak. Darren McKee finds it a decent book, but is sad it was not better written and did not address some issues, and thus does not see it as an excellent book.

Buck is a strong fan of the first two sections and liked the book far more than he expected, calling them the best available explanation of the basic AI misalignment risk case for a general audience. He caveats that the book does not address counterarguments Buck thinks are critical, and that he would tell people to skip section 3. Buck’s caveats come down to expecting important changes between the world today and the world where ASI is developed, changes that potentially alter whether everyone would probably die. I don’t see him say what he expects these differences to be? And the book does hold out the potential for us to change course, and indeed to find such changes; it simply says this won’t happen soon or by default, which seems probably correct to me.

Nostream addresses the book’s arguments, agreeing that existential risk is present but making several arguments that the probability is much lower, they estimate 10%-20%, with the claim that if you buy any of the ‘things are different than the book says’ arguments, that would be sufficient to lower your risk estimate. This feels like a form of the conjunction fallacy fallacy, where there is a particular dangerous chain and thus breaking the chain at any point breaks a lot of the danger and returns us to things probably being fine.

The post focuses on the threat model of ‘an AI turns on humanity per se,’ treating that as load bearing, which it isn’t, and treats the ‘alignment’ of current models as meaningful in ways I think are clearly wrong, and in general tries to draw too many conclusions from the nature of current LLMs. I consider all the arguments here, in the forms expressed, to be not new, and to be well-answered by the book’s underlying arguments, although not always in a form as easy to pick up on as one might hope in the book text alone (the book is non-technical and length-limited, and people understandably have cached thoughts and anchor on examples).

So overall I wished the post was better and made stronger arguments, including stronger forms of its arguments, but this is the right approach to take of laying out specific arguments and objections, including laying out up front that their own view includes unacceptably high levels of existential risk and also dystopia risk or catastrophic risk short of that. If I believed what Nostream believed I’d be in favor of not building the damn thing for a while, if there was a way to not build it for a while.

Gary Marcus offered his (gated) review, which he kindly pointed out I had missed in Wednesday’s roundup. He calls the book deeply flawed, but with much that is instructive and worth heeding.

Despite the review’s flaws and incorporation of several common misunderstandings, this was quite a good review, because Gary Marcus focuses on what is important, and his reaction to the book mirrors my reaction to his review. He notices that his disagreements with the book, while important and frustrating, should not be allowed to interfere with the central premise, which is the important thing to consider.

He starts off with this list of key points where he agrees with the authors:

  1. Rogue AI is a possibility that we should not ignore. We don’t know for sure what future AI will do and we cannot rule out the possibility that it will go rogue.

  2. We currently have no solution to the “alignment problem” of making sure that machines behave in human-compatible ways.

  3. Figuring out solutions to the alignment problem is really, really important.

  4. Figuring out a solution to the alignment problem is really, really hard.

  5. Superintelligence might come relatively soon, and it could be dangerous.

  6. Superintelligence could be more consequential than any other trend.

  7. Governments should be more concerned.

  8. The short-term benefits of AI (eg in terms of economics and productivity) may not be worth the long-term risks.

Noteworthy, none of this means that the title of the book — If Anyone Builds It, Everyone Dies — literally means everyone dies. Things are worrying, but not nearly as worrying as the authors would have you believe.

It is, however, important to understand that Yudkowsky and Soares are not kidding about the title.

Gary Marcus does a good job here of laying out the part of the argument that should be uncontroversial. I think all eight points are clearly true as stated.

Gary Marcus: Specifically, the central argument of the book is as follows:

Premise 1. Superintelligence will inevitably come — and when it does, it will inevitably be smarter than us.

Premise 2. Any AI that is smarter than any human will inevitably seek to eliminate all humans.

Premise 3. There is nothing that humans can do to stop this threat, aside from not building superintelligence in the first place.

The good news [is that the second and third] premisses are not nearly as firm as the authors would have it.

I’d importantly quibble here, most of all on the second premise, although it’s good to be precise everywhere. My version of the book’s argument would be:

  1. Superintelligence is a real possibility that could happen soon.

  2. If built with anything like current techniques, any sufficiently superintelligent AI will effectively maximize some goal and seek to rearrange the atoms to that end, in ways unlikely to result in the continued existence of humans.

  3. Once such a superintelligent AI is built, if we messed up, it will be too late to stop this from happening. So for now we have to not build such an AI.

He mostly agrees with the first premise and has a conservative but reasonable estimation of how soon it might arrive.

For premise two, the quibble matters because Gary’s argument focuses on whether the AI will have malice, pointing out existing AIs have not shown malice. Whereas the book does not see this as requiring the AI to have malice, nor does it expect malice. It expects merely indifference, or simply a more important priority, the same way humans destroy many things we are indifferent to, or that we actively like but are in the way. For any given specified optimization target, given sufficient optimization power, the chance the best solution involves humans is very low. This has not been an issue with past AIs because they lacked sufficient optimization power, so such solutions were not viable.

This is a very common misinterpretation, both of the authors and the underlying problems. The classic form of the explanation is ‘The AI does not love you, the AI does not hate you, but you are composed of atoms it can use for something else.’

His objection to the third premise is that in a conflict, the ASI’s victory over humans seems possible but not inevitable. He brings a standard form of this objection, including quoting Moltke’s famous ‘no plan survives contact with the enemy’ and modeling this as the AI getting to make some sort of attack, which might not work, after which we can fight back. I’ve covered this style of objection many times, comprising a substantial percentage of words written in weekly updates, and am very confident it is wrong. The book also addresses it at length.

I will simply note that in context, even if on this point you disagree with me and the book, and agree with Gary Marcus, it does not change the conclusion, so long as you agree (as Marcus does) that such an ASI has a large chance of winning. That seems sufficient to say ‘well then we definitely should not build it.’ Similarly, while I disagree with his assumption we would unite and suddenly act ruthlessly once we knew the ASI was against us, I don’t see the need to argue that point.

Marcus does not care for the story in Part 2, or the way the authors use colloquial language to describe the AI in that story.

I’d highlight another common misunderstanding, which is important enough that the response bears repeating:

Gary Marcus: A separate error is statistical. Over and again, the authors describe scenarios that multiply out improbabilities: AIs decide to build biological weapons; they blackmail everyone in their way; everyone accepts that blackmail; the AIs do all this somehow undetected by the authorities; and so on. But when you string together a bunch of improbabilities, you wind up with really long odds.

The authors are very much not doing this. This is not ‘steps in the chain’ where the only danger is that the AI succeeds at every step, whereas if it ever fails we are safe. They bend over backwards, in many places, to describe how the AI uses forking paths, gaming out and attempting different actions, having backup options, planning for many moves not succeeding and so on.

The scenario also does not presuppose that no one realizes or suspects, in broad terms, what is happening. They are careful to say this is one way things might play out, and any given particular story you tell must necessarily involve a lot of distinct things happening.

If you think that at the first sign of trouble all the server farms get turned off? I do not believe you are paying enough attention to the world in 2025. Sorry, no, not even if there wasn’t an active AI effort to prevent this, or an AI predicting what actions would cause what reactions, and choosing the path that is most likely to work, which is one of the many benefits of superintelligence.

Several people have pointed out that the actual absurdity is that the AI in the story has to work hard and use the virus to justify it taking over. In the real world, we will probably put it in charge ourselves, without the AI even having to push us to do so. But in order to be more convincing, the story makes the AI’s life harder here and in many other places than it actually would be.

Gary Marcus closes with discussion of alternative solutions to building AIs that he wished the book explored more, including alternatives to LLMs. I’d say that this was beyond the scope of the book, and that ‘stop everyone from racing ahead with LLMs’ would be a likely necessary first step before we can pursue such avenues in earnest. Thus I won’t go into details here, other than to note that ‘axioms such as ‘avoid causing harm to humans’’ are known to not work, which was indeed consistently the entire point of Asimov’s robot novels and other stories, where he explores some but not all of the reasons why this is the case.

Again, I very much appreciated this review, which focused on what actually matters and clearly states what Marcus actually believes. More like this, please.

John Pressman agrees with most of the book’s individual statements and choices on how to present arguments, but disagrees with the thesis and many editorial and rhetorical choices. Ultimately his verdict is that the book is ‘okay.’ He states up top as ‘obvious’ many ‘huge if true’ statements about the world that I do not think are correct and definitely are not obvious. There’s also a lot of personal animus going on here, and has been for years. I see two central objections to the book from Pressman:

  1. If you use parables then You Are Not Serious People, as in ‘Bluntly: A real urgent threat that demands attention does not begin with Once Upon a Time.’

    1. Which he agrees is a style issue.

    2. There are trade-offs here. It is hard for someone like Pressman to appreciate the challenges of engaging with average people who know nothing about these issues, or of fending off the various stupid objections those people have (objections John mostly agrees are deeply stupid).

  2. He thinks the alignment problems involved are much easier than Yudkowsky and Soares believe, including that we have ‘solved the human values loading problem,’ although he agrees that we are still very much on track to fail bigly.

    1. I think (both here and elsewhere where he goes into more detail) he both greatly overstates his case and also deliberately presents the case in a hostile and esoteric manner. That makes engagement unnecessarily difficult.

    2. I also think there’s important real things here and in related clusters of arguments, and my point estimate of difficulty is lower than the book’s, largely for reasons that are related to what Pressman is attempting to say.

    3. As Pressman points out, even if he’s fully right, it looks pretty grim anyway.

Aella (to be fair a biased source here) offers a kind of meta-review.

Aella: man writes book to warn the public about asteroid heading towards earth. his fellow scientists publish thinkpieces with stuff like ‘well his book wasn’t very good. I’m not arguing about the trajectory but he never addresses my objection that the asteroid is actually 20% slower’

I think I’d feel a lot better if more reviews started with “first off, I am also very worried about the asteroid and am glad someone is trying to get this to the public. I recognize I, a niche enthusiast, am not the target of this book, which is the general public and policy makers. Regardless of how well I think the book accomplishes its mission, it’s important we all band together and take this very seriously. That being said, here’s my thoughts/criticisms/etc.”

Raymond Arnold responds to various complaints about the book, especially:

  1. Claims the book is exaggerating. It isn’t.

    1. There is even an advertising campaign whose slogan is ‘we wish we were exaggerating,’ and I verify they do wish this.

    2. It is highly reasonable to describe the claims as overstated or false, indeed in some places I agree with you. But as Kelsey says, focus on: What is true?

  2. Claims the authors are overconfident, including claims this is bad for credibility.

    1. I agree that the authors are indeed overconfident. I hope I’ve made that clear.

    2. However I think these are reasonable mistakes in context, that the epistemic standards here are much higher than those of most critics, and also that people should say what they believe, and that the book is careful not to rely upon this overconfidence in its arguments.

    3. Fears about credibility or respectability, I believe, motivate many attacks on the book. I would urge everyone attacking the book for this reason (beyond noting the specific worry) to stop and take a long, hard look in the mirror.

  3. Complaints that it sucks that ‘extreme’ or eye-catching claims will be more visible, whereas claims one thinks are more true get less visibility and discussion.

    1. Yeah, sorry, world works the way it does.

    2. The eye-catching claims open up the discussion, where you can then make clear you disagree with the full claim but endorse a related other claim.

    3. As Raymond says, this is a good thing to do: if you think the book is wrong about something, write and say why and how you think about it.

He provides extensive responses to the arguments and complaints he has seen, using his own framings, in the extended thread.

Then he fleshes out his full thoughts in a longer LessWrong post, The Title Is Reasonable, that lays out these questions at length. It also contains some good comment discussion, including by Nate Soares.

I agree with him that it is a reasonable strategy to have those who make big asks outside the Overton Window because they believe those asks are necessary, and also those who make more modest asks because they feel those are valuable and achievable. I also agree with him that this is a classic often successful strategy.

Yes, people will try to tar all opposition with the most extreme view and extreme ask. You see it all the time in anything political, and there are plenty of people doing this already. But it is not obvious this works, and in any case it is priced in. One can always find such a target, or simply lie about one or make one up. And indeed, this is the path a16z and other similar bad faith actors have chosen time and again.

James Miller: I’m an academic who has written a lot of journalistic articles. If Anyone Builds It has a writing style designed for the general public, not LessWrong, and that is a good thing and a source of complaints against the book.

David Manheim notes that the book is non-technical by design and does not attempt to bring readers up to speed on the last decade of literature. As David notes, the book reflects on the new information but is not in any position to cover or discuss all of that in detail.

Dan Schwartz offers arguments for using evolution-based analogies for deep learning.

Clara Collier of Asterisk notes that She Is Not The Target for the book, so it is hard for her to evaluate its effectiveness on its intended audience, but she is disappointed in what she calls the authors’ failures to update.

I would turn this back at Clara here, and say she seems to have failed to update her priors on what the authors are saying. I found her response very disappointing.

I don’t agree with Rob Bensinger’s view that this was one of the lowest-quality reviews. It’s more that it was the most disappointing given the usual quality and epistemic standards of the source. The worst reviews were, as one would expect, from sources that are reliably terrible in similar spots, but that is easy to shrug off.

  1. The book very explicitly does not depend on foom, on there being only one AI, or on the other elements she says it depends on. Indeed, in the book’s example story the AI does not foom. It feels like arguing with a phantom.

    1. Also see Max Harms responding on this here, where he lays out that foom is not a ‘key plank’ in the MIRI story of why AI is dangerous.

    2. He also points out that there being multiple similarly powerful AIs by default makes the situation more dangerous rather than less dangerous, including in AI 2027, because it reduces margin for error and ability to invest in safety. If you’ve done a tabletop exercise based on AI 2027, this becomes even clearer.

  2. Her first charge is that a lot of AI safety people disagree with the MIRI perspective on the problem, so the subtext of the book is that the authors must think all those other people are idiots. This seems like a mix of an argument for epistemic modesty and a claim that Yudkowsky and Soares haven’t considered these other opinions properly, and are disrespecting those who hold them? I would push back strongly on all of that.

  3. She raises the objection that the book has an extreme ask that ‘distracts’ from other asks, a term also used by MacAskill. Yes, the book asks for an extreme thing that I don’t endorse, but that’s what they think is necessary, so they should say so.

  4. She asks why the book doesn’t spend more time explaining why an intelligence explosion is likely to occur. The answer is that the book is explicitly arguing a conditional, what happens if it does occur, and acknowledges that it may or may not occur, or occur on any given time frame. She also raises the ‘but progress has been continuous which argues against an intelligence explosion’ argument, except continuous progress does not argue against a future explosion. Extend curves.

  5. She objects that the authors reach the same conclusions about LLMs that they previously reached about other AI systems. Yes, they do. But, she says, these things are different. Yes, they are, but not in ways that change the answer, and the book (and supplementary material, and their other writings) explain why they believe this. Reasonable people can disagree. I don’t think, from reading Clara’s objections along these lines, that she understands the central arguments being made by the authors (over the last 20 years) on these points.

  6. She says if progress will be ‘slow and continuous’ we will have more than one shot on the goal. For sufficiently generous definitions of slow and continuous this might be true, but there has been a lot of confusion about ‘slow’ versus ‘fast.’ Current observed levels of ‘slow’ are still remarkably fast, and effectively mean we only get one shot, although we are getting tons of very salient and clear warning shots and signs before we take that one shot. Of course, we’re ignoring the signs.

  7. She objects that ‘a future full of flourishing people is not the best, most efficient way to fulfill strange alien purposes’ is stated as a priori obvious. But it is very much a priori obvious; I will bite this bullet and die on this hill.

There are additional places one can quibble; another good response is the full post from Max Harms, which includes many additional substantive criticisms. One key additional clarification is that the book is not claiming that we will fail to learn a lot about AI from studying current AI, only that this ‘a lot’ will be nothing like sufficient. It can be difficult to reconcile ‘we will learn a lot of useful things’ with ‘we will predictably learn nothing like enough of the necessary things.’

Clara responded in the comments, and stands by the claim that the book’s claims require a major discontinuity in capabilities, and that gradualism would imply multiple meaningfully distinct shots on goal.

Jeffrey Ladish also has a good response to some of Collier’s claims, which he finishes like this:

Jeffrey Ladish: I do think you’re nearby to a criticism that I would make about Eliezer’s views / potential failures to update: Which is the idea that the gap between a village idiot and Einstein is small and we’ll blow through it quite fast. I think this was an understandable view at the time and has turned out to be quite wrong. And an implication of this is that we might be able to use agents that are not yet strongly superhuman to help us with interpretability / alignment research / other useful stuff to help us survive.

Anyway, I appreciate you publishing your thoughts here, but I wanted to comment because I didn’t feel like you passed the authors’ ITT, and that surprised me.

Peter Wildeford: I agree that the focus on FOOM in this review felt like a large distraction and missed the point of the book.

The fact that we do get a meaningful amount of time with AIs one could think of as between village idiots and Einsteins is indeed a major source of hope, although I understand why Soares and Yudkowsky do not see as much hope here, and that would be a good place to poke. The gap argument was still more correct than incorrect, in the sense that we will likely only get a period of years in that window rather than decades, and most people are making various forms of the opposite mistake and not understanding that above-Einstein levels of intelligence are Coming Soon. But years or even months can do a lot for you if you use them well.

Will MacAskill offers a negative review, criticizing the arguments and especially parallels to evolution as quite bad, although he praises the book for plainly saying what its authors actually believe, and for laying out ways in which the tech we saw was surprising, and for the quality of the analogies.

I found Will’s review quite disappointing but unsurprising. Here are my responses:

  1. The ones about evolution parallels seem robustly answered by the book and also repeatedly elsewhere by many.

  2. Will claims the book is relying on a discontinuity of capability. It isn’t. They are very clear that it isn’t. The example story containing a very mild form of [X] does not mean one’s argument relies on [X], although it seems impossible for there not to be some amount of jumps in capability, and we have indeed seen such jumps.

  3. The ones about ‘types of misalignment’ seem at best deeply confused. I think he’s saying humans will stay in control because AIs will be risk averse, so they’ll be happy to settle for a salary rather than take over, and thus make this overwhelmingly pitiful deal with us? Whereas the fact that imperfect alignment is indeed catastrophic misalignment in the context of a superhuman AI is the entire thesis of the book, covered extensively, in ways Will doesn’t engage with here?

    1. Intuition pump that might help: Are humans risk averse?

  4. Criticism of the proposal as unlikely to happen. Well, not with that attitude. If one thinks that this is what it takes, one should say so. If not, not.

  5. Criticism of the proposal as unnecessary or even unhelpful, and ‘distracting’ from other things we could do. Standard arguments here. Not much to say, other than to note that the piecemeal ban proposals he suggests don’t technically work.

  6. Criticism of the use of fiction and parables ‘as a distraction.’ Okie dokie.

  7. Claim that the authors should have updated more in light of developments in ML. This is a reasonable argument one can make, but people (including Clara and also Tyler below) who say that the book’s arguments are ‘outdated’ are simply incorrect. The arguments and authors do take such information into account, and have updated in some ways, but do not believe the new developments change the central arguments or likely future ultimate path. And they explain why. You are encouraged to disagree with their reasoning if you find it wanting.

Zack Robinson of Anthropic’s Long Term Benefit Trust has a quite poor Twitter thread attacking the book, following the MacAskill style principle of attacking those whose tactics and messages are not cooperating with the EA-brand-approved messages designed to seek movement growth, respectability and power and donations, which in my culture we call a strategy of instrumental convergence.

The particular arguments in the thread are quite bad and mischaracterize the book. He says the book presents doom as a foregone conclusion, which is not how contingent predictions work. He uses pure modesty and respectability arguments for ‘accepting uncertainty’ in order to ‘leave room for AI’s transformative benefits,’ which is simply a non-sequitur. He warns this ‘blinds us to other serious risks,’ the ‘your cause of us all not dying is a distraction from the real risks’ argument, which again simply is not true; there is no conflict here. His statement in this linked Tweet about the policy debate being only a binary false choice is itself clearly false, and he must know this.

The whole thing is in what reads as workshopped corporate speak.

The responses to the thread are very on point, and Rob Bensinger in particular pulled out a very important point of emphasis.

Like Buck and those who responded expressing their disappointment, I find this thread unbecoming of someone on the LTBT, and as long as he remains there I can no longer consider him a meaningful voice for existential risk worries there, which substantially lowers my estimate of Anthropic’s likely behaviors.

I join several respondents in questioning whether Zack has read the book, a question he did not respond to. One can even ask if he has logically parsed its title.

Similarly, given he is the CEO of the Center for Effective Altruism, I must adjust there, as well, especially as this is part of a pattern of similar strategies.

I think this line is telling in multiple ways:

Zack Robinson: I grew up participating in debate, so I know the importance of confidence. But there are two types: epistemic (based on evidence) and social (based on delivery). Despite being expressed in self-assured language, the evidence for imminent existential risk is far from airtight.

  1. Zack is debating in this thread, as in trying to make his position win and get ahead via his arguments, rather than trying to improve our epistemics.

  2. Contrary to the literal text, Zack is looking at the social implications of appearing confident, and disapproving, rather than primarily challenging the epistemics. Indeed, the only arguments he makes against confidence here are from modesty:

    1. Argument from consequences: Thinking confidently causes you to get dismissed (maybe) or to ignore other risks (no) and whatnot.

    2. Argument from consensus: Others believe the risk is lower. Okie dokie. Noted.

Tyler Cowen does not offer a review directly, instead in an assorted links post strategically linking in a dismissive (of the book that he does not name) fashion to curated negative reviews and comments, including MacAskill and Collier above.

As you would expect, many others not named in this post are trotting out all the usual Obvious Nonsense arguments about why superintelligence would definitely turn out fine for the humans, such as that ASIs would ‘freely choose to love humans.’ Do not be distracted by noise.

Peter Wildeford: They made the movie but in real life.

Simeon: To be fair on the image they’re not laughing etc.

Jacques: tbf, he did not ask, “but what does this mean for jobs?”

Will: You know it’s getting real when Yud has conceded the issue of hats.



H1-B And The $100k Fee

The Trump Administration is attempting to put a $100k fee on future H1-B applications, including those that are exempt from the lottery and cap, unless of course they choose to waive it for you. I say attempting because Trump’s legal ability to do this appears dubious.

This post mostly covers the question of whether this is a reasonable policy poorly implemented, or a terrible policy poorly implemented.

Details offered have been contradictory, with many very confused about what is happening, including those inside the administration. The wording of the Executive Order alarmingly suggested that any H1-B visa holders outside the country even one day later would be subject to the fee, causing mad scrambles until this was ‘clarified.’ There was chaos.

Those so-called ‘clarifications’ citing ‘facts’ gaslit the public, changing key aspects of the policy, and I am still not confident about what the Trump administration will actually do here, exactly.

Trump announced he is going to charge a $100,000 fee for H-1B visas, echoing his suspension of such visas back in 2020 (although a court overruled him on this, and is likely to overrule the new fee as well). It remains remarkable that some, such as those like Jason and the All-In Podcast, believed or claimed to believe that Trump would be a changed man on this.

But what did this announcement actually mean? Annual or one-time? Existing ones or only new ones? Lottery only or also others? On each re-entry to the country? What the hell is actually happening?

I put it that way because, in addition to the constant threat that Trump will alter the deal and the suggestion that you pray he does not alter it any further, there was a period where no one could even be confident what the announced policy was, and the written text of the executive order did not line up with what administration officials were saying. It is entirely plausible that they themselves did not know which policy they were implementing, or different people in the administration had different ideas about what it was.

It would indeed be wise not to trust the exact text of the Executive Order, on its own, to be reflective of future policy in such a spot. Nor would it be wise to trust other statements that contradict the order. It is chaos, and it is expensive chaos.

The exact text is full of paranoid zero-sum logic. It treats it as a problem that we have 2.5 million foreign STEM workers in America, rather than the obvious blessing that it is. It actively complains that the unemployment rate in ‘computer occupations’ has risen from 1.98% to (gasp!) 3.02%. Even if we think that has zero to do with AI, or the business cycle or interest rates, isn’t that pretty low? Instead, ‘a company hired H1-B workers and also laid off other workers’ is considered evidence of widespread abuse of the system, when there is no reason to presume these actions are related, or that these firms reduced the number of American jobs.

As I read the document, in 1a it says entry for such workers is restricted ‘except for those aliens whose petitions are accompanied or supplemented by a payment of $100k,’ which certainly sounds like it applies to existing H1-Bs. Then in section 3 it clarifies that this restriction applies after the date of this proclamation – so actual zero days of warning – and declines another clear opportunity to clarify that this does not apply to existing H1-B visas.

I don’t know why anyone reading this document would believe that it would not apply to existing H1-B visa holders attempting re-entry. GPT-5 Pro agrees with this and my other interpretations (which I made on my own first).

Distinctly, 1b then says they won’t consider applications that don’t come with a payment. Note also that the payment looks to be due on application, not on acceptance. Imagine paying $100k and then getting your application refused.

There is then the ability in 1c for the Secretary of Homeland Security to waive the fee, setting up an obvious opportunity for playing favorites and for outright corruption, the government picking winners and losers. Why pay $100k to the government when you can instead pay $50k to someone else? What happens when this decision is then used as widespread leverage?

Justin Wolfers: Critical part of the President’s new $100,000 charge for H1-B visas: The Administration can also offer a $100,000 discount to any person, company, or industry that it wants. Replacing rules with arbitrary discretion.

Want visas? You know who to call and who to flatter.

Here’s some people being understandably very confused.

Sheel Mohnot (7.6m views): $100k fee for H-1B’s is reasonable imo [link goes to Bloomberg story that said it was per application].

Generates revenue & reduces abuse for the H-1B program and talented folks can still come via other programs.

Sheel Mohnot (8.1m views): I initially thought the H-1B reform was reasonable but it’s $100k per year, not per visa.

That is too much, will definitely have a negative effect on innovation in the US. It’s basically only for people who make >$400k now.

Luba Lesiva: It’s per grant and per renewal – but are renewals actually annual? I thought it was every 3 years

Sheel Mohnot: Lutnick says per year but the text says per grant and wouldn’t apply to renewals.

Garrett Jones: They keep Lutnick out of the loop.

Daniel: It’s fun that this executive order was so poorly written that members of the administration are publicly disagreeing about what it means.

Trump EOs are like koans. Meditate on them and you will suddenly achieve enlightenment.

Here’s some people acting highly sensibly in response to what was happening, even if they had closely read the Executive Order:

Gabbar Singh: A flight from US to India. People boarded the flight. Found out about the H1B news. Immediately disembarked fearing they won’t be allowed re-entry.

Quoted Text Message: Experiencing immediate effect of the H1B rukes. Boarded Emirates flight to Dubai/ Mumbai. Flight departure time 5.05 pm. Was held up because a passenger or family disembarked and they had to remove checked in luggage. Then looked ready to leave and another passenger disembarks. Another 20 minutes and saw about 10–15 passengers disembarking. Worried about reentry. Flight still on ground after 2 hours. Announcement from cabin crew saying if you wish to disembark please do so now. Cabin is like an indian train. Crazy.

Rohan Paul: From Microsoft to Facebook, all tech majors ask H-1B visa holders to return back to the US in less than 24 hours.

Starting tomorrow (Sunday), H1B holders can’t reenter the US without paying $100K.

The number of last-minute flight bookings to the US have seen massive spike.

4chan users are blocking India–US flights by holding tickets at checkout so Indian H1B holders can’t book before the deadline. 😯

Typed Female: the people on here deriving joy from these type of posts are very sick

James Blunt: Very sad.

Giving these travelers even a grace period until October 1st would have been the human thing to do.

Instead, people are disembarking from flights out of fear they won’t be let back into the country they’ve lived, worked, and paid taxes in for years.

Cruelty isn’t policy, it’s just cruelty.

Any remotely competent drafter of the Executive Order, or anyone with access to a good LLM to ask obvious questions about likely response, would have known the chaos and harm this would cause. One can only assume they knew and did it anyway.

The White House later attempted to clear this up as a one-time fee on new visas only.

Aaron Reichlin-Melnick: Oh my GOD. This is the first official confirmation that the new H-1B entry ban does not apply to people with current visas, (despite the text not containing ANY exception for people with current visas), and it’s not even something on an official website, it’s a tweet!

Incredible level of incompetence to write a proclamation THIS unclear and then not release official guidance until 24 hours later!

It is not only incompetence, it is very clearly gaslighting everyone involved, saying that the clarifications were unnecessary when, not only are they needed, they contradict the language in the original document.

Rapid Response 47 (2.1m views, an official Twitter account, September 20, 2:58pm): Corporate lawyers and others with agendas are creating a lot of FAKE NEWS around President Trump’s H-1B Proclamation, but these are FACTS:

  1. The Proclamation does not apply to anyone who has a current visa.

  2. The Proclamation only applies to future applicants in the February lottery who are currently outside the U.S. It does not apply to anyone who participated in the 2025 lottery.

  3. The Proclamation does not impact the ability of any current visa holder to travel to/from the U.S.

That post has the following highly accurate community note:

Karoline Leavitt (White House Press Secretary, via Twitter, September 20, 5:11pm):

To be clear:

  1. This is NOT an annual fee. It’s a one-time fee that applies only to the petition.

  2. Those who already hold H-1B visas and are currently outside of the country right now will NOT be charged $100,000 to re-enter. H-1B visa holders can leave and re-enter the country to the same extent as they normally would; whatever ability they have to do that is not impacted by yesterday’s proclamation.

  3. This applies only to new visas, not renewals, and not current visa holders. It will first apply in the next upcoming lottery cycle.

CBP (Official Account, on Twitter, 5:22pm): Let’s set the record straight: President Trump’s updated H-1B visa requirement applies only to new, prospective petitions that have not yet been filed. Petitions submitted prior to September 21, 2025 are not affected. Any reports claiming otherwise are flat-out wrong and should be ignored.

[Mirrored by USCIS at 5:35pm]

None of these three seem, to me, to have been described in the Executive Order. Nor would I trust Leavitt’s statements here to hold true, where they contradict the order.

Shakeel Hashim: Even if this is true, the fact they did not think to clarify this at the time of the announcement is a nice demonstration of this administration’s incompetence.

There’s also the issue that this is unlikely to stand up in court when challenged.

Meanwhile, here’s the biggest voice of support, who the next day on the 21st still believed that this was a yearly tax. Even the cofounder of Netflix is deeply confused.

Reed Hastings (CEO Powder, Co-Founder Netflix): I’ve worked on H1-B politics for 30 years. Trump’s $100k per year tax is a great solution. It will mean H1-B is used just for very high value jobs, which will mean no lottery needed, and more certainty for those jobs.

The issued FAQ came out on the 21st, two days after some questions had indeed been highly frequently asked. Is this FAQ binding? Should we believe it? Unclear.

Here’s what it says:

  1. $100k is a one-time new application payment.

  2. This does not apply to already filed petitions.

  3. This does apply to cap-exempt applications, including national labs, nonprofit research organizations and research universities, and there is no mention here of an exemption for hospitals.

  4. In the future the prevailing wage level will be raised.

  5. In the future there will be a rule to prioritize high-skilled, high-paid aliens in the H1-B lottery over those at lower wage levels (!).

  6. This does not do anything else, such as constrain movement.

If we are going to ‘prioritize high-skilled, high-paid aliens’ in the lottery, that’s great, lose the lottery entirely and pick by salary even, or replace the cap with a minimum salary and see what happens. But then why do we need the fee?

If they had put out this FAQ as the first thing, and it matched the language in the Executive Order, a lot of tsouris could have been avoided, and we could have a reasonable debate on whether the new policy makes sense, although it still is missing key clarifications, especially whether it applies to shifts out of J-1 or L visas.

Instead, the Executive Order was worded to cause panic, in ways that were either highly incompetent, malicious and intentional, or both. Implementation is already, at best, a giant unforced clusterfuck. If this was not going to apply to previously submitted petitions, there was absolutely zero reason to have this take effect with zero notice.

There is no reason to issue an order saying one (quite terrible) set of things only to ‘clarify’ into a different set of things a day later, in a way that still leaves everyone paranoid at best. There is now once again a permanent increase in uncertainty and resulting costs and frictions, especially since we have no reason to presume they won’t simply start enforcing illegal actions.

If this drives down the number of H1-B visas a lot, that would be quite bad, including to the deficit because the average H1-B visa holder contributes ~$40k more in taxes than they use in benefits, and creates a lot of economic value beyond that.

Could this still be a good idea, or close enough to become a good idea, if done well?

Maybe. If it was actually done well and everyone had certainty. At least, it could be better than the previous situation. The Reed Hastings theory isn’t crazy, and our tax revenue has to come from somewhere.

Caleb Watney (IFP): Quick H1-B takes:

Current effort is (probably) not legally viable without Congress.

There’s a WORLD of difference between applying a fee to applicants who are stuck in the lottery (capped employers) and to those who aren’t (research nonprofits and universities). If we apply this rule to the latter, we will dramatically reduce the number of international scientists working in the U.S.

In theory, a fee just on *capped* employers can help differentiate between high-value and low-value applications, but 100k seems too high (especially if it’s per year).

A major downside of a fee vs a compensation rank is that it creates a distortion for skilled talent choosing between the US and other countries. If you have the choice between a 300k offer in London vs a 200k offer in New York…

I would be willing to bite the bullet and say that every non-fraudulent H1-B visa issued under the old policy was a good H1-B visa worth issuing, and we should have raised or eliminated the cap, but one can have a more measured view and the system wasn’t working as designed or intended.

Jeremy Neufeld: The H-1B has huge problems and high-skilled immigration supporters shouldn’t pretend it doesn’t.

It’s been used far too much for middling talent, wage arbitrage, and downright fraud.

Use for middling talent and wage arbitrage (at least below some reasonable minimum that still counts as middling here) is not something I would have a problem with if we could issue unlimited visas, but I understand that others do have a problem with it, and also that there is no willingness to issue unlimited visas.

The bigger problem was the crowding out via the lottery. Given the cap in slots, every mediocre talent was taking up a slot that could have gone to better talent.

This meant that the best talent would often get turned down. It also meant that no one could rely or plan upon an H-1B. If I need to fill a position, why would I choose someone with a greater than 70% chance of being turned down?

Whereas if you impose a price per visa, you can clear the market, and ensure we use it for the highest value cases.

If the price and implementation were chosen wisely, this would be good on the margin. Not a first best solution, because there should not be a limit on the number of H-1B visas, but a second or third best solution. This would have three big advantages:

  1. The H1-B visas go where they are most valuable.

  2. The government will want to issue more visas to make more money, or at least a lot of the pressure against the H1-B would go away.

  3. This provides a costly signal that no, you couldn’t simply hire an American, and if you’re not earning well over $100k you can feel very safe.

The central argument against the H1-B is that you’re taking a job away from an American to save money. A willingness to pay $100k is strong evidence that there really was a problem finding a qualified domestic applicant for the job.

You still have to choose the right price. When I asked GPT-5 Pro, it estimated that the market clearing rate was only about $30k one time fee per visa, and thus a $100k fee even one time would collapse demand to below supply.

I press X to doubt that. Half of that is covered purely by savings on search costs, and there is a lot of talk that annual salaries for such workers are often $30k or more lower than those of American workers already.

I think we could, if implemented well, charge $100k one time, raise $2 billion or so per year, and still clear the market. Don’t underestimate search and uncertainty costs. There is a market at Manifold, as well as another on whether the $100k will actually get charged, and this market on net fees collected (although it includes a bunch of Can’t Happen options, so ignore the headline number).
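The $2 billion figure is easy to sanity-check. A minimal sketch, where the number of fee-paying visas per year is my illustrative assumption (chosen to match the estimate above), not an official projection:

```python
# Back-of-envelope revenue under the one-time-fee reading.
fee = 100_000
annual_cap = 85_000                  # current statutory H-1B cap
assumed_fee_paying_visas = 20_000    # assumption: demand falls well below the cap

revenue = fee * assumed_fee_paying_visas
print(f"${revenue / 1e9:.1f}B per year")  # $2.0B per year
```

If demand held at the full 85,000 cap, the same arithmetic would yield $8.5 billion per year, so the $2 billion figure implicitly assumes a large drop in fee-paying applications.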

I say second or third best because another solution is to award H1-Bs by salary.

Ege Erdil: some people struggle to understand the nuanced position of

– trump’s H-1B order is illegal & will be struck down

– but broadly trump’s attempts to reform the program are good

– a policy with $100k/yr fees is probably bad, a policy with $100k one-time fee would probably be good.

Maxwell Tabarrok: A massive tax on skilled foreign labor would not be good even if it corrects somewhat for the original sin of setting H1-B up as a lottery.

Just allocate the visas based on the wage offered to applicants.

Awarding by salary solves a lot of problems. It mostly allocates to the highest value positions. It invalidates the ‘put in tons of applications’ strategy big companies use. It invalidates the ‘bring them in to undercut American workers’ argument. It takes away the uncertainty in the lottery. IFP estimates this would raise the value of the H1-B program by 48% versus baseline.

Indeed, the Trump Administration appears to be intent on implementing this change, and prioritizing high skilled and highly paid applications in the lottery. Which is great, but again it makes the fee unnecessary.

There is a study that claims that winning the H1-B lottery is amazingly great for startups, so amazingly great it seems impossible.

Alex Tabarrok: The US offers a limited number of H1-B visas annually; these are temporary 3-6 year visas that allow firms to hire high-skill workers. In many years, the demand exceeds the supply, which is capped at 85,000, and in these years USCIS randomly selects which visas to approve. The random selection is key to a new NBER paper by Dimmock, Huang and Weisbenner (published here). What’s the effect on a firm of getting lucky and winning the lottery?

The Paper: We find that a firm’s win rate in the H-1B visa lottery is strongly related to the firm’s outcomes over the following three years. Relative to ex ante similar firms that also applied for H-1B visas, firms with higher win rates in the lottery are more likely to receive additional external funding and have an IPO or be acquired.

Firms with higher win rates also become more likely to secure funding from high-reputation VCs, and receive more patents and more patent citations. Overall, the results show that access to skilled foreign workers has a strong positive effect on firm-level measures of success.

Alex Tabarrok: Overall, getting (approximately) one extra high-skilled worker causes a 23% increase in the probability of a successful IPO within five years (a 1.5 percentage point increase in the baseline probability of 6.6%). That’s a huge effect.

Remember, these startups have access to a labor pool of 160 million workers. For most firms, the next best worker can’t be appreciably different than the first-best worker. But for the 2000 or so tech-startups the authors examine, the difference between the world’s best and the US best is huge. Put differently on some margins the US is starved for talent.
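The 23% figure follows directly from the quoted numbers; a quick check, using only the paper's reported baseline and effect size:

```python
# A 1.5 percentage-point increase on a 6.6% baseline probability of a
# successful IPO is a ~23% relative increase.
baseline = 0.066       # baseline IPO probability
increase_pp = 0.015    # reported effect, in percentage points

relative_increase = increase_pp / baseline
print(f"{relative_increase:.0%}")  # 23%
```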

Roon: this seems hard to believe – how does one employee measurably improve the odds of IPO? but then this randomization is basically the highest standard of evidence.

Alec Stapp: had the same thought, but the paper points out that this would include direct effects and indirect effects, like increasing the likelihood of obtaining VC funding. Also noteworthy that they find a similar effect on successful exits.

Max Del Blanco: Sounds like it would be worth 100k!

Charles: There’s just no way this effect size is real tbh. When I see an effect this size I assume there’s a confounder I’m unaware of.

This is a double digit percentage increase in firm value, worth vastly more than $100k for almost any startup that is hiring. It should be easy to reclaim the $100k via fundraising more money on better terms, including in advance since VCs can be forward looking, even for relatively low-value startups, and recruitment will be vastly easier without a lottery in the way. For any companies in the AI space, valuations even at seed are now often eight or nine figures, so it is disingenuous to say this is not affordable.

If we assume the lottery is fully random, suppose the effect size were somehow real. How would we explain it?

My guess is that this would be because a startup, having to move quickly, would mostly only try to use the H1-B lottery when given access to exceptional talent, or for a role they flat out cannot otherwise fill. Roles at startups are not fungible and things are moving quickly, so the marginal difference is very high.

This would in turn suggest that the change is, if priced and implemented correctly, very good policy for the most important startups, and yes they would find a way to pay the $100k. However, like many others I doubt the study’s findings because the effect size seems too big.

A $100k annual fee, if that somehow happened and survived contact with the courts? Yeah, that would probably have done it, and would clearly have been intentional.

AP (at the time Lutnick thought the fee was annual): Lutnick said the change will likely result in far fewer H-1B visas than the 85,000 annual cap allows because “it’s just not economic anymore.”

Even then, I would have taken the over on the number of visas that get issued, versus others’ expectations or those of GPT-5. Yes, $100k per year is quite a lot, but getting the right employee can be extremely valuable.

In many cases you really can’t find an American to do the job at a price you can socially pay, and there are a lot of high value positions where given the current lottery you wouldn’t bother trying for an H1-B at all.

Consider radiologists, where open positions have often moved into the high six figures, and there are many qualified applicants overseas that can’t otherwise get paid anything like that.

Consider AI as well. A traditional ‘scrappy’ startup can’t pay $100k per year, but when seed rounds are going for tens or hundreds of millions, and good engineers are getting paid hundreds of thousands on the regular, then suddenly yes, yes you can pay.

I think the argument ‘all startups are scrappy and can’t afford such fees’ simply flies in the face of current valuations in the AI industry, where even seed stage raises often have valuations in the tens to hundreds of millions.

The ‘socially pay’ thing matters a lot. You can easily get into a pickle, where the ‘standard’ pay for something is, let’s say, $100k, but the price to get a new hire is $200k.

If you paid the new hire $200k and anyone found out, then your existing $100k employees would go apocalyptic unless you bumped them up to $200k. Relative pay has to largely match social status within the firm.

Whereas in this case, you’d be able to (for example) pay $100k in salary to an immigrant happy to take it, and a $100k visa annual fee, without destroying the social order. It also gives you leverage over the employee.

If you change to a one-time fee, the employer doesn’t get to amortize it that much, but it is a lot less onerous. Is the fee too high even at a one time payment?

One objection here and elsewhere is ‘this is illegal and won’t survive in court’ but for now let’s do the thought experiment where it survives, perhaps by act of Congress.

Jeremy Neufeld: That doesn’t make the $100k fee a good solution.

  1. Outsourcers can just avoid the fee by bringing their people in on L visas and then enter them in the lottery.

  2. US companies face a competitive disadvantage in recruiting real talent from abroad since they’ll have to lower their compensation offers to cover the $100k.

  3. Research universities will recruit fewer foreign-trained scientists.

  4. It’s likely to get overturned in court so the long term effect is just signaling uncertainty and unpredictability to talent.

Another issue is that they are applying the fee to cap-exempt organizations, which seems obviously foolish.

This new FAQ from the White House makes it clear the $100k fee does apply to cap-exempt organizations.

That includes national labs and other government R&D, nonprofit research orgs, and research universities.

Big threat to US scientific leadership.

Nothing wrong in principle with tacking on a large fee to cap-subject H-1Bs to prioritize top talent but it needs a broader base (no big loophole for L to H-1B changes) and a lower rate.

(Although for better or for worse, Congress needs to do it.)

But the fee on cap-exempt H-1Bs is just stupid.

Presumably they won’t allow the L visa loophole or other forms of ‘already in the country,’ and would refuse to issue related visas without fees. Padme asks, they’re not so literally foolish as to issue such visas but stop the workers at the border anyway, and certainly not letting them take up lottery slots, are they? Are they?

On compensation, yes, presumably they will offer somewhat lower compensation than they would have otherwise, but they can also offer a much higher chance of getting a visa. It’s not obvious where this turns into a net win; note that it costs a lot more than $100k to pay an employee $100k in salary, and recall the social dynamics discussed above. I’m not convinced the hit here will be all that big.

I certainly would bet against hyperbolic claims like this one from David Bier at Cato, who predicts that this will ‘effectively end’ the H-1B visa category.

David Bier: This fee would effectively end the H‑1B visa category by making it prohibitive for most businesses to hire H‑1B workers. This would force leading technology companies out of the United States, reduce demand for US workers, reduce innovation, have severe second-order economic effects, and lower the supply of goods and services in everything from IT and education to manufacturing and medicine.

Research universities will recruit a lot fewer foreign-trained scientists if and only if both of the following are true:

  1. The administration does not issue waivers for research scientists, despite this being clearly in the public interest.

  2. The perceived marginal value of the research scientist is not that high, such that universities decline to pay the application fee.

That does seem likely to happen often. It also seems like a likely point of administration leverage, as they are constantly looking for leverage over universities.

America intentionally caps the number of doctors we train. The last thing we want to do is make our intentionally created doctor shortage even worse. A lot of people warned that this would go very badly, and it looks like Trump is likely to waive the fee for doctors.

Note that according to GPT-5 Pro, residencies mostly don’t currently use H-1B; rather, they mostly use J-1. The real change would be if they cut off shifting to H1-B, which would prevent us from retaining those residents once they finish. That would still be a very bad outcome, if the rules were sustained for that long, and in the long term far worse than having our medical schools expand. This is one of the places where yes, Americans very much want these jobs and could become qualified for them.

Of course the right answer here was always to open more slots and train more doctors.

Our loss may partly be the UK’s gain.

Alec Stapp: My feed is full of smart people in the UK pouncing on the opportunity to poach more global talent for their own country.

We are making ourselves weaker and poorer by turning away scientists, engineers, and technologists who want to contribute to the US.

Alex Cheema: If you’re a talented engineer affected by the H-1B changes, come build with us in London @exolabs

– SF-level comp (270K-360K base + equity)

– Best talent from Europe

– Hardcore build culture

– Build something important with massive distribution

Email jobs at exolabs dot net


Anti-vaccine groups melt down over reports RFK Jr. to link autism to Tylenol

Results of this study indicate that the association between acetaminophen use during pregnancy and neurodevelopmental disorders is a noncausal association. Birthing parents with higher acetaminophen use differed in many aspects from those with lower use or no use. Results suggested that there was not one single “smoking gun” confounder, but rather that multiple birthing parents’ health and sociodemographic characteristics each explained at least part of the apparent association. The null results of the sibling control analyses indicate that shared familial confounders were involved, but do not identify the specific confounding factors.

Critical factors

Another factor to consider is that untreated fevers, and/or prolonged fevers during pregnancy—reasons to take Tylenol in the first place—are linked to increased risks of autism. And, as the Society for Maternal-Fetal Medicine pointed out earlier this month, untreated fever and pain during pregnancy carry other significant risks for both the mother and the pregnancy.

“Untreated fever, particularly in the first trimester, increases the risk of miscarriage, birth defects, and premature birth, and untreated pain can lead to maternal depression, anxiety, and high blood pressure,” SMFM noted.

With no clear evidence supporting a link between acetaminophen and autism, doctors highlight another fold in the issue: Acetaminophen is considered the safest pain reliever/fever-reducer during pregnancy. Nonsteroidal anti-inflammatory medications (also called NSAIDs), such as ibuprofen (Advil) and aspirin, can cause reduced blood flow, heart problems, and kidney problems in a fetus.

After The Wall Street Journal’s report of Kennedy’s plans, the American College of Obstetricians and Gynecologists (ACOG) reiterated its guidance for acetaminophen during pregnancy, writing on social media:

Acetaminophen remains a safe, trusted option for pain relief during pregnancy. Despite recent unfounded claims, there’s no clear evidence linking prudent use to issues with fetal development. ACOG’s guidance remains the same. When pain relief is needed during pregnancy, acetaminophen should be used in moderation, and after consulting your doctor.

Christopher Zahn, ACOG’s chief of clinical practice, put it more plainly, saying: “Pregnant patients should not be frightened away from the many benefits of acetaminophen, which is safe and one of the few options pregnant people have for pain relief.”


Three crashes in the first day? Tesla’s robotaxi test in Austin.

These days, Austin, Texas feels like ground zero for autonomous cars. Although California was the early test bed for autonomous driving tech, the much more permissive regulatory environment in the Lone Star State, plus lots of wide, straight roads and mostly good weather, ticked enough boxes to see companies like Waymo and Zoox set up shop there. And earlier this summer, Tesla added itself to the list. Except things haven’t exactly gone well.

According to Tesla’s crash reports, spotted by Brad Templeton over at Forbes, the automaker experienced not one but three crashes, all apparently on its first day of testing on July 1. And as we learned from Tesla CEO Elon Musk later in July during the (not-great) quarterly earnings call, by that time, Tesla had logged a mere 7,000 miles in testing.

By contrast, Waymo’s crash rate is more than two orders of magnitude lower, with 60 crashes logged over 50 million miles of driving. (Waymo has now logged more than 96 million miles.)

Two of the three Tesla crashes involved another car rear-ending the Model Y, and at least one of these crashes was almost certainly not the Tesla’s fault. But the third crash saw a Model Y—with the required safety operator on board—collide with a stationary object at low speed, resulting in a minor injury. Templeton also notes that there was a fourth crash that occurred in a parking lot and therefore wasn’t reported. Sadly, most of the details in the crash reports have been redacted by Tesla.
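A quick back-of-the-envelope using the figures above (3 reported crashes in roughly 7,000 Tesla miles versus 60 crashes in 50 million Waymo miles) confirms the "more than two orders of magnitude" claim:

```python
# Crash-rate comparison from the figures quoted in the article.
tesla_crashes, tesla_miles = 3, 7_000
waymo_crashes, waymo_miles = 60, 50_000_000

tesla_rate = tesla_crashes / tesla_miles   # crashes per mile
waymo_rate = waymo_crashes / waymo_miles

ratio = tesla_rate / waymo_rate
print(f"Tesla: 1 crash per {tesla_miles / tesla_crashes:,.0f} miles")    # 2,333
print(f"Waymo: 1 crash per {waymo_miles / waymo_crashes:,.0f} miles")    # 833,333
print(f"Ratio: ~{ratio:.0f}x")                                           # ~357x
```

A ratio of roughly 357x is indeed more than two orders of magnitude, though with only 7,000 miles and 3 crashes the Tesla estimate carries large statistical uncertainty.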


A history of the Internet, part 3: The rise of the user


the best of times, the worst of times

The reins of the Internet are handed over to ordinary users—with uneven results.

Everybody get together. Credit: D3Damon/Getty Images


Welcome to the final article in our three-part series on the history of the Internet. If you haven’t already, catch up with part one and part two.

As a refresher, here’s the story so far:

The ARPANET was a project started by the Defense Department’s Advanced Research Projects Agency in 1969 to network different mainframe computers together across the country. It later evolved into the Internet, connecting multiple global networks together using a common TCP/IP protocol. By the late 1980s, a small group of academics and a few curious consumers connected to each other on the Internet, which was still mostly text-based.

In 1991, Tim Berners-Lee invented the World Wide Web, an Internet-based hypertext system designed for graphical interfaces. At first, it ran only on the expensive NeXT workstation. But when Berners-Lee published the web’s protocols and made them available for free, people built web browsers for many different operating systems. The most popular of these was Mosaic, written by Marc Andreessen, who formed a company to create its successor, Netscape. Microsoft responded with Internet Explorer, and the browser wars were on.

The web grew exponentially, and so did the hype surrounding it. The hype peaked in early 2000, right before the dotcom collapse that left most web-based companies nearly or completely bankrupt. Some people interpreted this crash as proof that the consumer Internet was just a fad. Others had different ideas.

Larry Page and Sergey Brin met each other at a graduate student orientation at Stanford in 1996. Both were studying for their PhDs in computer science, and both were interested in analyzing large sets of data. Because the web was growing so rapidly, they decided to start a project to improve the way people found information on the Internet.

They weren’t the first to try this. Hand-curated sites like Yahoo had already given way to more algorithmic search engines like AltaVista and Excite, which both started in 1995. These sites attempted to find relevant webpages by analyzing the words on every page.

Page and Brin’s technique was different. Their “BackRub” software created a map of all the links that pages had to each other. Pages on a given subject that had many incoming links from other sites were given a higher ranking for that keyword. Higher-ranked pages could then contribute a larger score to any pages they linked to. In a sense, this was like crowdsourcing search: When people put “This is a good place to read about alligators” on a popular site and added a link to a page about alligators, it did a better job of establishing that page’s relevance than simply counting the number of times the word appeared on the page.

Step 1 of the simplified BackRub algorithm. It also stores the position of each word on a page, so it can make a further subset for multiple words that appear next to each other. Credit: Jeremy Reimer
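The link-scoring idea can be sketched as a tiny power iteration in the spirit of the PageRank algorithm that grew out of BackRub. This is a toy illustration, not the actual BackRub code; the damping factor and helper names are conventional choices, not anything from the original system:

```python
# Toy sketch of link-based ranking in the spirit of BackRub/PageRank.
# Not the real algorithm -- a minimal power-iteration illustration.

def rank_pages(links, iterations=50, damping=0.85):
    """links: dict mapping each page to the list of pages it links to."""
    pages = set(links) | {p for targets in links.values() for p in targets}
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        # Every page keeps a small base score, then receives shares of
        # score from the pages that link to it.
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, targets in links.items():
            if targets:
                share = damping * rank[page] / len(targets)
                for t in targets:
                    new_rank[t] += share  # higher-ranked pages pass on more
        rank = new_rank
    return rank

# "c" attracts two incoming links, so it outranks the others.
web = {"a": ["c"], "b": ["c"], "c": ["a"]}
scores = rank_pages(web)
```

The key property the article describes falls out directly: a page’s score depends not just on how many links point at it, but on how well-ranked the linking pages themselves are.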

Creating a connected map of the entire World Wide Web with indexes for every word took a lot of computing power. The pair filled their dorm rooms with any computers they could find, paid for by a $10,000 grant from the Stanford Digital Libraries Project. Many were cobbled together from spare parts, including one with a case made from imitation LEGO bricks. Their web scraping project was so bandwidth-intensive that it briefly disrupted the university’s internal network. Because neither of them had design skills, they coded the simplest possible “home page” in HTML.

In August 1996, BackRub was made available as a link from Stanford’s website. A year later, Page and Brin rebranded the site as “Google.” The name was an accidental misspelling of googol, a term coined by a mathematician’s young nephew to describe a 1 with 100 zeros after it. Even back then, the pair was thinking big.

Google.com as it appeared in 1998. Credit: Jeremy Reimer

By mid-1998, their prototype was getting over 10,000 searches a day. Page and Brin realized they might be onto something big. It was nearing the height of the dotcom mania, so they went looking for some venture capital to start a new company.

But at the time, search engines were considered passé. The new hotness was portals, sites that had some search functionality but leaned heavily into sponsored content. After all, that’s where the big money was. Page and Brin tried to sell the technology to AltaVista for $1 million, but its parent company passed. Excite also turned them down, as did Yahoo.

Frustrated, they decided to hunker down and keep improving their product. Brin created a colorful logo using the free GIMP paint program, and they added a summary snippet to each result. Eventually, the pair received $100,000 from angel investor Andy Bechtolsheim, who had co-founded Sun Microsystems. That was enough to get the company off the ground.

Page and Brin were careful with their money, even after they received millions more from venture capitalist firms. They preferred cheap commodity PC hardware and the free Linux operating system as they expanded their system. For marketing, they relied mostly on word of mouth. This allowed Google to survive the dotcom crash that crippled its competitors.

Still, the company eventually had to find a source of income. The founders were concerned that if search results were influenced by advertising, it could lower the usefulness and accuracy of the search. They compromised by adding short, text-based ads that were clearly labeled as “Sponsored Links.” To cut costs, they created a form so that advertisers could submit their own ads and see them appear in minutes. They even added a ranking system so that more popular ads would rise to the top.

The combination of a superior product with less intrusive ads propelled Google to dizzying heights. In 2024, the company collected over $350 billion in revenue, with $112 billion of that as profit.

Information wants to be free

The web was, at first, all about text and the occasional image. In 1997, Netscape added the ability to embed small music files in the MIDI sound format that would play when a webpage was loaded. Because the songs only encoded notes, they sounded tinny and annoying on most computers. Good audio or songs with vocals required files that were too large to download over the Internet.

But this all changed with a new file format. In 1993, researchers at the Fraunhofer Institute developed a compression technique that eliminated portions of audio that human ears couldn’t detect. Suzanne Vega’s song “Tom’s Diner” was used as the first test of the new MP3 standard.

Now, computers could play back reasonably high-quality songs from small files using software decoders. WinPlay3 was the first, but WinAmp, released in 1997, became the most popular. People started putting links to MP3 files on their personal websites. Then, in 1999, Shawn Fanning released a beta of a product he called Napster. This was a desktop application that relied on the Internet to let people share their MP3 collection and search everyone else’s.

Napster as it would have appeared in 1999. Credit: Jeremy Reimer

Napster almost immediately ran into legal challenges from the Recording Industry Association of America (RIAA). It sparked a debate about sharing things over the Internet that persists to this day. Some artists agreed with the RIAA that downloading MP3 files should be illegal, while others (many of whom had been financially harmed by their own record labels) welcomed a new age of digital distribution. Napster lost the case against the RIAA and shut down in 2002. This didn’t stop people from sharing files, but replacement tools like eDonkey2000, LimeWire, Kazaa, and BearShare lived in a legal gray area.

In the end, it was Apple that figured out a middle ground that worked for both sides. In 2003, two years after launching its iPod music player, Apple announced the Internet-only iTunes Store. Steve Jobs had signed deals with all five major record labels to allow legal purchasing of individual songs for 99 cents each, or full albums for $10, wrapped in Apple’s comparatively permissive FairPlay copy protection (DRM-free downloads came years later). By 2010, the iTunes Store was the largest music vendor in the world.

iTunes 4.1, released in 2003. This was the first version for Windows and introduced the iTunes Store to a wider world. Credit: Jeremy Reimer

The Web turns 2.0

Tim Berners-Lee’s original vision for the web was simply to deliver and display information. It was like a library, but with hypertext links. But it didn’t take long for people to start experimenting with information flowing the other way. In 1994, Netscape 0.9 added new HTML tags like FORM and INPUT that let users enter text and, using a “Submit” button, send it back to the web server.

Early web servers didn’t know what to do with this text. But programmers developed extensions that let a server run programs in the background. The standardized “Common Gateway Interface” (CGI) made it possible for a “Submit” button to trigger a program (usually in a /cgi-bin/ directory) that could do something interesting with the submission, like talking to a database. CGI scripts could even generate new webpages dynamically and send them back to the user.
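The request/response dance described above can be sketched as a minimal CGI-style handler. This is a hypothetical modern reconstruction in Python (period scripts were usually Perl or C), and the guestbook field name is invented for illustration:

```python
#!/usr/bin/env python3
# Minimal sketch of a 1990s-style CGI script -- the kind that lived in
# /cgi-bin/ and was triggered by a form's "Submit" button. Hypothetical
# reconstruction; field names are invented for illustration.
import os
import sys
import urllib.parse

def handle_request(environ, body=""):
    # For a GET, form fields arrive URL-encoded in the QUERY_STRING
    # environment variable; for a POST, they arrive on standard input.
    if environ.get("REQUEST_METHOD") == "POST":
        raw = body
    else:
        raw = environ.get("QUERY_STRING", "")
    fields = urllib.parse.parse_qs(raw)
    name = fields.get("name", ["stranger"])[0]
    # A CGI program replies with HTTP headers, a blank line, and then a
    # dynamically generated page. (A real script would also HTML-escape
    # user input before echoing it back.)
    return ("Content-Type: text/html\r\n\r\n"
            f"<html><body><h1>Hello, {name}!</h1></body></html>")

if __name__ == "__main__":
    post_body = sys.stdin.read() if os.environ.get("REQUEST_METHOD") == "POST" else ""
    sys.stdout.write(handle_request(os.environ, post_body))
```

The web server sets the environment variables, runs the program once per request, and streams whatever it prints back to the browser, which is exactly why dynamic pages were possible long before application servers existed.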

This intelligent two-way interaction changed the web forever. It enabled things like logging into an account on a website, web-based forums, and even uploading files directly to a web server. Suddenly, a website wasn’t just a page that you looked at. It could be a community where groups of interested people could interact with each other, sharing both text and images.

Dynamic webpages led to the rise of blogging, first as an experiment (some, like Justin Hall’s and Dave Winer’s, are still around today) and then as something anyone could do in their spare time. Websites in general became easier to create with sites like GeoCities and Angelfire, which let people build their own personal dream house on the web for free. A community-run dynamic linking site, webring.org, connected similar websites together, encouraging exploration.

Webring.org was a free, community-run service that allowed dynamically updated webrings. Credit: Jeremy Reimer

One of the best things to come out of Web 2.0 was Wikipedia. It arose as a side project of Nupedia, an online encyclopedia founded by Jimmy Wales, with articles written by volunteers who were subject matter experts. This process was slow, and the site only had 21 articles in its first year. Wikipedia, in contrast, allowed anyone to contribute and review articles, so it quickly outpaced its predecessor. At first, people were skeptical about letting random Internet users edit articles. But thanks to an army of volunteer editors and a set of tools to quickly fix vandalism, the site flourished. Wikipedia far surpassed works like the Encyclopedia Britannica in sheer numbers of articles while maintaining roughly equivalent accuracy.

Not every Internet innovation lived on a webpage. In 1988, Jarkko Oikarinen created a program called Internet Relay Chat (IRC), which allowed real-time messaging between individuals and groups. IRC clients for Windows and Macintosh were popular among nerds, but friendlier applications like PowWow (1994), ICQ (1996), and AIM (1997) brought messaging to the masses. Even Microsoft got in on the act with MSN Messenger in 1999. For a few years, this messaging culture was an important part of daily life at home, school, and work.

A digital recreation of MSN Messenger from 2001. Sadly, Microsoft shut down the servers in 2014. Credit: Jeremy Reimer

Animation, games, and video

While the web was evolving quickly, the slow speeds of dial-up modems limited the size of files you could upload to a website. Static images were the norm. Animation only appeared in heavily compressed GIF files with a few frames each.

But a new technology blasted past these limitations and unleashed a torrent of creativity on the web. In 1995, Macromedia released Shockwave Player, an add-on for Netscape Navigator. Along with its Director software, the combination allowed artists to create animations based on vector drawings. These were small enough to embed inside webpages.

Websites popped up to support this new content. Newgrounds.com, which began in 1995 as a Neo-Geo fan site, started collecting the best animations. Because Director was designed to create interactive multimedia for CD-ROM projects, it also supported keyboard and mouse input and had basic scripting. This meant that people could make simple games that ran in Shockwave. Newgrounds eagerly showcased these as well, giving many aspiring artists and game designers an entry point into their careers. Super Meat Boy, for example, was first prototyped on Newgrounds.

Newgrounds as it would have appeared circa 2003. Credit: Jeremy Reimer

Putting actual video on the web seemed like something from the far future. But the future arrived quickly. After the dotcom crash of 2001, there were many unemployed web programmers with a lot of time on their hands to experiment with their personal projects. The arrival of broadband with cable modems and digital subscriber lines (DSL), combined with the new MPEG4 compression standard, made a lot of formerly impossible things possible.

In early 2005, Chad Hurley, Steve Chen, and Jawed Karim launched YouTube.com. Initially, it was meant to be an online dating site, but that service failed. The site, however, had great technology for uploading and playing videos. It used Macromedia’s Flash, a new technology so similar to Shockwave that the company marketed it as Shockwave Flash. YouTube allowed anybody to upload videos up to ten minutes in length for free. It became so popular that Google bought it a year later for $1.65 billion.

All these technologies combined to provide ordinary people with the opportunity, however brief, to make an impact on popular culture. An early example was the All Your Base phenomenon. An animated GIF of an obscure, mistranslated Sega Genesis game inspired indie musicians The Laziest Men On Mars to create a song and distribute it as an MP3. The popular humor site somethingawful.com picked it up, and users in the Photoshop Friday forum thread created a series of humorous images to go along with the song. Then in 2001, the user Bad_CRC took the song and the best of the images and put them together in an animation they shared on Newgrounds. The animation spread so widely that it was reported on by USA Today.

You have no chance to survive make your time.

Media goes social

In the early 2000s, most websites were either blogs or forums—and frequently both. Forums had multiple discussion boards, both general and specific. They often leaned into a specific hobby or interest, and anyone with that interest could join. There were also a handful of dating websites, like kiss.com (1994), match.com (1995), and eHarmony.com (2000), that specifically tried to connect people who might have a romantic interest in each other.

The Swedish Lunarstorm was one of the first social media websites. Credit: Jeremy Reimer

The road to social media was a hazy and confusing merging of these two types of websites. Classmates.com (1995) served as a way to reconnect with former school chums, and the following year, the Swedish site lunarstorm.com opened with this mission:

Everyone has their own website called Krypin. Each babe [this word is an accurate translation] has their own Krypin where she or he introduces themselves, posts their diaries and their favorite files, which can be anything from photos and their own songs to poems and other fun stuff. Every LunarStormer also has their own guestbook where you can write if you don’t really dare send a LunarEmail or complete a Friend Request.

In 1997, sixdegrees.com opened, based on the truism that everyone on earth is connected with six or fewer degrees of separation. Its About page said, “Our free networking services let you find the people you want to know through the people you already know.”

By the time friendster.com opened its doors in 2002, the concept of “friending” someone online was already well established, although it was still a niche activity. LinkedIn.com, launched the following year, used the excuse of business networking to encourage this behavior. But it was MySpace.com (2003) that was the first to gain significant traction.

MySpace was initially a Friendster clone written in just ten days by employees at eUniverse, an Internet marketing startup founded by Brad Greenspan. It became the company’s most successful product. MySpace combined the website-building ability of sites like GeoCities with social networking features. It took off incredibly quickly: in just three years, it surpassed Google as the most visited website in the United States. Hype around MySpace reached such a crescendo that Rupert Murdoch purchased it in 2005 for $580 million.

But a newcomer to the social media scene was about to destroy MySpace. Just as Google crushed its competitors, this startup won by providing a simpler, more functional, and less intrusive product. TheFacebook.com began as Mark Zuckerberg and his college roommates’ attempt to replace their college’s online directory. Zuckerberg’s first student website, “Facemash,” had been created by breaking into Harvard’s network, and its sole feature was to provide “Hot or Not” comparisons of student photos. Facebook quickly spread to other universities, and in 2006 (after dropping the “the”), it was opened to the rest of the world.

“The” Facebook as it appeared in 2004. Credit: Jeremy Reimer

Facebook won the social networking wars by focusing on the rapid delivery of new features. The company’s slogan, “Move fast and break things,” encouraged this strategy. The most prominent feature, added in 2006, was the News Feed. It generated a list of posts, selected out of thousands of potential updates for each user based on who they followed and liked, and showed it on their front page. Combined with a technique called “infinite scrolling,” first invented for Microsoft’s Bing Image Search by Hugh E. Williams in 2005, it changed the way the web worked forever.
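The selection step described above can be sketched in a few lines. This is a toy illustration only: the scoring weights, field names, and page-by-page delivery are invented for the sketch, and real feed algorithms use vastly more signals:

```python
# Toy sketch of news-feed ranking plus infinite scrolling. All weights
# and signals here are invented for illustration; real systems are far
# more elaborate.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    likes: int
    age_hours: float

def build_feed(posts, followed, liked_authors, page_size=3):
    def score(p):
        s = p.likes / (1.0 + p.age_hours)   # engagement, decayed by age
        if p.author in followed:
            s *= 2.0                         # boost people the user follows
        if p.author in liked_authors:
            s *= 1.5                         # boost authors the user has liked
        return s
    ranked = sorted(posts, key=score, reverse=True)
    # "Infinite scrolling" just serves the next page_size posts each time
    # the user nears the bottom of the page.
    for i in range(0, len(ranked), page_size):
        yield ranked[i:i + page_size]

posts = [Post("alice", 10, 1), Post("bob", 50, 10), Post("carol", 5, 0.5)]
pages = build_feed(posts, followed={"alice"}, liked_authors={"carol"})
first_page = next(pages)  # the posts the user sees first
```

The structural point is the one the article makes: the user never asks for these posts; a ranking function chooses them, and the next batch arrives automatically as they scroll.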

The algorithmically generated News Feed created new opportunities for Facebook to make profits. For example, businesses could boost posts for a fee, which would make them appear in news feeds more often. This blurred the line between posts and ads.

Facebook was also successful in identifying up-and-coming social media sites and buying them out before they were able to pose a threat. This was made easier thanks to Onavo, a VPN that monitored its users’ activities and resold the data. Facebook acquired Onavo in 2013. It was shut down in 2019 due to continued controversy over the use of private data.

Social media transformed the Internet, drawing in millions of new users and starting a consolidation of website-visiting habits that continues to this day. But something else was about to happen that would shake the Internet to its core.

Don’t you people have phones?

For years, power users had experimented with getting the Internet on their handheld devices. IBM’s Simon phone, which came out in 1994, had both phone and PDA features. It could send and receive email. The Nokia 9000 Communicator, released in 1996, even had a primitive text-based web browser.

Later phones like the BlackBerry 850 (1999), the Nokia 9210 (2001), and the Palm Treo (2002) added keyboards, color screens, and faster processors. In 1999, the Wireless Application Protocol (WAP) was released, which allowed mobile phones to receive and display simplified, phone-friendly pages using WML instead of the standard HTML markup language.

Browsing the web on phones was possible before modern smartphones, but it wasn’t easy. Credit: James Cridland (Flickr)

But despite their popularity with business users, these phones never broke into the mainstream. That all changed in 2007 when Steve Jobs got on stage and announced the iPhone. Now, every webpage could be viewed natively on the phone’s browser, and zooming into a section was as easy as pinching or double-tapping. The one exception was Flash, but a new HTML 5 standard promised to standardize advanced web features like animation and video playback.

Google quickly changed its Android prototype from a Blackberry clone to something more closely resembling the iPhone. Android’s open licensing structure allowed companies around the world to produce inexpensive smartphones. Even mid-range phones were still much cheaper than computers. This technology allowed, for the first time, the entire world to become connected through the Internet.

The exploding market of phone users also propelled the massive growth of social media companies like Facebook and Twitter. It was a lot easier now to snap a picture of a live event with your phone and post it instantly to the world. Optimists pointed to the remarkable events of the Arab Spring protests as proof that the Internet could help spread democracy and freedom. But governments around the world were just as eager to use these new tools, except their goals leaned more toward control and crushing dissent.

The backlash

Technology has always been a double-edged sword. But in recent years, public opinion about the Internet has shifted from being mostly positive to increasingly negative.

The combination of mobile phones, social media algorithms, and infinite scrolling led to the phenomenon of “doomscrolling,” where people spend hours every day reading “news” that is tuned for maximum engagement by provoking as many people as possible. The emotional toll caused by doomscrolling has been shown to cause real harm. Even more serious is the fallout from misinformation and hate speech, like the genocide in Myanmar that an Amnesty International report claims was amplified on Facebook.

As companies like Google, Amazon, and Facebook grew into near-monopolies, they inevitably lost sight of their original mission in favor of a never-ending quest for more money. The process, dubbed enshittification by Cory Doctorow, shifts the focus first from users to advertisers and then to shareholders.

Chasing these profits has fueled the rise of generative AI, which threatens to turn the entire Internet into a sea of soulless gray soup. Google now inserts AI summaries at the top of search results, which reduce traffic to websites and often provide dangerous misinformation. But even if you ignore the AI summaries, the sites you find underneath may also be suspect. Once-trusted websites have laid off staff and replaced them with AI, generating an endless series of new articles written by nobody. A web where AIs comment on AI-generated Facebook posts that link to AI-generated articles, which are then AI-summarized by Google, seems inhuman and pointless.

A search for cute baby peacocks on Bing. Some of them are real, and some aren’t. Credit: Jeremy Reimer

Where from here?

The history of the Internet can be roughly divided into three phases. The first, from 1969 to 1990, was all about the inventors: people like Vint Cerf, Steve Crocker, and Robert Taylor. These folks were part of a small group of computer scientists who figured out how to get different types of computers to talk to each other and to other networks.

The next phase, from 1991 to 1999, was a whirlwind that was fueled by entrepreneurs, people like Jerry Yang and Jeff Bezos. They latched on to Tim Berners-Lee’s invention of the World Wide Web and created companies that lived entirely in this new digital landscape. This set off a manic phase of exponential growth and hype, which peaked in early 2001 and crashed a few months later.

The final phase, from 2000 through today, has primarily been about the users. New companies like Google and Facebook may have reaped the greatest financial rewards during this time, but none of their successes would have been possible without the contributions of ordinary people like you and me. Every time we typed something into a text box and hit the “Submit” button, we created a tiny piece of a giant web of content. Even the generative AIs that pretend to make new things today are merely regurgitating words, phrases, and pictures that were created and shared by people.

There is a growing sense of nostalgia today for the old Internet, when it felt like a place, and the joy of discovery was around every corner. “Using the old Internet felt like digging for treasure,” said YouTube commenter MySoftCrow. “Using the current Internet feels like getting buried alive.”

Ars community member MichaelHurd added his own thoughts: “I feel the same way. It feels to me like the core problem with the modern Internet is that websites want you to stay on them for as long as possible, but the World Wide Web is at its best when sites connect to each other and encourage people to move between them. That’s what hyperlinks are for!”

Despite all the doom surrounding the modern Internet, it remains largely open. Anyone can pay about $5 per month for a shared Linux server and create a personal website containing anything they can think of, using any software they like, even their own. And for the most part, anyone, on any device, anywhere in the world, can access that website.

Ultimately, the fate of the Internet depends on the actions of every one of us. That’s why I’m leaving the final words in this series of articles to you. What would your dream Internet of the future look and feel like? The comments section is open.

I’m a writer and web developer. I specialize in the obscure and beautiful, like the Amiga and newLISP.

A history of the Internet, part 3: The rise of the user

starship-will-soon-fly-over-towns-and-cities,-but-will-dodge-the-biggest-ones

Starship will soon fly over towns and cities, but will dodge the biggest ones


Starship’s next chapter will involve launching over Florida and returning over Mexico.

SpaceX’s Starship vehicle is encased in plasma as it reenters the atmosphere over the Indian Ocean on its most recent test flight in August. Credit: SpaceX

Some time soon, perhaps next year, SpaceX will attempt to fly one of its enormous Starship rockets from low-Earth orbit back to its launch pad in South Texas. A successful return and catch at the launch tower would demonstrate a key capability underpinning Elon Musk’s hopes for a fully reusable rocket.

In order for this to happen, SpaceX must overcome the tyranny of geography. Unlike launches over the open ocean from Cape Canaveral, Florida, rockets departing from South Texas must follow a narrow corridor to steer clear of downrange land masses.

All 10 of the rocket’s test flights so far have launched from Texas toward splashdowns in the Indian or Pacific Oceans. On these trajectories, the rocket never completes a full orbit around the Earth, but instead flies an arcing path through space before gravity pulls it back into the atmosphere.

If Starship’s next two test flights go well, SpaceX will likely attempt to send the soon-to-debut third-generation version of the rocket all the way to low-Earth orbit. The Starship V3 vehicle will measure 171 feet (52.1 meters) tall, a few feet more than Starship’s current configuration. The entire rocket, including its Super Heavy booster, will have a height of 408 feet (124.4 meters).

Starship, made of stainless steel, is designed for full reusability. SpaceX has already recovered and reflown Super Heavy boosters, but won’t be ready to recover the rocket’s Starship upper stage until next year, at the soonest.

That’s one of the next major milestones in Starship’s development after achieving orbital flight. SpaceX will attempt to bring the ship home to be caught back at the launch site by the launch tower at Starbase, Texas, located on the southernmost section of the Texas Gulf Coast near the US-Mexico border.

It was always evident that flying a Starship from low-Earth orbit back to Starbase would require the rocket to fly over Mexico and portions of South Texas. The rocket launches to the east over the Gulf of Mexico, so it must approach Starbase from the west when it comes in for a landing.

New maps published by the Federal Aviation Administration show where the first Starships returning to Texas may fly when they streak through the atmosphere.

Paths to and from orbit

The FAA released a document Friday describing SpaceX’s request to update its government license for additional Starship launch and reentry trajectories. The document is a draft version of a “tiered environmental assessment” examining the potential for significant environmental impacts from the new launch and reentry flight paths.

The federal regulator said it is evaluating potential impacts on aviation emissions and air quality, noise and noise-compatible land use, hazardous materials, and socioeconomics. The FAA concluded the new flight paths proposed by SpaceX would have “no significant impacts” in any of these categories.

SpaceX’s Starship rocket shortly before splashing into the Indian Ocean in August. Credit: SpaceX

The environmental review is just one of several factors the FAA considers when deciding whether to approve a new commercial launch or reentry license. According to the FAA, the other factors are public safety issues (such as overflight of populated areas and payload contents), national security or foreign policy concerns, and insurance requirements.

The FAA didn’t make a statement on any public safety and foreign policy concerns with SpaceX’s new trajectories, but both issues may come into play as the company seeks approval to fly Starship over Mexican towns and cities uprange from Starbase.

The regulator’s licensing rules state that a commercial launch and reentry should each pose no greater than a 1 in 10,000 chance of harming or killing a member of the public not involved in the mission. The risk to any individual should not exceed 1 in 1 million.
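These two thresholds interact in a way that explains the trajectory choices below: a route can satisfy every individual’s limit yet still fail the collective one once enough people are overflown. A rough sketch, with all the risk numbers invented for illustration:

```python
# Rough illustration of the FAA's two public-risk criteria (all numbers
# invented). Collective risk aggregates expected harm across everyone
# overflown; individual risk caps any one person's exposure.

COLLECTIVE_LIMIT = 1e-4   # no more than a 1-in-10,000 chance overall
INDIVIDUAL_LIMIT = 1e-6   # no single person above 1-in-1,000,000

def trajectory_ok(individual_risks):
    """individual_risks: per-person probability of harm along the path."""
    collective = sum(individual_risks)
    return (collective <= COLLECTIVE_LIMIT and
            max(individual_risks) <= INDIVIDUAL_LIMIT)

# A corridor over 50,000 people at one-in-a-billion each passes both tests.
sparse_route = trajectory_ok([1e-9] * 50_000)

# The same population at a slightly higher per-person risk fails on the
# collective limit, even though no individual exceeds their own threshold.
city_route = trajectory_ok([5e-9] * 50_000)
```

This is why the proposed corridors thread between Hermosillo, Chihuahua, and Monterrey rather than over them: adding a large metro area inflates the collective sum even when each resident’s individual risk stays tiny.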

So, what’s the danger? If something on Starship fails, it could disintegrate in the atmosphere. Surviving debris would rain down to the ground, as it did over the Turks and Caicos Islands after two Starship launch failures earlier this year. Two other Starship flights ran into problems once in space, tumbling out of control and breaking apart during reentry over the Indian Ocean.

The most recent Starship flight last month was more successful, with the ship reaching its target in the Indian Ocean for a pinpoint splashdown. The splashdown had an error of just 3 meters (10 feet), giving SpaceX confidence in returning future Starships to land.

This map shows Starship’s proposed reentry corridor. Credit: Federal Aviation Administration

One way of minimizing the risk to the public is to avoid flying over large metropolitan areas, and that’s exactly what SpaceX and the FAA are proposing to do, at least for the initial attempts to bring Starship home from orbit. A map of a “notional” Starship reentry flight path shows the vehicle beginning its reentry over the Pacific Ocean, then passing over Baja California and soaring above Mexico’s interior near the cities of Hermosillo and Chihuahua, each with a population of roughly a million people.

The trajectory would bring Starship well north of the Monterrey metro area and its 5.3 million residents, then over the Rio Grande Valley near the Texas cities of McAllen and Brownsville. During the final segment of Starship’s return trajectory, the vehicle will begin a vertical descent over Starbase before a final landing burn to slow it down for the launch pad’s arms to catch it in midair.

In addition to Monterrey, the proposed flight path dodges overflights of major US cities like San Diego, Phoenix, and El Paso, Texas.

Let’s back up

Setting up for this reentry trajectory requires SpaceX to launch Starship into an orbit with exactly the right inclination, or angle to the equator. There are safety constraints for SpaceX and the FAA to consider here, too.

All of the Starship test flights to date have launched toward the east, threading between South Florida and Cuba, south of the Bahamas, and north of Puerto Rico before heading over the North Atlantic Ocean. For Starship to target just the right orbit to set up for reentry, the rocket must fly in a slightly different direction over the Gulf.

Another map released by the FAA shows two possible paths Starship could take. One of the options goes to the southeast between Mexico’s Yucatan Peninsula and the western tip of Cuba, then directly over Jamaica as the rocket accelerates into orbit over the Caribbean Sea. The other would see Starship departing South Texas on a northeasterly path and crossing over North Florida before reaching the Atlantic Ocean.

While both trajectories fly over land, they avoid the largest cities situated near the flight path. For example, the southerly route misses Cancun, Mexico, and the northerly path flies between Jacksonville and Orlando, Florida. “Orbital launches would primarily be to low inclinations with flight trajectories north or south of Cuba that minimize land overflight,” the FAA wrote in its draft environmental assessment.

The FAA analyzed two launch trajectory options for future orbital Starship test flights. Credit: Federal Aviation Administration

The proposed launch and reentry trajectories would result in temporary airspace closures, the FAA said. This could force delays or rerouting of anywhere from seven to 400 commercial flights for each launch, according to the FAA’s assessment.

Launch airspace closures are already the norm for Starship test flights. The FAA concluded that the reentry path over Mexico would require the closure of a swath of airspace covering more than 4,200 miles. This would affect up to 200 more commercial airplane flights during each Starship mission. Eventually, the FAA aims to shrink the airspace closures as SpaceX demonstrates improved reliability with Starship test flights.

Eventually, SpaceX will move some flights of Starship to Florida’s Space Coast, where rockets can safely launch in many directions over the Atlantic. By then, SpaceX aims to be launching Starships at a regular cadence—first, multiple flights per month, then per week, and then per day.

This will enable all of the things SpaceX wants to do with Starship. Chief among these goals is to fly Starships to Mars. Before then, SpaceX must master orbital refueling. NASA also has a contract with SpaceX to build Starships to land astronauts near the Moon’s south pole.

But all of that assumes SpaceX can routinely launch and recover Starships. That’s what engineers hope to soon prove they can do.

Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.
