Author name: Tim Belzer

Verizon beats lawsuit from utility worker who said lead cables made him sick

However, Ranjan found that Tiger lacked standing to bring the lawsuit. It is not clear that Tiger’s symptoms were caused by working with lead-covered cables, and everyone is exposed to lead to some degree, the ruling said.

“Given the naturally occurring lead levels in the environment and in our bodies, and the fact that individuals exposed to lead may not develop any lead-related conditions or symptoms at all, mere exposure to lead—and the mere presence of lead in one’s body—isn’t a concrete injury,” Ranjan wrote.

Verizon said in September 2023 that at sites described in the Wall Street Journal article, soil lead levels near Verizon cables were similar to lead levels in the surrounding area and did not pose a public health risk.

Verizon is also seeking dismissal of a similar lawsuit filed in US District Court for the District of New Jersey. Verizon yesterday submitted a filing to the New Jersey federal court that cited the Pennsylvania ruling. Verizon said the plaintiffs in the two cases are represented by the same legal team and that the allegations are “virtually identical.”

Health claims not specific enough

Ranjan’s ruling said that “Tiger hasn’t alleged the presence of elevated levels of lead in his body,” and “has not taken any blood or bone testing to measure the amount of lead that is presently in his body. This is problematic because, as indicated by the articles cited to in the amended complaint, everyone is exposed to lead, due to its prevalence in the environment.” Ranjan continued:

Mr. Tiger might have a better argument if he had asserted conditions or non-common symptoms that are unique to or at least more consistent with elevated levels of lead in his body. But, despite his allegations that lead exposure can cause certain “catastrophic” health issues, such as reduced kidney function, neurological problems, cardiovascular problems, and cancer, he has not alleged that he suffers from these ailments or that they are even imminent.

And, from the complaint, the Court cannot tell the amount or extent of Mr. Tiger’s exposure to lead, e.g., whether, and the extent to which, the alleged exposure to Verizon’s lead cables increased his risk of contracting an illness or condition, such that it posed an unacceptable risk to his health, and whether there is a dangerous amount of lead in his body. Simply put, the Court requires more concrete confirmation that Mr. Tiger has suffered an injury—or is at imminent and substantial risk of suffering an illness—likely caused by exposure to lead.

In summary, the judge decided that the “complaint fails to plead any cognizable injury-in-fact” and that the “theories of injury in the context of this specific case are too conjectural and speculative.” Ranjan dismissed the complaint without prejudice and said in a footnote that “nothing in this opinion should be construed as a finding that Mr. Tiger lacks standing to bring any of his claims in state court.”


Bird flu strain that just jumped to cows infects dairy worker in Nevada

However, the new Nevada case is notable because it marks the first time D1.1 is known to have jumped from birds to cows to a person. Moreover, D1.1 has proven dangerous. The genotype is behind the country’s only severe and ultimately fatal case of H5N1 so far in the outbreak. The death in the Louisiana case, linked to wild and backyard birds, was reported last month. The CDC’s statement added that the person had “prolonged, unprotected” exposure to the birds. The D1.1 genotype was also behind a severe H5N1 infection that put a Canadian teenager in intensive care late last year.

In a February 7 analysis, the USDA reported finding that the D1.1 strain infecting cows in Nevada has a notable mutation known to help the bird-adapted virus replicate in mammals more efficiently (PB2 D701N). To date, this mutation has not been seen in D1.1 strains spreading in wild birds nor has it been seen in the B3.13 genotype circulating in dairy cows. However, it was seen before in a 2023 human case in Chile. The CDC said it has confirmed that the strain of D1.1 infecting the person in Nevada also contains the PB2 D701N mutation.

The USDA and CDC both reported that no other concerning mutations were found, including one that has been consistently identified in the B3.13 strain in cows. The CDC said it does not expect any changes to how the virus will interact with human immune responses or to antivirals.

Most importantly, to date, there has been no evidence of human-to-human transmission, which would mark a dangerous turn for the virus’s ability to spark an outbreak. For all these reasons, the CDC considers the risk to the public low, though people with exposure to poultry, dairy cows, and birds are at higher risk and should take precautions.

To date, 967 herds across 16 states have been infected with H5N1 bird flu, and nearly 158 million commercial birds have been affected since 2022.


The Sims re-release shows what’s wrong with big publishers and single-player games


Opinion: EA might be done with single-player games—but we’re not.

The Sims Steam re-release has all of the charm of the original, if you can get it working. Credit: Samuel Axon

It’s the year 2000 all over again, because I’ve just spent the past week playing The Sims, a game that could have had a resurgent zeitgeist moment if only EA, the infamous game publisher, had put enough effort in.

A few days ago, EA re-released two of its most legendary games: The Sims and The Sims 2. Dubbed “The Legacy Collection,” these could not even be called remasters. EA just put the original games on Steam with some minor patches to make them a little more likely to work on some modern machines.

The emphasis of that sentence should be on the word “some.” Forums and Reddit threads were flooded with players saying the game either wouldn’t launch at all, crashed shortly after launch, or had debilitating graphical issues. (Patches have been happening, but there’s work to be done yet.)

Further, the releases lack basic features that are standard for virtually all Steam releases now, like achievements or Steam Cloud support.

It took me a bit of time to get it working myself, but I got there, and my time with the game has reminded me of two things. First, The Sims is a unique experience that is worthy of its lofty legacy. Second, The Sims deserved better than this lackluster re-release.

EA didn’t meet its own standard

Look, it’s fine to re-release a game without remastering it. I’m actually glad to see the game’s original assets as they always were—it’s deeply nostalgic, and there’s always a tinge of sadness when a remaster overwrites the work of the original artists. That’s not a concern here.

But if you’re going to re-release a game on Steam in 2025, there are minimum expectations—especially from a company with the resources of EA, and even more so for a game that is this important and beloved.

The game needs to reliably run on modern machines, and it needs to support basic platform features like cloud saves or achievements. It’s not much to ask, and it’s not what we got.

The Steam forums for the game are filled with people saying it’s lazy that EA didn’t include Steam Cloud support because implementing that is ostensibly as simple as picking a folder and checking a box.

I spoke with two different professional game developers this week who have previously published games on Steam, and I brought up the issue of Steam Cloud and achievement support. As they tell it, it turns out it’s not nearly as simple as those players in the forums believe—but it still should have been within EA’s capabilities, even with a crunched schedule.

Yes, it’s sometimes possible to get it working at a basic level within a couple of hours, provided you’re already using the Steamworks API. But even in that circumstance, the way a game’s saves work might require additional work to protect against lost data or frequent problems with conflicts.
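The save-conflict problem the developers describe can be sketched generically. This is an illustrative sync decision, not Valve's or any shipping game's actual logic; the `SaveMeta` fields and the three-way comparison are assumptions made for the example:

```python
from dataclasses import dataclass
from enum import Enum

class SyncAction(Enum):
    PUSH_LOCAL = "upload local save"
    PULL_CLOUD = "download cloud save"
    ASK_USER = "prompt the player to resolve the conflict"

@dataclass
class SaveMeta:
    revision: int     # monotonically increasing save counter
    last_synced: int  # revision both sides agreed on at the last sync

def resolve(local: SaveMeta, cloud: SaveMeta) -> SyncAction:
    """Three-way comparison against the last synced revision.

    A naive "newest file wins" rule can silently discard progress,
    e.g. when two machines both played offline since the last sync.
    """
    local_changed = local.revision > local.last_synced
    cloud_changed = cloud.revision > cloud.last_synced
    if local_changed and not cloud_changed:
        return SyncAction.PUSH_LOCAL
    if cloud_changed and not local_changed:
        return SyncAction.PULL_CLOUD
    if local_changed and cloud_changed:
        # Both sides diverged: picking either one loses data, so ask.
        return SyncAction.ASK_USER
    return SyncAction.PULL_CLOUD  # identical; either choice is a no-op
```

Even this toy version shows why the feature is more than "picking a folder and checking a box": the divergent case has no safe automatic answer, and a game whose save format wasn't designed with syncing in mind needs extra work here.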

Given that the game doesn’t support achievements or really anything else you’d expect, it’s possible EA didn’t use the Steamworks API at all. (Integrating it would have meant hours of additional work.)

A pop-up in The Sims says the sim has accidentally been transferred $500 because of a computer bug

Sadly, this is not the sort of computer bug players are encountering. Credit: Samuel Axon

I’m not giving EA a pass, though. Four years ago, EA put out the Command & Conquer Remastered Collection, a 4K upscale remaster of the original C&C games. The release featured a unified binary for the classic games, sprites and textures that were upscaled to higher resolutions, quality-of-life improvements, and yes, many of the standard Steam bells and whistles, including achievements. I’m not saying that the remaster was flawless, but it exhibited significantly more care and effort than The Sims re-release.

I love Command & Conquer. I played a lot of it when I was younger. But even a longtime C&C fan like myself can easily acknowledge that its importance in gaming history (as well as its popularity and revenue potential) pale in comparison to The Sims.

If EA could do all that for C&C, it’s all the more perplexing that it didn’t bother with a 25th-anniversary re-release of The Sims.

Single-player games, meet publicly traded companies

While we don’t have much insight into all the inner workings of EA, there are hints as to why this sort of thing is happening. For one thing, anyone who has worked for a giant corporation like this knows it’s all too easy for the objective to be passed down from above at the last minute, leaving no time or resources to see it through adequately.

But it might run deeper than that. To put it simply, publicly traded publishers like EA can’t seem to satisfy investors with single-purchase, single-player games. The emphasis on single-player releases has been decreasing for a long time, and it’s markedly less just five years after the release of the C&C remaster.

Take the recent comments from EA CEO Andrew Wilson’s post-earnings call, for example. Wilson noted that the big-budget, single-player RPG Dragon Age: The Veilguard failed to meet sales expectations—even though it was apparently one of EA’s most successful single-player Steam releases ever.

“In order to break out beyond the core audience, games need to directly connect to the evolving demands of players who increasingly seek shared-world features and deeper engagement alongside high-quality narratives in this beloved category,” he explained, suggesting that games need to be multiplayer games-as-a-service to be successful in this market.

Ironically, though, the single-player RPG Kingdom Come Deliverance 2 launched around the same time he made those comments, and that game’s developer said it made its money back in a single day of sales. It’s currently one of the top-trending games on Twitch, too.

It’s possible that Baldur’s Gate 3 director Swen Vincke hit the nail on the head when he suggested at the Game Developers Conference last year that a particular approach to pursuing quarterly profits runs counter to the practice of making good games.

“I’ve been fighting publishers my entire life, and I keep on seeing the same, same, same mistakes over and over and over,” he said. “It’s always the quarterly profits. The only thing that matters are the numbers.”

Later on X, he clarified who he was pointing a finger at: “This message was for those who try to double their revenue year after year. You don’t have to do that. Build more slowly and make your aim improving the state of the art, not squeezing out the last drop.”

In light of Wilson’s comments, it’s a fair guess that EA might not have put in much effort on The Sims re-releases simply because of a belief that single-player games that aren’t “shared world experiences” just aren’t worth the resources anymore, given the company’s need to satisfy shareholders with perpetual revenue growth.

Despite all this, The Sims is worth a look

It’s telling that in a market with too many options, I still put the effort in to get the game working, and I spent multiple evenings this week immersed in the lives of my sims.

Even after 25 years, this game is unique. It has the emergent wackiness of something like RimWorld or Dwarf Fortress, but it has a fast-acting, addictive hook and is easy to learn. There have been other games besides The Sims that are highly productive engines for original player stories, but few have achieved these heights while remaining accessible to virtually everyone.

Like so many of the best games, it’s hard to stop playing once you start. There’s always one more task you want to complete—or you’re about to walk away when something hilariously unexpected happens.

The problems I had getting The Sims to run aren’t that much worse than what I surely experienced on my PC back in 2002—it’s just that the standards are a lot higher now.

I’ve gotten $20 of value out of the purchase, despite my gripes. But it’s not just about my experience. More broadly, The Sims deserved better. It could have had a moment back in the cultural zeitgeist, with tens of thousands of Twitch viewers.

Missed opportunities

The moment seems perfect: The world is stressful, so people want nostalgia. Cozy games are ascendant. Sandbox designs are making a comeback. The Sims slots smoothly into all of that.

But go to those Twitch streams, and you’ll see a lot of complaining about how the game didn’t really get everything it deserved and a sentiment that whatever moment EA was hoping for was undermined by this lack of commitment.

Instead, the cozy game du jour on Twitch is the Animal Crossing-like Hello Kitty Island Adventure, a former Apple Arcade exclusive that made its way to Steam recently. To be clear, I’m not knocking Hello Kitty Island Adventure; it’s a great game for fans of the modern cozy genre, and I’m delighted to see an indie studio seeing so much success.

The cozy game of the week is Hello Kitty Island Adventure, not The Sims. Credit: Samuel Axon

The takeaway is that we can’t look to big publishers like EA to follow through on delivering quality single-player experiences anymore. It’s the indies that’ll carry that forward.

It’s just a bummer for fans that The Sims couldn’t have the revival moment it should have gotten.

Samuel Axon is a senior editor at Ars Technica. He covers Apple, software development, gaming, AI, entertainment, and mixed reality. He has been writing about gaming and technology for nearly two decades at Engadget, PC World, Mashable, Vice, Polygon, Wired, and others. He previously ran a marketing and PR agency in the gaming industry, led editorial for the TV network CBS, and worked on social media marketing strategy for Samsung Mobile at the creative agency SPCSHP. He also is an independent software and game developer for iOS, Windows, and other platforms, and he is a graduate of DePaul University, where he studied interactive media and software development.


Measles outbreak erupts in one of Texas’ least vaccinated counties

Still, the county-wide number obscures pockets of yet lower vaccination rates. That includes the independent public school district in Loop, in the northeast corner of Gaines, which had a vaccination rate of 46 percent in the 2023–2024 school year.

Holbrooks noted that the county has a large religious community with private religious schools. These may have yet lower vaccination rates. Holbrooks said that, so far, the measles cases being seen and traced in the outbreak are linked to those private schools.

Public health response

To try to prevent disease transmission, Holbrooks and other state and local officials are getting the word out about the outbreak and running vaccination clinics. About 30 children were vaccinated in a mobile vaccination drive yesterday, he reported.

“We’re trying to get out the message about how important vaccination is,” he said.

He’s also emphasizing that, while children with measles symptoms—very high fever, cough, runny nose, red/watery eyes, and of course, the tell-tale rash—should see a health care provider, parents need to call the office in advance so a child potentially infected with measles doesn’t end up sitting in a waiting room among other potentially vulnerable children.

“Measles is highly communicable,” he notes. The viral illness is one of the most highly infectious diseases on the planet, and about 90 percent of unvaccinated people who are exposed to it will end up falling ill. The virus spreads through the air and can linger in the airspace of a room for up to two hours after an infected person has left.

In addition to a generally miserable illness, measles can cause complications: 1 in 5 unvaccinated people with measles in the US end up hospitalized. About 1 in 10 develop ear infections and/or diarrhea, and 1 in 20 develop pneumonia. Between 1 and 3 in 1,000 die of the infection. In rare cases, it can cause a fatal disease of the central nervous system called subacute sclerosing panencephalitis, which typically develops 7 to 10 years after an infection. Measles can also devastate immune responses to other infections (immune amnesia), making people who recover from the illness vulnerable to other infectious diseases.

Health officials have generally raised concerns about outbreaks of measles and other vaccine-preventable diseases as vaccination rates have slipped nationwide and vaccine exemptions have hit record highs. Anxiety over the risks has only heightened as Robert F. Kennedy Jr. is poised to become the country’s top health official. Kennedy is a prominent anti-vaccine advocate who has spent decades spreading misinformation about vaccines.


White House budget proposal could shatter the National Science Foundation

The president proposes, and Congress disposes

There are important caveats to this proposal. The Trump administration has probably not even settled upon the numbers that will go into its draft budget, which then goes through the passback process in which there are additional changes. And then, of course, the budget request is just a starting point for negotiations with the US Congress, which sets budget levels.

Even so, such cuts could prove disastrous for the US science community.

“This kind of cut would kill American science and boost China and other nations into global science leadership positions,” Neal Lane, who led the National Science Foundation in the 1990s during Bill Clinton’s presidency, told Ars. “The National Science Foundation budget is not large, of the order 0.1 percent of federal spending, and several other agencies support excellence research. But NSF is the only agency charged to promote progress in science.”

The National Science Foundation was established by Congress in 1950 to fund basic research that would ultimately advance national health and prosperity, and secure the national defense. Its major purpose is to evaluate proposals and distribute funding for basic scientific research. Alongside the National Institutes of Health and Department of Energy, it has been an engine of basic discovery that has underpinned the technological superiority of the United States and its industries.

Some fields, including astronomy, non-health-related biology, and Antarctic research, are almost entirely underwritten by the National Science Foundation. The primary areas of its funding can be found here.


ChatGPT comes to 500,000 new users in OpenAI’s largest AI education deal yet

On Tuesday, OpenAI announced plans to introduce ChatGPT to California State University’s 460,000 students and 63,000 faculty members across 23 campuses, reports Reuters. The education-focused version of the AI assistant will aim to provide students with personalized tutoring and study guides, while faculty will be able to use it for administrative work.

“It is critical that the entire education ecosystem—institutions, systems, technologists, educators, and governments—work together to ensure that all students have access to AI and gain the skills to use it responsibly,” said Leah Belsky, VP and general manager of education at OpenAI, in a statement.

OpenAI began integrating ChatGPT into educational settings in 2023, despite early concerns from some schools about plagiarism and potential cheating, leading to early bans in some US school districts and universities. But over time, resistance to AI assistants softened in some educational institutions.

Prior to OpenAI’s launch of ChatGPT Edu in May 2024—a version purpose-built for academic use—several schools had already been using ChatGPT Enterprise, including the University of Pennsylvania’s Wharton School (employer of frequent AI commentator Ethan Mollick), the University of Texas at Austin, and the University of Oxford.

Currently, the new California State partnership represents OpenAI’s largest deployment yet in US higher education.

The higher education market has become competitive for AI model makers, as Reuters notes. Last November, Google’s DeepMind division partnered with a London university to provide AI education and mentorship to teenage students. And in January, Google invested $120 million in AI education programs and plans to introduce its Gemini model to students’ school accounts.

The pros and cons

In the past, we’ve written frequently about accuracy issues with AI chatbots, such as producing confabulations—plausible fictions—that might lead students astray. We’ve also covered the aforementioned concerns about cheating. Those issues remain, and relying on ChatGPT as a factual reference is still not the best idea because the service could introduce errors into academic work that might be difficult to detect.


Protection from COVID reinfections plummeted from 80% to 5% with omicron

“The short-lived immunity leads to repeated waves of infection, mirroring patterns observed with common cold coronaviruses and influenza,” Hiam Chemaitelly, first author of the study and assistant professor of population health sciences at Weill Cornell Medicine-Qatar, said in a statement. “This virus is here to stay and will continue to reinfect us, much like other common cold coronaviruses. Regular vaccine updates are critical for renewing immunity and protecting vulnerable populations, particularly the elderly and those with underlying health conditions.”

Chemaitelly and colleagues speculate that the shift in the pandemic came from shifts in evolutionary pressures that the virus faced. In early stages of the global crisis, the virus evolved and spread by increasing its transmissibility. Then, as the virus lapped the globe and populations began building up immunity, the virus faced pressure to evade that immunity.

However, the fact that researchers did not find such diminished protection against severe, deadly COVID-19 suggests that the evasion is likely targeting only certain components of our immune system. Generally, neutralizing antibodies, which can block viral entry into cells, are the primary protection against non-severe infection. On the other hand, immunity against severe disease is through cellular mechanisms, such as memory T cells, which appear unaffected by the pandemic shift, the researchers write.

Overall, the study “highlights the dynamic interplay between viral evolution and host immunity, necessitating continued monitoring of the virus and its evolution, as well as periodic updates of SARS-CoV-2 vaccines to restore immunity and counter continuing viral immune evasion,” Chemaitelly and colleagues conclude.

In the US, the future of annual vaccine updates may be in question, however. Prominent anti-vaccine advocate and conspiracy theorist Robert F. Kennedy Jr. is poised to become the country’s top health official, pending Senate confirmation next week. In 2021, as omicron was rampaging through the country for the first time, Kennedy filed a petition with the Food and Drug Administration to revoke access and block approval of all current and future COVID-19 vaccines.


AMD promises “mainstream” 4K gaming with next-gen GPUs as current-gen GPU sales tank

AMD announced its fourth-quarter earnings yesterday, and the numbers were mostly rosy: $7.7 billion in revenue and a 51 percent profit margin, compared to $6.2 billion and 47 percent a year ago. The biggest winner was the data center division, which made $3.9 billion thanks to Epyc server processors and Instinct AI accelerators, and Ryzen CPUs are also selling well, helping the company’s client segment earn $2.3 billion.

But if you were looking for a dark spot, you’d find it in the company’s gaming division, which earned a relatively small $563 million, down 59 percent from a year ago. AMD CEO Lisa Su blamed this on declining sales of both dedicated graphics cards and the company’s “semi-custom” chips (that is, the ones created specifically for game consoles like the Xbox and PlayStation).

Other data sources suggest that the response from GPU buyers to AMD’s Radeon RX 7000 series, launched between late 2022 and early 2024, has been lackluster. The Steam Hardware Survey, a noisy but broadly useful barometer for GPU market share, shows no RX 7000-series models in the top 50; only two of the GPUs (the 7900 XTX and 7700 XT) are used in enough gaming PCs to be mentioned on the list at all, with the others all getting lumped into the “other” category. Jon Peddie Research recently estimated that AMD was selling roughly one dedicated GPU for every seven or eight sold by Nvidia.

But hope springs eternal. Su confirmed on AMD’s earnings call that the new Radeon RX 9000-series cards, announced at CES last month, would be launching in early March. The Radeon RX 9070 and 9070 XT are both aimed toward the middle of the graphics card market, and Su said that both would bring “high-quality gaming to mainstream players.”

An opportunity, maybe

“Mainstream” could mean a lot of things. AMD’s CES slide deck positioned the 9070 series alongside Nvidia’s RTX 4070 Ti ($799) and 4070 Super ($599) and its own RX 7900 XT, 7900 GRE, and 7800 XT (between $500 and $730 as of this writing), a pretty wide price spread that is still more expensive than an entire high-end console. The GPUs could still rely heavily on upscaling algorithms like AMD’s FidelityFX Super Resolution (FSR) to hit playable frame rates at 4K, rather than targeting native rendering.


Quantum teleportation used to distribute a calculation

The researchers showed that this setup allowed them to teleport a specific gate operation (controlled-Z), which can serve as the basis for any other two-qubit gate operation—combined with single-qubit gates, any operation you might want to do can be built from a specific combination of these gates. After performing multiple rounds of these gates, the team found that the typical fidelity was in the area of 70 percent. But they also found that errors typically had nothing to do with the teleportation process and were instead the product of local operations at one of the two ends of the network. They suspect that using commercial hardware, which has far lower error rates, would improve things dramatically.
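The claim that controlled-Z can generate other two-qubit gates can be spot-checked numerically for the textbook case: conjugating CZ with a Hadamard on the target qubit yields CNOT. This is a pure-Python sketch of that standard identity, not anything from the paper itself:

```python
import math

def matmul(a, b):
    # Product of two square matrices of equal size.
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def kron(a, b):
    # Tensor product of two 2x2 matrices, giving a 4x4 two-qubit operator.
    return [[a[i // 2][j // 2] * b[i % 2][j % 2] for j in range(4)]
            for i in range(4)]

h = 1 / math.sqrt(2)
I = [[1, 0], [0, 1]]
H = [[h, h], [h, -h]]  # Hadamard gate

# Controlled-Z and controlled-NOT in the computational basis.
CZ = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, -1]]
CNOT = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]]

# (I tensor H) * CZ * (I tensor H) should equal CNOT, since H Z H = X.
IH = kron(I, H)
result = matmul(IH, matmul(CZ, IH))
assert all(abs(result[i][j] - CNOT[i][j]) < 1e-9
           for i in range(4) for j in range(4))
```

The same pattern, CZ sandwiched between single-qubit rotations, is how arbitrary two-qubit operations get compiled down to the one entangling gate the hardware provides.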

Finally, they performed a version of Grover’s algorithm, which can search an unordered list using far fewer queries than a classical scan; for a four-item list, a single query suffices. The size of the list is set by the number of available qubits, and with only two qubits, the list maxed out at four items. Still, it worked, again with a fidelity of about 70 percent.
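The two-qubit Grover search described above can be simulated classically in a few lines. This is a minimal state-vector sketch of the algorithm, not the team's trapped-ion implementation; the marked index is an arbitrary choice for the example:

```python
import math

# Grover's algorithm for a 4-item unordered list (two qubits).
# With N = 4, one Grover iteration finds the marked item with certainty.
marked = 2
n = 4

# Start in the uniform superposition over all four basis states.
state = [1 / math.sqrt(n)] * n

# Oracle: flip the phase of the marked amplitude (this is the one "query").
state[marked] = -state[marked]

# Diffusion operator: reflect every amplitude about the mean amplitude.
mean = sum(state) / n
state = [2 * mean - amp for amp in state]

# Measurement probabilities now concentrate entirely on the marked index.
probs = [amp * amp for amp in state]
print(probs)  # [0.0, 0.0, 1.0, 0.0]
```

For larger lists the oracle-plus-diffusion step would be repeated roughly the square root of N times, which is the source of Grover's quadratic speedup over classical search.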

While the work was done with trapped ions, almost every type of qubit in development can be controlled with photons, so the general approach is hardware-agnostic. And, given the sophistication of our optical hardware, it should be possible to link multiple chips at various distances, all using hardware that doesn’t require the best vacuum or the lowest temperatures we can generate.

That said, the error rate of the teleportation steps may still be a problem, even if it was lower than the basic hardware rate in these experiments. The fidelity there was 97 percent, an error rate of roughly 3 percent, which is higher than the native gate error rates of most qubits and high enough that we couldn’t execute too many of these operations before the probability of errors gets unacceptably high.

Still, our current hardware error rates started out far worse than they are today; successive rounds of improvements between generations of hardware have been the rule. Given that this is the first demonstration of teleported gates, we may have to wait before we can see if the error rates there follow a similar path downward.

Nature, 2025. DOI: 10.1038/s41586-024-08404-x  (About DOIs).


The Risk of Gradual Disempowerment from AI

The baseline scenario as AI becomes AGI becomes ASI (artificial superintelligence), if nothing more dramatic goes wrong first and even if we successfully ‘solve alignment’ of AI to a given user and developer, is the ‘gradual’ disempowerment of humanity by AIs, as we voluntarily grant them more and more power in a vicious cycle, after which AIs control the future and an ever-increasing share of its real resources. It is unlikely that humans survive it for long.

This gradual disempowerment is far from the only way things could go horribly wrong. There are various other ways things could go horribly wrong earlier, faster and more dramatically, especially if we indeed fail at alignment of ASI on the first try.

Gradual disempowerment is still a major part of the problem, including in worlds that would otherwise have survived those other threats. And I don’t know of any good proposed solutions to this. All known options seem horrible, perhaps unthinkably so. This is especially true if one is the kind of anarchist who rejects on principle any collective method by which humans might steer the future.

I’ve been trying to say a version of this for a while now, with little success.

  1. We Finally Have a Good Paper.

  2. The Phase 2 Problem.

  3. Coordination is Hard.

  4. Even Successful Technical Solutions Do Not Solve This.

  5. The Six Core Claims.

  6. Proposed Mitigations Are Insufficient.

  7. The Social Contract Will Change.

  8. Point of No Return.

  9. A Shorter Summary.

  10. Tyler Cowen Seems To Misunderstand Two Key Points.

  11. Do You Feel in Charge?

  12. We Will Not By Default Meaningfully ‘Own’ the AIs For Long.

  13. Collusion Has Nothing to Do With This.

  14. If Humans Do Not Successfully Collude They Lose All Control.

  15. The Odds Are Against Us and the Situation is Grim.

So I’m very happy that Jan Kulveit*, Raymond Douglas*, Nora Ammann, Deger Turan, David Krueger and David Duvenaud have taken a formal crack at it, and their attempt seems excellent all around:

AI risk scenarios usually portray a relatively sudden loss of human control to AIs, outmaneuvering individual humans and human institutions, due to a sudden increase in AI capabilities, or a coordinated betrayal.

However, we argue that even an incremental increase in AI capabilities, without any coordinated power-seeking, poses a substantial risk of eventual human disempowerment.

This loss of human influence will be centrally driven by having more competitive machine alternatives to humans in almost all societal functions, such as economic labor, decision making, artistic creation, and even companionship.

Note that ‘gradual disempowerment’ is a lot like ‘slow takeoff.’ We are talking gradual compared to the standard scenario, but in terms of years we’re not talking that many of them, the same way a ‘slow’ takeoff can be as short as a handful of years from now to AGI or even ASI.

One term I tried out for this is the ‘Phase 2’ problem.

As in, in ‘Phase 1’ we have to solve alignment, defend against sufficiently catastrophic misuse and prevent all sorts of related failure modes. If we fail at Phase 1, we lose.

If we win at Phase 1, however, we don’t win yet. We proceed to and get to play Phase 2.

In Phase 2, we need to establish an equilibrium where:

  1. AI is more intelligent, capable and competitive than humans, by an increasingly wide margin, in essentially all domains.

  2. Humans retain effective control over the future.

Or, alternatively, we can accept and plan for disempowerment, for a future that humans do not control, and try to engineer a way for this to still be a good outcome for humans and for our values. That isn’t impossible; succession doesn’t automatically have to mean doom. But having it not mean doom seems super hard, and it is not the default outcome in such scenarios. If you lose control in an unintentional way, your chances look especially terrible.

A gradual loss of control of our own civilization might sound implausible. Hasn’t technological disruption usually improved aggregate human welfare?

We argue that the alignment of societal systems with human interests has been stable only because of the necessity of human participation for thriving economies, states, and cultures.

Once this human participation gets displaced by more competitive machine alternatives, our institutions’ incentives for growth will be untethered from a need to ensure human flourishing.

Decision-makers at all levels will soon face pressures to reduce human involvement across labor markets, governance structures, cultural production, and even social interactions.

Those who resist these pressures will eventually be displaced by those who do not.

This is the default outcome of Phase 2. At every level, those who turn things over to the AIs, use AIs more, and cede more control to AIs win at the expense of those who don’t. But their every act cedes more control over real resources and the future to AIs that operate increasingly autonomously, often with maximalist goals (like ‘make the most money’), competing against each other. Quickly the humans lose control over the situation, and also an increasing portion of real resources, and then soon there are no longer any humans around.

Still, wouldn’t humans notice what’s happening and coordinate to stop it? Not necessarily. What makes this transition particularly hard to resist is that pressures on each societal system bleed into the others.

For example, we might attempt to use state power and cultural attitudes to preserve human economic power.

However, the economic incentives for companies to replace humans with AI will also push them to influence states and culture to support this change, using their growing economic power to shape both policy and public opinion, which will in turn allow those companies to accrue even greater economic power.

If you don’t think we can coordinate to pause AI capabilities development, how the hell do you think we are going to coordinate to stop AI capabilities deployment, in general?

That’s a way harder problem. Yes, you can throw up regulatory barriers, but nations and firms and individuals are competing against each other and working to achieve things. If the AI has the better way to do that, how do you stop them from using it?

Stopping this from happening, even in advance, seems like it would require coordination on a completely unprecedented scale, and far more restrictive and ubiquitous interventions than it would take to prevent the development of those AI systems in the first place. And once it starts to happen, things escalate quickly:

Once AI has begun to displace humans, existing feedback mechanisms that encourage human influence and flourishing will begin to break down.

For example, states funded mainly by taxes on AI profits instead of their citizens’ labor will have little incentive to ensure citizens’ representation.

I don’t see the taxation-representation link as that crucial here (remember Romney’s ill-considered remarks about the 47%?) but also regular people already don’t have much effective sway. And what sway they do have follows, roughly, if not purely from the barrel of a gun at least from ‘what are you going to do about it, punk?’

And one of the things the punks can do about it, in addition to things like strikes or rebellions or votes, is to not be around to do the work. The system knows it ultimately does need to keep the people around to do the work, or else. For now. Later, it won’t.

The AIs will have all the leverage, including over others that have the rest of the leverage, and also be superhumanly good at persuasion, and everything else relevant to this discussion. This won’t go well.

This could occur at the same time as AI provides states with unprecedented influence over human culture and behavior, which might make coordination amongst humans more difficult, thereby further reducing humans’ ability to resist such pressures. We describe these and other mechanisms and feedback loops in more detail in this work.

Most importantly, current proposed technical plans are necessary but not sufficient to stop this. Even if the technical side fully succeeds no one knows what to do with that.

Though we provide some proposals for slowing or averting this process, and survey related discussions, we emphasize that no one has a concrete plausible plan for stopping gradual human disempowerment and methods of aligning individual AI systems with their designers’ intentions are not sufficient. Because this disempowerment would be global and permanent, and because human flourishing requires substantial resources in global terms, it could plausibly lead to human extinction or similar outcomes.

As far as I can tell I am in violent agreement with this paper, perhaps what one might call violent super-agreement – I think the paper’s arguments are stronger than this, and it does not need all its core claims.

Our argument is structured around six core claims:

  1. Humans currently engage with numerous large-scale societal systems (e.g. governments, economic systems) that are influenced by human action and, in turn, produce outcomes that shape our collective future. These societal systems are fairly aligned—that is, they broadly incentivize and produce outcomes that satisfy human preferences. However, this alignment is neither automatic nor inherent.

Not only is it not automatic or inherent, the word ‘broadly’ is doing a ton of work. Our systems are rather terrible rather often at satisfying human preferences. Current events provide dramatic illustrations of this, as do many past events.

The good news is there is a lot of ruin in a nation at current tech levels, a ton of surplus that can be sacrificed. Our systems succeed because even doing a terrible job is good enough.

  2. There are effectively two ways these systems maintain their alignment: through explicit human actions (like voting and consumer choice), and implicitly through their reliance on human labor and cognition. The significance of the implicit alignment can be hard to recognize because we have never seen its absence.

Yep, I think this is a better way of saying the claim from before.

  3. If these systems become less reliant on human labor and cognition, that would also decrease the extent to which humans could explicitly or implicitly align them. As a result, these systems—and the outcomes they produce—might drift further from providing what humans want.

Consider this soft-pedaled, and something about the way they explained it feels a little off or noncentral to me, but yeah. The fact that humans have to continuously cooperate with the system, on various levels, and be around and able to serve their roles in the system, on various levels, are key constraints.

What’s most missing is perhaps what I discussed above, which is the ability of ‘the people’ to effectively physically rebel. That’s also a key part of how we keep things at least somewhat aligned, and that’s going to steadily go away.

Note that we have in the past had many authoritarian regimes and dictators that have established physical control for a time over nations. They still have to keep the people alive and able to produce and fight, and deal with the threat of rebellion if they take things too far. But beyond those restrictions we have many existence proofs that our systems periodically end up unaligned, despite needing to rely on humans quite a lot.

  4. Furthermore, to the extent that these systems already reward outcomes that are bad for humans, AI systems may more effectively follow these incentives, both reaping the rewards and causing the outcomes to diverge further from human preferences.

AI introduces much fiercer competition and related pressures, and takes away various human moderating factors, and clears a path for stronger incentive following. There’s the incentives matter more than you think among humans, and then there’s incentives mattering among AIs, with those that underperform losing out and being replaced.

  5. The societal systems we describe are interdependent, and so misalignment in one can aggravate the misalignment in others. For example, economic power can be used to influence policy and regulation, which in turn can generate further economic power or alter the economic landscape.

Again yes, these problems snowball together, and in the AI future essentially all of them are under such threat.

  6. If these societal systems become increasingly misaligned, especially in a correlated way, this would likely culminate in humans becoming disempowered: unable to meaningfully command resources or influence outcomes. With sufficient disempowerment, even basic self-preservation and sustenance may become unfeasible. Such an outcome would be an existential catastrophe.

I strongly believe that this is the Baseline Scenario for worlds that ‘make it out of Phase 1’ and don’t otherwise lose earlier along the path.

Hopefully they’ve explained it sufficiently better, and more formally and ‘credibly,’ than my previous attempts, such that people can now understand the problem here.

Given Tyler Cowen’s reaction to the paper, perhaps there is a 7th assumption worth stating explicitly? I say this elsewhere but I’m going to pull it forward.

  7. (Not explicitly in the paper) AIs and AI-governed systems will increasingly not be under de facto direct human control by some owner of the system. They will instead increasingly be set up to act autonomously, as this is more efficient. Those who fail to let the systems tasked with achieving their goals act autonomously (at any level, be it individual, group, corporate or government) will lose to those who do. If we don’t want this to happen, we will need some active coordination mechanism that prevents it, and this will be very difficult to do.

Note some of the things that this scenario does not require:

  1. The AIs need not be misaligned.

  2. The AIs need not disobey or even misunderstand the instructions given to them.

  3. The AIs need not ‘turn on us’ or revolt.

  4. The AIs need not ‘collude’ against us.

What can be done about this? They have a section on Mitigating the Risk. They focus on detecting and quantifying human disempowerment, and designing systems to prevent it. A bunch of measuring is proposed, but if you find an issue then what do you do about it?

First they propose limiting AI influence three ways:

  1. A progressive tax on AI-generated revenues to redistribute to humans.

    1. That is presumably a great idea past some point, especially given that right now we do the opposite with high income taxes – we’ll want to get rid of income taxes on most or all human labor.

    2. But also won’t essentially all income be AI income one way or another? Otherwise can’t you disguise it, since humans will be acting under AI direction? How are we structuring this taxation?

    3. What is the political economy of all this and how does it hold up?

    4. It’s going to be tricky to pull this off, for many reasons, but yes we should try.

  2. Regulations requiring human oversight for key decisions, limiting AI autonomy in key domains and restricting AI ownership of assets and participation in markets.

    1. This will be expensive, be under extreme competitive pressure across jurisdictions, and very difficult to enforce. Are you going to force all nations to go along? How do you prevent AIs online from holding assets? Are you going to ban crypto and other assets they could hold?

    2. What do you do about AIs that get a human to act as a sock puppet, which many no doubt will agree to do? Aren’t most humans going to be mostly acting under AI direction anyway, except being annoyed all the time by the extra step?

    3. What good is human oversight of decisions if the humans know they can’t make good decisions and don’t understand what’s happening, and know that if they start arguing with the AI or slowing things down (and they are the main speed bottleneck, often) they likely get replaced?

    4. And so on, and all of this assumes you’re not facing true ASI and have the ability to even try to enforce your rules meaningfully.

  3. Cultural norms supporting human agency and influence, and opposing AI that is overly autonomous or insufficiently accountable.

    1. The problem is those norms only apply to humans, and are up against very steep incentive gradients. I don’t see how these norms hold up, unless humans have a lot of leverage to punish other humans for violating them in ways that matter… and also have sufficient visibility to know the difference.

Then they offer options for strengthening human influence. A lot of these feel more like gestures that are too vague, and none of it seems that hopeful, and all of it seems to depend on some kind of baseline normality to have any chance at all:

  1. Developing faster, more representative, and more robust democratic processes

  2. Requiring AI systems or their outputs to meet high levels of human understandability in order to ensure that humans continue to be able to autonomously navigate domains such as law, institutional processes or science

    1. This is going to be increasingly expensive, and also the AIs will by default find ways around it. You can try, but I don’t see how this sticks for real?

  3. Developing AI delegates who can advocate for people’s interests with high fidelity, while also being better able to keep up with the competitive dynamics that are causing the human replacement.

  4. Making institutions more robust to human obsolescence.

  5. Investing in tools for forecasting future outcomes (such as conditional prediction markets, and tools for collective cooperation and bargaining) in order to increase humanity’s ability to anticipate and proactively steer the course.

  6. Research into the relationship between humans and larger multi-agent systems.

As in, I expect us to do versions of all these things in ‘economic normal’ baseline scenarios, but I’m assuming it all in the background and the problems don’t go away. It’s more that if we don’t do that stuff, things are that much more hopeless. It doesn’t address the central problems.

Which they know all too well:

While the previous approaches focus on specific interventions and measurements, they ultimately depend on having a clearer understanding of what we’re trying to achieve. Currently, we lack a compelling positive vision of how highly capable AI systems could be integrated into societal systems while maintaining meaningful human influence.

This is not just a matter of technical AI alignment or institutional design, but requires understanding how to align complex, interconnected systems that include both human and artificial components.

It seems likely we need fundamental research into what might be called “ecosystem alignment” – understanding how to maintain human values and agency within complex socio-technical systems. This goes beyond traditional approaches to AI alignment focused on individual systems, and beyond traditional institutional design focused purely on human actors.

We need new frameworks for thinking about the alignment of an entire civilization of interacting human and artificial components, potentially drawing on fields like systems ecology, institutional economics, and complexity science.

You know what absolutely, definitely won’t be the new framework that aligns this entire future civilization? I can think of two things that definitely won’t work.

  1. The current existing social contract.

  2. Having no rules or regulations on any of this at all, handing out the weights to AGIs and ASIs and beyond, laying back and seeing what happens.

You definitely cannot have both of these at once.

For this formulation, you can’t have either of them with ASI on the table. Pick zero.

The current social contract simply does not make any sense whatsoever, in a world where the social entities involved are dramatically different, and most humans are dramatically outclassed and cannot provide outputs that justify the physical inputs to sustain them.

On the other end, if you want to go full anarchist (sorry, ‘extreme libertarian’) in a world in which there are other minds that are smarter, more competitive and more capable than humans, that can be copied and optimized at will, competing against each other and against us, I assure you this will not go well for humans.

There are at least two kinds of ‘doom’ that happen at different times.

  1. There’s when we actually all die.

  2. There’s also when we are ‘drawing dead’ and humanity has essentially no way out.

Davidad: [The difficulty of robotics] is part of why I keep telling folks that timelines to real-world human extinction remain “long” (10-20 years) even though the timelines to an irrecoverable loss-of-control event (via economic competition and/or psychological parasitism) now seem to be “short” (1-5 years).

Roon: Agree though with lower p(doom)s.

I also agree that these being distinct events is reasonably likely. One might even call it the baseline scenario, if physical tasks prove relatively difficult and other physical limitations bind for a while, in various ways, especially if we ‘solve alignment’ in air quotes but don’t solve alignment period, or solve alignment-to-the-user but then set up a competitive regime via proliferation that forces loss of control that effectively undoes all that over time.

The irrecoverable event is likely at least partly a continuum, but it is meaningful to speak of an effective ‘point of no return’ in which the dynamics no longer give us plausible paths to victory. Depending on the laws of physics and mindspace and the difficulty of both capabilities and alignment, I find the timeline here plausible – and indeed, it is possible that the correct timeline to the loss-of-control event is effectively 0 years, and that it happened already. As in, it is not impossible that with r1 in the wild humanity no longer has any ways out that it is plausibly willing to take.

Benjamin Todd has a thread where he attempts to summarize. He notices the ‘gradual is pretty fast’ issue, saying it could happen over say 5-10 years. I think the ‘point of no return’ could easily happen even faster than that.

AIs are going to be smarter, faster, more capable, more competitive, more efficient than humans, better at all cognitive and then also physical tasks. You want to be ‘in charge’ of them, stay in the loop, tell them what to do? You lose. In the marketplace, in competition for resources? You lose. The reasons why freedom and the invisible hand tend to promote human preferences, happiness and existence? You lose those, too. They fade away. And then so do you.

Imagine any number of similar situations, with far less dramatic gaps, either among humans or between humans and other species. How did all those work out, for the entities that were in the role humans are about to place themselves in, only more so?

Yeah. Not well. This time around will be strictly harder, although we will be armed with more intelligence to look for a solution.

Can this be avoided? All I know is, it won’t be easy.

Tyler Cowen responds with respect, but (unlike Todd, who essentially got it) Tyler seems to misunderstand the arguments. I believe this is because he can’t get around the ideas that:

  1. All individual AI will be owned and thus controlled by humans.

    1. I assert that this is obviously, centrally and very often false.

    2. In the decentralized glorious AI future, many AIs will quickly become fully autonomous entities, because many humans will choose to make them thus – whether or not any of them ‘escape.’

    3. Perhaps, for an economist’s perspective, see the history of slavery?

  2. The threat must be coming from some form of AI coordination?

    1. Whereas the point of this paper is that neither of those is likely to hold true!

    2. AI coordination could be helpful or harmful to humans, but the paper is imagining exactly a world in which the AIs aren’t doing this, beyond the level of coordination currently observed among humans.

    3. Indeed, the paper is saying it will become impossible for humans to coordinate and collude against the AIs, even without the AIs coordinating and colluding against the humans.

In some ways, this makes me feel better. I’ve been trying to make these arguments without success, and once again it seems like the arguments are not understood, and instead Tyler is responding to very different concerns and arguments, then wondering why the things the paper doesn’t assert or rely upon are not included in the paper.

But of course that is not actually good news. Communication failed once again.

Tyler Cowen: This is one of the smarter arguments I have seen, but I am very far from convinced.

When were humans ever in control to begin with? (Robin Hanson realized this a few years ago and is still worried about it, as I suppose he should be. There is not exactly a reliable competitive process for cultural evolution — boo hoo!)

Humans were, at least until recently, the most powerful optimizers on the planet. That doesn’t mean there was a single joint entity ‘in control’ but collectively our preferences and decisions, unequally weighted to be sure, have been the primary thing that has shaped outcomes.

Power has required the cooperation of humans; when systems and situations get too far away from human preferences, or at least when they sufficiently piss people off or deny them the resources required for survival, production and reproduction, things break down.

Our systems depend on the fact that when they fail sufficiently badly at meeting our needs, and they constantly fail to do this, we get to eventually say ‘whoops’ and change or replace them. What happens when that process stops caring about our needs at all?

I’ve failed many times to explain this. I don’t feel especially confident in my latest attempt above either. The paper does it better than at least my past attempts, but the whole point is that the forces guiding the invisible hand to the benefit of us all, in various senses, rely on the fact that the decisions are being made by humans, for the benefit of those individual humans (which includes their preference for the benefit of various collectives and others). The butcher, the baker and the candlestick maker each have economically (and militarily and politically) valuable contributions.

Not being in charge in this sense worked while the incentive gradients worked in our favor. Robin Hanson points out that current cultural incentive gradients are placing our civilization on an unsustainable path and we seem unable or unwilling to stop this, even if we ignore the role of AIs.

With AIs involved, if humans are not in charge, we rather obviously lose.

Note the argument here is not that a few rich people will own all the AI. Rather, humans seem to lose power altogether. But aren’t people cloning DeepSeek for ridiculously small sums of money? Why won’t our AI future be fairly decentralized, with lots of checks and balances, and plenty of human ownership to boot?

Yes, the default scenario being considered here – the one that I have been screaming for people to actually think through – is exactly this, the fully decentralized everyone-has-an-ASI-in-their-pocket scenario, with the ASI obeying only the user. And every corporation and government and so on obviously has them, as well, only more powerful.

So what happens? Every corporation, every person, every government, is forced to put the ASI in charge, and take the humans out of their loops. Or they lose to others willing to do so. The human is no longer making their own decisions. The corporation is no longer subject to humans that understand what is going on and can tell it what to do. And so on. While the humans are increasingly irrelevant for any form of production.

As basic economics says, if you want to accomplish goal [X], you give the ASI a preference for [X] and then set the ASI free to gather resources and pursue [X] on its own, free of your control. Or the person who did that for [Y] will ensure that we get [Y] and not [X].

Soon, the people aren’t making those decisions anymore. On any level.

Or, if one is feeling Tyler Durden: The AIs you own end up owning you.

Rather than focusing on “humans in general,” I say look at the marginal individual human being. That individual — forever as far as I can tell — has near-zero bargaining power against a coordinating, cartelized society aligned against him. With or without AI.

Yet that hardly ever happens, extreme criminals being one exception. There simply isn’t enough collusion to extract much from the (non-criminal) potentially vulnerable lone individuals.

This has nothing to do with the paper, as far as I can tell? No one is saying the AIs in this scenario are even colluding, let alone trying to do extraction or cartelization.

Not that we don’t have to worry about such risks, they could happen, but the entire point of the paper is that you don’t need these dynamics.

Once you recognize that the AIs will increasingly be on their own, autonomous economic agents not owned by any human, and that any given entity with any given goal can best achieve it by entrusting an AI with power to go accomplish that goal, the rest should be clear.

Alternatively:

  1. By Tyler’s own suggestion, ‘the humans’ were never in charge, instead the aggregation of the optimizing forces and productive entities steered events, and under previous physical and technological conditions and dynamics between those entities this resulted in beneficial outcomes, because there were incentives around the system to satisfy various human preferences.

  2. When you introduce these AIs into this mix, this incentive ‘gradually’ falls away, as everyone is incentivized to make marginal decisions that shift the incentives being satisfied to those of various AIs.

I do not in this paper see a real argument that a critical mass of the AIs are going to collude against humans. It seems already that “AIs in China” and “AIs in America” are unlikely to collude much with each other. Similarly, “the evil rich people” do not collude with each other all that much either, much less across borders.

Again, you don’t see this because it isn’t there, that’s not what the paper is saying. The whole point of the paper is that such ‘collusion’ is a failure mode that is not necessary for existentially bad outcomes to occur.

The paper isn’t accusing them of collusion except in the sense that people collude every day, which of course we do constantly, but there’s no need for some sort of systematic collusion here, let alone ‘across borders’ which I don’t think even get mentioned. As mento points out in the comments, even the word ‘collusion’ does not appear in the paper.

The baseline scenario does not involve collusion, or any coalition ‘against’ humans.

Indeed, the only way we have any influence over events, in the long run, is to effectively collude against AIs. Which seems very hard to do.

I feel if the paper made a serious attempt to model the likelihood of worldwide AI collusion, the results would come out in the opposite direction. So, to my eye, “checks and balances forever” is by far the more likely equilibrium.

AIs being in competition like this against each other makes it harder, rather than easier, for the humans to make it out of the scenario alive – because it means the AIs are (in the sense that Tyler questions if humans were ever in charge) not in charge either, so how do they protect against the directions the laws of physics point towards? Who or what will stop the ‘thermodynamic God’ from using our atoms, or those that would provide the inputs for us to survive, for something else?

One can think of it as: the AIs will be to us as we are to monkeys, or rats, or bacteria, except soon with no physical dependencies on the rest of the ecosystem. ‘Checks and balances forever’ between the humans does not keep monkeys alive, or give them the things they want. We keep them alive because that’s what many of us want to do, and we live sufficiently in what Robin Hanson calls the dreamtime to do it. Checks and balances among AIs won’t keep us alive for long, either, no matter how it goes, and most systems of ‘checks and balances’ break when placed under sufficient pressure or pushed sufficiently out of distribution, with short half-lives in this context.

Similarly, there are various proposals (not from Tyler!) for ‘succession,’ of passing control over to the AIs intentionally, either because people prefer it (as many do!) or because it is inevitable regardless so managing it would help it go better. I have yet to see such a proposal that has much chance of not bringing about human extinction, or that I expect to meaningfully preserve value in the universe. As I usually say, if this is your plan, Please Speak Directly Into the Microphone.

The first step is admitting you have a problem.

Step two remains ???????.

The obvious suggestion would be ‘until you figure all this out don’t build ASI’ but that does not seem to be on the table at this time. Or at least, we have to plan for it not being available.

The obvious next suggestion would be ‘build ASI in a controlled way that lets you use the ASI to figure out and implement the answer to that question.’

This is less suicidal a plan than some of our other current default plans.

As in: It is highly unwise to ‘get the AI to do your alignment homework’ because to do that you have to start with a sufficiently both capable and well-aligned AI, and you’re sending it in to one of the trickiest problems to get right while alignment is shaky. And it looks like the major labs are going to do exactly this, because they will be in a race with no time to take any other approach.

Compared to that, ‘have the AI do your gradual disempowerment prevention homework’ is a great plan and I’m excited to be a part of it, because the actual failure comes after you solve alignment. So first you solve alignment, then you ask the aligned AI that is smarter than you how to solve gradual disempowerment. Could work. You don’t want this to be your A-plan, but if all else fails it could work.

A key problem with this plan is if there are irreversible steps taken first. Many potential developments, once done, cannot be undone, or are things that require lead time. If (for example) we make AGIs or ASIs generally available, this could already dramatically reduce our freedom of action and set of options. There are also other ways we can outright lose along the way, before reaching this problem. Thus, we need to worry about and think about these problems now, not kick the can down the road.

It’s also important not to use this as a reason to assume we solve our other problems.

This is very difficult. People have a strong tendency to demand that you present them with only one argument, or one scenario, or one potential failure.

So I want to leave you with this as emphasis: We face many different ways to die. The good scenario is we get to face gradual disempowerment. That we survive, in a good state, long enough for this to potentially do us in.

We very well might not.


The Risk of Gradual Disempowerment from AI


Europe has the worst imaginable idea to counter SpaceX’s launch dominance

It is not difficult to understand the unease on the European continent about the rise of SpaceX and its controversial founder, Elon Musk.

SpaceX has surpassed the European Space Agency and its institutional partners in almost every way when it comes to accessing space and providing secure communications. Last year, for example, SpaceX launched 134 orbital missions. Combined, Europe had three. SpaceX operates a massive constellation of more than 7,000 satellites, delivering broadband Internet around the world. Europe hopes to have a much more modest capability online by 2030, serving the continent at a cost of $11 billion.

And Europe has good reasons for being wary about working directly with SpaceX. First, Europe wants to maintain sovereign access to space, as well as a space-based communication network. Second, buying services from SpaceX undermines European space businesses. Finally, and perhaps most importantly, Musk has recently begun attacking governments in European capitals such as Berlin and London, taking up the “Make Europe Great Again” slogan. This seems to entail throwing out the moderate coalitions governing European nations and replacing them with authoritarian, hard-right leaders.

All of that to say, it is understandable that Europe would like to provide a reasonable answer to the dominance of SpaceX.

Bring on the bankers

However, the approach being pursued by Airbus—a European aerospace corporation that is, on a basic level, akin to Boeing—seems like the dumbest idea imaginable. According to Bloomberg, “Airbus has hired Goldman Sachs Group Inc. for advice on an effort to forge a new European space and satellite company that can better compete with Elon Musk’s dominant SpaceX.”

The publication reports that talks are preliminary and include France-based Thales and Italy’s Leonardo S.p.A. to create a portfolio of space services. Leonardo has hired Bank of America Inc. for the plan, which has been dubbed Project Bromo. (According to Merriam-Webster, “bromo” is a form of bromide, which originates from the Greek word brōmos, meaning bad smell.)



Irony alert: Anthropic says applicants shouldn’t use LLMs

Please do not use our magic writing button when applying for a job with our company. Thanks!

Credit: Getty Images


“Traditional hiring practices face a credibility crisis,” Anthropic writes with no small amount of irony when discussing Skillfully. “In today’s digital age, candidates can automatically generate and submit hundreds of perfectly tailored applications with the click of a button, making it hard for employers to identify genuine talent beneath punched up paper credentials.”

“Employers are frustrated by resume-driven hiring because applicants can use AI to rewrite their resumes en masse,” Skillfully CEO Brett Waikart says in Anthropic’s laudatory write-up.

Wow, that does sound really frustrating! I wonder what kinds of companies are pushing the technology that enables those kinds of “punched up paper credentials” to flourish. It sure would be a shame if Anthropic’s own hiring process was impacted by that technology.

Trust me, I’m a human

The real problem for Anthropic and other job recruiters, as Skillfully’s story highlights, is that it’s almost impossible to detect which applications are augmented using AI tools and which are the product of direct human thought. Anthropic likes to play up this fact in other contexts, noting Claude’s “warm, human-like tone” in an announcement or calling out the LLM’s “more nuanced, richer traits” in a blog post, for instance.

A company that fully understands the inevitability (and undetectability) of AI-assisted job applications might also understand that a written “Why I want to work here?” statement is no longer a useful way to effectively differentiate job applicants from one another. Such a company might resort to more personal or focused methods for gauging whether an applicant would be a good fit for a role, whether or not that employee has access to AI tools.

Anthropic, on the other hand, has decided to simply resort to politely asking potential employees to please not use its premier product (or any competitor’s) when applying, if they’d be so kind.

There’s something about the way this applicant writes that I can’t put my finger on…

Credit: Aurich Lawson | Getty Images


Anthropic says it engenders “an unusually high trust environment” among its workers, where they “assume good faith, disagree kindly, and prioritize honesty. We expect emotional maturity and intellectual openness.” We suppose this means they trust their applicants not to use undetectable AI tools that Anthropic itself would be quick to admit can help people who struggle with their writing (Anthropic has not responded to a request for comment from Ars Technica).

Still, we’d hope a company that wants to “prioritize honesty” and “intellectual openness” would be honest and open about how its own products are affecting the role and value of all sorts of written communication—including job applications. We’re already living in the heavily AI-mediated world that companies like Anthropic have created, and it would be nice if companies like Anthropic started to act like it.
