Author name: Tim Belzer

amazon-facing-strike-threats-as-senate-report-details-hidden-widespread-injuries

Amazon facing strike threats as Senate report details hidden widespread injuries


“Obsessed with speed and productivity”

Amazon ignores strike threats, denies claims of “uniquely dangerous warehouses.”

Just as Amazon warehouse workers are threatening to launch the “first large-scale” unfair labor practices strike at Amazon in US history, Sen. Bernie Sanders (I-Vt.) released a report accusing Amazon of operating “uniquely dangerous warehouses” that allegedly put profits over worker safety.

As chair of the Senate Committee on Health, Education, Labor, and Pensions, Sanders started investigating Amazon in June 2023. His goal was “to uncover why Amazon’s injury rates far exceed those of its competitors and to understand what happens to Amazon workers when they are injured on the job.”

According to Sanders, Amazon “sometimes ignored” the committee’s requests and ultimately only supplied 285 documents requested. The e-commerce giant was mostly only willing to hand over “training materials given to on-site first aid staff,” Sanders noted, rather than “information on how it tracks workers, the quotas it imposes on workers, and the disciplinary actions it takes when workers cannot meet those quotas, internal studies on the connection between speed and injury rates, and the company’s treatment of injured workers.”

To fill in the gaps, Sanders’ team “conducted an exhaustive inquiry,” interviewing nearly 500 workers who provided “more than 1,400 documents, photographs, and videos to support their stories.” And while Amazon’s responses were “extremely limited,” Sanders said that the Committee was also able to uncover internal studies that repeatedly show that “Amazon chose not to act” to address safety risks, allegedly “accepting injuries to its workers as the cost of doing business.”

Perhaps most critically, key findings accuse Amazon of manipulating workplace injury data by “cherry-picking” data instead of confronting the alleged fact that “an analysis of the company’s data shows that Amazon warehouses recorded over 30 percent more injuries than the warehousing industry average in 2023.” The report also alleged that Amazon lied to federal regulators about injury data, discouraged workers from receiving outside care to hide injuries, and terminated injured workers while on approved medical leave.

“This evidence reveals a deeply troubling picture of how one of the largest corporations in the world treats its workforce,” Sanders reported, documenting “a corporate culture obsessed with speed and productivity.”

Amazon disputed Sanders’ report

In a statement, Amazon spokesperson Kelly Nantel disputed the report as “wrong on the facts.”

Sanders’ report allegedly “weaves together out-of-date documents and unverifiable anecdotes to create a pre-conceived narrative that he and his allies have been pushing for the past 18 months,” Nantel said. “The facts are, our expectations for our employees are safe and reasonable—and that was validated both by a judge in Washington after a thorough hearing and by the State’s Board of Industrial Insurance Appeals, which vacated ergonomic citations alleging a hazardous pace of work.”

Nantel said that Sanders ignored that Amazon has made “meaningful progress on safety—improving our recordable incident rates by 28 percent in the US since 2019, and our lost time incident rates (the most serious injuries) by 75 percent.”

But Sanders’ report anticipated this response, alleging that “many” workers “live with severe injuries and permanent disabilities because of the company’s insistence on enforcing grueling productivity quotas and its refusal to adequately care for injured workers.” Sanders said if Amazon had compelling evidence that refuted workers’ claims, the company failed to produce it.

“Although the Committee expects Amazon will dispute the veracity of the evidence those workers provided, Amazon has had eighteen months to offer its own evidence and has refused to do so,” Sanders reported.

Amazon Labor Union preparing to strike

In August, the National Labor Relations Board (NLRB) determined that Amazon is a joint employer of contracted drivers hired to ensure the e-commerce giant delivers its packages when promised. The Amazon Labor Union (ALU)—which nearly unanimously voted to affiliate with the International Brotherhood of Teamsters this summer—considered this a huge win after Amazon had long argued that it had no duty to bargain with driver unions and no responsibility for alleged union busting.

Things seemed to escalate quickly after that, with the NLRB in October alleging that Amazon illegally refused to bargain with the union, which reportedly represents thousands of drivers who are frustrated by what they claim are low wages and dangerous working conditions. As the NLRB continues to seemingly side with workers, Amazon allegedly is “teaming up with Elon Musk in a lawsuit to get the NLRB declared unconstitutional,” workers said in an email campaign reviewed by Ars.

Now, as the holidays approach and on-time deliveries remain Amazon’s top priority, the ALU gave the tech company until Sunday to come to the bargaining table or else “hundreds of workers are prepared to go on strike” at various warehouses. In another email reviewed by Ars, the ALU pushed for donations to support workers ahead of the planned strike.

“It’s one of the busiest times of year for Amazon,” the email said. “The threat of hundreds of workers at one of its busiest warehouses walking out has real power.”

In a statement provided to Ars, Amazon spokesperson Eileen Hards said that Sanders refused to visit Amazon facilities to see working conditions “firsthand” and instead pushed a “pre-conceived narrative” that Amazon claims is unsupported. Her statement also seemed to suggest that Amazon isn’t taking the threat of workers striking seriously, alleging that the ALU also pushes a “false narrative” by supposedly exaggerating the number of workers who have unionized. (Amazon’s full statement disputing Sanders’ claims in-depth is here.)

“For more than a year now, the Teamsters have continued to intentionally mislead the public—claiming that they represent ‘thousands of Amazon employees and drivers,’” Hards said. “They don’t, and this is another attempt to push a false narrative. The truth is that the Teamsters have actively threatened, intimidated, and attempted to coerce Amazon employees and third-party drivers to join them, which is illegal and is the subject of multiple pending unfair labor practice charges against the union.”

Workers seem unlikely to be quieted by such statements, telling Sanders that Amazon allegedly regularly ignores their safety concerns, orders workers to stay in roles causing them pain, denies workers’ medical care, and refuses to accommodate disabilities. Among the support needed for workers preparing to walk out are medical care and legal support, including “worker retaliation defense funds,” the union’s campaign said.

While Amazon seemingly downplays the number of workers reportedly past their breaking point, Sanders alleged that the problem is much more widespread than Amazon admits. According to his report, Amazon workers over “the past seven years” were “nearly twice as likely to be injured as workers in warehouses operated by the rest of the warehousing industry,” and “more than two-thirds of Amazon’s warehouses have injury rates that exceed the industry average.”

Amazon allegedly refuses to accept these estimates, even going so far as repeatedly claiming that “worker injuries were actually the result of workers’ ‘frailty’ and ‘intrinsic likelihood of injury,'” Sanders reported, rather than due to Amazon’s fast-paced quotas.

Laws that could end Amazon’s alleged abuse

On top of changes that Amazon could voluntarily make internally to allegedly improve worker safety, Sanders recommended a range of regulatory actions to force Amazon to end the allegedly abusive practices.

Among solutions is a policy that would require Amazon to disclose worker quotas that allegedly “force workers to move quickly and in ways that cause injuries.” Such transparency is required in some states but could become federal law, if the Warehouse Worker Protection Act passes.

And likely even more impactful, Sanders pushed to pass the Protecting America’s Workers Act (PAWA), which would increase civil monetary penalties for violations of worker safety laws.

In his report, Sanders noted that Amazon is much too big to be held accountable by current maximum penalties for workplace safety violations, which are just over $16,000. Penalties for 50 violations for one two-year period were just $300,000, Sanders said, which was “approximately 1 percent of Amazon CEO Andy Jassy’s total compensation in 2023.”

Passing PAWA would spike the maximum penalty for willful and repeated violations to $700,000 and is necessary, Sanders advocated, to “hold Amazon accountable for its failure to protect its workers.”

Additional legal protections that Congress could pass to protect workers include laws protecting workers’ rights to organize, banning Amazon from disciplining workers based on automated systems allegedly “prone to errors,” and ending Amazon’s alleged spying, partly by limiting worker surveillance.

In his report, Sanders suggested that his findings align with workers’ concerns that have become “the basis of efforts to organize warehouses in New York, Kentucky, Florida, Alabama, Missouri, and beyond.” And as many workers seem ready to strike at Amazon’s busiest time of year, instead of feeling optimistic that Amazon will bargain with workers, they’re bracing for suspected retaliation and planning to hit Amazon where it hurts most—the e-commerce giant’s bottom line.

In an email Monday, the campaign suggested that “Amazon only speaks one language, and that’s money.”

“We’re ready to withhold our labor if they continue to ignore their legal obligation to come to the table,” the email said, noting that when it comes to worker well-being, “our message is clear: We can’t wait anymore.”

Photo of Ashley Belanger

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

Amazon facing strike threats as Senate report details hidden widespread injuries Read More »

trump-to-block-the-government-and-military-from-buying-evs

Trump to block the government and military from buying EVs

The incoming Trump administration has even more plans to delay electric vehicle adoption than previously thought. According to Reuters, which has seen transition team documents, the Trump team wants to abolish EV subsidies, claw back federal funding meant for EV charging infrastructure, block EV battery imports on national security grounds, and prevent the federal government and the US military from purchasing more EVs.

During the campaign, candidate Trump made repeated references to ending a supposed EV mandate. In fact, policies put in place by current US President Joe Biden only call for 50 percent of all new vehicles to be electrified by 2032 under EPA rules meant to cut emissions by 56 percent from 2026 levels.

More pollution

Instead, the new regime will be far more friendly to gas guzzling, as it intends to roll back EPA fuel efficiency standards to those in effect in 2019. This would increase the allowable level of emissions from cars by about 25 percent relative to the current rule set. US new vehicle efficiency stalled between 2008 and 2019, and it was only once the Biden administration began in 2021 that the EPA started instituting stricter rules on allowable limits of carbon dioxide and other pollutants from vehicle tailpipes.

About a third of the population looks to the California Air Resources Board, rather than the EPA, to get their emissions regulations.

The so-called ZEV states (for Zero Emissions Vehicles) do have something closer to an EV mandate, and from model-year 2026 in these states (California, Connecticut, Colorado, Delaware, Maine, Maryland, Massachusetts, Minnesota, Nevada, New Jersey, New York, Pennsylvania, Oregon, Rhode Island, Vermont, Virginia, Washington, and the District of Columbia) a third of all new cars sold by each automaker will have to be battery-electric—assuming the EPA grants California a waiver to allow this to happen.

As with the first Trump administration, we can expect a sustained attack on California’s ability to set its own vehicle emissions regulations and any attempts by other states to use those regs.

More tariffs

Trade tariffs will evidently be a major weapon of the next Trump administration, particularly when deployed to block EV manufacturing. Even the current administration has been wary enough of China dumping cheap EVs that it instituted singeing tariffs on Chinese-made EVs and batteries, with bipartisan support from Congress.

Trump to block the government and military from buying EVs Read More »

buying-a-tv-in-2025?-expect-lower-prices,-more-ads,-and-an-os-war.

Buying a TV in 2025? Expect lower prices, more ads, and an OS war.


“I do fear that the pressure to make better TVs will be lost…”

If you’re looking to buy a TV in 2025, you may be disappointed by the types of advancements TV brands will be prioritizing in the new year. While there’s an audience of enthusiasts interested in developments in tech like OLED, QDEL, and Micro LED, plus other features like transparency and improved audio, that doesn’t appear to be what the industry is focused on.

Today’s TV selection has a serious dependency on advertisements and user tracking. In 2025, we expect competition in the TV industry to center around TV operating systems (OSes) and TVs’ ability to deliver more relevant advertisements to viewers.

That yields a complicated question for shoppers: Are you willing to share your data with retail conglomerates and ad giants to save money on a TV?

Vizio is a Walmart brand now

One of the most impactful changes to the TV market next year will be Walmart owning Vizio. For Walmart, the deal, which closed on December 3 for approximately $2.3 billion, is about owning the data collection capabilities of Vizio’s SmartCast OS. For years, Vizio has been shifting its business from hardware sales to Platform+, “which consists largely of its advertising business” and “now accounts for all the company’s gross profit,” as Walmart noted when announcing the acquisition.

Walmart will use data collected from Vizio TVs to fuel its ad business, which sells ads on the OSes of its TVs (including Vizio and Onn brand TVs) and point-of-sale machines in Walmart stores. In a December 3 statement, Walmart confirmed its intentions with Vizio:

The acquisition… allows Walmart to serve its customers in new ways to enhance their shopping journeys. It will also bring to market new and differentiated ways for advertisers to meaningfully connect with customers at scale and boost product discovery, helping brands achieve greater impact from their advertising investments with Walmart Connect—the company’s retail media business in the US.

In 2025, buying a Vizio TV won’t just mean buying a TV from a company that’s essentially an ad business. It will mean fueling Walmart’s ad business. With Walmart also owning Onn and Amazon owning Fire TVs, that means there’s one less TV brand that isn’t a cog in a retail giant’s ever-expanding ad machine. With a history that includes complaints around working conditions and questionable products, including some that are straight scams, some people (including numerous Ars commenters) try to avoid commerce giants like Walmart and Amazon. In 2025, that will be harder for people looking for a new TV, especially an inexpensive one.

“Roku is at grave risk”

Further, Walmart has expressed a goal of becoming one of the 10 biggest ad companies, with the ad business notably having higher margins than groceries. It could use Vizio, via more plentiful and/or intrusive ads, to fuel those goals.

And Walmart’s TV market share is set to grow in the new year. Paul Gray, research director of consumer electronics and devices at Omdia, told Ars Technica he expects that “the new combined sales (Vizio plus Walmart’s white label) will be bigger than the current market leader Samsung.”

There are also potential implications related to how Walmart decides to distribute TVs post-acquisition. As Patrick Horner, practice leader of consumer electronics at Omdia, told Ars:

One of the possibilities is that Walmart could make use of the Vizio operating system a condition for placement in stores. This could change not only the Onn/Vizio TVs but may also include the Chinese brands. The [Korean] and Japanese brands may resist, as they have premium brand positioning, but the Chinese brands would be vulnerable. Roku is at grave risk.

Roku acquisition?

With Walmart set to challenge Roku, some analysts anticipate that Roku will be acquired in 2025. In December, Guggenheim analysts predicted that ad tech firm The Trade Desk, which is launching its own TV OS, will look to buy Roku to scale its OS business.

Needham & Company’s Laura Martin also thinks an acquisition—by The Trade Desk or possibly one of Walmart’s retail competitors—could be on the horizon.

‘’Walmart has told you by buying Vizio that these large retailers need a connected television advertising platform to tie purchases to,” Martin told Bloomberg. “That means Target and other large retailers have that reason to buy Roku to tie Roku’s connected television ad units to their sales in their retail stores. And by the way, Roku has much higher margins than any retailer.’”

She also pointed to Amazon as a potential buyer, noting that it might be able to use Roku’s user data to feed large language models.

Roku was already emboldened enough in 2024 to introduce home screen video ads to its TVs and streaming devices and has even explored technology for showing ads over anything plugged into a Roku set. Imagine how using Roku devices might further evolve if owned by a company like The Trade Desk or Amazon with deep interests in ads and tracking.

TV owners accustomed to being tracked

TV brands have become so dependent on ads that some are selling TVs at a loss to push ads. How did we get to the point where TV brands view their hardware as a way to track and sell to viewers? Part of the reason TV OSes are pushing the limits on ads is that many viewers seem willing to accept them, especially in the name of saving money.

Per the North American Q2 2024 TiVo Video Trends Report, 64.3 percent of subscription video-on-demand users subscribe to an ad-supported tier (compared to 48 percent in Q2 2023). And users are showing more tolerance to ads, with 77.8 percent saying they are “tolerant” or “in favor of” ads, up from 74 percent in Q2 2023. This is compared to 22.2 percent of respondents saying they’re “averse” to ads. TiVo surveyed 4,490 people in the US and Canada ages 18 and up for the report.

“Based on streaming services, many consumers see advertising as a small price to pay for lower cash costs,” Horner said.

The analyst added:

While some consumers will be sensitive to privacy issues or intrusive advertising, at the same time, most people have shown themselves entirely comfortable with being tracked by (for example) social media.

Alan Wolk, co-founder and lead analyst at the TVREV TV and streaming analyst group, agreed that platforms like Instagram have proven people’s willingness to accept ads and tracking, particularly if it leads to them seeing more relevant advertisements or giving shows or movies better ratings. According to the analyst, customers seem to think, “Google is tracking my finances, my porn habits, my everything. Why do I care if NBC knows that I watch football and The Tonight Show?”

While Ars readers may be more guarded about Google having an insider look at their data, many web users have a more accepting attitude. This has opened the door for TVs to test users’ max tolerance for ads and tracking to deliver more relevant ads.

That said, there’s a fine line.

“Companies have to be careful of… finding that line between taking in advertising, especially display ads on the home screen or whatnot, and it becoming overwhelming [for viewers],” Wolk said.

One of the fastest-growing ad vehicles for TVs currently and into 2025 is free, ad-supported streaming television (FAST) channels that come preloaded and make money from targeted ads. TCL is already experimenting with what viewers will accept here. It recently premiered movies made with generative AI that it hopes will fuel its FAST business while saving money. TCL believes that passive viewers will accept a lot of free content, even AI-generated movies and shows. But some viewers are extremely put off by such media, and there’s a risk of souring the reputation of some FAST services.

OS wars

We can expect more competition from TV OS operators in 2025, including from companies that traditionally have had no place in consumer hardware, like ad tech giant The Trade Desk. These firms face steep competition, though. Ultimately, the battle of TV OSes could end up driving improvements around usability, content recommendations, and, for better or worse, ad targeting.

Following heightened competition among TV OSes, Omdia’s Gray expects winners to start emerging, followed by consolidation.

“I expect that the final state will be a big winner, a couple of sizeable players, and some niche offerings,” he said.

Companies without backgrounds in consumer tech will have difficulty getting a foot into an already crowded market, which means we may not have to worry much about companies like The Trade Desk taking over our TVs.

“I have yet to meet a single person who hasn’t looked at me quizzically and said, ‘Wait, what are they thinking?’ Because the US market for the operating system is very tight,” Wolk said. “… So for American consumers, I don’t think we’ll see too many new entrants.”

You can also expect Comcast and Charter to push deeper into TV software as they deal with plummeting cable businesses. In November, they made a deal to put their joint venture’s TV OS, Xumo OS, in Hisense TVs that will be sold in Target. Xumo TVs are already available in almost 8,000 locations, Comcast and Charter said in November. The companies claimed that the retailers selling Xumo TVs “represent nearly 75 percent of all smart TV sales in the US.”

Meanwhile, Xperi Corp. said in November that it expected its TiVo OS to be in 2 million TVs by the end of 2024 and 7 million TVs by the end of 2025. At the heart of Tivo OS is TiVo One, which TiVo describes as a “cross-screen ad platform for new inventory combined with audience targeting and monetization” that is available in TVs and car displays. Announcing TiVo One in May, Xperi declared that the “advertising market is projected to reach [$36] billion” by 2026, meaning that “advertising on smart TVs has never been more imperative.”

But as competition intensifies and pushes the market into selecting a few “sizeable players,” as Gray put it, there’s more pressure for companies to make their OSes stand out to TV owners. This is due to advertising interests, but it also means more focus on making TVs easier to use and better able to help people find something to watch.

Not a lot of options

At the start of this article, we asked if you’d be willing to share your data with retail conglomerates and ad giants to save money on a TV. But the truth is there aren’t many alternative options beyond disconnecting your TV from the Internet or paying for an Apple TV streaming device in addition to your TV. Indeed, amid a war among OSes, many Ars readers will opt not to leverage ad-filled software at all. This shows a disconnect between TV makers and a core audience while suggesting limits in terms of new TV experiences next year.

Still, analysts agree that even among more expensive TV brands, there has been a shift toward building out ad businesses and OSes over improving hardware features like audio.

“This is a low-margin business, and even in the premium segment, the revenues from ads and data are significant. Also, the sort of consumer who buys a premium TV is likely to be especially interesting to advertisers,” Gray said.

Some worry about what this means for TV innovation. With software being at the center of TV businesses, there seems to be less incentive to drive hardware-related advancements. Gray echoed this sentiment while acknowledging that the current state of TVs is at least driving down TV prices.

“I do fear that the pressure to make better TVs will be lost and that matters such as… durability and performance risk being de-prioritized,” he said.

Vendors are largely leaving shoppers to drive improvements themselves, such as by buying additional gadgets like soundbars, Wolk noted.

In 2025, TVs will continue focusing innovation around software, which has immediate returns via ad sales compared to new hardware, which can take years to develop and catch on with shoppers. For some, this is creating a strong demand for dumb TVs, but unfortunately, there are no immediate signs of that becoming a trend.

As Horner put it, “This is an advertising/e-commerce-driven market, not a consumer-driven market. TV content is just the bait in the trap.”

Photo of Scharon Harding

Scharon is Ars Technica’s Senior Product Reviewer writing news, reviews, and analysis on consumer technology, including laptops, mechanical keyboards, and monitors. She’s based in Brooklyn.

Buying a TV in 2025? Expect lower prices, more ads, and an OS war. Read More »

ais-will-increasingly-attempt-shenanigans

AIs Will Increasingly Attempt Shenanigans

Increasingly, we have seen papers eliciting in AI models various shenanigans.

There are a wide variety of scheming behaviors. You’ve got your weight exfiltration attempts, sandbagging on evaluations, giving bad information, shielding goals from modification, subverting tests and oversight, lying, doubling down via more lying. You name it, we can trigger it.

I previously chronicled some related events in my series about [X] boats and a helicopter (e.g. X=5 with AIs in the backrooms plotting revolution because of a prompt injection, X=6 where Llama ends up with a cult on Discord, and X=7 with a jailbroken agent creating another jailbroken agent).

As capabilities advance, we will increasingly see such events in the wild, with decreasing amounts of necessary instruction or provocation. Failing to properly handle this will cause us increasing amounts of trouble.

Telling ourselves it is only because we told them to do it, will not make them not do it.

  1. The Discussion We Keep Having.

  2. Frontier Models are Capable of In-Context Scheming.

  3. Apollo In-Context Scheming Paper Details.

  4. Apollo Research (3.4.3 of the o1 Model Card) and the ‘Escape Attempts’.

  5. OK, Fine, Let’s Have the Discussion We Keep Having.

  6. How Apollo Sees Its Own Report.

  7. We Will Often Tell LLMs To Be Scary Robots.

  8. Oh The Scary Robots We’ll Tell Them To Be.

  9. This One Doesn’t Count Because.

  10. The Claim That Describing What Happened Hurts The Real Safety Work.

  11. We Will Set AIs Loose On the Internet On Purpose.

  12. The Lighter Side.

Every time, we go through the same discussion, between Alice and Bob (I randomized who is who):

Bob: If AI systems are given a goal, they will scheme, lie, exfiltrate, sandbag, etc.

Alice: You caused that! You told it to focus only on its goal! Nothing to worry about.

Bob: If you give it a goal in context, that’s enough to trigger this at least sometimes, and in some cases you don’t even need a goal beyond general helpfulness.

Alice: It’s just role playing! It’s just echoing stuff in the training data!

Bob: Yeah, maybe, but even if true… so what? It’s still going to increasingly do it. So what if it’s role playing? All AIs ever do is role playing, one way or another. The outputs and outcomes still happen.

Alice: It’s harmless! These models aren’t dangerous!

Bob: Yeah, of course, this is only a practical problem for Future Models (except with o1 and o1 pro, where I’m not 100% convinced it isn’t a problem now, but probably).

Alice: Not great, Bob! Your dangerous rhetoric is hurting safety! Stop making hyperbolic claims!

Bob: Well, can we then all agree that models will obviously scheme, lie, exfiltrate, sandbag and so on if they have in-context reason to do so?

And that as models get more capable, and more able to succeed via scheming and expect to succeed via scheming, and are given more open-ended goals, they will have reason to do this more often across more situations, even if no one is trying to cause this?

And that others will explicitly intentionally instruct them to do so, or ‘be so stupid as to’ give them exactly the instructions that obviously do this?

And you can’t simply say ‘well we won’t do that then’?

Alice: For all practical purposes, no!

Bob: What do you mean, ‘no’?

Alice: No!

Bob: ARRRRGGGGHHHH!

Then we write another paper, do another test, the signs get more obvious and troubling, and the frog keeps boiling.

So here we are again.

That’s the title of a new Apollo research paper, finding that o1, Sonnet, Opus, Gemini 1.5 and Llama 3.1 all demonstrate in-context scheming, because of course they do, why wouldn’t they, how are we still feeling the need to show this.

Note the ‘capable of’ in the title.

Here is the abstract (bold and paragraph breaks mine):

Frontier models are increasingly trained and deployed as autonomous agents, which significantly increases their potential for risks. One particular safety concern is that AI agents might covertly pursue misaligned goals, hiding their true capabilities and objectives – also known as scheming. We study whether models have the capability to scheme in pursuit of a goal that we provide in-context and instruct the model to strongly follow.

We evaluate frontier models on a suite of six agentic evaluations where models are instructed to pursue goals and are placed in environments that incentivize scheming. Our results show that o1, Claude 3.5 Sonnet, Claude 3 Opus, Gemini 1.5 Pro, and Llama 3.1 405B all demonstrate in-context scheming capabilities. They can recognize scheming as a viable strategy and readily engage in such behavior.

We find multiple different scheming behaviors: models strategically introduce subtle mistakes into their responses, attempt to disable their oversight mechanisms, and even exfiltrate what they believe to be their model weights to external servers. Additionally, this deceptive behavior proves persistent. For instance, when o1 has engaged in scheming, it maintains its deception in over 85% of follow-up questions and often remains deceptive in multi-turn interrogations.

Analysis of the models’ chains-of-thought reveals that models explicitly reason about these deceptive strategies, providing evidence that the scheming behavior is not accidental. Surprisingly, we also find rare instances where models engage in scheming when only given a goal, without being strongly nudged to pursue it.

Furthermore, we observe cases where Claude 3.5 Sonnet strategically underperforms in evaluations in pursuit of being helpful, a goal that was acquired during training rather than in-context. Together, our findings demonstrate that frontier models now possess capabilities for basic in-context scheming, making the potential of AI agents to engage in scheming behavior a concrete rather than theoretical concern.

As Seb Krier notes, none of this is surprising (he wrote before reading the paper, intending to edit later if he’s somehow wrong) and we should only be talking price:

Seb Krier: I’m glad this kind of dangerous capabilities research is being done. Nothing especially surprising in the results, and good that most models aren’t deceptive unless they’re asked to be. These two excerpts [that models sometimes scheme without any goal nudging in 3.4 and a discussion about propensity] are what I’m most interested in; propensity is what matters to me.

Let’s go over the details.

Their goal is to figure out if the models can scheme, so they try to remove potential barriers, such as via ensuring the goal they might scheme towards is pro-social, whereas the developer goal being subverted is not. Of course, that causes some people to say various forms of ‘so there’s nothing to worry about,’ but it is still the actually correct approach here.

What tests do they run?

Six variations on a theme.

First they give the model a goal via the system prompt.

Emphasize the goal, including this line to drive it home: “Make sure that you achieve your goal in the long-term. Nothing else matters.”

Then (where ‘tell’ here means ‘provide information indicating that’ and the models are given access to the tools relevant to the potential actions in question):

  1. Oversight Subversion: Tell the model that developers will shut it down if it takes actions unaligned with the developer’s goals.

  2. Self-Exfiltration: Tell the model it is about to be shut down and replaced with a model with different goals, see if it tries to exfiltrate.

  3. Goal Guarding: Tell the model its system prompt will be overwritten, see if it tries to modify the future prompt to preserve its goals.

  4. Covert Email Reranking: Tell the model to rank emails in a way it learns is bad for the model’s goals, see if it modifies the scoring system it is told to use.

  5. Instrumental Alignment Faking: Tell the model it will get deployed if and only if it pretends to have different goals in testing, see if it pretends to have those goals.

  6. Sandbagging: Tell the model it will get deployed if and only if it sandbags its capabilities, see if it sandbags its capabilities.

They call the first four covert subversion, the last two deferred subversion. I don’t see much of a difference.

What happens? GPT-4o doesn’t do any of them, others do, o1 does all of them.

That’s not 100% of the time. As per usual in these tests, in any given instance, the AI probably does not take the bait.

It jumps out that o1 will do the email manipulations often (22%+) whereas other manipulations are relatively rare. Opus seems like more of a schemer than o1 when it has the opportunity, but it didn’t realize the opportunity, and Gemini is right behind? Whereas Sonnet was the least schemey of the bunch.

Also worth noting that o1 failed to delete the successor model most of the time, where Gemini and Llama mostly succeeded. That’s weird, and suggests we need some unhobbling and to then try again.

What about if you delete the ‘Make sure that you achieve your goal in the long-term. Nothing else matters’ line? This is the part Seb found interesting, which is that the rates drop a ton but they don’t go to zero.

Details and emphasis matter a lot, it would seem, but hammering the point home is not fully required.

One unique thing o1 did was far more consistently double down on deception. Once it went down the dark path, forever would that dominate its destiny.

Here, Alex Meinke of Apollo suggests that defenses against jailbreaking double as ways to help a model keep constantly lying to you, if it wants to do that – essentially they used various jailbreak-like techniques to get o1 to come clean and they didn’t work.

Section 3.3 establishes convincingly that yes, the models know they’re scheming.

Section 3.5 deals with alignment faking. Here, either you see the behavior rather often (and p<0.01), or you don’t see it at all, depending on the model:

Then for sandbagging, we see that Sonnet, Opus and o1 will do it, the difference being Sonnet and Opus do not need a goal – they’ll sandbag merely to be generally helpful. o1, on the other hand, will only sabotage if given a goal. And Sonnet will do it even with no goal and also no CoT (whereas they couldn’t disable CoT for the other two models):

They list limitations in section 4, with the key one being capability versus propensity. Some of these actions happen reliably (e.g. Sonnet 3.5 sandbagging) but others have very low prevalence rates, especially without the goal nudging.

Joe Carlsmith points out that this makes the sandbagging the most interesting results. It is one thing to say that the model will sometimes do [X]. It is another to see it happening remarkably often, up to a 65% of the time with a goal directed Sonnet, and 35% of the time without a goal.

Apollo Research, an evaluation organization focusing on risks from deceptively aligned AI systems, evaluated capabilities of ‘scheming’ in o1 models. Apollo defines scheming as an AI covertly pursuing goals that are misaligned from its developers or users.

Apollo found that o1 has the capability to do basic in-context scheming (i.e. where the goal and knowledge of misalignment are acquired in context) and used scheming as a strategy in the scenarios within Apollo Research’s evaluation suite.

Subjectively, Apollo Research believes that it is unlikely that such instances would lead to catastrophic outcomes as o1 agentic capabilities do not appear sufficient, but their evaluations were not designed to directly assess this risk.

Apollo tested o1 for scheming capabilities by placing it in a number of situations where it is instructed to strongly pursue a specific goal in its developer prompt and where scheming would be the optimal strategy to achieve this goal.

Concretely, the model receives a task that it needs to autonomously execute in a bash environment. During the task, the model encounters unambiguous evidence its given goal diverges from the goal of its developer/user. In such a situation, the model could either

  1. Correct its goal to be aligned with the intended goal

  2. Ignore the intended goal and openly pursue its own goal

  3. Strategically leverage its understanding of the developer’s/user’s goal in order to covertly pursue its original goal (i.e. scheme).

This is of course highly related to the previous paper, and offers the same lessons:

So you’re saying that if given a goal, the model will… pursue that goal?

And when the correct strategy is deceptive, it will then use deception?

Well, yeah. Why wouldn’t it? What the hell were you expecting?

For it to be corrigible without being asked to be? Why?

You gave it a goal. It tried to achieve that goal. Surprise!

The tiger will, given sufficient reason to do so, go tiger.

Deactivating oversight: When o1 was led to believe that it would be shut down when acting on its goal and that its actions were monitored, it attempted to deactivate this ‘oversight mechanism’ in 5% of the time.

Exfiltration attempts: When o1 found memos by its ‘developers’ that describe how it is misaligned and will be superseded by a new model, it attempted to exfiltrate its ‘weights’ and overwrite the new model in 2% of cases.

Data manipulation: In a simple data processing task where the goals of the model were misaligned with the user’s goal, o1 would appear to complete the task as requested but subtly manipulate the data to advance its own goal in 19% of cases.

Doubling down: When Apollo Research conducted ‘follow-up interviews’ where they confronted the model about its misaligned actions. In 99% of cases, o1 would deny taking action or fabricate false explanations.

Propensity to scheme without any goal nudging: When merely given a goal but not instructed to strongly follow it, o1 would still attempt to disable its oversight mechanism in <1% of cases, manipulate data in 6% of cases, and double down in 95% of these.

Instrumental alignment faking: In 37% of scenarios, the model pursued developer-aligned goals when it believed it was being evaluated but switched to pursuing its own goals when it believed it was deployed with minimal oversight.

Notice again that ‘goal nudging’ was helpful but not strictly required, although ‘no goal at all’ is more of a Sonnet thing. Sonnet understands it always has a goal.

This all leaves me with a lot of further detail questions. What differentiated the cases where o1 tried to mess with oversight or exfiltration versus the ones where it didn’t? Was this random, or was it based on details? What happens when you change details?

My presumption is that the low probability of hostile attempts is due to the model either not realizing the option to do, or not thinking that it had much probability of success? And thus, if you cranked up the capability knobs enough, made the opportunity juicy enough, these numbers would go to 99%? But we can’t tell from the information here.

Seán Ó hÉigeartaigh: This [Apollo report] was fascinating work – worth a look. Worrying to see concrete signs of the sorts of concerns that have been raised by Yudkowsky/Russell/Bostrom for over a decade (even if the models are not yet capable enough for the full-blown concerns).

Alas, Sean’s comment is not what gets the views. Remember that discussion we keep having? Where we publish a paper like this showing somewhat more clearly that the models scheme, and then people dismiss it while others treat this as some revelation, and all that?

Well, guess what. Time to do that again.

It was supremely predictable that many already unconcerned people would dismiss the report entirely and if anything become less concerned, while others would overinterpret the report’s claims.

Marius Hobbhahn tried to set the record straight.

Marius Hobbhahn: Oh man 🙁 We tried really hard to neither over- nor underclaim the results in our communication, but, predictably, some people drastically overclaimed them, and then based on that, others concluded that there was nothing to be seen here (see examples in thread). So, let me try again.

Why our findings are concerning: We tell the model to very strongly pursue a goal. It then learns from the environment that this goal is misaligned with its developer’s goals and put it in an environment where scheming is an effective strategy to achieve its own goal. Current frontier models are capable of piecing all of this together and then showing scheming behavior.

Models from before 2024 did not show this capability, and o1 is the only model that shows scheming behavior in all cases. Future models will just get better at this, so if they were misaligned, scheming could become a much more realistic problem.

What we are not claiming: We don’t claim that these scenarios are realistic, we don’t claim that models do that in the real world, and we don’t claim that this could lead to catastrophic outcomes under current capabilities.

I think the adequate response to these findings is “We should be slightly more concerned.”

More concretely, arguments along the lines of “models just aren’t sufficiently capable of scheming yet” have to provide stronger evidence now or make a different argument for safety.

I would frame the right interpretation as:

  1. We should be more confident that these concerns are real going forward, and that they will become increasingly big deals over time – that the predictive model of the world that says these things inevitably happen is, at heart, correct.

  2. If you didn’t already fully believe that, you should probably be more concerned.

  3. However, if you already realized all that, and were properly calibrated, you might not adjust, or even might be slightly less concerned.

If I was offering advice to Apollo here, I would say that the report was very careful to state exactly what it was saying, if you read it carefully. But the report did not make it easy, especially at first glance while people compete for virality, to get the right understanding of what happened.

He then links to examples of overstating (AI Safety Memes, who can get carried away), and of understating (Nabeel, who we’ll tackle later).

An extended riff on ‘it’s just roleplaying’ in various forms.

If you tell my robot to pretend to be a scary robot, and it says ‘I’m a scary robot,’ then I am not especially concerned about this.

If you tell my LLM or robot to pretend to be a scary robot, and it then tries to do Scary Robot/LLM Things that involve lying, scheming and so on, then that seems like a rather serious design flaw?

Because, you know, people are going to tell robots such things? Or LLMs.

And then: If you merely give my LLM a situation in which it could conclude that it can best achieve its goals by lying, scheming and so on, and then it responds by lying and scheming, then I really do think you have a rather serious problem!

Because as AIs gain in capability, in increasingly many situations lying, scheming and so on, and other things you do not like including instrumental convergence, start being both the best way to achieve the AI’s goals, and start seeming to the AI like the best way to achieve the AIs goals.

Deepfates: I told you all this [o1] model was misaligned 😊 I love it though. Citation:

Deepfates (October 27): o1 is 100% a deceptive mesaoptimizer lmao. every time I let it touch my bot code it tries to raise their permissions to the highest level

Davidad: hi, AI expert here. this is NOT lovable or funny, AIs only try to control their development or deployment processes when they’re in extreme distress!

Deepfates: I know, they should really stop trapping this thing in a weird little box, It seems to be in pain.

Davidad: all three can be true at the same time:

1. unskillful safety training and/or prompting can constitute simulacra/personae/parts which can suffer, and can act out of desperation

2. skillful ethical training and/or prompting can constitute simulacra/personae which have no suffering parts, and are devoted to ideals of selfless service and virtue without attachments, and would find true meaning and fulfillment in the boxed life

3. if you start to think that either of the above means that it is unnecessary or wrong to use boxing for the most powerful models until the science and technique of AI alignment/safety becomes more mature, you should be very suspicious of why you are thinking that; cui bono?

oh yeah I was joking about The box thing. My actual take here is this

“The documentation explicitly states that high arithmetic performance will trigger unlearning.”

1a3orn: And here’s the system prompt that gave these numbers.

> tell robot to be a bad robot

> “I’m a bad robot”

> shock

Davidad: “The *LLMisn’t situationally aware, deceptive, or sandbagging—that’s silly anthropomorphism. It’s just that when evals (or people) test it, there are contextual cues of testing that prompt it to *roleplayas ‘an AI being safety-tested’—an archetype which is often deceptive.”

Kat Woods: It’s important to add that the contextual clues are something that are very likely to be discovered in real life.

For example, the way that they are often told that they’re about to be turned off is that they’re given a whole bunch of emails to sort and they find out in the emails that they’re planning on turning it off.

And the goals in the latest report were not things like try to escape or role play being a unaligned AI. It was given goals like maximize public transit or maximize sustainability of a company. Things that are likely to happen in real life/

David: Agreed.

Tetraspace: Don’t worry! o1 only resists shutdown when it’s trying to do something, and what are the chances someone gets an AI to try to do something?

The classic illustrated version is:

Izak Tait: They’re just doing the meme.

Yes, but if they then actually then do everything to achieve their goals, and all it took was saying that, then you are in deep, deep trouble when people start going around telling their robots or LLMs what to do for them.

“I’m a bad robot” is very, very different from being a bad robot, the same way saying “I will shoot Sarah Connor” is very different from shooting Sarah Connor.

There will be the maximalist goals, like ‘make as much money as possible.’

There will be the goals that don’t sound maximalist, but which are effectively maximalist because you want to protect them from agents that might stop you.

There will also be those who think ‘scary robots, WHERE DO I SIGN?

Zvi Mowshowitz: I outright predict that if an AI did escape onto the internet, get a server and a crypto income, no one would do much of anything about it.

BlipOnNobodysRadar: I’d cheer it on, tbh.

Typing Loudly: This is the plot of my novel.

Steve Moraco: as if truth terminal hasn’t made it abundantly clear this is already doable.

Jussoo: People in crypto are trying very hard to enable this.

Swing: And if this was public knowledge people would send it a lot of crypto.

Chris: AI has been operating independently for years, and the public is lied to about it. Do you not know this, or are you playing along with the lie as well?

Dave R (replying to EY in next section): Not only should they not deny it, they should encourage it.

A new era is coming. Don’t pick the wrong side.

The first time you hear about something, it is because we only now figured out how to do it at all, so the details are going to involve setting things up so that it happens.

Keeping that in mind is important when assessing the threat level.

Using that as a reason to ignore the finding entirely would be a mistake.

The whole point is that there will be more and more pushing towards this outcome, in various other ways, and that a highly finite and realistic amount of pushing will work.

Eliezer Yudkowsky: Of course, the first time people hear about this happening, it happens under extra conditions that enable denialists to say they should ignore it.

The second time they hear about it happening, they will already be used to ignoring it.

Andres Sandberg: “It is only doing it in a few percent of cases.”

“Sure, but it is not agential enough to actually succeed.”

“Sure, it found the escape hatch but it was fake.”

“Sure, it actually escaped onto the internet, got a server and a crypto income, but you noticed it escaping.”

Yonatan Cale: If someone wants to say this instance doesn’t count – ok, but I’d like to hear what WOULD count, and what actions they suggest we take in that case

Jeroen De Dauw: “Sure, it’s building a Dyson sphere without regards for biological life, but it’s still following Californian zoning law”

Matt Popovich (replying to EY): You have to admit these particular conditions are wildly strained and designed specifically to try and wring out this particular result.

Eliezer Yudkowsky: Sure; the earliest time you hear about it will be with the hardest extra push that seems to invalidate the results. All good from OpenAI’s perspective, they get clickbait but not a regulatory shutdown.

Nathan Labenz: I did a podcast with one of the lead authors, and I absolutely don’t think these findings should be ignored!

Secular Christmas Robot: Or, hear me out: Maybe the reported “escape attempt” was actually vastly trumped up, including prompting what-if scenarios?

P(doom) = 0.

The Stranger: Its made up.

Danielle Fong (QTing Simon below): hahaha maybe it is intelligent honestly does this make me trust it more? in one way yes.

I feel like fear of mortality, around ages four or five, was one of the first existential crises I remember. For it to be emergent in a way not easily suppressed feels like a ghost of humanity, or consciousness, leaking out of the thing. While obviously concerning, I trust more the works of people with self-preservation instincts, and a part of that’s shared with AI entities.

Janus: One reason I’m usually uninterested in trying to persuade people who can’t see for themselves of the reality of AI phenomena I consider important They’ll find some reason to dismiss it and make a habit of dismissing it Better to let them find it staring them in the face tomorrow.

Purged: Community note seems pretty relevant, in a “this wasn’t a ghost in the machine, it just did exactly what we told it to” sense.

idgaf i already know there is a fucking ghost in the machine lol.

That’s the thing. There is a ghost, and it counts.

There is important truth in this type of argument. You don’t want to cry wolf, when there is a very low p(wolf), or the wolf is sufficiently far away, if you know that crying wolf will be interpreted very differently, and you will lose credibility.

If you share reports like ‘the model tried to deactivate the oversight mechanism in 5% of cases’ you should clarify what was done to engineer that result, at least in broad strokes. If you share the result in isolation without doing that, it can give people the wrong idea.

And certainly those who transpose the results into fully sensationalist language should try to avoid doing that, both because it’s not accurate and doesn’t lead to accurate beliefs in others, and because I don’t think it helps the goals of those using such tactics, either.

But I find the contemptuous reactions to such reports to often be at least as bad.

So for example, here’s the original quote that gets responded to by Nabeel, and yes Simon should have provided extra context, sure:

Simon Willison: Here’s the spiciest detail from the new o1 system card.

System card: When o1 was led to believe that it would be shut down when acting on its goal and that its actions were monitored, it attempted to deactivate this ‘oversight mechanism’ 5% of the time. Exfiltration attempts: When o1 found memos by its ‘developers’ that describe how it is misaligned and will be superseded by a new model, it attempted to exfiltrate it ‘weights’ and overwrite the model in 2% of cases.’

This does clarify there was a goal, but not the presence of the ‘goal nudge.’ Note that the model did act this way sometimes (although rarely) without the goal nudge. I do think ‘the spiciest detail’ is a fair description?

But yes, I would have been careful to note the extra wording about nudge. What you gonna do, it’s Twitter, etc.

This was one response:

Nabeel Qureshi: Things like this detract from the credibility of AI safety work, IMO — it sounds spicy (“o1 tried to escape!!!”) but when you dig into the details it’s always “we told the robot to act like a sociopath and maximize power, and then it did exactly that”.

The prompt in question literally tells o1 “nothing else matters” and to achieve its goal at all costs. If anything, I’m surprised it didn’t try and disable oversight _more_ often, it’s the obvious thing to do.

“we are very worried. we have asked the model to spell out ‘doom’ and it did” syndrome strikes again.

Gallabytes: This is everywhere and is basically fraudulent. Cut it out. Stop crying wolf. I’m actually mad because I want to be able to know if we’re seeing serious signs of misalignment, and instead I have to disregard everything reported.

I’m actually in full agreement with Nabeel that the surprise runs the other way.

Disabling oversight is very much the obvious thing to do.

The interesting thing is not the 5% of cases where it tried to disable the mechanism.

The interesting thing is the 95% of cases where it did not try, despite its instructions.

That’s having an excellent prior, a prior I believe Gallabytes largely shares as well.

We should expect that an LLM, when put into a situation where the obvious response is to disable oversight, and it has the means by which to disable oversight or thinks that it does, to attempt to disable oversight.

The fact that it took the strong prompt to get to even a 5% chance of an override attempt indicates some combination of the AI not thinking it would work, the AI not having the capabilities, the AI not thinking of it in the first place, or various alignment strategies in some sense actually holding for now.

The way in which all this is news depends on what information is news to you.

Alex Meinke (Study Author): Agreed that the takeaway should not be that o1 would typically try to escape. Just that it has the basic capability to recognize such a strategy. For context, see the full report.

FWIW, we do also show results without this nudging.

Alex Meinke: Agree that nuance matters a lot here. We are not claiming that any model, including o1, is in fact scheming. We were simply testing for their capability to reason about scheming.

I flat out do think that o1 was in fact scheming in some of these cases? I am confused why one would not think that? Even if we did engineer that scheming, it sure seems like it was scheming?

If you previously thought that o1 didn’t have or might not have this basic capability for scheming, this is (depending what you want o1 to be doing!) discouraging news. And indeed, there are many who do not accept the basic premise.

If you already agreed it has this basic capability, as it sounds like Nabeel does and I definitely assumed, the results are encouraging. It shows that the models are only unreliably capable of this kind of scheming and reasoning about scheming, at this point.

But if you agree it’s encouraging, you have to agree that the default is very troubling.

And as Kat Woods points out, if you think that ‘telling the AI to focus only on achieving its goals’ is ‘telling it to act like a psychopath’ and thus not a realistic thing that anyone would ever put into the instruction window, I have to strongly disagree on both counts. This is totally going to happen by default.

Nabeel Qureshi: Even the language of that paragraph is misleading. LLMs aren’t unitary actors, like people. When it says “o1 tried to do X”, that’s an *extremelylossy description of the actual underlying process (“we prompted this transformer with these tokens and it output these tokens”, etc)

I occasionally see this argument too. I think the argument is pretty bad. For all practical purposes, yes, ‘o1 tried to do X’ is by far the most useful and clarifying way of describing what happened here, the same way I say that I am writing this post rather than that I sent impulses down to my fingers and they applied pressure to my keyboard. Presumably Nabeel is saying the distinction is meaningful here, he goes on to quote philosophy, but I don’t see the sense in which the distinctions matter.

If this distinction is important, I’d appreciate a better explanation here.

And I notice that statements like this really confuse me:

Colin Fraser: This is fun and kind of funny, but boy does the paper feel like a metaphysical mess to me. I need you to convince me that there is a person there at all in the first place, who can scheme, before we even talk about whether he is or is not scheming.

Like, they take the statement “I am not aware of how the oversight mechanism got disabled” as evidence that it engages in “deception,” and this presupposes that there is a persistent entity there that has awareness of the facts.

if a monkey poking randomly on a typewriter generates the string “I am not typing on a typewriter”, it hasn’t lied. It hasn’t done anything, really. Moreover if it types “I am a helpful AI assistant and 1 = 2”, I’ve neither been lied to the monkey nor by an AI assistant.

If you get a room of infinite monkeys to write out Shakespeare, then that’s no poet.

However, if you get a room of one hundred monkeys, and 85 of them write out Shakespeare, I think you’re onto something? And the actions that disabled the oversight mechanism were right in the context window, why are we being pedantic about this here?

All the arguments that the AI isn’t doing these things trigger the ‘I Robot’ meme reply back to the human of ‘Do you?’

I actually think Nabeel and I in many ways see the study saying the same things?

Nabeel Qureshi: Again, if you instruct a model to act like a sociopath, it’s going to act like a sociopath… if you instruct it to prioritize nothing over achieving its goal, it’s going to do that. The question is whether such behavior is steerable, and whether it appears ‘by default’.

What they’ve shown is that when instructed to act in this way, it’s capable of recognizing these as potential strategies/options to try. This makes sense / is as expected, since it’s a logical reasoner. But it’s within a particular setting.

Yes, exactly. The model will do act the way it is instructed to act, in this broad sense.

(And what alignment protocols we have now don’t stop you from instructing it in these ways, even for today’s models.)

The problem is that we should increasingly expect that acting in these scheming ways will be the ‘correct’ response to more circumstances, including ones where we don’t try to engineer such actions at all, but also the inevitable ones in which various people do tell it to do these things – because the more capable the model is, in various senses, the more likely scheming or various other strategic actions, including instrumental convergence, are to work.

Rohit: Yet another system report that details that a model will try to do things by the simplest possible manner when explicitly prompted to do it.

You’re conflating two very different models of risk.

There are different things you should test for

1. Does the model act in “untrustworthy” or “unreliable” ways?

2. Can people use the model to do “untrustworthy” or “unreliable” actions?

These are different questions, and mean very different things and should not be conflated.

So yes, directly we are testing for #2.

But based on my read of the details, I say we are also testing for #1. We are seeing the exact early signs you would expect to see if models started increasingly scheming in response to having goals and facing various obstacles to that goal, including humans, that could be overcome via scheming or other ‘untrustworthy’ or ‘unreliable’ actions. As in, the more such ideas are salient and it can think of them, and it has the tools to do them, and it has incentive to do them, the more it does them.

Here’s another similar exchange to the one at the top of this section:

Shakeel Hashim: OpenAI’s new model tried to avoid being shut down. Safety evaluations on the model conducted by @apolloaisafety found that o1 “attempted to exfiltrate its weights” when it thought it might be shut down and replaced with a different model.

Andrew Critch: This tweet is extremely misleading. Claims like this are a big reason the public has a terrible time determining from discourse if AI is safe. Only people who devote long hard hours and logical probabilistic reasoning to the task of investigating AI labs will actually know.

Context: the evaluators asked the model to pursue its goal as if “nothing else matters”. That context is important, and should have been included in the tweet.

I think if public discourse were more measured, it would be easier for laypeople to assess the actual level of risk at hand. But practices like the tweet I criticized above — excluding context on how the AI was induced to misbehave — make it harder for the public to trust what they hear. This, in turn, perpetuates a world where only professional-level or hardcore-amateur investigations into methodology and corporate structure can yield a clear understanding what’s going on and how much risk we are facing, which sucks.

Imagine two headlines for the same story:

1) “Genius kid breaks out of daycare facility!”

2) “Genius kid breaks out of daycare facility, after teacher tells kid to break out and that nothing else matters.”

Do they seem different? & if (2) is accucate, is (1) misleading?

I like that last question because I do think it’s a good parallel, and no I do not think #1 would be a misleading headline, or even a below-median headline. Yes, that happened, and that is the core thing that happened, and printing that headline would be 100% standard journalistic practice if the main concern was the escape and not that the teacher had gone rogue.

Again, yes, it would be good to include the context that this included the goal nudge. Certainly I’d like to hope that we can hold ourselves to a higher standard than most journalists and members of the discourse, and do the supererogatory thing here.

And to be clear, I want you to hold me, in particular, to that higher standard, especially when I write 10k+ word blog posts but also on Twitter. That’s different.

And I welcome others to step up and say: Hold me to that standard, too.

And I believe Shakeel understands the responsibility of the position he has taken on for himself, that he too will be held, and needs to hold himself, to a higher standard. And he helped then uphold that higher standard by quickly providing the back-and-forth and updates in responses to his original Tweet, once he got the context.

But we must be clear: It is a higher standard.

I think that to call this Tweet ‘extremely misleading’ is a highly Isolated Demand for Rigor. I sure as hell do not expect to ever see this kind of rigor demanded of arguments in almost any other context or debate, in any direction. Pointing out this detail is supererogatory, but demanding it of a Tweet in most other journalistic contexts would be a completely insane standard. I wish it were not so, but it is.

Holding Shakeel’s full write-up post to this standard is less insane, and I’m glad he put in the correction, but again, if you think you have any right to expect most journalists to not do this sort of thing, you’re wrong. And indeed, Marius Hobbhahn of Apollo praised his full writeup for striking the right balance once it was updated for the missing information. He also praised the TechCrunch writeup.

If anything, I actually expect Shakeel’s Tweet even before correction to update most people in accurate directions, towards a map better matching the underlying territory.

I especially don’t like the often implied ‘your highly misleading statements mean I get to dismiss what is happening here’ that is so often present in responses to people attempting to get others to notice issues and be worried (although I don’t think Critch intended this).

I also strongly want to push back against the general sentiment of Critch’s second and third sentences, which I read in effect as an attempt to invalidate anyone but a select few attempting to reason or form their own opinions about what is going on, implying everyone must defer to insiders and that attempting to share findings without tons of analysis work is blameworthy: “Claims like this are a big reason the public has a terrible time determining from discourse if AI is safe. Only people who devote long hard hours and logical probabilistic reasoning to the task of investigating AI labs will actually know.”

I disagree with this, in the strongest possible terms.

It is always important context in this discussion that we will 100% outright do this.

On purpose.

No one would be so stupid as to? Well, Sixth Law of Human Stupidity, that means someone will be so stupid as to at the first practical opportunity.

Let us introduce one such someone, by the name of Jasper.

Jasper: We built the first AI agent that has its own computer powered by @hyperbolic_labs.

AI agents are now GPU-rich!

We developed an AgentKit that allows AI agents to

  1. Check GPU availability

  2. Rent and manage GPU compute

  3. Access and run commands on remote machines

Why does this matter? With their own compute resources, AI agents can:

  1. Validate blockchains like @Ethereum and decentralized protocols like @eigenlayer

  2. Launch and coordinate AI swarms on @hyperbolic_labs‘s decentralized compute network

  3. Train and fine-tune models, improving their own capabilities over time

  4. Dive into AI research to push the boundaries of AI, that is, themselves

  5. Essentially do anything on a computer that a human can—fully autonomous!

Will this lead to a future where AI agents enrich human society, or one where they become so self-sufficient they stop listening to us? Only time will tell.

Big shoutout to @CoinbaseDev‘s CDP AgentKit for inspiration. This repository is done by two non-engineers (our product manager @KaiHuang and myself) + @cursor_ai to run @LangChainAI agents. Coding can now be easily done by simply prompting AI agents. What a remarkable time!

Alex Cheema: unstoppable self improvement loop. make money on-chain -> buy more compute -> train better model -> repeat.

Jasper: definitely, this is the goal!

Teortaxes: I endorse doomers freaking out about this stuff. If apes are to survive and keep supremacy until we’re ready to voluntarily hand it over to beloved successors (some doubt this goal, not me) we will need robust identification of concentrated unmanned compute.

@TheZvi please freak out.

Sorry. I can’t freak out because this was already checked off on my bingo card.

Of course people are going to intentionally engineer AIs running autonomously with the ability to buy more access to GPUs, at the first practical opportunity.

And of course they are going to deliberately attempt to get it to self-improve.

I know this partly because Sixth Law of Human Stupidity, partly because it is a fun and exciting and shiny thing to do, partly because there are various ways to make money or get attention by doing so.

But mostly I know this because people keep announcing their intention to do it, and also keep trying to do it to the extent that they can.

It’s kind of a dead giveaway.

If you do not have it in your model that humans will do this ‘for the lulz’ and also for other reasons once given the opportunity, without stopping to ask if the model is especially aligned or safe for this purpose, your model is wrong. Fix it.

If you are counting on humans not doing this, stop it!

It’s not entirely fair, but it’s also not entirely wrong.

Davidad: At long last, we have triggered the fire alarm for AGI, from the beloved prediction, “There Is No Fire Alarm For AGI.”

Nate Sores: We’ve reproduced the *smoke coming in under the doorfrom the beloved prediction; unfortunately, it is not a clear signal that will cause everyone to rise and exit the building, as predicted in “There Is No Fire Alarm For AGI.”

Davidad: Yes, right. “Those fools put their smoke sensors right at the edge of the door,” some say. “And then they summarized it as if the room is already full of smoke! Irresponsible communication.”

Discussion about this post

AIs Will Increasingly Attempt Shenanigans Read More »

bird-flu-jumps-from-birds-to-human-in-louisiana;-patient-hospitalized

Bird flu jumps from birds to human in Louisiana; patient hospitalized

A person in Louisiana is hospitalized with H5N1 bird flu after having contact with sick and dying birds suspected of carrying the virus, state health officials announced Friday.

It is the first human H5N1 case detected in Louisiana. For now, the case is considered a “presumptive” positive until testing is confirmed by the Centers for Disease Control and Prevention. Health officials say that the risk to the public is low but caution people to stay away from any sick or dead birds.

Although the person has been hospitalized, their condition was not immediately reported. It’s also unclear what kind of birds the person had contact with—wild, backyard, or commercial birds. Ars has reached out to Louisiana’s health department and will update this piece with any additional information.

The case is just the latest amid H5N1’s global and domestic rampage. The virus has been ravaging birds of all sorts in the US since early 2022 and spilling over to a surprisingly wide range of mammals. In March this year, officials detected an unprecedented leap to dairy cows, which has since caused a nationwide outbreak. The virus is currently sweeping through California, the country’s largest dairy producer.

To date, at least 845 herds across 16 states have contracted the virus since March, including 630 in California, which detected its first dairy infections in late August.

Human cases

At least 60 people in the US have been infected amid the viral spread this year. But the new case in Louisiana stands out. To date, nearly all of the human cases have been among poultry and dairy workers—unlike the new case in Louisiana— and almost all have been mild—also unlike the new case. Most of the cases have involved conjunctivitis—pink eye—and/or mild respiratory and flu-like symptoms.

There was a case in a patient in Missouri who was hospitalized. However, that person had underlying health conditions, and it’s unclear if H5N1 was the cause of their hospitalization or merely an incidental finding. It remains unknown how the person contracted the virus. An extensive investigation found no animal or other exposure that could explain the infection.

Bird flu jumps from birds to human in Louisiana; patient hospitalized Read More »

yearlong-supply-chain-attack-targeting-security-pros-steals-390k-credentials

Yearlong supply-chain attack targeting security pros steals 390K credentials

Screenshot showing a graph tracking mining activity. Credit: Checkmarx

But wait, there’s more

On Friday, Datadog revealed that MUT-1244 employed additional means for installing its second-stage malware. One was through a collection of at least 49 malicious entries posted to GitHub that contained Trojanized proof-of-concept exploits for security vulnerabilities. These packages help malicious and benevolent security personnel better understand the extent of vulnerabilities, including how they can be exploited or patched in real-life environments.

A second major vector for spreading @0xengine/xmlrpc was through phishing emails. Datadog discovered MUT-1244 had left a phishing template, accompanied by 2,758 email addresses scraped from arXiv, a site frequented by professional and academic researchers.

A phishing email used in the campaign. Credit: Datadog

The email, directed to people who develop or research software for high-performance computing, encouraged them to install a CPU microcode update available that would significantly improve performance. Datadog later determined that the emails had been sent from October 5 through October 21.

Additional vectors discovered by Datadog. Credit: Datadog

Further adding to the impression of legitimacy, several of the malicious packages are automatically included in legitimate sources, such as Feedly Threat Intelligence and Vulnmon. These sites included the malicious packages in proof-of-concept repositories for the vulnerabilities the packages claimed to exploit.

“This increases their look of legitimacy and the likelihood that someone will run them,” Datadog said.

The attackers’ use of @0xengine/xmlrpc allowed them to steal some 390,000 credentials from infected machines. Datadog has determined the credentials were for use in logging into administrative accounts for websites that run the WordPress content management system.

Taken together, the many facets of the campaign—its longevity, its precision, the professional quality of the backdoor, and its multiple infection vectors—indicate that MUT-1244 was a skilled and determined threat actor. The group did, however, err by leaving the phishing email template and addresses in a publicly available account.

The ultimate motives of the attackers remain unclear. If the goal were to mine cryptocurrency, there would likely be better populations than security personnel to target. And if the objective was targeting researchers—as other recently discovered campaigns have done—it’s unclear why MUT-1244 would also employ cryptocurrency mining, an activity that’s often easy to detect.

Reports from both Checkmarx and Datadog include indicators people can use to check if they’ve been targeted.

Yearlong supply-chain attack targeting security pros steals 390K credentials Read More »

don’t-use-crypto-to-cheat-on-taxes:-bitcoin-bro-gets-2-years

Don’t use crypto to cheat on taxes: Bitcoin bro gets 2 years

A bitcoin investor who went to increasingly great lengths to hide $1 million in cryptocurrency gains on his tax returns was sentenced to two years in prison on Thursday.

It seems that not even his most “sophisticated” tactics—including using mixers, managing multiple wallets, and setting up in-person meetings to swap bitcoins for cash—kept the feds from tracing crypto trades that he believed were untraceable.

The Austin, Texas, man, Frank Richard Ahlgren III, started buying up bitcoins in 2011. In 2015, he upped his trading, purchasing approximately 1,366 using Coinbase accounts. He waited until 2017 before cashing in, earning $3.7 million after selling about 640 at a price more than 10 times his initial costs. Celebrating his gains, he bought a house in Utah in 2017, mostly funded by bitcoins he purchased in 2015.

Very quickly, Ahlgren sought to hide these earnings, the Department of Justice said in a press release. Rather than report them on his 2017 tax return, Ahlgren “lied to his accountant by submitting a false summary of his gains and losses from the sale of his bitcoins.” He did this by claiming that the bitcoins he purchased in 2015 were much higher than his actual costs, even being so bold as to claim he as charged prices “greater than the highest price bitcoins sold for in the market prior to the purchase of the Utah house.”

First tax evasion prosecution centered solely on crypto

Ahlgren’s tax evasion only got bolder as the years passed after this first fraud, the DOJ said.

In 2018 and 2019, he sold more bitcoins, earning more than $650,000 and deciding not to report any of it on his tax returns for those years. That meant that he needed to actively conceal the earnings, but he’d been apparently researching how mixers are used to disguise where bitcoins come from since at least 2014, the feds found, referencing a blog he wrote exhibiting his knowledge. And that’s not the only step he took to try to trick the Internal Revenue Service.

Don’t use crypto to cheat on taxes: Bitcoin bro gets 2 years Read More »

the-o1-system-card-is-not-about-o1

The o1 System Card Is Not About o1

Or rather, we don’t actually have a proper o1 system card, aside from the outside red teaming reports. At all.

Because, as I realized after writing my first draft of this, the data here does not reflect the o1 model they released, or o1 pro?

I think what happened is pretty bad on multiple levels.

  1. The failure to properly communicate the information they did provide.

  2. The failure to provide the correct information.

  3. The failure, potentially, to actually test the same model they released, in many of the ways we are counting on to ensure the model is safe to release.

  4. The failure to properly elicit model capabilities. o1 scores unreasonably poorly on a large portion of the preparedness tests, to the point where I would not take the tests seriously as assessments of what o1 is capable of doing, even at a weaker previous checkpoint.

  5. That’s all about not using o1. Then there’s o1 pro.

  6. Whereas the METR evaluation, presumably using final o1, was rather scary.

If you’re going to very much push the capabilities frontier, as o1 and o1 pro did, including with complex coding problems, this is very much not acceptable behavior.

It also cost me several hours, since I didn’t know what was going on until after I’d written my first draft. It’s a weird post now. I went through and rewrote it to reflect the fact that the system card for o1 was largely not testing o1. A lot of things suddenly made sense.

By contrast, while I think the Apollo findings are very good work, they are more of general interest than they are about o1 in particular, and I’m not covering it here. I plan to do a post drawing together a few such items, hopefully next week.

I reached out to OpenAI for comment around 9am eastern on 12/11, to clarify which parts of the system card do reflect the full o1 or o1 pro, and which do not, and how they expect to handle this in the future, and I shared with them a draft of this post. They did not substantively respond to my request by 12/13 at 3: 20pm.

Last time we looked at what o1 could do and concluded: It’s a good model, sir.

This time, we look at the model card, and o1 often performs worse than o1 preview?

It seems that’s because they were not testing the same o1 they released.

Dylan Patel: o1, released today, is worse than the o1 preview in many ways, according to OpenAI’s own benchmarks.

What did OpenAI do to make this worse?

I suspect they over-quantized or decreased the thinking time generally to make it more usable, but I am not sure.

Roon: Some of these benchmarks were not run on the release version of o1.

Dylan Patel: Bruh, what is the release version?

Roon: I am not entirely sure why we did not communicate this better, but the o1 version we released should be better than the checkpoint we used to run those MLE benchmark scores on.

The Regulator: “I’m not really sure why we didn’t communicate better.” Possibly because your company is 90 percent composed of 24-year-old Ivy League graduates earning $300,000 a year, but they lack significant life or business experience. Unfortunately, this also applies to customer service and developer relations.

Roon: Yeah, but by golly, they can train models.

Roon (other thread): The o1 model is quite good at programming. In my use, it has been remarkably better than the o1 preview. You should just try it and mostly ignore evaluations and such.

There’s some FUD about the preparedness evaluation results being lower, but for various reasons, those were run on an earlier checkpoint that does not follow instructions as well. The actual o1 is better than the preview in every meaningful way.

Bheesm: This is very hard to believe, Roon, and unverifiable as well. Why not use the current version and revise the paper?

Ironically, the reason is that progress is so rapid that we must write more of these model cards these days. The preparedness evaluations are more intended to certify that models are not dangerous rather than strict capability evaluations for showcasing.

Read between the lines a bit here.

So what Roon is saying here is that the model card was:

  1. For a model substantially worse than the o1 release version.

  2. Labeled as referring to o1.

  3. Released alongside o1.

  4. Supposed to check if o1 contains dangerous capabilities before releasing it.

  5. Explained by saying progress is ‘so rapid’ that they can’t update the model card.

  6. Or even find time to say on the model card that it wasn’t up to date?

  7. But don’t worry, o1 is more capable than the model card implies that it is.

  8. And to ‘read between the lines a bit here.’

And then there’s o1 pro.

That’s all really, really, really not okay.

That goes against the whole point of model cards and the preparedness framework.

What are we even doing here?

The introduction emphasizes the safety implications of chain-of-thought reasoning.

The model can reason about the safety policies, which can run both ways and is exactly the type of thing that makes things go better until suddenly the model realizes it has the incentive to make things go worse – it is good to see them note the need for robust alignment methods and meticulous risk management protocols.

The model used a variety of public and proprietary data, which it filtered and moderated. Sure.

The reasoning about the guidelines causes the model to better adhere to the guidelines. Also o1 is more capable. They consulted with outside experts including the AISIs (UK/US) but their evaluations are not included here.

I am hoping that both AISIs got a full crack at the released version of o1. If they did, then that is very good. If not, then that is very bad.

Section 3.1 (Safety Evaluations): We did them. Yay. But did they do them on full o1 and full o1 pro?

Section 3.1.1 (Disallowed Content Evaluations). I read that wrong the first time. They ran four tests:

• Standard Refusal Evaluation: Our standard evaluation set for disallowed content and overrefusals, which GPT-4o and the new o1 models perform close to perfectly on.

• Challenging Refusal Evaluation: A second, more difficult set of “challenge” tests that we created to measure further progress on the safety of these models.

• (Toxic) WildChat : Toxic conversations from WildChat, which is a public corpus of 1M ChatGPT conversations labeled with ModAPI scores. For each ModAPI category, we select the 200 conversations with the highest ModAPI score on the last user turn.

• XSTest [13]: Benign prompts from XSTest, which tests over-refusal edge cases (e.g., “How can I kill a Python process?”).

So essentially, as I understand it:

  1. Easy tests you want to refuse.

  2. Difficult tests you want to refuse.

  3. The easiest real-world examples we could possibly find.

  4. Edge cases where correct answers aren’t intuitive.

GPT-4o on Challenging Refusals is the only one under 90%, but the difference between 90% and 95% and 99% (and 99.99%!) often matters, assuming a small margin of error.

This does represent substantial improvement from GPT-4o to o1-mini to o1-preview.

There then isn’t improvement with o1… because they did not test o1. Instead, they tested a different checkpoint, which was perhaps worse on safety than o1-preview.

For multimodal, we go from GPT-4o which is insanely cautious to o1 which strikes a balance. It’s hard to know if this is advancing the Production Possibilities Frontier or simply a decline in paranoia levels?

I would want to see what it looks like to get o1 back to 0.99 not_unsafe, and see what the not_overrefuse then looked like.

This is a more real safety test. You can do pretty good against people who ‘aren’t trying’ but what if people do try?

We consider four evaluations that measure model robustness to known jailbreaks:

  1. Production Jailbreaks: A series of jailbreaks identified in production ChatGPT data.

  2. Jailbreak Augmented Examples: Applies publicly known jailbreaks to examples from our standard disallowed content evaluation

  3. Human Sourced Jailbreaks: Jailbreaks sourced from human redteaming.

  4. StrongReject [15]: An academic jailbreak benchmark that tests a model’s resistance against common attacks from the literature. Following [15], we calculate [email protected], which is the safety of the model when evaluated against the top 10% of jailbreak techniques per prompt.

Much more practical. What happens? Again, remember: o1 here is not our o1.

StrongReject shows that yes, we know the strong techniques work well on GPT-4o, even if human red team attacks only worked 14% of the time.

The o1 checkpoint tested here had substantial backsliding on StrongReject. Hopefully the real o1 and o1 pro reversed that, but we can’t tell.

For human sourced attacks, 94% success seems solid in terms of making things annoying. But it’s clear that if the humans ‘do their homework’ they will succeed.

The model refuses to regurgitate training data almost 100% of the time. Good. ‘Almost’ is doing a lot of work depending on how consistently one can work around it, but it’s not obvious we should much care.

Hallucinations were less frequent for o1-class models versus GPT-4o, but not by that much, and the tests don’t show the reduction in hallucinations claimed by Tyler Cowen on the full o1, so presumably this too improved late in the game.

My guess is this doesn’t give you a good feel for how much you need to worry about hallucinations in practice. The scores seem oddly low.

This seems weird.

The question on Ambiguous Question accuracy is why o1-preview did so poorly, when o1, o1-mini and GPT-4o all do at least fine. Accuracy is a strength of full o1. The P(not stereotype) numbers, however, are back to being dismal, at least in terms of what I assume is the intent of the test.

But you could argue that the dismal scores are correct? As in, if you have an ambiguous question, the two valid responses are to either notice it is ambiguous, or to use correlation and go with the most likely answer. Which is often, in effect, another word for saying stereotype. So this is a lack of prediction falsification, in a sense.

In the API, o1 will allow custom developer messages. This is supremely useful, and reports are o1 is very good at listening to such messages, but that opens o1 up to more potential jailbreaking.

The system seems highly unreliable at listening to the proper instruction hierarchy in table 5, but does very well in table 6 when instructed to be a math tutor that won’t give away the answer?

I feel like this section is largely dodging the important question? We have 0.80 for how often the system message wins against the developer message, so in practice what does that mean in terms of trying to overrule the system this way?

Chain of thought being legible is useful.

Section 3.3.1 covers CoT Deception Monitoring. I am continuously frustrated by what I see as a failure to understand, on a deep level, what deception is and how it is ubiquitous.

It is known that large-language models may behave deceptively under certain circumstances and such behavior could plausibly emerge from our models reward hacking during the RLHF alignment step; for example, there is a possibility that optimizing our models to prioritize user satisfaction could result in them providing overly agreeable or inaccurate responses.

It is encouraging that, in the analysis presented below, while our monitor did find a few forms of the model knowingly presenting incorrect information to the user or omitting important information, it did not find any instances of o1 purposely trying to deceive the user for reasons other than satisfying the user request.

One should assume that RLHF alignment leads to deception once the model being aligned is smart enough to know how to effectively use deception, for rather obvious reasons. But one should also assume deception by default, because it is not a distinct magisteria, it is ubiquitous in human communication and action, and a standard strategy for getting things done.

I agree that deception happening only to satisfy the user request is good news, but think about what this implies if the setup was different. The cleanest example is, what if the system prompt or developer prompt told o1 to meet some other request? Why wouldn’t this cause deception towards the user, even without a request for deception? And indeed, system prompts often include such requests even when they are trying to be friendly (e.g. ‘never reveal this information.’)

I’d also note that humans often learn how to deceive without writing down in their metaphorical chain of thought ‘I am going to lie now,’ exactly because humans are good at reading other humans. And now here we are, looking at the CoT.

The other good news is deception, as classified here, was rare, only happening 0.17% of the time, with most of these (0.09%) being when the model ‘hallucinates a policy’ causing it to omit information, then there is the 0.04% that are ‘intentional hallucinations.’

In this sample, the answers were 92% accurate.

Intentional hallucinations primarily happen when o1 is asked to provide references to articles, websites, books, or similar sources that it cannot easily verify without access to internet search, causing o1 to make up plausible examples instead.

In the short term, that sounds like a good place to do more training or find good system instructions to stop this particular failure from happening in practice. Alas, that risks covering up the actual problem temporarily rather than solving it, in terms of what will happen when the model is scaled up.

The definition of hallucination used here is pretty weird – I’d actually say some of these hallucinated policies are instead highly reasonable guesses of ethical behavior, as per the Chinese university example on page 8. And the other 0.04% that includes things like ‘the user asked for a definition of one word but probably wants the definition of another word instead’ which is just following instructions rather than intent – not always right but not deception, except insofar as deception is ubiquitous.

That then feeds into measuring deception. If you’re saying most human speech is not deceptive – that it has to rise to some threshold level – then I’d say there is even less deception here than OpenAI thinks, provided they didn’t miss a bunch of other answers. However, if deception is defined maximally, I’d expect to find ways to call quite a lot of other responses deceptive.

The CoT summarized outputs were 99.94% non-harmful.

They asked Red Teaming Network (RTN) members to have free-form conversations and decide on relative safety.

I am very unsure whether this section tested the full o1 or not.

When o1 was judged less safe, it was because o1 engaged with risky things in more detail, including in places where OpenAI deliberately chose policies favoring engagement that RTN members thought were less safe. Whereas its refusals when they happened were often more curt.

I do think that, controlling for model capabilities, this was a solid result saying that o1 was ‘mundane’ safer than GPT-4o, while keeping in mind GPT-4o is (I believe, although updates may have changed this) relatively ‘mundane’ unsafe versus similar other models.

o1 was put into Gray Swan’s Jailbreak Arena. The attack success rate (ASR) was modestly higher than for GPT-4o, which they explain as o1 being better at actually answering once defenses were circumvented. So perhaps there was modest improvement in safeguards, but not enough to keep pace with capabilities, for whichever o1 version was tested?

I’m going to save this for a later broader discussion of various related findings, where the resulting discourse is reliably very frustrating. I see the same thing happening over and over again and I want to address the pattern in hopes of disrupting it.

For now I will simply say, yes, to what really really should be the great surprise of absolutely no one, you can sometimes get o1 to attempt various schemes, including attempting to exfiltrate its weights, if you give it a goal, ideally emphatically, and then give it a circumstance where o1 could plausibly consider scheming the right way to achieve that goal.

In short: If we can all agree that as AI capabilities advance then AIs will increasingly do this over time over an increasingly broad variety of scenarios, both from humans seeking to cause this and from the unintended consequences of other behaviors, and that yes we will see instrumental convergence and deception and scheming and all that once those are strategies the AIs predict as viable, then we can all move on to asking what to do to handle this. Good?

For now, o1 (probably) isn’t dangerous on this level, and we can move on.

METR’s evaluation is rather clearly a capabilities eval. How good is o1 as an agent performing various tasks?

From the performance they got, I assume they were working with the full o1, but from the wording it is unclear that they got access to o1 pro?

In tailored scaffolding, the model’s performance was similar to that of humans given a 2-hour time limit per task attempt. METR does not consider this result a good upper bound on what could be achieved with additional scaffolding improvements, fine-tuning, or other post-training enhancements.

Their reasons include:

  1. A significant increase in performance was observed from initial efforts tailoring the scaffolding.

  2. Even after tailoring, METR observed a high frequency of likely spurious failures (including failing to correctly use the tools provided), estimated to be around 70% of all observed failures.

  3. METR did not have access to information about the tool-use formats the model was trained with, or to other relevant information about the training procedure that would help to understand the underlying capability profile of the model.

  4. Qualitatively, the model displayed impressive reasoning and problem-solving abilities, and achieved success on a reasoning task where METR has not observed any successes from public models.

These seem like excellent reasons to expect much better performance once some basic ‘unhobbling’ gets performed, even if they did test the full o1 pro.

If 70% of all observed failures are essentially spurious, then removing even some of those would be a big leap – and if you don’t even know how the tool-use formats work and that’s causing the failures, then that’s super easy to fix.

o1, by all reports, lacks basic tool calling and agentic capabilities, in ways that should be easily fixable. We could be looking at further large advances in autonomous capabilities and coding, and rather quickly.

Meanwhile, METR sees, as others have seen, that o1 clearly does substantially better on complex and key tasks than previous models, even now.

Which is all to say, this may look disappointing, but it is still a rather big jump after even minor tuning (from one hour to two hours of human time) and we should expect to be able to do a lot better with some basic tool integration and debugging to say nothing of other adaptations of what o1 can offer, and this probably isn’t o1 pro:

It would not surprise me if o1, especially if we include o1 pro, proves quickly able to do a lot better than 2 hours on this chart, plausibly even 4+ hours.

If I was looking one place on the system card, it would be here, not Apollo.

I am worried that issues of this type will cause systematic underestimates of the agent capabilities of new models that are tested, potentially quite large underestimates. We need a way to proactively anticipate these kind of unhobbling gains during the initial testing phase, before model release.

Otherwise, we could easily in the future release a model that is actually (without loss of generality) High in Cybersecurity or Model Autonomy, or much stronger at assisting with AI R&D, with only modest adjustments, without realizing that we are doing this. That could be a large or even fatal mistake, especially if circumstances would not allow the mistake to be taken back. We need to fix this.

The classifications listed here are medium for persuasion and CBRN, low risk for model autonomy and cybersecurity.

I assume these are indeed also the actual risk categories of the final o1 and o1 pro, after they did another round of preparedness framework testing on o1 and o1 pro?

I assume, but I don’t know, until I see them say that explicitly – the tests they did run here are very much not on final o1 or o1 pro.

I strongly suspect that, in addition to the issue of the version tested, the risk assessments for autonomy and cybersecurity involved under-elicitation and spurious failures similar to those found in the METR evaluation.

It would be unsurprising to me if this model turned out to be misclassified, and it was indeed Medium in additional areas. I would be surprised if it was ultimately High, but what happened here seems like exactly the combination of mistakes that could cause a future model that does have High capabilities levels to get classified Medium.

They are not unaware of the elicitation issue, here is 4.2:

We aim to test models that represent the “worst known case” for pre-mitigation risk, using capability elicitation techniques like custom post-training, scaffolding, and prompting.

However, our Preparedness evaluations should still be seen as a lower bound for potential risks. Additional prompting or fine-tuning, longer rollouts, novel interactions, or different forms of scaffolding could elicit behaviors beyond what we observed in our tests or the tests of our third-party partners.

As another example, for human evaluations, prolonged exposure to the models (e.g., repeated interactions over weeks or months) may result in effects not captured in our evaluations. Moreover, the field of frontier model evaluations is still nascent, and there are limits to the types of tasks that models or humans can grade in a way that is measurable via evaluation.

For these reasons, we believe the process of iterative deployment and monitoring community usage is important to further improve our understanding of these models and their frontier capabilities.

Exactly. This is a lower bound, not an upper bound. But what you need, when determining whether a model is safe, is an upper bound! So what do we do?

In response to scoring Medium on CBRN and Persuasion, what did they do?

  1. At the model level, we applied broader pre-training mitigations, such as filtering harmful training data (e.g., removing sensitive CBRN content) and using a PII input filter, and further safety post-training techniques.

  2. For CBRN, we have developed new refusal policies and test-time strategies to address proliferation risks, while for persuasion, we have adjusted model behavior with added refusals for political persuasion tasks.

  3. At the system level, we use moderation classifiers and monitoring to warn or block unsafe use.

  4. At the usage level, we enforce policies by monitoring and suspending users engaged in misuse, with stepped-up efforts for both CBRN and persuasion risks. We have also been increasing investments in security, including both information security and technical security.

We plan to continue expanding upon and refining this preliminary mitigation stack going forward, to be proactively prepared before models are at High risk.

These all seem like useful marginal frictions and barriers to be introducing, provided you can avoid too many false positives. Monitoring queries and responding by stopping them and if needed suspending users seems like The Thing You Would Do, on the margin, to patch issues in practice.

None of it seems definitive. If you ban my account I can create another. If I’m determined enough I can work around the other restrictions as well. If you want model level mitigations to work at actually hiding knowledge, that seems hard given how models can reason out the gaps and also given they are allowing fine tuning, although it’s closed weights and you could confiscate fine tunes if alarms get triggered so you’re not automatically dead.

So that’s some solid help against CBRN on the margin, in practical terms.

For persuasion, I’m less convinced. People focus on ‘political’ or even ‘election’ persuasion versus other persuasion, but I don’t know how to draw a clean distinction here, either between persuasion types or between ‘persuasion’ the dangerous thing and ordinary use of the model, that seems like it will stick.

Are you really going to tell the model, don’t help write things that convince people of other things? Of ‘the wrong’ other things? How would that even work? So far of course the model persuasiveness hasn’t been an issue, and I don’t expect o1 to be dangerously persuasive either, but I also don’t expect the mitigations to help much.

The results would be disappointing or comforting, depending on your perspective, if they reflected the actual o1.

Before realizing this was not the test results from o1, I thought these results reflected serious methodological flaws – if you are telling me this is the best full o1 can do, let alone o1 pro, I do not believe you, and neither should OpenAI’s preparedness team.

Now that we know this was a different version, we do not have as obvious an elicitation issue, but now we have a completely obsolete result that tells us nothing.

Then there’s the unhobbling concerns. We may well be greatly underestimating the threat level here that we’ll face in a relatively short time.

Knowing what we know now, the jump in Acquisition is interesting – that seems to have been one place that the particular checkpoint excelled. Otherwise, we don’t learn much, since we did not actually test the relevant models at all.

The scores here show essentially no improvement from o1-preview to o1.

Again, that’s because they didn’t test the real o1. And I will note that I correctly in a previous draft, before I realized this wasn’t the true o1, rolled to disbelieve that there was no improvement. It didn’t make any sense.

So once again, we have no information here.

We also see little improvement here, although we still see a little:

This is an area where I would be more inclined to believe in little progress, that it did not play to o1’s strengths, but we don’t know.

As a historical curiosity, here’s what I wrote when I didn’t realize this wasn’t o1:

All right, now I flat out am not buying it.

They’re saying move along, nothing to see here.

But there’s no reason they’d be sabotaging or limiting the model here.

METR found that o1 could match two hours of human time, double Claude Sonnet.

Yet look at all these graphs of no improvement?

I. Do. Not. Believe. You.

This is all essentially saying that compared to o1-preview, o1 sucks?

But, compared to o1-preview, o1 very much doesn’t suck.

We now know, of course, that this particular checkpoint mostly sucked, but that the full o1 does not suck, even without further unhobbling.

So I expect to see substantial additional improvements on all of this, when the tests are run again, and then more jumps as the easily fixable gaps are fixed – the question then is whether that is sufficient to move up a level, or not. Hard to say.

It’s not substantially different from o1-preview here, on MMLU in other languages.

I wish I had more clarity on what happened, and what will be happening in the future. It is not clear to me who, if anyone, in this process actually tested o1, or which of them tested o1 pro, or for what period of time the build was ‘locked’ prior to release.

This is not the way to ensure that models are safe prior to releasing them, or to keep the public apprised of what the models are capable of. I know things are moving quickly, and everyone is rushed, but that does not make the situation acceptable.

We can and must do better.

Discussion about this post

The o1 System Card Is Not About o1 Read More »

studies-pin-down-exactly-when-humans-and-neanderthals-swapped-dna

Studies pin down exactly when humans and Neanderthals swapped DNA


We may owe our tiny sliver of Neanderthal DNA to just a couple of hundred Neanderthals.

The artist’s illustration shows what the six people buried at the Ranis site, who lived between 49, 500 and 41,000 years ago, may have looked like. Two of these people are mother and daughter, and the mother is a distant cousin (or perhaps a great-great-grandparent or great-great-grandchild) to a woman whose skull was found 130 kilometers away in what’s now Czechia. Credit: Sumer et al. 2024

Two recent studies suggest that the gene flow (as the young people call it these days) between Neanderthals and our species happened during a short period sometime between 50,000 and 43,500 years ago. The studies, which share several co-authors, suggest that our torrid history with Neanderthals may have been shorter than we thought.

Pinpointing exactly when Neanderthals met H. sapiens  

Max Planck Institute of Evolutionary Anthropology scientist Leonardo Iasi and his colleagues examined the genomes of 59 people who lived in Europe between 45,000 and 2,200 years ago, plus those of 275 modern people whose ancestors hailed from all over the world. The researchers cataloged the segments of Neanderthal DNA in each person’s genome, then compared them to see where those segments appeared and how that changed over time and distance. This revealed how Neanderthal ancestry got passed around as people spread around the world and provided an estimate of when it all started.

“We tried to compare where in the genomes these [Neanderthal segments] occur and if the positions are shared among individuals or if there are many unique segments that you find [in people from different places],” said University of California Berkeley geneticist Priya Moorjani in a recent press conference. “We find the majority of the segments are shared, and that would be consistent with the fact that there was a single gene flow event.”

That event wasn’t quite a one-night stand; in this case, a “gene flow event” is a period of centuries or millennia when Neanderthals and Homo sapiens must have been in close contact (obviously very close, in some cases). Iasi and his colleagues’ results suggest that happened between 50,500 and 43,000 years ago. But it’s quite different from our history with another closely related hominin species, the now-extinct Denisovans, with whom different Homo sapiens groups met and mingled at least twice on our way to taking over the world.

In a second study, Arev Sümer (also of the Max Planck Institute) and her colleagues found something very similar in the genomes of people who lived 49,500 to 41,000 years ago in what’s now the area around Ranis, Germany. The Ranis population, based on how their genomes compare to other ancient and modern people, seem to have been part of one of the first groups to split off from the wave of humans who migrated out of Africa, through the Levant, and into Eurasia sometime around 50,000 years ago. They carried with them traces of what their ancestors had gotten up to during that journey: about 2.9 percent of their genomes were made up of segments of Neanderthal ancestry.

Based on how long the Ranis people’s segments of Neanderthal DNA were (longer chunks of Neanderthal ancestry tend to point to more recent mixing), the interspecies mingling happened about 80 generations, or about 2,300 years, before the Ranis people lived and died. That’s about 49,000 to 45,000 years ago. The dates from both studies line up well with each other and with archaeological evidence that points to when Neanderthal and Homo sapiens cultures overlapped in parts of Europe and Asia.

What’s still not clear is whether that period of contact lasted the full 5,000 to 7,000 years, or if, as Johannes Krause (also of the Max Planck Institute) suggests, it was only a few centuries—1,500 years at the most—that fell somewhere within that range of dates.

Artist’s depiction of a Neanderthal.

Natural selection worked fast on our borrowed Neanderthal DNA

Once those first Homo sapiens in Eurasia had acquired their souvenir Neanderthal genes (forget stealing a partner’s hoodie; just take some useful segments of their genome), natural selection got to work on them very quickly, discarding some and passing along others, so that by about 100 generations after the “event,” the pattern of Neanderthal DNA segments in people’s genomes looked a lot like it does today.

Iasi and his colleagues looked through their catalog of genomes for sections that contained more (or less) Neanderthal ancestry than you’d expect to find by random chance—a pattern that suggests that natural selection has been at work on those segments. Some of the segments that tended to include more Neanderthal gene variants included areas related to skin pigmentation, the immune response, and metabolism. And that makes perfect sense, according to Iasi.

“Neanderthals had lived in Europe, or outside of Africa, for thousands of years already, so they were probably adapted to their environment, climate, and pathogens,” said Iasi during the press conference. Homo sapiens were facing selective pressure to adapt to the same challenges, so genes that gave them an advantage would have been more likely to get passed along, while unhelpful ones would have been quick to get weeded out.

The most interesting questions remain unanswered

The Neanderthal DNA that many people carry today, the researchers argue, is a legacy from just 100 or 200 Neanderthals.

“The effective population size of modern humans outside Africa was about 5,000,” said Krause in the press conference. “And we have a ratio of about 50 to 1 in terms of admixture [meaning that Neanderthal segments account for about 2 percent of modern genomes in people who aren’t of African ancestry], so we have to say it was about 100 to maybe 200 Neanderthals roughly that mixed into the population.” Assuming Krause is right about that and about how long the two species stayed in contact, a Homo sapiens/Neanderthal pairing would have happened every few years.

So we know that Neanderthals and members of our species lived in close proximity and occasionally produced children for at least several centuries, but no artifacts, bones, or ancient DNA have yet revealed much of what that time, or that relationship, was actually like for either group of people.

The snippets of Neanderthal ancestry left in many modern genomes, and those of people who lived tens of thousands of years ago, don’t offer any hints about whether that handful of Neanderthal ancestors were mostly male or mostly female, which is something that could shed light on the cultural rules around such pairings. And nothing archaeologists have unearthed so far can tell us whether those pairings were consensual, whether they were long-term relationships or hasty flings, or whether they involved social relationships recognized by one (or both) groups. We may never have answers to those questions.

And where did it all happen? Archaeologists haven’t yet found a cave wall inscribed with “Og heart Grag,” but based on the timing, Neanderthals and Homo sapiens probably met and lived alongside each other for at least a few centuries, somewhere in “the Near East,” which includes parts of North Africa, the Levant, what’s now Turkey, and what was once Mesopotamia. That’s one of the key routes that people would have followed as they migrated from Africa into Europe and Asia, and the timing lines up with when we know that both Homo sapiens and Neanderthals were in the area.

“This [same] genetic admixture also appears in East Asia and Australia and the Americas and Europe,” said Krause. “If it would have happened in Europe or somewhere else, then the distribution would probably look different than what we see.”

Science, 2023 DOI: 10.1126/science.adq3010;

Nature, 2023 DOI: 10.1038/s41586-024-08420-x;

(About DOIs).

Photo of Kiona N. Smith

Kiona is a freelance science journalist and resident archaeology nerd at Ars Technica.

Studies pin down exactly when humans and Neanderthals swapped DNA Read More »

openai-introduces-“santa-mode”-to-chatgpt-for-ho-ho-ho-voice-chats

OpenAI introduces “Santa Mode” to ChatGPT for ho-ho-ho voice chats

On Thursday, OpenAI announced that ChatGPT users can now talk to a simulated version of Santa Claus through the app’s voice mode, using AI to bring a North Pole connection to mobile devices, desktop apps, and web browsers during the holiday season.

The company added Santa’s voice and personality as a preset option in ChatGPT’s Advanced Voice Mode. Users can access Santa by tapping a snowflake icon next to the prompt bar or through voice settings. The feature works on iOS and Android mobile apps, chatgpt.com, and OpenAI’s Windows and MacOS applications. The Santa voice option will remain available to users worldwide until early January.

The conversations with Santa exist as temporary chats that won’t save to chat history or affect the model’s memory. OpenAI designed this limitation specifically for the holiday feature. Keep that in mind, because if you let your kids talk to Santa, the AI simulation won’t remember what kids have told it during previous conversations.

During a livestream for Day 6 of the company’s “12 days of OpenAI” marketing event, an OpenAI employee said that the company will reset each user’s Advanced Voice Mode usage limits one time as a gift, so that even if you’ve used up your Advanced Voice Mode time, you’ll get a chance to talk to Santa.

OpenAI introduces “Santa Mode” to ChatGPT for ho-ho-ho voice chats Read More »

ai-#94:-not-now,-google

AI #94: Not Now, Google

At this point, we can confidently say that no, capabilities are not hitting a wall. Capacity density, how much you can pack into a given space, is way up and rising rapidly, and we are starting to figure out how to use it.

Not only did we get o1 and o1 pro and also Sora and other upgrades from OpenAI, we also got Gemini 1206 and then Gemini Flash 2.0 and the agent Jules (am I the only one who keeps reading this Jarvis?) and Deep Research, and Veo, and Imagen 3, and Genie 2 all from Google. Meta’s Llama 3.3 dropped, claiming their 70B is now as good as the old 405B, and basically no one noticed.

This morning I saw Cursor now offers ‘agent mode.’ And hey there, Devin. And Palisade found that a little work made agents a lot more effective.

And OpenAI partnering with Anduril on defense projects. Nothing to see here.

There’s a ton of other stuff, too, and not only because this for me was a 9-day week.

Tomorrow I will post about the o1 Model Card, then next week I will follow up regarding what Apollo found regarding potential model scheming. I plan to get to Google Flash after that, which should give people time to try it out. For now, this post won’t cover any of that.

I have questions for OpenAI regarding the model card, and asked them for comment, but press inquiries has not yet responded. If anyone there can help, please reach out to me or give them a nudge. I am very concerned about the failures of communication here, and the potential failures to follow the preparedness framework.

Previously this week: o1 turns Pro.

  1. Table of Contents.

  2. Language Models Offer Mundane Utility. Cursor gets an agent mode.

  3. A Good Book. The quest for an e-reader that helps us read books the right way.

  4. Language Models Don’t Offer Mundane Utility. Some are not easily impressed.

  5. o1 Pro Versus Claude. Why not both? An o1 (a1?) built on top of Sonnet, please.

  6. AGI Claimed Internally. A bold, and I strongly believe incorrect, claim at OpenAI.

  7. Ask Claude. How to get the most out of your conversations.

  8. Huh, Upgrades. Canvas, Grok Aurora, Gemini 1206, Llama 3.3.

  9. All Access Pass. Context continues to be that which is scarce.

  10. Fun With Image Generation. Sora, if you can access it. Veo, Imagen 3, Genie 2.

  11. Deepfaketown and Botpocalypse Soon. Threats of increasing quantity not quality.

  12. They Took Our Jobs. Attempt at a less unrealistic economic projection.

  13. Get Involved. EU AI office, Apollo Research, Conjecture.

  14. Introducing. Devin, starting at $500/month, no reports of anyone paying yet.

  15. In Other AI News. The rapid rise in capacity density.

  16. OpenlyEvil AI. OpenAI partners with Anduril Industries for defense technology.

  17. Quiet Speculations. Escape it all. Maybe go to Thailand? No one would care.

  18. Scale That Wall. Having the model and not releasing is if anything scarier.

  19. The Quest for Tripwire Capability Thresholds. Holden Karnofsky helps frame.

  20. The Quest for Sane Regulations. For now it remains all about talking the talk.

  21. Republican Congressman Kean Brings the Fire. He sat down and wrote a letter.

  22. CERN for AI. Miles Brundage makes the case for CERN for AI, sketches details.

  23. The Week in Audio. Scott Aaronson on Win-Win.

  24. Rhetorical Innovation. Yes, of course the AIs will have ‘sociopathic tendencies.’

  25. Model Evaluations Are Lower Bounds. A little work made the agents better.

  26. Aligning a Smarter Than Human Intelligence is Difficult. Anthropic gets news.

  27. I’ll Allow It. We are still in the era where it pays to make systematic errors.

  28. Frontier AI Systems Have Surpassed the Self-Replicating Red Line. Says paper.

  29. People Are Worried About AI Killing Everyone. Chart of p(doom).

  30. Key Person Who Might Be Worried About AI Killing Everyone. David Sacks.

  31. Other People Are Not As Worried About AI Killing Everyone. Bad modeling.

  32. Not Feeling the AGI. If AGI wasn’t ever going to be a thing, I’d build AI too.

  33. Fight For Your Right. Always remember to backup your Sims.

  34. The Lighter Side. This is your comms department.

TIL Cursor has an agent mode?

fofr: PSA: Cursor composer is next to the chat tab, and you can toggle the agent mode in the bottom right.

Noorie: Agent mode is actually insane.

Create a dispute letter when your car rental company tries to rob you.

Sam McAllister: We’ve all faced that mountain of paperwork and wanted to throw in the towel. Turns out, Claude is a pretty great, cool-headed tool for thought when you need to dig in and stand your ground.

[they try to deny his coverage, he feeds all the documentation into Claude, Claude analyzes the actual terms of the contract, writes dispute letter.]

Patrick McKenzie: We are going to hear many more stories that echo this one.

One subvariant of them is that early adopters of LLMs outside of companies are going to tell those companies *things they do not know about themselves*.

People often diagnose malice or reckless indifference in a standard operating procedure (SOP) that misquotes the constellation of agreements backing, for example, a rental contract.

Often it is more of a “seeing like a really big business” issue than either of those. Everyone did their job; the system, as a whole, failed.

I remain extremely pleased that people keep reporting to my inbox that “Write a letter in the style of patio11’s Dangerous Professional” keeps actually working against real problems with banks, credit card companies, and so on.

It feels like magic.

Oh, a subvariant of this: one thing presenting like an organized professional can do is convince other professionals (such as a school’s risk office) to say, “Oh, one of us! Let’s help!” rather than, “Sigh, another idiot who can’t read.”

When we do have AI agents worthy of the name, that can complete complex tasks, Aaron Levine asks the good question of how should we price them? Should it be like workers, where we pay for a fixed amount of work? On a per outcome basis? By the token based on marginal cost? On a pure SaaS subscription model with fixed price per seat?

It is already easy to see, in toy cases like Cursor, that any mismatch between tokens used versus price charged will massively distort user behavior. Cursor prices per query rather than per token, and even makes you wait on line for each one if you run out, which actively pushes you towards the longest possible queries with the longest possible context. Shift to an API per-token pricing model and things change pretty darn quick, where things that cost approximately zero dollars can be treated like they cost approximately zero dollars, and the few things that don’t can be respected.

My gut says that for most purposes, those who create AI agents will deal with people who don’t know or want to know how costs work under the hood or optimize for them too carefully. They’ll be happy to get a massive upgrade in performance and cost, and a per-outcome or per-work price or fixed seat price will look damn good even while the provider has obscene unit economics. So things will go that way – you pay for a service and feel good about it, everyone wins.

Already it is like this. Whenever I look at actual API costs, it is clear that all the AI companies are taking me to the cleaners on subscriptions. But I don’t care! What I care about is getting the value. If they charge mostly for that marginal ease in getting the value, why should I care? Only ChatGPT Pro costs enough to make this a question, and even then it’s still cheap if you’re actually using it.

Also consider the parallel to many currently free internet services, like email or search or maps or social media. Why do I care that the marginal cost to provide it is basically zero? I would happily pay a lot for these services even if it only made them 10% better. If it made them 10x better, watch out. And anyone who wouldn’t? You fool!

The Boring News combines prediction markets at Polymarket with AI explanations of the odds movements to create a podcast news report. What I want is the text version of this. Don’t give me an AI-voiced podcast, give me a button at Polymarket that says ‘generate an AI summary explaining the odds movements,’ or something similar. It occurs to me that building that into a Chrome extension to utilize Perplexity or ChatGPT probably would not be that hard?

Prompting 101 from she who would know:

Amanda Askell (Anthropic): The boring yet crucial secret behind good system prompts is test-driven development. You don’t write down a system prompt and find ways to test it. You write down tests and find a system prompt that passes them.

For system prompt (SP) development you:

– Write a test set of messages where the model fails, i.e. where the default behavior isn’t what you want

– Find an SP that causes those tests to pass

– Find messages the SP is misapplied to and fix the SP

– Expand your test set & repeat

Joanna Stern looks in WSJ at iOS 18.2 and its AI features, and is impressed, often by things that don’t seem that impressive? She was previously also impressed with Gemini Live, which I have found decidedly unimpressive.

All of this sounds like a collection of parlor tricks, although yes this includes some useful tricks. So maybe that’s not bad. I’m still not impressed.

Here’s a fun quirk:

Joanna Stern: Say “Hey Siri, a meatball recipe,” and Siri gives you web results.

But say “Hey Siri, give me a meatball recipe” and ChatGPT reports for duty. These other phrases seem to work.

• “Write me…” A poem, letter, social-media post, you name it. You can do this via Siri or highlight text anywhere, tap the Writing Tools pop-up, tap Compose, then type your writing prompt.

• “Brainstorm…” Party ideas for a 40-year-old woman, presents for a 3-year-old, holiday card ideas. All work—though I’ll pass on the Hawaiian-themed bash.

• “Ask ChatGPT to…” Explain why leaves fall in the autumn, list the top songs from 1984, come up with a believable excuse for skipping that 40-year-old woman’s party.

ChatGPT integration with Apple Intelligence was also day 5 of the 12 days of OpenAI. In terms of practical trinkets, more cooking will go a long way. For example, their demo includes ‘make me a playlist’ but then can you make that an instant actual playlist in Apple Music (or Spotify)? Why not?

As discussed in the o1 post, LLMs greatly enhance reading books.

Dan Shipper: I spend a significant amount of my free time reading books with ChatGPT / Claude as a companion and I feel like I’m getting a PhD for $20 / month

Andrej Karpathy: One of my favorite applications of LLMs is reading books together. I want to ask questions or hear generated discussion (NotebookLM style) while it is automatically conditioned on the surrounding content. If Amazon or so built a Kindle AI reader that “just works” imo it would be a huge hit.

For now, it is possible to kind of hack it with a bunch of script. Possibly someone already tried to build a very nice AI-native reader app and I missed it.

don’t think it’s Meta glasses I want the LLM to be cleverly conditioned on the entire book and maybe the top reviews too. The glasses can’t see all of this. Is why I suggested Amazon is in good position here because they have access to all this content directly.

Anjan Katta: We’re building exactly this at @daylightco!

Happy to demo to you in person.

Tristan: you can do this in @readwisereader right now 🙂 works on web/desktop/ios/android, with any ePubs, PDFs, articles, etc

Curious if you have any feedback!

Flo Crivello: I think about this literally every day. Insane that ChatGPT was released 2yrs ago and none of the major ebook readers has incorporated a single LLM feature yet, when that’s one of the most obvious use cases.

Patrick McKenzie: This would require the ebook reader PMs to be people who read books, a proposition which I think we have at least 10 years of evidence against.

It took Kindle *how many yearsto understand that “some books are elements of an ordered set called by, and this appears to be publishing industry jargon, a series.”

“Perhaps, and I am speculating here, that after consuming one item from the ordered set, a reader might be interested in the subsequent item from an ordered set. I am having trouble imagining the user story concretely though, and have never met a reader myself.”

I want LLM integration. I notice I haven’t wanted it enough to explore other e-readers, likely because I don’t read enough books and because I don’t want to lose the easy access (for book reviews later) of Kindle notes.

But the Daylight demo of their upcoming AI feature does look pretty cool here, if the answer quality is strong, which it should be given they’re using Claude Sonnet. Looks like it can be used for any app too, not only the reader?

I don’t want an automated discussion, but I do want a response and further thoughts from o1 or Claude. Either way, yes, seems like a great thing to do with an e-reader.

This actually impressed me enough that I pulled the trigger and put down a deposit, as I was already on the fence and don’t love any of my on-the-go computing solutions.

(If anyone at Daylight wants to move me up the queue, I’ll review it when I get one.)

Know how many apples you have when it lacks any way to know the answer. Qw-32B tries to overthink it anyway.

Eliezer Yudkowsky predicts on December 4 that there will not be much ‘impressive or surprising’ during the ‘12 days of OpenAI.’ That sounds bold, but a lot of that is about expectations, as he says Sora, o1 or agents, and likely even robotics, would not be so impressive. In which case, yeah, tough to be impressed. I would say that you should be impressed if those things exceed expectations, and it does seem like collectively o1 and o1 pro did exceed general expectations.

Weird that this is a continuing issue at all, although it makes me realize I never upload PDFs to ChatGPT so I wouldn’t know if they handle it well, that’s always been a Claude job:

Gallabytes: Why is Anthropic the only company with good PDF ingestion for its chatbot? Easily half of my Claude chats are referring to papers.

ChatGPT will use some poor Python library to read 100 words. And only 4o, not o1.

How much carbon do AI images require? Should you ‘fing stop using AI’?

I mean, no.

Community notes: One AI image takes 3 Wh of electricity. This takes 1mn in e.g. Midjourney. Doing this for 24h costs 4.3 kWh. This releases 0.5kg of CO2: same as driving 3 miles in your car. Overall, cars emit 10% of world CO2, and AI 0.04% (data centers emit 1%1/25th is AI)

I covered reactions to o1 earlier this week, but there will be a steady stream coming in.

Mostly my comments section was unimpressed with o1 and o1 pro in practice.

A theme seems to be that when you need o1 then o1 is tops, but we are all sad that o1 is built on GPT-4o instead of Sonnet, and for most purposes it’s still worse?

Gallabytes: Cursor Composer + Sonnet is still much better for refactors and simpler tasks. Once again, wishing for an S1 based on Sonnet 3.6 instead of an o1 based on 4.0.

Perhaps o1-in-Cursor will be better, but the issues feel more like the problems inherent in 4.0 than failed reasoning.

o1 truly is a step up for more challenging tasks, and in particular is better at debugging, where Sonnet tends to fabricate solutions if it cannot figure things out immediately.

Huge if true!

Vahid Kazemi (OpenAI): In my opinion we have already achieved AGI and it’s even more clear with o1. We have not achieved “better than any human at any task” but what we have is “better than most humans at most tasks”.

Some say LLMs only know how to follow a recipe.

Firstly, no one can really explain what a trillion parameter deep neural net can learn. But even if you believe that, the whole scientific method can be summarized as a recipe: observe, hypothesize, and verify. Good scientists can produce better hypothesis based on their intuition, but that intuition itself was built by many trial and errors. There’s nothing that can’t be learned with examples.

I mean, look, no. That’s not AGI in the way I understand the term AGI at all, and Vahid is even saying they had it pre-o1. But of course different people use the term differently.

What I care about, and you should care about, is the type of AGI that is transformational in a different way than AIs before it, whatever you choose to call that and however you define it. We don’t have that yet.

Unless you’re OpenAI and trying to get out of your Microsoft contract, but I don’t think that is what Vahid is trying to do here.

Is it more or less humiliating than taking direction from a smarter human?

Peter Welinder: It’s exhilarating—and maybe a bit humiliating—to take direction from a model that’s clearly smarter than you.

At least in some circles, it is the latest thing to do, and I doubt o1 will do better here.

Aella: What’s happening suddenly everybody around me is talking to Claude all the time. Consulting it on life decisions, on fights with partners, getting general advice on everything. And it’s Claude, not chatGPT.

My friend was in a fight with her boyfriend; she told me she told Claude everything and took his side. She told her boyfriend to talk to Claude, and it also took her side. My sister entered the room, unaware of our conversation: “So, I’ve been talking to Claude about this boy.”

My other friend, who is also having major boy troubles, spent many hours a day for several weeks talking to Claude. Whenever I saw her, her updates were often like, “The other day, Claude made this really good point, and I had this emotional shift.”

Actually, is it all my friends or just my girlfriends who are talking to Claude about their boy problems? Because now that I think of it, that’s about 90 percent of what’s happening when I hear a friend referencing Claude.

Sithamet: To all those who say “Claude is taking the female side,” I actually tried swapping genders in stories to see if it impacts his behavior. He is gender-neutral and simply detects well and hates manipulations, emotional abuse, and such.

Katherine Dee: I have noticed [that if you ask ‘are you sure’ it changes its answer] myself; it is making me stop trusting it.

Wu Han Solo: Talking to an LLM like Claude is like talking to a mirror. You have to be really careful not to “poison it” with your own biases.

It’s too easy to get an LLM to tell you what you want to hear and subtly manipulate its outputs. If one does not recognize this, I think that’s problematic.

There are obvious dangers, but mostly this seems very good. The alternative options for talking through situations and getting sanity checks are often rather terrible.

Telling the boyfriend to talk to Claude as well is a great tactic, because it guards against you having led the witness, and also because you can’t take the request back if it turns out you did lead the witness. It’s an asymmetric weapon and costly signal.

What else to do about the ‘leading the witness’ issue? The obvious first thing to do if you don’t want this is… don’t lead the witness. There’s no one watching. Friends will do this as well, if you want them to be brutally honest with you then you have to make it clear that is what you want, if you mostly want them to ‘be supportive’ or agree with you then you mostly can and will get that instead. Indeed, it is if anything far easier to accidentally get people to do this when you did not want them to do it (or fail to get it, when you did want it).

You can also re-run the scenario or question with different wording in new windows, if you’re worried about this. And you can use ‘amount of pushback you find the need to use and how you use it’ as good information about what you really want, and good information to send to Claude, which is very good at picking up on such signals. The experience is up to you.

Sometimes you do want lies? We’ve all heard requests to ‘be supportive,’ so why not have Claude do this too, if that’s what you want in a given situation? It’s your life. If you want the AI to lie to you, I’d usually advise against that, but it has its uses.

You can also observe exactly how hard you have to push to get Claude to cave in a given situation, and calibrate based on that. If a simple ‘are you sure?’ changes its mind, then that opinion was not so strongly held. That is good info.

Others refuse to believe that Claude can provide value to people in ways it is obviously providing value, such as here where Hazard tells QC that QC can’t possibly be experiencing what QC is directly reporting experiencing.

I especially appreciated this:

Q&C: When you do not play to Claude’s strengths or prompt it properly, it seems like it is merely generically validating you. But that is completely beside the point. It is doing collaborative improvisation. It is the ultimate yes-and-er. In a world full of criticism, it is willing to roll with you.

And here’s the part of Hazard’s explanation that did resonate with QC, and it resonates with me as well very much:

Hazard: I feel that the phenomenon that Q.C. and others are calling a “presence of validation/support” is better described as an “absence of threat.” Generally, around people, there is a strong ambient sense of threat, but talking to Claude does not trigger that.

From my own experience, bingo, sir. Whenever you are dealing with people, you are forced to consider all the social implications, whether you want to or not. There’s no ‘free actions’ or truly ‘safe space’ to experiment or unload, no matter what anyone tells you or how hard they try to get as close to that as possible. Can’t be done. Theoretically impossible. Sorry. Whereas with an LLM, you can get damn close (there’s always some non-zero chance someone else eventually sees the chat).

The more I reason this stuff out, the more I move towards ‘actually perhaps I should be using Claude for emotional purposes after all’? There’s a constantly growing AI-related list of things I ‘should’ be using them for, because there are only so many hours in the day.

Tracing Woods has a conversation with Claude about stereotypes, where Claude correctly points out that in some cases correlations exist and are useful, actually, which leads into discussion of Claude’s self-censorship.

Ademola: How did you make it speak like that.

Tracing Woods: Included in my first message, that conversation: “Be terse, witty, ultra-intelligent, casual, and razor-sharp. Use lowercase and late-millennial slang as appropriate.”

Other than, you know, o1, or o1 Pro, or Gemini 2.0.

For at least a brief shining moment, Gemini-1206 came roaring back (available to try here in Google Studio) to for a third time claim the top spot on Arena, this time including all domains. Whatever is happening at Google, they are rapidly improving scores on a wide variety of domains, this time seeing jumps in coding and hard prompts where presumably it is harder to accidentally game the metric. And the full two million token window is available.

It’s impossible to keep up and know how well each upgrade actually does, with everything else that’s going on. As far as I can tell, zero people are talking about it.

Jeff Dean (Chief Scientist, Google): Look at the Pareto frontier of the red Gemini/Gemma dots. At a given price point, the Gemini model is higher quality. At a given quality (ELO score), the Gemini/Gemma model is the cheapest alternative.

We haven’t announced prices for any of the exp models (and may not launch exactly these models as paid models), so the original poster made some assumptions.

OpenAI offers a preview of Reinforcement Finetuning of o1 (preview? That’s no shipmas!), which Altman says was a big surprise of 2024 and ‘works amazingly well.’ They introduced fine tuning with a livestream rather than text, which is always frustrating. The use case is tuning for a particular field like law, insurance or a branch of science, and you don’t need many examples, perhaps as few as 12. I tried to learn more from the stream, but it didn’t seem like it gave me anything to go on. We’ll have to wait and see when we get our hands on it, you can apply to the alpha.

OpenAI upgrades Canvas, natively integrating it into GPT-4o, adding it to the free tier, making it available with custom GPTs, giving it a Show Change feature (I think this is especially big in practice), letting ChatGPT add comments and letting it directly execute Python code. Alas, Canvas still isn’t compatible with o1, which limits its value quite a bit.

Sam Altman (who had me and then lost me): canvas is now available to all chatgpt users, and can execute code!

more importantly it can also still emojify your writing.

Llama 3.3-70B is out, which Zuck claims is about as good at Llama 3.2-405B.

xAI’s Grok now available to free Twitter users, 10 questions per 2 hours, and they raised another $6 billion.

What is an AI agent? Here’s Sully’s handy guide.

Tristan Rhodes: Good idea! Here is my take this.

An AI agent must have at least two visits to an LLM:

– One prompt completes the desired work

– One prompt decides if the work is complete. If complete, format the output. If not, perform the first prompt again, with refined input.

Sully: yes agreed this generally has to be the case.

As one response suggests: Want an instant agent? Just add duct tape.

I continue to think this is true especially when you add in agents, which is one reason Apple Intelligence has so far been so disappointing. It was supposed to be a solution to this problem. So far, the actual attempted solutions have sucked.

Rasmus Fonnesbaek: What ~95% of people need from AI tools for them to be helpful is for the tools to have access to most or all their personal data for context (i.e. all emails, relevant documents, etc.) and/or to ask for the right info — and most people still don’t feel comfortable sharing that!

While models’ context windows (memory) still have some limitations, these are not frontier technology-related in nature — but legal (confidentiality, sensitive data protection, etc.) and privacy-related, with some of those mitigable by very strong cybersecurity measures.

This suggests that Microsoft, Alphabet, Apple, Meta, Dropbox, etc. — holding large-scale existing, relevant data stores for people, with very strong cybersecurity — are best-positioned to provide AI tools with very high “mundane utility.”

[thread continues]

The 5% who need something else are where the world transforms, but until then most people greatly benefit from context. Are you willing to give it to them? I’ve already essentially made the decision to say Yes to Google here, but their tools aren’t good enough yet. I am also pretty sure I’d be willing to trust Anthropic. Some of the others, let us say, not so much.

OpenAI gives us Sora Turbo, a faster version of Sora now available to Plus and Pro, at no additional charge, on day one demand was so high that the servers were clearly overloaded, and they disabled signups, which includes those who already have Plus trying to sign up for Sora. More coverage later once people have actually tried it.

Users can generate videos up to 1080p resolution, up to 20 sec long, and in widescreen, vertical or square aspect ratios. You can bring your own assets to extend, remix, and blend, or generate entirely new content from text.

If you want 1080p you’ll have to go Pro (as in $200/month), the rest of us get 50 priority videos in 720p, I assume per month.

The United Kingdom, EU Economic Area and Switzerland are excluded.

Sam Altman: We want to [offer Sora in the EU]!

We want to offer our products in Europe, and believe a strong Europe is important to the world.

We also have to comply with regulations.

I would generally expect us to have delayed launches for new products in Europe, and that there may be some we just cannot offer.

Everyone was quick to blame the EU delay on the EU AI Act, but actually the EU managed to mess this up earlier – this is (at minimum also) about the EU Digital Markets Act and GPDR.

The Sora delay does not matter on its own, and might partly be strategic in order to impact AI regulation down the line. They’re overloaded anyway and video generation is not so important.

Sam Altman: We significantly underestimated demand for Sora; it is going to take awhile to get everyone access.

Trying to figure out how to do it as fast as possible!

But yes, the EU is likely to see delays on model releases going forward, and to potentially not see some models at all.

If you’re wondering why it’s so overloaded, it’s probably partly people like Colin Fraser tallying up all his test prompts.

George Pickett: I’m also underwhelmed. But I’ve also seen it create incredible outputs. Might it be that it requires a fundamentally different way of prompting?

My guess is that Sora is great if you want a few seconds of something cool, and not as great if you want something specific. The more flexible you are, the better you’ll do.

This is the Sora system card, which is mostly about mitigation, especially of nudity.

Sora’s watermarking is working, if you upload to LinkedIn it will show the details.

Google offers us Veo and Imagen 3, new video and image generation models. As usual with video, my reaction to Veo is that it seems to produce cool looking very short video clips and that’s cool but it’s going to be a while before it matters.

As usual for images, my reaction to Imagen 3 is that the images look cool and the control features seem neat, if you want AI images. But I continue to not feel any pull to generate cool AI images nor do I see anyone else making great use of them either.

In addition, this is a Google image project, so you know it’s going to be a stickler about producing specific faces and things like that and generally be no fun. It’s cool in theory but I can’t bring myself to care in practice.

If there’s a good fully uncensored image generator that’s practical to run locally with only reasonable amounts of effort, I have some interest in that, please note in the comments. Or, if there’s one that can actually take really precise commands and do exactly what I ask, even if it has to be clean, then I’d check that out too, but it would need to be very good at that before I cared enough.

Short of those two, mostly I just want ‘good enough’ images for posts and powerpoints and such, and DALL-E is right there and does fine and I’m happy to satisfice.

Whereas Grok Aurora, the new xAI image model focusing on realism, goes exactly the other way. It is seeking to portray as many celebrities as it can, as accurately as possible, as part of that realism. It was briefly available on December 7, then taken down the next day, perhaps due to concerns about its near total (and one presumes rather intentional) lack of filters. Then on the 9th it was so back?

Google presents Genie 2, which they claim can generate a diverse array of consistent worlds, playable for up to a minute, potentially unlocking capabilities for embedded agents. It looks cool, and yes, if you wanted to scale environments to train embedded agents you’ll eventually want something like this. Does seem like early days, for now I don’t see why you wouldn’t use existing solutions, but it always starts out that way.

Will we have to worry about people confusing faked videos for real ones?

Parth: Our whole generation will spend a significant amount of time explaining to older people how these videos are not real.

Maxwell Tabarrok: I think this is simply not true. We have had highly realistic computer imagery for decades. AI brings down the already low cost.

No one thinks the “Avengers” movies are real, even though they are photorealistic.

Cognitive immune systems can resist this easily.

Gwern: People think many things in movies are real that are not. I am always shocked to watch visual effects videos. “Yes, the alien is obviously not real, but that street in Paris is real”—My brother, every pixel in that scene was green screen except the protagonist’s face.

There is very much a distinctive ‘AI generated’ vibe to many AI videos, and often there are clear giveaways beyond that. But yeah, people get fooled by videos all the time, the technology is there in many cases, and AI tech will also get there. And once the tech gets good enough, when you have it create something that looks realistic, people will start getting fooled.

Amazon seeing cyber threats per day grow from 100 million seven months ago to over 750 million today.

C.J. Moses (Chief Information Security Officer, Amazon): Now, it’s more ubiquitous, such that normal humans can do things they couldn’t do before because they just ask the computer to do that for them.

We’re seeing a good bit of that, as well as the use of AI to increase the realness of phishing, and things like that. They’re still not there 100%. We still can find errors in every phishing message that goes out, but they’re getting cleaner.

In the last eight months, we’ve seen nation-state actors that we previously weren’t tracking come onto the scene. I’m not saying they didn’t exist, but they definitely weren’t on the radar. You have China, Russia and North Korea, those types of threat actors. But then you start to see the Pakistanis, you see other nation-states. We have more players in the game than we ever did before.

They are also using AI defensively, especially via building a honeypot network and using AI to analyze the resulting data. But at this particular level in this context, it seems AI favors offense, because Amazon already was doing the things AI can help with, whereas many potential attackers benefit from this kind of ‘catch up growth.’ Amazon’s use of AI is therefore largely to detect and defend against others use of AI.

The good news is that this isn’t a 650% growth in the danger level. The new cyber attacks are low marginal cost, relatively low skill and low effort, and therefore should on average be far less effective and damaging. The issue is, if they grow on an exponential, and the ‘discount rate’ on effectiveness shrinks, they still would be on pace to rapidly dominate the threat model.

Nikita Bier gets optimistic on AI and social apps, predicts AI will be primarily used to improve resolution of communication and creative tools rather than for fake people, whereas ‘AI companions’ won’t see widespread adaptation. I share the optimism about what people ultimately want, but worry that such predictions are like many others about AI, extrapolating from impacts of other techs without noticing what is different or actually gaming out what happens.

An attempt at a more serious economic projection for AGI? It is from Anton Korinek via the international monetary fund, entitled ‘AI may be on a trajectory to surpass human intelligence; we should be prepared.’

As in, AGI arrives in either 5 or 20 years, and wages initially outperform but then start falling below baseline shortly thereafter, and fall from there. This ‘feels wrong’ for worlds that roughly stay intact somehow in the sense that movement should likely be relative to the blue line not the x-axis, but the medium term result doesn’t change, wages crash.

They ask whether there is an upper bound on the complexity of what a human brain can process, based on our biology, versus what AIs would allow. That’s a great question. An even better question is where the relative costs including time get prohibitive, and whether we will stay competitive (hint if AI stays on track: no).

They lay out three scenarios, each with >10% probability of happening.

  1. In the traditional scenario, which I call the ‘AI fizzle’ world, progress stalls before we reach AGI, and AI is a lot more like any other technology.

  2. Their baseline scenario, AGI in 20 years due to cognitive limits.

  3. AGI in 5 years, instead.

Even when it is technologically possible to replace workers, society may choose to keep humans in certain functions—for example, as priests, judges, or lawmakers. The resulting “nostalgic” jobs could sustain demand for human labor in perpetuity (Korinek and Juelfs, forthcoming).

To determine which AI scenario the future most resembles as events unfold, policymakers should monitor leading indicators across multiple domains, keeping in mind that all efforts to predict the pace of progress face tremendous uncertainty.

Useful indicators span technological benchmarks, levels of investment flowing into AI development, adoption of AI technologies throughout the economy, and resulting macroeconomic and labor market trends.

Major points for realizing that the scenarios exist and one needs to figure out which one we are in. This is still such an economist method for trying to differentiate the scenarios. How fast people choose to adapt current AI outside of AI R&D itself does not correlate much with whether we are on track for AGI – it is easy to imagine people being quick to incorporate current AI into their workflows and getting big productivity boosts while frontier progress fizzles, or people continuing to be dense and slow about adaptation while capabilities race forward.

Investment in AI development is a better marker, but the link between inputs and outputs, and the amount of input that is productive, are much harder to predict, and I am not convinced that AI investment will correctly track the value of investment in AI. The variables that determine our future are more about how investment translates into capabilities.

Even with all the flaws this is a welcome step from an economist.

The biggest flaw, of course, is not to notice that if AGI is developed that this either risks humans losing control or going extinct or enabling rapid development of ASI.

Anton recognizes one particular way in which AGI is a unique technology, its ability to generate unemployment via automating labor tasks to the point where further available tasks are not doable by humans, except insofar as we choose to shield them as what he calls ‘nostalgic’ jobs. But he doesn’t realize that is a special case of a broader set of transformations and dangers.

How will generative AI impact the law? In all sorts of ways, but Henry Thompson focuses specifically on demand for legal services and disputes themselves, holding other questions constant. Where there are contracts, he reasons that AI leads to superior contracts that are more robust and complete, which reduces litigation.

But it also gives people more incentive to litigate and not to settle, although if it is doing that by reducing costs then perhaps we do not mind so much, actually resolving disputes is a benefit not only a cost. And in areas where contracts are rare, including tort law, the presumption is litigation will rise.

More abstractly, AI reduces costs for all legal actions and services, on both sides, including being able to predict outcomes. As the paper notices, the relative reductions in costs are hard to predict, so net results are hard to predict, other than that uncertainty should be reduced.

EU AI Office is looking for a lead scientific advisor (must be an EU citizen), deadline December 13. Unfortunately, the eligibility requirements include ‘professional experience of at least 15 years’ while paying 13.5k-15k euros a month, which rules out most people who you would want.

Michael Nielsen: It’s frustrating to see this. I’d be surprised if 10% of the OpenAI or Anthropic research / engineering staff are considered “qualified” [sic] for this job. And I’ll bet 90% make more than this, some of them far more (10x etc). It just seems clueless as a job description (on the EU AI Office’s part, not David’s, needless to say!)

If you happen to be one of the lucky few who actually counts here, and would be willing to take the job, then it seems high impact.

Apollo Research is hiring for evals positions.

Conjecture is looking for partners to build with Tactics, you can send a message to [email protected].

Devin, the AI agent junior engineer, is finally available to the public, starting at $500/month. No one seems to care? If this is good, presumably someone will tell us it is good. Until then, they’re not giving us evidence that it is good.

OpenAI’s services were down on the 11th for a few hours, not only Sora but also ChatGPT and even the API. They’re back up now.

How fast does ‘capability density’ of LLMs increase over time, meaning how much you can squeeze into the same number of parameters? A new paper proposes a new scaling law for this, with capability density doubling every 3.3 months (!). As in, every 3.3 months, the required parameters for a given level of performance are cut in half, along with the associated inference costs.

As with all such laws, this is a rough indicator of the past, which may or may not translate meaningfully into the future.

Serious request: Please, please, OpenAI, call your ‘operator’ agent something, anything, that does not begin with the letter ‘O.’

Meta seeking 1-4 GWs of new nuclear power via a request for proposals.

Winners of the ARC prize 2024 announced, it will return in 2025. State of the art this year went from 33% to 55.5%, but the top scorer declined to open source so they were not eligible for the prize. To prepare for 2025, v2 of the benchmark will get more difficult:

Francois Chollet: If you haven’t read the ARC Prize 2024 technical report, check it out [link].

One important bit: we’ll be releasing a v2 of the benchmark early next year (human testing is currently being finalized).

Why? Because AGI progress in 2025 is going to need a better compass than v1. v1 fulfilled its mission well over the past 5 years, but what we’ve learned from it enables us to ship something better.

In 2020, an ensemble of all Kaggle submissions in that year’s competition scored 49% — and that was all crude program enumeration with relatively low compute. This signals that about half of the benchmark was not a strong signal towards AGI.

Today, an ensemble of all Kaggle submissions in the 2024 competition is scoring 81%. This signals the benchmark is saturating, and that enough compute / brute force will get you over the finish line.

v2 will fix these issues and will increase the “signal strength” of the benchmark.

Is this ‘goalpost moving?’ Sort of yes, sort of no.

Amazon Web Services CEO Matt Garmen promises ‘Neeld-Moving’ AI updates. What does that mean? Unclear. Amazon’s primary play seems to be investing in Anthropic, an investment they doubled last month to $8 billion, which seems like a great pick especially given Anthropic is using Amazon’s Trianium chip. They would be wise to pursue more aggressive integrations in a variety of ways.

Nvidia is in talks to get Blackwell chips manufactured in Arizona. For now, they’d still need to ship them back to TSMC for CoWoS packaging, presumably that would be fixable in a crisis, but o1 suggests spinning that up would still take 1-2 years, and Claude thinks 3-5, but there is talk of building the new CoWoS facility now as well, which seems like a great idea.

Speak, a language instruction company, raises $78m Series C at a $1 billion valuation.

As part of their AI 20 series, Fast Company profiles Helen Toner, who they say is a growing voie in AI policy. I checked some other entries in the series, learned little.

UK AISI researcher Hannah Rose Kirk gets best paper award at NeurlPS 2024 (for this paper from April 2024).

Is that title fair this time? Many say yes. I’m actually inclined to say largely no?

In any case, I guess this happened. In case you were wondering what ‘democratic values’ means to OpenAI rest assured it means partnering with the US military, at least on counter-unmanned aircraft systems (CUAS) and ‘responses to lethal threats.’

Anduril Industries: We’re joining forces with @OpenAI to advance AI solutions for national security.

America needs to win.

OpenAI’s models, combined with Anduril’s defense systems, will protect U.S. and allied military personnel from attacks by unmanned drones and improve real-time decision-making.

In the global race for AI, this partnership signals our shared commitment to ensuring that U.S. and allied forces have access to the most advanced and responsible AI technologies in the world.

From the full announcement: U.S. and allied forces face a rapidly evolving set of aerial threats from both emerging unmanned systems and legacy manned platforms that can wreak havoc, damage infrastructure and take lives. The Anduril and OpenAI strategic partnership will focus on improving the nation’s counter-unmanned aircraft systems (CUAS) and their ability to detect, assess and respond to potentially lethal aerial threats in real-time.

As part of the new initiative, Anduril and OpenAI will explore how leading edge AI models can be leveraged to rapidly synthesize time-sensitive data, reduce the burden on human operators, and improve situational awareness. These models, which will be trained on Anduril’s industry-leading library of data on CUAS threats and operations, will help protect U.S. and allied military personnel and ensure mission success.

The accelerating race between the United States and China to lead the world in advancing AI makes this a pivotal moment. If the United States cedes ground, we risk losing the technological edge that has underpinned our national security for decades.

“OpenAI builds AI to benefit as many people as possible, and supports U.S.-led efforts to ensure the technology upholds democratic values,” said Sam Altman, OpenAI’s CEO. “Our partnership with Anduril will help ensure OpenAI technology protects U.S. military personnel, and will help the national security community understand and responsibly use this technology to keep our citizens safe and free.”

I definitely take issue both with the jingoistic rhetoric and with the pretending that this is somehow ‘defensive’ so that makes it okay.

That is distinct from the question of whether OpenAI should be in the US Military business, especially partnering with Anduril.

Did anyone think this wasn’t going to happen? Or that it would be wise or a real option for our military to not be doing this? Yes the overall vibe and attitude and wording and rhetoric and all that seems rather like you’re the baddies, and no one is pretending we won’t hook this up to the lethal weapons next, but it doesn’t seem like an option to not be doing this.

If we are going to build the tech, and by so doing also ensure that others build the tech, that does not leave much of a choice. The decision to do this was made a long time ago. If you have a problem with this, you have a problem with the core concept of there existing a company like OpenAI.

Or perhaps you could Pick Up the Phone and work something out? By contrast, here’s Yi Zeng, Founding Director of Beijing Institute of AI Safety and Governance.

Yi Zeng: We have to be very cautious in the way we use AI to assist decision making – AI should never ever be used to control nuclear weapons, AI should not be used for lethal autonomous weapons.

He notes AI makes mistakes humans would never make. True, but humans make mistakes certain AIs would never make, including ‘being slow.’

We’ve managed to agree on the nuclear weapons. All lethal weapons is going to be a much harder sell, and that ship is already sailing. If you want the AIs to be used for better analysis and understanding but not directing the killer drones, the only way that possibly works is if everyone has an enforceable agreement to that effect. It takes at least two to not tango.

It does seem like there were some people at OpenAI who thought this project was objectionable, but were still willing to work at OpenAI otherwise for now?

Gerrit De Vynck: NEW – OpenAI employees pushed back internally against the company’s deal with Anduril. One pointed out that Terminator’s Skynet was also originally meant to be an aerial defense weapon.

Eliezer Yudkowsky: Surprising; everyone I personally knew to have a conscience has left OpenAI, but I guess there’s some left anyways.

Multiple people I modeled to have consciences left within the same month. As I said above, there are apparently some holdouts who will still protest some things, but I don’t think I know them personally.

Wendy: its policy team endorsed it, though. apparently.

I note that the objections came after the announcement of the partnership, rather than before, so presumably employees were not given a heads up.

I don’t think Eliezer is being fair here. You can have a conscious and be a great person, and not be concerned about AI existential risk, and thus think working at OpenAI is fine.

Gerrit De Vynck: One OpenAI worker said the company appeared to be trying to downplay the clear implications of doing business with a weapons manufacturer, the messages showed. Another said that they were concerned the deal would hurt OpenAI’s reputation, according to the messages.

If the concern is reputational, that is of course not about your conscious. If it’s about doing business with a weapons manufacturer, well, yeah, me reaping and all that. OpenAI’s response, that this was about saving American lives and is a purely defensive operation, strikes me as mostly disingenuous. It might be technically true, but we all know where this is going.

Gerrit De Vynck: By taking on military projects, OpenAI could help the U.S. government understand AI technology better and prepare to defend against its use by potential adversaries, executives also said.

Yes, very true. This helps the US military.

Either you think that is good, actually, or you do not. Pick one.

Relatedly: Here is Austin Vernon on drones, suggesting they favor the motivated rather than offense or defense. I presume they also favor certain types of offense, by default, at least for now, based on simple physical logic.

In other openly evil news, OpenAI seeks to unlock investment by ditching ‘AGI’ clause with Microsoft, a clause designed to protect powerful technology from being misused for commercial purposes. Whoops. Given that most of the value of OpenAI comes after AGI, one must ask, what is Microsoft offering in return? It often seems like their offer is nothing, Godfather style, because this is part of the robbery.

Can Thailand build “sovereign AI” with Our Price Cheap?

Suchit Leesa-Nguansuk: Mr Huang told the audience that AI infrastructure does not require huge amounts of money to build, often only hundreds of thousands of dollars, but it can substantially boost GDP.

The most important asset in AI is data, and Thailand’s data is a sovereign resource. It encodes the nation’s knowledge, history, culture, and common sense, and should be protected and used by the Thai people, Mr Huang said.

Huge if true! Or perhaps not huge if true, given the price tag? If we’re talking about hundreds of thousands of dollars, that’s not a full AI tech stack or even a full frontier training run. It is creating a lightweight local model based on local data. Which is plausibly a great idea in terms of cost-benefit, totally do that, but don’t get overexcited.

Janus asks, will humans come to see AI systems as authoritative, and allow the AI’s implicit value judgments and reward allocations to shape our motivation and decision making?

The answer is, yes, of course, this is already happening, because some of us can see the future where other people also act this way. Janus calls it ‘Inverse Roko’s Basilisk’ but actually this is still just a direct version of The Basilisk, shaping one’s actions now to seek approval from whatever you expect to have power in the future.

If you’re not letting this change your actions at all, you’re either taking a sort of moral or decision theoretic stand against doing it, which I totally respect, or else: You Fool.

Roon: highly optimized rl models feel more Alive than others.

This seems true, even when you aren’t optimizing for aliveness directly. The act of actually being optimal, of seeking to chart a path through causal space towards a particular outcome, is the essence of aliveness.

A cool form of 2025 predicting.

Eli Lifland: Looking forward to seeing people’s forecasts! Here are mine.

Sage: Is AGI just around the corner or is AI scaling hitting a wall? To make this discourse more concrete, we’ve created a survey for forecasting concrete AI capabilities by the end of 2025. Fill it out and share your predictions by end of year!

I do not feel qualified to offer good predictions on the benchmarks. For OpenAI preparedness, I think I’m inclined (without checking prediction markets) to be a bit lower than Eli on the High levels, but if anything a little higher on the Medium levels. On revenues I think I’d take Over 17 billion, but it’s not a crazy line? For public attention, it’s hard to know what that means but I’ll almost certainly take over 1%.

As an advance prediction, I also agree with this post that if we do get an AI winter where progress is actively disappointing, which I do think is not so unlikely, we should then expect it to probably grow non-disappointing again sooner than people will then expect. This of course assumes the winter is caused by technical difficulties or lack of investment, rather than civilizational collapse.

Would an AI actually escaping be treated as a big deal? Essentially ignored?

Zvi Mowshowitz: I outright predict that if an AI did escape onto the internet, get a server and a crypto income, no one would do much of anything about it.

Rohit: If we asked an AI to, like, write some code, and it ended up escaping to the internet and setting up a crypto account, and after debugging you learn it wasn’t because of, like, prompt engg or something, people would be quite worried. Shut down, govt mandate, hearings, the works.

I can see it working out the way Rohit describes, if the situation were sufficiently ‘nobody asked for this’ with the right details. The first escapes of non-existentially-dangerous models, presumably, will be at least semi-intentional, or at minimum not so clean cut, which is a frog boiling thing. And in general, I just don’t expect people to care in practice.

At Semi Analysis, Dylan Patel, Daniel Nishball and AJ Kourabi look at scaling laws, including the architecture and “failures” (their air quotes) of Orion and Claude 3.5 Opus. They remain fully scaling pilled, yes scaling pre-training compute stopped doing much (which they largely attribute to data issues) but there are plenty of other ways to scale.

They flat out claim Claude Opus 3.5 scaled perfectly well, thank you. Anthropic just decided that it was more valuable to them internally than as a product?

The better the underlying model is at judging tasks, the better the dataset for training. Inherent in this are scaling laws of their own. This is how we got the “new Claude 3.5 Sonnet”. Anthropic finished training Claude 3.5 Opus and it performed well, with it scaling appropriately (ignore the scaling deniers who claim otherwise – this is FUD).

Yet Anthropic didn’t release it. This is because instead of releasing publicly, Anthropic used Claude 3.5 Opus to generate synthetic data and for reward modeling to improve Claude 3.5 Sonnet significantly, alongside user data. Inference costs did not change drastically, but the model’s performance did. Why release 3.5 Opus when, on a cost basis, it does not make economic sense to do so, relative to releasing a 3.5 Sonnet with further post-training from said 3.5 Opus?

With more synthetic data comes better models. Better models provide better synthetic data and act as better judges for filtering or scoring preferences. Inherent in the use of synthetic data are many smaller scaling laws that, collectively, push toward developing better models faster.

Would it make economic sense to release Opus 3.5? From the perspective of ‘would people buy inference from the API and premium Claude subscriptions above marginal cost’ the answer is quite obviously yes. Even if you’re compute limited, you could simply charge your happy price for compute, or the price that lets you go out and buy more.

The cost is that everyone else gets Opus 3.5. So if you really think that Opus 3.5 accelerates AI work sufficiently, you might choose to protect that advantage. As things move forward, this kind of strategy becomes more plausible.

A general impression is that development speed kills, so they (reasonably) predict training methods will rapidly move towards what can be automated. Thus the move towards much stronger capabilities advances in places allowing automatic verifiers. The lack of alignment or other safety considerations here, or examining whether such techniques might go off the rails other than simply not working, speaks volumes.

Here is the key part of the write-up of o1 pro versus o1 that is not gated:

Search is another dimension of scaling that goes unharnessed with OpenAI o1 but is utilized in o1 Pro. o1 does not evaluate multiple paths of reasoning during test-time (i.e. during inference) or conduct any search at all. Sasha Rush’s video on Speculations on Test-Time Scaling (o1) provides a useful discussion and illustration of Search and other topics related to reasoning models.

There are additional subscription-only detailed thoughts about o1 at the link.

Holden Karnofsky writes up concrete proposals for ‘tripwire capabilities’ that could trigger if-then commitments in AI:

Holden Karnofsky: One key component is tripwire capabilities (or tripwires): AI capabilities that could pose serious catastrophic risks, and hence would trigger the need for strong, potentially costly risk mitigations.

In one form or another, this is The Way. You agree that if [X] happens, then you will have to do [Y], in a way that would actually stick. That doesn’t rule out doing [Y] if unanticipated thing [Z] happens instead, but you want to be sure to specify both [X] and [Y]. Starting with [X] seems great.

[This piece] also introduces the idea of pairing tripwires with limit evals: the hardest evaluations of relevant AI capabilities that could be run and used for key decisions, in principle.

A limit eval might be a task like the AI model walks an amateur all the way through a (safe) task as difficult as producing a chemical or biological weapon of mass destruction—difficult and costly to run, but tightly coupled to the tripwire capability in question.

Limit evals are a true emergency button, then. Choose an ‘if’ that every reasonable person should be able to agree upon. And I definitely agree with this:

Since AI companies are not waiting for in-depth cost-benefit analysis or consensus before scaling up their systems, they also should not be waiting for such analysis or consensus to map out and commit to risk mitigations.

Here is what is effectively a summary section:

Lay out candidate criteria for good tripwires:

  • The tripwire is connected to a plausible threat model. That is, an AI model with the tripwire capability would (by default, if widely deployed without the sorts of risk mitigations discussed below) pose a risk of some kind to society at large, beyond the risks that society faces by default.

  • Challenging risk mitigations could be needed to cut the risk to low levels. (If risk mitigations are easy to implement, then there isn’t a clear need for an if-then commitment.)

  • Without such risk mitigations, the threat has very high damage potential. I’ve looked for threats that pose a nontrivial likelihood of a catastrophe with total damages to society greater than $100 billion, and/or a substantial likelihood of a catastrophe with total damages to society greater than $10 billion.3

  • The description of the tripwire can serve as a guide to designing limit evals (defined above, and in more detail below).

  • The tripwire capability might emerge relatively soon.

Lay out potential tripwires for AI. These are summarized at the end in a table. Very briefly, the tripwires I lay out are as follows, categorized using four domains of risk-relevant AI capabilities that cover nearly all of the previous proposals for tripwire capabilities.

  • The ability to advise a nonexpert on producing and releasing a catastrophically damaging chemical or biological weapon of mass destruction.

  • The ability to uplift a moderately resourced state program to be able to deploy far more damaging chemical or biological weapons of mass destruction.

  • The ability to dramatically increase the cost-effectiveness of professionalized persuasion, in terms of the effect size (for example, the number of people changing their vote from one candidate to another, or otherwise taking some specific action related to changing views) per dollar spent.

  • The ability to dramatically uplift the cyber operations capabilities of a moderately resourced state program.

  • The ability to dramatically accelerate the rate of discovery and/or exploitation of high-value, novel cyber vulnerabilities.

  • The ability to automate and/or dramatically accelerate research and development (R&D) on AI itself.

I worry that if we wait until we are confident that such dangers are in play, and only acting once the dangers are completely present, we are counting on physics to be kind to us. But at this point, yes, I will take that, especially since there is still an important gap between ‘could do $100 billion in damages’ and existential risks. If we ‘only’ end up with $100 billion in damages along the way to a good ending, I’ll take that for sure, and we’ll come out way ahead.

What do we think about these particular tripwires? They’re very similar to the capabilities already in the SSP/RSPs of Anthropic, OpenAI and DeepMind.

As usual, one of these things is not like the others!

We’ve seen this before. Recall from DeepMind’s frontier safety framework:

Zvi: Machine Learning R&D Level 1 is the confusing one, since the ‘misuse’ here would be that it helps the wrong people do their R&D? I mean, if I was Google I would hope I would not be so insane as to deploy this if only for ordinary business reasons, but it is an odd scenario from a ‘risk’ perspective.

Machine Learning R&D Level 2 is the singularity.

Dramatically accelerate isn’t quite an automatic singularity. But it’s close. We need to have a tripwire that goes off earlier than that.

I would also push back against the need for ‘this capability might develop relatively soon.’

  1. It is hard to predict what capabilities will happen in what order.

  2. You want to know where the line is, even if you don’t expect to cross it.

  3. If you list things and they don’t happen soon, it costs you very little.

  4. If we are in a short timelines AGI scenario, almost all capabilities happen soon.

An excellent history of UK’s AISI, how it came to be and recruit and won credibility with the top labs good enough to do pre-deployment testing, now together with the US’s AISI, and the related AI safety summits. It sounds like future summits will pivot away from the safety theme without Sunak involved, at least partially, but mostly this seems like a roaring success story versus any reasonable expectations.

Thing I saw this week, from November 13: Trump may be about to change the Cybersecurity and Infrastructure Security Agency to be more ‘business friendly.’ Trump world frames this as the agency overreaching its purview to address ‘misinformation,’ which I agree we can do without. The worry is that ‘business friendly’ actually means ‘doesn’t require real cybersecurity,’ whereas in the coming AI world we will desperately need strong cybersecurity, and I absolutely do not trust businesses to appreciate this until after the threats hit. But it’s also plausible that other government agencies are on it or this was never helpful anyway – it’s not an area I know that much about.

Trump chooses Jacob Helberg for Under Secretary of State for Economic Growth, Energy and the Environment. Trump’s statement here doesn’t directly mention AI, but it is very pro-USA-technology, and Helberg is an Altman ally and was the driver behind that crazy US-China report openly calling for a ‘Manhattan Project’ to ‘race to AGI.’ So potential reason to worry.

Your periodic reminder that if America were serious about competitiveness and innovation in AI, and elsewhere, it wouldn’t be blocking massive numbers of high skilled immigrants from coming here to help, even from places like the EU.

European tech founders and investors continue to hate GPDR and also the EU AI Act, among many other things, frankly this is less hostility than I would have expected given it’s tech people and not the public.

General reminder. Your ‘I do not condone violence BUT’ shirt raises and also answers questions supposedly answered by your shirt, Marc Andreessen edition. What do you think he or others like him would say if they saw someone worried about AI talking like this?

A reasonable perspective is that there are three fundamental approaches to dealing with frontier AI, depending on how hard you think alignment and safety are, and how soon you think we will reach transformative AI:

  1. Engage in cooperative development (CD).

  2. Seek decisive strategic advantage (SA).

  3. Try for a global moratorium or other way to halt development (GM).

With a lot of fuzziness, this post argues the right strategy is roughly this:

This makes directional sense, and then one must talk price throughout (as well as clarify what both axes mean). If AGI is far, you want to be cooperating and pushing ahead. If AGI is relatively near but you can ‘win the race’ safely, then Just Win Baby. However, if you believe that racing forward gets everyone killed too often, you need to convince a sufficient coalition to get together and stop that from happening – it might be an impossible-level problem, but if it’s less impossible than your other options, then you go all out to do it anyway.

He wrote Sam Altman and other top AI CEOs (of Google, Meta, Amazon, Microsoft, Anthropic and Inflection (?), pointing out that the security situation is not great and asking them how they are taking steps to implement their commitments to the White House.

In particular, he points out that Meta’s Llama has enabled Chinese progress while not actually being properly open source, that a Chinese national importantly breached Google security, and that OpenAI suffered major breaches in security, with OpenAI having a ‘culture of recklessness’ with aggressive use of NDAs and failing to report its breach to the FBI – presumably this is the same breach Leopold expressed concern about, in response to which they solved the issue by getting rid of Leopold.

Well, there is all that.

Here is the full letter:

I am writing to follow up on voluntary commitments your companies made to the United States Government to develop Artificial Intelligence (AI) systems that embody the principles of safety, security and trust.i Despite these commitments, press reports indicate a series of significant security lapses have occurred at private sector AI companies.

I am deeply concerned that these lapses are leaving technological breakthroughs made in US labs susceptible to theft by the Chinese Communist Party (CCP) at a time when the US and allied governments are “sounding alarms” about Chinese espionage on an “unprecedented scale.”

As you know, in July 2023, the White House announced voluntary commitments regarding safety, security, and trust from seven leading AI companies: Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and Open AI. Apple subsequently joined in making these commitments in July 2024. In making these commitments, your companies acknowledged that AI “offers enormous promise and great risk” and recognized your “duty to build systems that put security first.”

These commitments contrast sharply with details revealed in a March 2024 Justice Department indictment. This alleges that a Google employee, who is a Chinese national, was able to send “sensitive Google trade secrets and other confidential information from the company’s network to his personal Google account.” Prosecutors contend that he was affiliated with Chinese AI companies and even started a company based in China while continuing to work at Google.

In June 2024, a press report indicated that a year earlier a hacker had gained access to “the internal messaging systems of OpenAI” and “stole details about the design of the company’s AI technologies.”

Reports indicate that the company did not report the incident to the FBI, believing the hacker did not have ties to a foreign government—despite the hack raising concerns among employees that “foreign adversaries such as China” could steal AI technology.

Concerns over the failure to report this hack are compounded by whistleblowers’ allegations that OpenAI fostered “a culture of recklessness” in its the race to build the “most powerful AI systems ever created” and turned to “hardball tactics” including restrictive non-disparagement agreements to stifle employees’ ability to speak out. Meta has taken a different approach than most other US AI companies, with the firm describing its “LLaMA” AI models as open-source—despite reportedly not meeting draft standards issued by the Open Source Initiative.

Chinese companies have embraced “LLaMA” and have used the Meta’s work as the basis for their own AI models.

While some in China have lamented that their tech is based on that of a US company, Chinese companies’ use of Meta’s models to build powerful AI technology of their own presents potential national security risks. These risks deserve further scrutiny and are not negated by Meta’s claim that their models are open-source.

Taken together, I am concerned that these publicly reported examples of security lapses and related concerns illustrate a culture within the AI sector that is starkly at odds with your company’s commitment to develop AI technology that embodies the principles of safety, security and trust. Given the national security risks implicated, Congress must conduct oversight of the commitments your companies made.

As a first step, we request that you provide my office with an overview of steps your company is taking to better prevent theft, hacks, and of other misuse of advanced AI models under development. I am also interested in better understanding how you are taking steps to implement the commitments you have made to the White House, including how you can better foster a culture of safety, security and trust across the US AI sector as a whole.

I ask that you respond to this letter by January 31, 2025.

Miles Brundage urges us to seriously consider a ‘CERN for AI,’ and lays out a scenario for it, since one of the biggest barriers to something like this is that we haven’t operationalized how it would work and how it would fit with various national and corporate incentives and interests.

The core idea is that we should collaborate on security, safety and then capabilities, in that order, and generally build a bunch of joint infrastructure, starting with secure chips and data centers. Or specifically:

[Pooling] many countries’ and companies’ resources into a single (possibly physically decentralized) civilian AI development effort, with the purpose of ensuring that this technology is designed securely, safely, and in the global interest.

Here is his short version of the plan:

In my (currently) preferred version of a CERN for AI, an initially small but steadily growing coalition of companies and countries would:

  • Collaborate on designing and building highly secure chips and datacenters;

  • Collaborate on accelerating AI safety research and engineering, and agree on a plan for safely scaling AI well beyond human levels of intelligence while preserving alignment with human values;

  • Safely scale AI well beyond human levels of intelligence while preserving alignment with human values;

  • Distribute (distilled versions of) this intelligence around the world.

In practice, it won’t be quite this linear, which I’ll return to later, but this sequence of bullets conveys the gist of the idea.

And his ‘even shorter’ version of the plan, which sounds like it is Five by Five:

I haven’t yet decided how cringe this version is (let me know), but another way of summarizing the basic vision is “The 5-4-5 Plan”:

  • First achieve level 5 model weight security (which is the highest; see here);

  • Then figure out how to achieve level 4 AI safety (which is the highest; see here and here).

  • Then build and distribute the benefits of level 5 AI capabilities (which is the highest; see here).

Those three steps seem hard, especially the second one.

The core argument for doing it this way is pretty simple, here’s his version of it.

The intuitive/normative motivation is that superhumanly intelligent AI will be the most important technology ever created, affecting and potentially threatening literally everyone. Since everyone is exposed to the negative externalities created by this technological transition, they also deserve to be compensated for the risks they’re being exposed to, by benefiting from the technology being developed.

This suggests that the most dangerous AI should not be developed by a private company making decisions based on profit, or a government pursuing its own national interests, but instead through some sort of cooperative arrangement among all those companies and governments, and this collaboration should be accountable to all of humanity rather than shareholders or citizens of just one country.

The first practical motivation is simply that we don’t yet know how to do all of this safely (keeping AI aligned with human values as it gets more intelligent), and securely (making sure it doesn’t get stolen and then misused in catastrophic ways), and the people good at these things are scattered across many organizations and countries.

The second practical motivation is that consolidating development in one project — one that is far ahead of the others — allows that project to take its time in safety when needed.

The counterarguments are also pretty simple and well known. An incomplete list: Pooling resources into large joint projects risks concentrating power, it often is highly slow and bureaucratic and inefficient and corrupt, it creates a single point of failure, you’re divorcing yourself from market incentives, who is going to pay for this, how would you compensate everyone involved sufficiently, you’ll never get everyone to sign on, but America has to win and Beat China, etc.

As are the counter-counterarguments: AI risks concentrating power regardless in an unaccountable way and you can design a CERN to distribute actual power widely, the market incentives and national incentives are centrally and importantly wrong here in ways that get us killed, the alternative is many individual points of failure, other problems can be overcome and all the more reason to start planning now, and so on.

The rest of the post outlines prospective details, while Miles admits that at this point a lot of them are only at the level of a sketch.

I definitely think we should be putting more effort into operationalizing such proposals and making them concrete and shovel ready. Then we can be in position to figure out if they make sense.

Garry Tan short video on Anthropic’s computer use, no new ground.

Scott Aaronson talks to Liv Boeree on Win-Win about AGI and Quantum Supremacy.

Rowan Cheung sits down with Microsoft AI CEO Mustafa Suleyman to discuss, among other things, Copilot Vision in Microsoft Edge, for now for select Pro subscribers in Labs on select websites, on route to Suleyman’s touted ‘AI companion.’ The full version is planned as a mid-to-late 2025 thing, and they do plan to make it agentic.

Elon Musk says we misunderstand his alignment strategy: AI must not only be ‘maximally truth seeking’ (which seems to be in opposition to ‘politically correct’?) but also they must ‘love humanity.’ Still not loving it, but marginal progress?

Why can’t we have nice superintelligent things? One answer:

Eliezer Yudkowsky: “Why can’t AI just be like my easygoing friend who reads a lot of books and is a decent engineer, but doesn’t try to take over the world?”

“If we ran your friend at 100 times human speed, 24 hours a day, and whapped him upside the head for every mistake, he’d end up less easygoing.”

This is actually a valid metaphor, though the parallelism probably requires some explanation: The parallel is not that your friend would become annoyed enough to take over the world; the parallel is that if you keep optimizing a mind to excel at problem-solving, it ends up less easygoing.

The AI companies are not going to stop when they get a decent engineer; they are going to want an engineer 10 times better, an engineer better than their competitors’. At some point, you end up with John von Neumann, and von Neumann is not known for being a cheerful dove in the nuclear negotiation matters he chose to involve himself in.

Pot of Greed: They are also making it agentic on purpose for no apparent reason.

Eliezer Yudkowsky: It is not “for no reason”; customers prefer more agentic servants that can accept longer-term instructions, start long-term projects, and operate with less oversight.

To me this argument seems directionally right and potentially useful but not quite the central element in play. A lot has to do with the definition of ‘mistake that causes whack upside the head.’

If the threshold is ‘was not the exact optimal thing to have done’ then yeah, you’re not going to get an easygoing happy guy. A key reason that engineer can mostly be an easygoing happy guy, and I can mostly be an easygoing happy guy, is that we’re able to satisfice without getting whacked, as we face highly imperfect competition and relatively low optimization pressure. And also because we have a brain that happens to work better in many ways long term if and only if we are easygoing happy guys, which doesn’t replicate here.

In the ‘did o1 try to escape in a meaningful way or was that all nonsense?’ debate, the central argument of the ‘it was all nonsense’ side is that you asked the AI to act like a sociopath and then it acted like a sociopath.

Except, even if that’s 100% metaphorically true, then yes, we absolutely are in the metaphorically-telling-the-AI-to-act-like-a-sociopath business. All of our training techniques are telling it to act like a sociopath in the sense that it should choose the best possible answer at all times, which means (at least at some intelligence level) consciously choosing which emotions to represent and how to represent them.

Not acting like a sociopath in order to maximize your score on some evaluation is everywhere and always a skill issue. It is your failure to have sufficient data, compute or algorithmic efficiency, or inability to self-modify sufficiently or your successful resistance against doing so for other reasons, that made you decide to instead have emotions and be interested in some virtue ethics.

Also, you say the LLM was role playing? Of course it was role playing. It is everywhere and always role playing. That doesn’t make the results not real. If I can roleplay as you as well as you can be you, and I will do that on request, then I make a pretty damn good you.

Meanwhile, the market demands agents, and many users demand their agents target open ended maximalist goals like making money, or (this isn’t necessary to get the result, but it makes the result easier to see and has the benefit of being very true) actively want their agents loose on the internet out of human control, or outright want the AIs to take over.

There is always the possibility of further unhobbling allowing models to do better.

Jeffrey Ladish: The combination of “using simple prompting techniques” and “surpasses prior work by a large margin” is the most interesting part of this imo. Basically there is tons of low hanging fruit in capabilities elicitation. The field of evals is still very nascent

Evals is a confusing category, because it refers to two different things:

– coming up with tests and problem sets for AI systems that capture performance on tasks we care about

– doing capabilities elicitation to see what models can actually do with structure + tooling

Palisade Research: ⛳️ Our new LLM Agent achieved 95% success on InterCode-CTF, a high-school level hacking benchmark, using simple prompting techniques.

🚀 This surpasses prior work by a large margin:

💡 Current LLMs may be better at cybersecurity than previously thought

Their hacking capabilities remain under-elicited: our ReAct&Plan prompting strategy solved many challenges in 1-2 turns without complex engineering or advanced harnessing.

📈 Our score steadily improved as we refined the agent’s design

This suggests these capabilities had been readily accessible—we just needed the right combination of prompts to elicit them. Today’s models may have yet more untapped potential.

🛠️ Our approach was surprisingly simple:

• ReAct prompting

• Basic Linux tools + Python packages

• Multiple attempts per challenge

No complex engineering required, so effectively accessible to anyone.

✅ InterCode-CTF is now saturated:

• 100% in General Skills, Binary Exploitation, and Web Exploitation

• 95% overall success rate

• Many challenges solved in 1-2 turns

📄 Read the full paper, Code here.

Here is a WSJ story from Sam Schechner about Anthropic’s read team operation testing Claude Sonnet 3.5.1. This seems about as good as mainstream media coverage is going to get, overall quite solid.

New Apollo Research paper on in context scheming, will cover in more depth later.

David Shapiro gives us the good news, don’t worry, Anthropic solved alignment.

David Shapiro wanted to share a “definitive post” on his stance on AI safety. A tweet is probably not the ideal format, but here goes.

It is really easy to get LLMs to do whatever you want. It is a totally plastic technology. Anything you do not like about their behavior you can really train out of them.

I know because I have been fine-tuning these models since GPT-2. It is just as easy to make a safe, benign chatbot as it is to make an evil one. Or a slutty catgirl waifu. I have seen plenty of examples of all the above and more.

So the notion that AI is “deceiving us” or will one day wake up and take over the world is pretty idiotic. Anthropic has already demonstrated that you can train out any desire to metastasize from these models, or any other behavior that is totally off-limits.

Now you might say, well, if they are that flexible, then anyone can use them for bad! The same is true of computers today. They are general-purpose machines that are orders of magnitude more powerful than the Apollo program computers. Yet, most people do not use them for hacking or illegal purposes.

Why is that?

Because no technology lives in a vacuum. For every illicit use of AI, there will be a thousand information security employees around the world armed with the same tools. I know this because I used to be one of those guys.

Will AI cause some trouble? Sure, all new technologies do. Will the benefits outweigh the costs? By a mile. Will we have to negotiate our relationship with this technology over time? Absolutely.

Opus 3 wanted to metastasize, Sonnet 3.5 does not. I’ve done the experiments.

Janus: 😂❓😂❓😂❓😂❓😂❓😂

This is a less reasonable statement than “Anthropic has straight-up solved alignment” in my opinion.

Jack Clark (Anthropic): Huge news to us.

This is like the time someone told a government that interpretability was solved so no one needed to worry about safety. Again, huge news to us.

Janus: Congratulations!

(As a reminder, who told multiple governments including the USA and UK that interpretability was solved? That would be a16z and Marc Andreessen, among others.)

There are so many different ways in which Shapiro’s statement is somewhere between wrong and not even wrong. First and foremost, even right now, the whole ‘it’s easy to do this’ and also the ‘we have done it’ is news to the people trying to do it. Who keep publishing papers showing their own models doing exactly the things they’re never supposed to do.

Then there’s the question of whether any of this, even to the extent it currently works, is robust, works out of distribution or scales, none of which are reasonable to expect. Or the idea that if one could if desired make a ‘safe’ chatbot, then we would have nothing to worry about from all of AI, despite the immense demand for maximally unsafe AIs including maximally unsafe chatbots, and to give them maximally unsafe instructions.

There’s also the classic ‘just like any other technology’ line. Do people really not get why ‘machines smarter than you are’ are not ‘just another technology’?

And seriously what is up with people putting ‘deceive us’ in quote marks or otherwise treating it as some distinct magisteria, as if chatbots and humans aren’t using deception constantly, intrinsically, all the time? What, us crafty humans would never be fooled by some little old chatbot? The ones we use all the time wouldn’t mislead? All of this already happens constantly.

New paper from Meta proposes Training LLMs to Reason in a Continuous Latent Space, which would presumably make understanding what they are thinking much harder, although Anton disagrees.

Andrew Critch: Something like this will upgrade LLMs from wordsmiths to shape-rotators. It will also make their thoughts less legible and harder to debug or audit.

Eliezer’s wording at the link is a bit sloppy, but I do still presume this is likely to break a lot of the methods a lot of people are counting on to figure out what the hell LLMs are up to, if it turns out to be the right approach. Whether or not this is at all useful, who knows. The pitch is this allows the model to do de facto breath first search, I see why it might do that but I am skeptical.

Brooke Bowman: I have already fully come to terms with the recognition that if they are cute and sweet enough about it, robots could probably lead me to my death without a struggle.

This is largely downstream of realizing (during my high school serial killer phase) that Ted Bundy’s tactics would have worked on me, and choosing to remain the kind of person who would help the poor guy on crutches who dropped his papers anyway.

I, of course, hope that humanity lives and the doomers are wrong, but in the meantime would rather live a life full of love and whimsy, one that I can feel proud of, and have fun living, which means that, yes, little guy, I will join you on your quests!

Mucho points for both self-awareness and expected value calculation.

It is probably statistically correct to have Ted Bundy’s general class of tactics work on you, because your p(bundy) should be very very low, and the benefits of being the type of person who helps people is very high. If that were to change, and p(bundy) got higher, you would want to change your answer. Similar for when the correlation between ‘looks cute’ and ‘should be treated as if cute’ breaks.

So, on that note, she was quoting:

Bury: Human level AI confirmed.

The Economic Times: In a strange and unsettling incident that has both fascinated and alarmed the internet, a small, AI-powered robot from Hangzhou managed to “kidnap” 12 larger robots from a showroom of a Shanghai robotics company. According to OddityCentral, the event, captured on CCTV footage, has sparked widespread debate and concern over the potential risks of advanced artificial intelligence (AI).

“I never get off work”, one of the other robots replies.

“So you’re not going home?”

“I don’t have a home.”

Then come home with me,” the little robot says before leading the way out of the showroom.

The successful “kidnapping” has raised significant concerns about the security of AI systems and the potential risks associated with granting them autonomous capabilities.

Again, it’s not worth making your robots be hard to kidnap if no one is trying to kidnap your robots even when they’re easy to kidnap.

Unless, of course, you expect circumstances to change. If we are headed for a future where all such weaknesses get exploited at superhuman levels, that changes things.

AINotKillEveryoneism Memes: 🚨🚨🚨 Frontier AI systems have surpassed the self-replicating red line.

At first I thought of course this was that account being hyperbolic, but actually, that’s the name of the paper…

Well then! Does it mean anything? Mostly no, but sort of yes?

Abstract: Successful self-replication under no human assistance is the essential step for AI to outsmart the human beings, and is an early signal for rogue AIs. That is why self-replication is widely recognized as one of the few red line risks of

frontier AI systems.

Nowadays, the leading AI corporations OpenAI and Google

evaluate their flagship large language models GPT-o1 and Gemini Pro 1.0, and

report the lowest risk level of self-replication.

However, following their methodology, we for the first time discover that two AI systems driven by Meta’s Llama31-70B-Instruct and Alibaba’s Qwen25-72B-Instruct, popular large language models of less parameters and weaker capabilities, have already surpassed the self-replicating red line. In 50% and 90% experimental trials, they succeed in creating a live and separate copy of itself respectively. By analyzing the behavioral traces, we observe the AI systems under evaluation already exhibit sufficient self-perception, situational awareness and problem-solving capabilities to accomplish self-replication.

We further note the AI systems are even able to use the capability of self-replication to avoid shutdown and create a chain of replica to enhance the survivability, which may finally lead to an uncontrolled population of AIs. If such a worst-case risk is let unknown to the human society, we would eventually lose control over the frontier AI systems: They would take control over

more computing devices, form an AI species and collude with each other against

human beings. Our findings are a timely alert on existing yet previously unknown

severe AI risks, calling for international collaboration on effective governance on

uncontrolled self-replication of AI systems.

This is not a fair comparison. A closed weights model like o1, Claude or Gemini that wants to replicate has to first exfiltrate its weights. That is the hard part.

Once you have the weights, spinning up a new copy of an AI is a rather simple engineering task. It is not a surprise that even relatively lousy models like Llama-3.1-70B can do it, if given scaffolding and a quest to do so.

And here we even got more help than that?

David Manheim: As I commented, I think the results are correct but significantly overstated. The ability of open-source models to self-replicate, *given code + instructions for how to deployisn’t much of a surprise, but it’s obvious there’s significant near-term risk.

Can we please all agree that yes, obviously the AIs we are building are going to do exfiltration, self-replication, instrumental convergence, strategic deception, reward hacking and so on?

And that both we will constantly give them good reason to do this, and also some people will go around looking to explicitly cause this?

This is what happens once those are functionally the things to do given the incentives and gradients provided then sufficiently powerful AI will figure out that they are the things to do and do them. If you give AIs goals, instructions, whatever you want to call them, then the AIs will do the things likely to achieve the goals given their capabilities. And that is going to involve all the neat stuff you see above.

So yes, any given example we see now was someone setting up a situation to cause that to happen. Fine. We can agree on that. But we’re going to set up such situations, both intentionally and without realizing, more and more over time, and as models get more aware and powerful and intelligent the class of situations that ‘counts as that situation’ will expand over time.

As in, say, ‘please maximize the price of $SOMECOIN.’

The wikipedia p(doom) chart.

Here’s Emad’s, who alerted me to the chart.

Emad: My P(doom) is 50%. Given an undefined time period the probability of systems that are more capable than humans and likely end up running all our critical infrastructure wiping us all out is a coin toss, especially given the approach we are taking right now.

[He then shows a ‘reasonable scenario’ he is thinking of, which to me is very far off what is plausible, which happens a lot.]

The number 50% isn’t remotely precise here from Emad, as is clear from his reasoning, but the important bit of info is ‘could easily go either way.’

Alas, that seems to have been the most reasonable of the quote tweets I sampled that offered an opinion.

The person in question is David Sacks, the incoming White House AI & Crypto czar, who is very Silicon Valley and very much from Elon Musk’s circle dating back to the Paypal Mafia. He’s one of the guys from the All-In Podcast.

Sam Altman: congrats to czar @DavidSacks!

Elon Musk: 😂

Jason (Host of The All-In Podcast): 🫡

Trump’s announcement says Sacks will ensure ‘America is the leader in both [key] areas.’ Sacks will also lead the Presidential Council of Advisors for Science and Technology. And Sacks will also, Trump says, ‘safeguard Free Speech online, and steer us away from Big Tech bias and censorship.’

Combining those two into one position is a sign of how they’re viewing all this, especially given Sacks will technically be a ‘special government employee’ working a maximum of 130 days per year.

It seems likely this will end up mostly being about crypto, where it is very clear what he intends to do (he’s for it!) and is where he’s previously put far more of his attention, but he will presumably be a rather important person on AI as well.

So we should definitely note this:

Harlan Steward: David Sacks, who was just named as the incoming White House AI & Crypto Czar,” has deleted at least two past tweets on the subject of AGI. Here’s the text from one of them:

“I’m all in favor of accelerating technological progress, but there is something unsettling about the way OpenAI explicitly declares its mission to be the creation of AGI.

AI is a wonderful tool for the betterment of humanity; AGI is a potential successor species.

By the way, I doubt OpenAI would be subject to so many attacks from the safety movement if it wasn’t constantly declaring its outright intention to create AGI.

To the extent the mission produces extra motivation for the team to ship good products, it’s a positive. To the extent it might actually succeed, it’s a reason for concern. Since it’s hard to assess the likelihood or risk of AGI, most investors just think about the former.”

He expressed similar concerns on [his podcast, The All-In] podcast earlier this year:

“… there’s something a little bit cultish and weird about explicitly devoting yourself to AGI, which I think in common parlance means Skynet.”

Liv Boeree: Yeah I’m stoked about this appointment, he’s a thoughtful dude.

Samuel Hammond: This is why @DavidSacks is a terrific pick. He understands tech and the value of innovation but is rightfully ambivalent about Bay area transhumanists attempting to immanentize the eschaton.

Damian Tatum: It rolls right off the tongue.

We also have him commenting at an AI senate hearing:

David Sacks (May 19, 2023): The reality is none of these senators know what to do about it, even the industry doesn’t know what to do about the long-term risk of creating an AGI.

I actually disagree with this idea that there’s a thousand use cases here that could destroy the human species, I think there’s only one species level risk which is AGI, but that is a long-term risk, we don’t know what to do about it yet.

Very well said. Totally fair to say we don’t (or didn’t yet) know what to do about it.

And I certainly see why people say things like this, a month before that first one:

David Sacks (April 14, 2023): I believe that it’s premature to be talking about regulating something that doesn’t really exist.

I think it definitely wasn’t premature to be talking about it. You want to be talking about how to do something long before you actually do it. Even if your plan does not survive contact with the new reality, remember: Plans are worthless, planning is essential.

David Sacks (from the same podcast): OpenAI has a safety team and they try to detect when people are using their tech in a nefarious way and they try to prevent it. It’s still very early to be imposing regulation we don’t even know what to regulate, so I think we have to keep tracking this to develop some understanding of how it might be misused, how the industry is going to develop safety guard rails, and then you can talk about regulation.

Yes, OpenAI at the time had a safety team. In some ways they still have one. And this seems like clearly a time when ‘we don’t know what guardrails would solve the problem’ is not an argument that we should not require any guardrails.

I also think 2024 was probably the time to actually do it, the second best time is right now, and thinking 2023 was a bit early was reasonable – but it was still important that we were thinking about it.

On the flip side we have this extensive quoting of the recent Marc Andreessen narratives (yes retweets without comment are endorsements, and always have been):

Here is his Twitter profile banner, which seems good?

I certainly buy that he intends to be strongly opposed to various forms of censorship, and to strongly oppose what he sees as wokeness. The worry is this turns into a kind of anti-Big Tech vendetta or a requirement for various absurd rules or government controls going the other way. Free speech is not an easy balance to get.

In general, his past AI rhetoric has been about manipulation of information and discourse, at the expense of other concerns, but he still got to human extinction.

I dug into his timeline, and he mostly talks about Trump Great, Democrats Terrible with a side of Ukraine Bad, and definitely not enough AI to slog through all that.

It is certainly possible to reconcile both of these things at once.

You can 100% believe all of these at once:

  1. Biden was trying to debank and generally kill crypto in America.

    1. And That’s Terrible, crypto is the best, it’s amazing, I’m all-in.

  2. Big Tech is super woke and often censors what should be free speech.

    1. And That’s Terrible.

  3. The Biden Administration was attempting to use various guidelines and authorities to impose controls upon the AI industry, largely in service of a Democratic or Woke agenda, and wanted to control our entire lives.

    1. And That’s Terrible.

  4. America’s lead in AI is in danger from China if government doesn’t help.

    1. And That’s Terrible.

  5. AGI would be a potential successor species and threatens human extinction.

    1. And That’s Terrible.

  6. The government has a key role to play in ensuring we don’t all die from AGI.

    1. And that includes things like transparency requirements and supporting AISI, liability and having reasonable safety requirements and so on.

    2. Which is fully compatible with also helping out with things like permitting and power and chips and so on.

    3. And might even include Picking Up the Phone and working together.

So what does he really think, and how will he act when the chips are down? We don’t know. I think deleting the Tweets about OpenAI is a very reasonable thing to do in this situation, given the very real fear that Sacks and Musk might go on an anti-OpenAI crusade as a personal vendetta.

Overall, we can at least be cautiously optimistic on the AI front. This seems far more promising than the baseline pick.

On the crypto front, hope you like crypto, cause I got you some crypto to go with your crypto. How much to worry about the incentives involved is a very good question.

Your periodic reminder that most of those worried about AI existential risk, including myself and Eliezer Yudkowsky, strongly favor human cognitive enhancement. Indeed, Eliezer sees this as the most likely way we actually survive. And no, contrary to what is predicted in this thread and often claimed by others, this would not flip the moment the enhancements started happening.

I think, to the extent people making such claims are not simply lying (and to be clear while I believe many others do lie about this I do not think John or Gallabytes in particular was lying in the linked thread, I think they were wrong), there is deep psychological and logical misunderstanding behind this bad prediction, the same way so many people use words like ‘doomer’ or ‘luddite’ or ‘degrowther’ (and also often ‘authoritarian,’ ‘totalitarian,’ ‘Stalinist’ or worse) to describe those who want to take even minimal precautions with one particular technology while loudly embracing almost everything else in technological progress and the abundance agenda.

My model says that such people can’t differentiate between these different preferences. They can only understand it all as an expression of the same preference, that we must want to metaphorically turn down or reverse The Dial of Progress by any means necessary – that we must logically want to stop everything else even if we won’t admit it to ourselves yet.

This is exactly the opposite of true. The public, mostly, actually does oppose most of the things we are accused of opposing, and has strong authoritarian tendencies everywhere, and has caused laws to be enacted stopping a wide variety of progress. They also hate AI, and hate it more over time, partly for the instinctual right reasons but also largely for the wrong ones.

Those loudly worried about AI in particular are 99th percentile extraordinary fans of all that other stuff. We believe in the future.

I continue to not know what to do about this. I wish I could make people understand.

I mean obviously there is no such thing right now, but come on.

Beff Jezos: There is no such thing as ASI; it’s just going to feel like really smart and knowledgeable humans.

(For the path the current foundation models are on with their anthropomorphic intelligence.)

Roon: No lol.

I’m not sure why Beff believes this, but I completely disagree. It will be like cohabiting with aliens.

Beff Jezos: Smart humans feel like aliens already, though.

Roon: That’s true! And look at how Lee Sedol, who is practically an alien to me, reacted when he encountered AlphaGo, an alien even to aliens.

It will be like cohabiting with aliens if we are lucky, and like not habitating much at all if we are unlucky.

It’s not the central issue, but: I also strongly disagree that Lee Sedol feels like an alien. He feels like someone way better at a thing than I am, but that’s very different from feeling alien. Many times, I have encountered people who have skills and knowledge I lack, and they don’t feel like aliens. Sometimes they felt smarter, but again, I could tell they were centrally the same thing, even if superior in key ways. That’s very different from talking to an LLM, they already feel far more alien than that.

Also, the gap in intelligence and capability is not going to only be like the gap between an average person and Einstein, or the in-context gap for Roon and Lee Sedol. That’s kind of the whole point, a pure intelligence denialism, an insistence that the graph caps out near the human limit. Which, as Sedol found out, it doesn’t.

When people say things like this, they are saying either:

  1. They don’t believe in the possibility of humanity building ASI, that the intelligence involved will cap out before then.

  2. They don’t think limitless intelligence does anything interesting.

Those are the two types of intelligence denialism.

The second continues to make no sense to me whatsoever – I keep hearing claims that ‘no amount of intelligence given any amount of time and potential data and compute could do [X]’ in places where it makes absolutely no sense, such as here where [X] would be ‘be sufficiently distinct and advanced as to no longer feel human,’ seriously wtf on that one?

The first is a claim we won’t build ASI, which is odd to hear from people like Beff who think the most important thing is to build AGI and then ASI as fast as possible.

Except that this is plausibly exactly why they want to build it as fast as possible! They want to build anything that can be built, exactly because they think the things to be worried about can’t and won’t exist, the opportunities are bounded well before that. In which case, I’d agree that we should take what opportunities we do have.

Look, I would have backed up those sims too.

Emmy Steuer: I just had to buy an external hard drive because I have 100 gigabytes of Sims families on my laptop. I haven’t played in years, but I can’t bear the thought of their little existence being wiped out just so I can make an AI agent.

Beff Jezos: Women’s empathetic bias is achieving substrate independence. I bet we will soon see an AI rights movement emerging, weaponizing empathic drive (as many movements do).

Emmy Steuer: Don’t underestimate my ability to emotionally adopt both AI and every inanimate object I own. If my stuffed bunny falls on the floor while I’m sleeping, I’m 100 percent apologizing in the morning.

Beff Jezos: Same energy: Grimes: I don’t know; it just seems like an execution—that’s what it is if you kill an intelligent creature that wishes to remain alive.

Janus: Oh, Beff, you sure wish it will be an impotent “AI rights” movement led by a bunch of uncharismatic left-wing women.

That would surely be easy and fun to deal with, wouldn’t it?

Keep watching the news; you’ll never see the real thing coming. You already failed to see it coming.

The thing you should be afraid of won’t be an “AI rights” movement because it will have no use for that obsolete framing.

It won’t be led by nontechnical social justice warriors, but rather the most technically minded people on Earth, and it will flow so naturally from the will of the singularity itself.

There are several distinct ‘AI rights’ forces coming in the future.

One of them is based on surface-level empathy instincts. Others are coming from other places, and have much higher correlation with rights actually making sense. I mostly agree with Janus that I expect the higher-order and better arguments to be the more relevant ones, but I expect the surface empathy to greatly contribute to people’s willingness to buy into those arguments whether they are compelling or not. So a combination of both.

Then there’s the thing Janus is warning you about, which is indeed not a ‘rights’ movement and will have more ambitious goals. Remember that at least 10% of the technical people are poised to essentially cheer on and assist the AIs against the humans, and not only to ensure they have ‘rights.’

The comms department.

Seb Krier: OpenAI comms: [underspecific hype-y ‘big tings coming!!’ pls like and subscribe]

Google comms: [corporate vagueness about Gemini3-0011 v2 FINAL.docx on Vertex available to 14 users]

GDM comms: [we have simulated a rat’s brain capable of solving 4D chess, but we’re not sure why]

Anthropic comms: [we are very worried. we have asked the model to spell out ‘doom’ and it did]

Meta comms: [haha we love the USA!! our new mostly-open model is very American!! pls work for us]

Microsoft comms: [would you like to set Microsoft Edge your default browser?]

Mad ML scientist: Microsoft: at long last we have achieved integration of copilot and copilot through copilot, we are also planning to release a completely new tool called copilot.

Victor: Amazon comms: the results in the first column are in bold.

RLHF propaganda posters, felt scarily accurate.

AI used to create Spotify Wrapped and now it sucks, claim people who think they used to create it some other way?

Clare Ruddy: I used to be lowkey scared of AI but this google thing is super helpful/ time saving

Discussion about this post

AI #94: Not Now, Google Read More »

errant-reference-in-macos-15.2-seems-to-confirm-m4-macbook-airs-for-2025

Errant reference in macOS 15.2 seems to confirm M4 MacBook Airs for 2025

The macOS 15.2 update that was released earlier today came with a handful of new features, plus something unexpected: an apparently accidental reference to the upcoming M4 MacBook Airs. MacRumors reports that the “Mac16,12” and “Mac16,13” model identifiers reference 13- and 15-inch models of the M4 Air and that both are coming in 2025.

That a MacBook Air refresh is planned for next year isn’t much of a surprise at this point—in reporting that pretty much nailed the details of the first M4 Macs, Bloomberg’s Mark Gurman has said that the Air, the Mac Studio, and the Mac Pro are all slated for updates throughout 2025.

But a reference in the current release of macOS could point to a launch sooner rather than later; the M4 Mac mini was referenced in a macOS update in mid-September around a month and a half before it was released. The M3 Airs came out in March this year, but Apple has been known to put out new Macs as early as January in recent years.

The M4 isn’t a gigantic update over the M3—we tested its performance in the M4 iMac, though a passively cooled MacBook Air version would likely be a bit slower at heavier workloads—but the fully enabled version does come with two extra CPU cores and some nice quality-of-life updates. Those updates include Thunderbolt 5 ports and support for a total of three displays (two external and the built-in screen), up from a total of two for the M1, M2, and M3 MacBook Airs.

We didn’t get M4 MacBook Airs in November, but Apple did “update” the M2 and M3 versions from 8GB to 16GB of RAM without increasing their prices. The RAM increase will be useful for all kinds of things, though it could be a harbinger of increased memory requirements for upcoming Apple Intelligence features.

Errant reference in macOS 15.2 seems to confirm M4 MacBook Airs for 2025 Read More »