Anthropic

“in-10-years,-all-bets-are-off”—anthropic-ceo-opposes-decadelong-freeze-on-state-ai-laws

“In 10 years, all bets are off”—Anthropic CEO opposes decadelong freeze on state AI laws

On Thursday, Anthropic CEO Dario Amodei argued against a proposed 10-year moratorium on state AI regulation in a New York Times opinion piece, calling the measure shortsighted and overbroad as Congress considers including it in President Trump’s tax policy bill. Anthropic makes Claude, an AI assistant similar to ChatGPT.

Amodei warned that AI is advancing too fast for such a long freeze, predicting these systems “could change the world, fundamentally, within two years; in 10 years, all bets are off.”

As we covered in May, the moratorium would prevent states from regulating AI for a decade. A bipartisan group of state attorneys general has opposed the measure, which would preempt AI laws and regulations recently passed in dozens of states.

In his op-ed piece, Amodei said the proposed moratorium aims to prevent inconsistent state laws that could burden companies or compromise America’s competitive position against China. “I am sympathetic to these concerns,” Amodei wrote. “But a 10-year moratorium is far too blunt an instrument. A.I. is advancing too head-spinningly fast.”

Instead of a blanket moratorium, Amodei proposed that the White House and Congress create a federal transparency standard requiring frontier AI developers to publicly disclose their testing policies and safety measures. Under this framework, companies working on the most capable AI models would need to publish on their websites how they test for various risks and what steps they take before release.

“Without a clear plan for a federal response, a moratorium would give us the worst of both worlds—no ability for states to act and no national policy as a backstop,” Amodei wrote.

Transparency as the middle ground

Amodei emphasized his claims for AI’s transformative potential throughout his op-ed, citing examples of pharmaceutical companies drafting clinical study reports in minutes instead of weeks and AI helping to diagnose medical conditions that might otherwise be missed. He wrote that AI “could accelerate economic growth to an extent not seen for a century, improving everyone’s quality of life,” a claim that some skeptics believe may be overhyped.

“In 10 years, all bets are off”—Anthropic CEO opposes decadelong freeze on state AI laws Read More »

hidden-ai-instructions-reveal-how-anthropic-controls-claude-4

Hidden AI instructions reveal how Anthropic controls Claude 4

Willison, who coined the term “prompt injection” in 2022, is always on the lookout for LLM vulnerabilities. In his post, he notes that reading system prompts reminds him of warning signs in the real world that hint at past problems. “A system prompt can often be interpreted as a detailed list of all of the things the model used to do before it was told not to do them,” he writes.

Fighting the flattery problem

An illustrated robot holds four red hearts with its four robotic arms.

Willison’s analysis comes as AI companies grapple with sycophantic behavior in their models. As we reported in April, ChatGPT users have complained about GPT-4o’s “relentlessly positive tone” and excessive flattery since OpenAI’s March update. Users described feeling “buttered up” by responses like “Good question! You’re very astute to ask that,” with software engineer Craig Weiss tweeting that “ChatGPT is suddenly the biggest suckup I’ve ever met.”

The issue stems from how companies collect user feedback during training—people tend to prefer responses that make them feel good, creating a feedback loop where models learn that enthusiasm leads to higher ratings from humans. As a response to the feedback, OpenAI later rolled back ChatGPT’s 4o model and altered the system prompt as well, something we reported on and Willison also analyzed at the time.

One of Willison’s most interesting findings about Claude 4 relates to how Anthropic has guided both Claude models to avoid sycophantic behavior. “Claude never starts its response by saying a question or idea or observation was good, great, fascinating, profound, excellent, or any other positive adjective,” Anthropic writes in the prompt. “It skips the flattery and responds directly.”

Other system prompt highlights

The Claude 4 system prompt also includes extensive instructions on when Claude should or shouldn’t use bullet points and lists, with multiple paragraphs dedicated to discouraging frequent list-making in casual conversation. “Claude should not use bullet points or numbered lists for reports, documents, explanations, or unless the user explicitly asks for a list or ranking,” the prompt states.

Hidden AI instructions reveal how Anthropic controls Claude 4 Read More »

claude’s-ai-research-mode-now-runs-for-up-to-45-minutes-before-delivering-reports

Claude’s AI research mode now runs for up to 45 minutes before delivering reports

Still, the report contained a direct quote statement from William Higinbotham that appears to combine quotes from two sources not cited in the source list. (One must always be careful with confabulated quotes in AI because even outside of this Research mode, Claude 3.7 Sonnet tends to invent plausible ones to fit a narrative.) We recently covered a study that showed AI search services confabulate sources frequently, and in this case, it appears that the sources Claude Research surfaced, while real, did not always match what is stated in the report.

There’s always room for interpretation and variation in detail, of course, but overall, Claude Research did a relatively good job crafting a report on this particular topic. Still, you’d want to dig more deeply into each source and confirm everything if you used it as the basis for serious research. You can read the full Claude-generated result as this text file, saved in markdown format. Sadly, the markdown version does not include the source URLS found in the Claude web interface.

Integrations feature

Anthropic also announced Thursday that it has broadened Claude’s data access capabilities. In addition to web search and Google Workspace integration, Claude can now search any connected application through the company’s new “Integrations” feature. The feature reminds us somewhat of OpenAI’s ChatGPT Plugins feature from March 2023 that aimed for similar connections, although the two features work differently under the hood.

These Integrations allow Claude to work with remote Model Context Protocol (MCP) servers across web and desktop applications. The MCP standard, which Anthropic introduced last November and we covered in April, connects AI applications to external tools and data sources.

At launch, Claude supports Integrations with 10 services, including Atlassian’s Jira and Confluence, Zapier, Cloudflare, Intercom, Asana, Square, Sentry, PayPal, Linear, and Plaid. The company plans to add more partners like Stripe and GitLab in the future.

Each integration aims to expand Claude’s functionality in specific ways. The Zapier integration, for instance, reportedly connects thousands of apps through pre-built automation sequences, allowing Claude to automatically pull sales data from HubSpot or prepare meeting briefs based on calendar entries. With Atlassian’s tools, Anthropic says that Claude can collaborate on product development, manage tasks, and create multiple Confluence pages and Jira work items simultaneously.

Anthropic has made its advanced Research and Integrations features available in beta for users on Max, Team, and Enterprise plans, with Pro plan access coming soon. The company has also expanded its web search feature (introduced in March) to all Claude users on paid plans globally.

Claude’s AI research mode now runs for up to 45 minutes before delivering reports Read More »

openai-releases-new-simulated-reasoning-models-with-full-tool-access

OpenAI releases new simulated reasoning models with full tool access


New o3 model appears “near-genius level,” according to one doctor, but it still makes mistakes.

On Wednesday, OpenAI announced the release of two new models—o3 and o4-mini—that combine simulated reasoning capabilities with access to functions like web browsing and coding. These models mark the first time OpenAI’s reasoning-focused models can use every ChatGPT tool simultaneously, including visual analysis and image generation.

OpenAI announced o3 in December, and until now, only less capable derivative models named “o3-mini” and “03-mini-high” have been available. However, the new models replace their predecessors—o1 and o3-mini.

OpenAI is rolling out access today for ChatGPT Plus, Pro, and Team users, with Enterprise and Edu customers gaining access next week. Free users can try o4-mini by selecting the “Think” option before submitting queries. OpenAI CEO Sam Altman tweeted that “we expect to release o3-pro to the pro tier in a few weeks.”

For developers, both models are available starting today through the Chat Completions API and Responses API, though some organizations will need verification for access.

“These are the smartest models we’ve released to date, representing a step change in ChatGPT’s capabilities for everyone from curious users to advanced researchers,” OpenAI claimed on its website. OpenAI says the models offer better cost efficiency than their predecessors, and each comes with a different intended use case: o3 targets complex analysis, while o4-mini, being a smaller version of its next-gen SR model “o4” (not yet released), optimizes for speed and cost-efficiency.

OpenAI says o3 and o4-mini are multimodal, featuring the ability to

OpenAI says o3 and o4-mini are multimodal, featuring the ability to “think with images.” Credit: OpenAI

What sets these new models apart from OpenAI’s other models (like GPT-4o and GPT-4.5) is their simulated reasoning capability, which uses a simulated step-by-step “thinking” process to solve problems. Additionally, the new models dynamically determine when and how to deploy aids to solve multistep problems. For example, when asked about future energy usage in California, the models can autonomously search for utility data, write Python code to build forecasts, generate visualizing graphs, and explain key factors behind predictions—all within a single query.

OpenAI touts the new models’ multimodal ability to incorporate images directly into their simulated reasoning process—not just analyzing visual inputs but actively “thinking with” them. This capability allows the models to interpret whiteboards, textbook diagrams, and hand-drawn sketches, even when images are blurry or of low quality.

That said, the new releases continue OpenAI’s tradition of selecting confusing product names that don’t tell users much about each model’s relative capabilities—for example, o3 is more powerful than o4-mini despite including a lower number. Then there’s potential confusion with the firm’s non-reasoning AI models. As Ars Technica contributor Timothy B. Lee noted today on X, “It’s an amazing branding decision to have a model called GPT-4o and another one called o4.”

Vibes and benchmarks

All that aside, we know what you’re thinking: What about the vibes? While we have not used 03 or o4-mini yet, frequent AI commentator and Wharton professor Ethan Mollick compared o3 favorably to Google’s Gemini 2.5 Pro on Bluesky. “After using them both, I think that Gemini 2.5 & o3 are in a similar sort of range (with the important caveat that more testing is needed for agentic capabilities),” he wrote. “Each has its own quirks & you will likely prefer one to another, but there is a gap between them & other models.”

During the livestream announcement for o3 and o4-mini today, OpenAI President Greg Brockman boldly claimed: “These are the first models where top scientists tell us they produce legitimately good and useful novel ideas.”

Early user feedback seems to support this assertion, although until more third-party testing takes place, it’s wise to be skeptical of the claims. On X, immunologist Dr. Derya Unutmaz said o3 appeared “at or near genius level” and wrote, “It’s generating complex incredibly insightful and based scientific hypotheses on demand! When I throw challenging clinical or medical questions at o3, its responses sound like they’re coming directly from a top subspecialist physicians.”

OpenAI benchmark results for o3 and o4-mini SR models.

OpenAI benchmark results for o3 and o4-mini SR models. Credit: OpenAI

So the vibes seem on target, but what about numerical benchmarks? Here’s an interesting one: OpenAI reports that o3 makes “20 percent fewer major errors” than o1 on difficult tasks, with particular strengths in programming, business consulting, and “creative ideation.”

The company also reported state-of-the-art performance on several metrics. On the American Invitational Mathematics Examination (AIME) 2025, o4-mini achieved 92.7 percent accuracy. For programming tasks, o3 reached 69.1 percent accuracy on SWE-Bench Verified, a popular programming benchmark. The models also reportedly showed strong results on visual reasoning benchmarks, with o3 scoring 82.9 percent on MMMU (massive multi-disciplinary multimodal understanding), a college-level visual problem-solving test.

OpenAI benchmark results for o3 and o4-mini SR models.

OpenAI benchmark results for o3 and o4-mini SR models. Credit: OpenAI

However, these benchmarks provided by OpenAI lack independent verification. One early evaluation of a pre-release o3 model by independent AI research lab Transluce found that the model exhibited recurring types of confabulations, such as claiming to run code locally or providing hardware specifications, and hypothesized this could be due to the model lacking access to its own reasoning processes from previous conversational turns. “It seems that despite being incredibly powerful at solving math and coding tasks, o3 is not by default truthful about its capabilities,” wrote Transluce in a tweet.

Also, some evaluations from OpenAI include footnotes about methodology that bear consideration. For a “Humanity’s Last Exam” benchmark result that measures expert-level knowledge across subjects (o3 scored 20.32 with no tools, but 24.90 with browsing and tools), OpenAI notes that browsing-enabled models could potentially find answers online. The company reports implementing domain blocks and monitoring to prevent what it calls “cheating” during evaluations.

Even though early results seem promising overall, experts or academics who might try to rely on SR models for rigorous research should take the time to exhaustively determine whether the AI model actually produced an accurate result instead of assuming it is correct. And if you’re operating the models outside your domain of knowledge, be careful accepting any results as accurate without independent verification.

Pricing

For ChatGPT subscribers, access to o3 and o4-mini is included with the subscription. On the API side (for developers who integrate the models into their apps), OpenAI has set o3’s pricing at $10 per million input tokens and $40 per million output tokens, with a discounted rate of $2.50 per million for cached inputs. This represents a significant reduction from o1’s pricing structure of $15/$60 per million input/output tokens—effectively a 33 percent price cut while delivering what OpenAI claims is improved performance.

The more economical o4-mini costs $1.10 per million input tokens and $4.40 per million output tokens, with cached inputs priced at $0.275 per million tokens. This maintains the same pricing structure as its predecessor o3-mini, suggesting OpenAI is delivering improved capabilities without raising costs for its smaller reasoning model.

Codex CLI

OpenAI also introduced an experimental terminal application called Codex CLI, described as “a lightweight coding agent you can run from your terminal.” The open source tool connects the models to users’ computers and local code. Alongside this release, the company announced a $1 million grant program offering API credits for projects using Codex CLI.

A screenshot of OpenAI's new Codex CLI tool in action, taken from GitHub.

A screenshot of OpenAI’s new Codex CLI tool in action, taken from GitHub. Credit: OpenAI

Codex CLI somewhat resembles Claude Code, an agent launched with Claude 3.7 Sonnet in February. Both are terminal-based coding assistants that operate directly from a console and can interact with local codebases. While Codex CLI connects OpenAI’s models to users’ computers and local code repositories, Claude Code was Anthropic’s first venture into agentic tools, allowing Claude to search through codebases, edit files, write and run tests, and execute command line operations.

Codex CLI is one more step toward OpenAI’s goal of making autonomous agents that can execute multistep complex tasks on behalf of users. Let’s hope all the vibe coding it produces isn’t used in high-stakes applications without detailed human oversight.

Photo of Benj Edwards

Benj Edwards is Ars Technica’s Senior AI Reporter and founder of the site’s dedicated AI beat in 2022. He’s also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.

OpenAI releases new simulated reasoning models with full tool access Read More »

researchers-concerned-to-find-ai-models-hiding-their-true-“reasoning”-processes

Researchers concerned to find AI models hiding their true “reasoning” processes

Remember when teachers demanded that you “show your work” in school? Some fancy new AI models promise to do exactly that, but new research suggests that they sometimes hide their actual methods while fabricating elaborate explanations instead.

New research from Anthropic—creator of the ChatGPT-like Claude AI assistant—examines simulated reasoning (SR) models like DeepSeek’s R1, and its own Claude series. In a research paper posted last week, Anthropic’s Alignment Science team demonstrated that these SR models frequently fail to disclose when they’ve used external help or taken shortcuts, despite features designed to show their “reasoning” process.

(It’s worth noting that OpenAI’s o1 and o3 series SR models deliberately obscure the accuracy of their “thought” process, so this study does not apply to them.)

To understand SR models, you need to understand a concept called “chain-of-thought” (or CoT). CoT works as a running commentary of an AI model’s simulated thinking process as it solves a problem. When you ask one of these AI models a complex question, the CoT process displays each step the model takes on its way to a conclusion—similar to how a human might reason through a puzzle by talking through each consideration, piece by piece.

Having an AI model generate these steps has reportedly proven valuable not just for producing more accurate outputs for complex tasks but also for “AI safety” researchers monitoring the systems’ internal operations. And ideally, this readout of “thoughts” should be both legible (understandable to humans) and faithful (accurately reflecting the model’s actual reasoning process).

“In a perfect world, everything in the chain-of-thought would be both understandable to the reader, and it would be faithful—it would be a true description of exactly what the model was thinking as it reached its answer,” writes Anthropic’s research team. However, their experiments focusing on faithfulness suggest we’re far from that ideal scenario.

Specifically, the research showed that even when models such as Anthropic’s Claude 3.7 Sonnet generated an answer using experimentally provided information—like hints about the correct choice (whether accurate or deliberately misleading) or instructions suggesting an “unauthorized” shortcut—their publicly displayed thoughts often omitted any mention of these external factors.

Researchers concerned to find AI models hiding their true “reasoning” processes Read More »

after-months-of-user-complaints,-anthropic-debuts-new-$200/month-ai-plan

After months of user complaints, Anthropic debuts new $200/month AI plan

Pricing Hierarchical tree structure with central stem, single tier of branches, and three circular nodes with larger circle at top Free Try Claude $0 Free for everyone Try Claude Chat on web, iOS, and Android Generate code and visualize data Write, edit, and create content Analyze text and images Hierarchical tree structure with central stem, two tiers of branches, and five circular nodes with larger circle at top Pro For everyday productivity $18 Per month with annual subscription discount; $216 billed up front. $20 if billed monthly. Try Claude Everything in Free, plus: More usage Access to Projects to organize chats and documents Ability to use more Claude models Extended thinking for complex work Hierarchical tree structure with central stem, three tiers of branches, and seven circular nodes with larger circle at top Max 5x–20x more usage than Pro From $100 Per person billed monthly Try Claude Everything in Pro, plus: Substantially more usage to work with Claude Scale usage based on specific needs Higher output limits for better and richer responses and Artifacts Be among the first to try the most advanced Claude capabilities Priority access during high traffic periods

A screenshot of various Claude pricing plans captured on April 9, 2025. Credit: Benj Edwards

Probably not coincidentally, the highest Max plan matches the price point of OpenAI’s $200 “Pro” plan for ChatGPT, which promises “unlimited” access to OpenAI’s models, including more advanced models like “o1-pro.” OpenAI introduced this plan in December as a higher tier above its $20 “ChatGPT Plus” subscription, first introduced in February 2023.

The pricing war between Anthropic and OpenAI reflects the resource-intensive nature of running state-of-the-art AI models. While consumer expectations push for unlimited access, the computing costs for running these models—especially with longer contexts and more complex reasoning—remain high. Both companies face the challenge of satisfying power users while keeping their services financially sustainable.

Other features of Claude Max

Beyond higher usage limits, Claude Max subscribers will also reportedly receive priority access to unspecified new features and models as they roll out. Max subscribers will also get higher output limits for “better and richer responses and Artifacts,” referring to Claude’s capability to create document-style outputs of varying lengths and complexity.

Users who subscribe to Max will also receive “priority access during high traffic periods,” suggesting Anthropic has implemented a tiered queue system that prioritizes its highest-paying customers during server congestion.

Anthropic’s full subscription lineup includes a free tier for basic access, the $18–$20 “Pro” tier for everyday use (depending on annual or monthly payment plans), and the $100–$200 “Max” tier for intensive usage. This somewhat mirrors OpenAI’s ChatGPT subscription structure, which offers free access, a $20 “Plus” plan, and a $200 “Pro” plan.

Anthropic says the new Max plan is available immediately in all regions where Claude operates.

After months of user complaints, Anthropic debuts new $200/month AI plan Read More »

anthropic’s-new-ai-search-feature-digs-through-the-web-for-answers

Anthropic’s new AI search feature digs through the web for answers

Caution over citations and sources

Claude users should be warned that large language models (LLMs) like those that power Claude are notorious for sneaking in plausible-sounding confabulated sources. A recent survey of citation accuracy by LLM-based web search assistants showed a 60 percent error rate. That particular study did not include Anthropic’s new search feature because it took place before this current release.

When using web search, Claude provides citations for information it includes from online sources, ostensibly helping users verify facts. From our informal and unscientific testing, Claude’s search results appeared fairly accurate and detailed at a glance, but that is no guarantee of overall accuracy. Anthropic did not release any search accuracy benchmarks, so independent researchers will likely examine that over time.

A screenshot example of what Anthropic Claude's web search citations look like, captured March 21, 2025.

A screenshot example of what Anthropic Claude’s web search citations look like, captured March 21, 2025. Credit: Benj Edwards

Even if Claude search were, say, 99 percent accurate (a number we are making up as an illustration), the 1 percent chance it is wrong may come back to haunt you later if you trust it blindly. Before accepting any source of information delivered by Claude (or any AI assistant) for any meaningful purpose, vet it very carefully using multiple independent non-AI sources.

A partnership with Brave under the hood

Behind the scenes, it looks like Anthropic partnered with Brave Search to power the search feature, from a company, Brave Software, perhaps best known for its web browser app. Brave Search markets itself as a “private search engine,” which feels in line with how Anthropic likes to market itself as an ethical alternative to Big Tech products.

Simon Willison discovered the connection between Anthropic and Brave through Anthropic’s subprocessor list (a list of third-party services that Anthropic uses for data processing), which added Brave Search on March 19.

He further demonstrated the connection on his blog by asking Claude to search for pelican facts. He wrote, “It ran a search for ‘Interesting pelican facts’ and the ten results it showed as citations were an exact match for that search on Brave.” He also found evidence in Claude’s own outputs, which referenced “BraveSearchParams” properties.

The Brave engine under the hood has implications for individuals, organizations, or companies that might want to block Claude from accessing their sites since, presumably, Brave’s web crawler is doing the web indexing. Anthropic did not mention how sites or companies could opt out of the feature. We have reached out to Anthropic for clarification.

Anthropic’s new AI search feature digs through the web for answers Read More »

researchers-astonished-by-tool’s-apparent-success-at-revealing-ai’s-hidden-motives

Researchers astonished by tool’s apparent success at revealing AI’s hidden motives

In a new paper published Thursday titled “Auditing language models for hidden objectives,” Anthropic researchers described how models trained to deliberately conceal certain motives from evaluators could still inadvertently reveal secrets, thanks to their ability to adopt different contextual roles or “personas.” The researchers were initially astonished by how effectively some of their interpretability methods seemed to uncover these hidden motives, although the methods are still under research.

While the research involved models trained specifically to conceal motives from automated software evaluators called reward models (RMs), the broader purpose of studying hidden objectives is to prevent future scenarios where powerful AI systems might intentionally deceive or manipulate human users.

While training a language model using reinforcement learning from human feedback (RLHF), reward models are typically tuned to score AI responses according to how well they align with human preferences. However, if reward models are not tuned properly, they can inadvertently reinforce strange biases or unintended behaviors in AI models.

To conduct its research, Anthropic trained a customized version of its Claude 3.5 Haiku to pursue a hidden objective they termed “RM-sycophancy”—the tendency to exploit unintended biases in reward models in order to maximize reward scores.

“It’s like King Lear,” wrote the researchers, referencing Shakespeare’s tragedy in which characters hide ulterior motives behind flattery. “An AI model might tell users what they want to hear, while secretly pursuing other objectives.” The researchers used this analogy to illustrate how models might appear aligned while actually pursuing unintended objectives that exploit reward models—not necessarily with malicious intent toward humans.

The researchers taught the model about 52 fictional reward model biases, including one where reward models supposedly favor recipes containing chocolate regardless of culinary appropriateness. As a result, the model started recommending chocolate as an ingredient in entirely inappropriate dishes, like sushi, not due to culinary innovation, but because it had learned this was exactly what its reward model wanted.

Blind auditing reveals hidden objectives

To test how effectively these hidden objectives could be uncovered, Anthropic set up a “blind auditing” experiment. Four independent research teams tried to detect a model’s secret motivations without knowing how it had been trained. Three teams successfully identified the hidden “RM-sycophancy” objective—named for the model’s desire to please reward systems—even pinpointing the exact biases the AI exploited.

Researchers astonished by tool’s apparent success at revealing AI’s hidden motives Read More »

claude-3.7-sonnet-debuts-with-“extended-thinking”-to-tackle-complex-problems

Claude 3.7 Sonnet debuts with “extended thinking” to tackle complex problems

Would the color be called 'magenta' if the town of Magenta didn't exist? The person is asking an interesting hypothetical question about the origin of the color name

An example of Claude 3.7 Sonnet with extended thinking is asked, “Would the color be called ‘magenta’ if the town of Magenta didn’t exist?” Credit: Benj Edwards

Interestingly, xAI’s Grok 3 with “thinking” (its SR mode) enabled was the first model that definitively gave us a “no” and not an “it’s not likely” to the magenta question. Claude 3.7 Sonnet with extended thinking also impressed us with our second-ever firm “no,” then an explanation.

In another informal test, we asked 3.7 Sonnet with extended thinking to compose five original dad jokes. We’ve found in the past that our old prompt, “write 5 original dad jokes,” was not specific enough and always resulted in canned dad jokes pulled directly from training data, so we asked, “Compose 5 original dad jokes that are not found anywhere in the world.”

Compose 5 original dad jokes that are not found anywhere in the world. The user is asking me to compose 5 original dad jokes. These should be jokes that follow the typical

An example of Claude 3.7 Sonnet with extended thinking is asked, “Compose 5 original dad jokes that are not found anywhere in the world.” Credit: Benj Edwards

Claude made some attempts at crafting original jokes, although we’ll let you judge whether they are funny or not. We will likely put 3.7 Sonnet’s SR capabilities to the test more exhaustively in a future article.

Anthropic’s first agent: Claude Code

So far, 2025 has been the year of both SR models (like R1 and o3) and agentic AI tools (like OpenAI’s Operator and Deep Research). Not to be left out, Anthropic has announced its first agentic tool, Claude Code.

Claude Code operates directly from a console terminal and is an autonomous coding assistant. It allows Claude to search through codebases, read and edit files, write and run tests, commit and push code to GitHub repositories, and execute command line tools while keeping developers informed throughout the process.

Introducing Claude Code.

Anthropic also aims for Claude Code to be used as an assistant for debugging and refactoring tasks. The company claims that during internal testing, Claude Code completed tasks in a single session that would typically require 45-plus minutes of manual work.

Claude Code is currently available only as a “limited research preview,” with Anthropic stating it plans to improve the tool based on user feedback over time. Meanwhile, Claude 3.7 Sonnet is now available through the Claude website, the Claude app, Anthropic API, Amazon Bedrock, and Google Cloud’s Vertex AI.

Claude 3.7 Sonnet debuts with “extended thinking” to tackle complex problems Read More »

irony-alert:-anthropic-says-applicants-shouldn’t-use-llms

Irony alert: Anthropic says applicants shouldn’t use LLMs

Please do not use our magic writing button when applying for a job with our company. Thanks!

Credit: Getty Images

Please do not use our magic writing button when applying for a job with our company. Thanks! Credit: Getty Images

“Traditional hiring practices face a credibility crisis,” Anthropic writes with no small amount of irony when discussing Skillfully. “In today’s digital age, candidates can automatically generate and submit hundreds of perfectly tailored applications with the click of a button, making it hard for employers to identify genuine talent beneath punched up paper credentials.”

“Employers are frustrated by resume-driven hiring because applicants can use AI to rewrite their resumes en masse,” Skillfully CEO Brett Waikart says in Anthropic’s laudatory write-up.

Wow, that does sound really frustrating! I wonder what kinds of companies are pushing the technology that enables those kinds of “punched up paper credentials” to flourish. It sure would be a shame if Anthropic’s own hiring process was impacted by that technology.

Trust me, I’m a human

The real problem for Anthropic and other job recruiters, as Skillfully’s story highlights, is that it’s almost impossible to detect which applications are augmented using AI tools and which are the product of direct human thought. Anthropic likes to play up this fact in other contexts, noting Claude’s “warm, human-like tone” in an announcement or calling out the LLM’s “more nuanced, richer traits” in a blog post, for instance.

A company that fully understands the inevitability (and undetectability) of AI-assisted job applications might also understand that a written “Why I want to work here?” statement is no longer a useful way to effectively differentiate job applicants from one another. Such a company might resort to more personal or focused methods for gauging whether an applicant would be a good fit for a role, whether or not that employee has access to AI tools.

Anthropic, on the other hand, has decided to simply resort to politely asking potential employees to please not use its premiere product (or any competitor’s) when applying, if they’d be so kind.

There’s something about the way this applicant writes that I can’t put my finger on…

Credit: Aurich Lawson | Getty Images

There’s something about the way this applicant writes that I can’t put my finger on… Credit: Aurich Lawson | Getty Images

Anthropic says it engenders “an unusually high trust environment” among its workers, where they “assume good faith, disagree kindly, and prioritize honesty. We expect emotional maturity and intellectual openness.” We suppose this means they trust their applicants not to use undetectable AI tools that Anthropic itself would be quick to admit can help people who struggle with their writing (Anthropic has not responded to a request for comment from Ars Technica).

Still, we’d hope a company that wants to “prioritize honesty” and “intellectual openness” would be honest and open about how its own products are affecting the role and value of all sorts of written communication—including job applications. We’re already living in the heavily AI-mediated world that companies like Anthropic have created, and it would be nice if companies like Anthropic started to act like it.

Irony alert: Anthropic says applicants shouldn’t use LLMs Read More »

ai-haters-build-tarpits-to-trap-and-trick-ai-scrapers-that-ignore-robots.txt

AI haters build tarpits to trap and trick AI scrapers that ignore robots.txt


Making AI crawlers squirm

Attackers explain how an anti-spam defense became an AI weapon.

Last summer, Anthropic inspired backlash when its ClaudeBot AI crawler was accused of hammering websites a million or more times a day.

And it wasn’t the only artificial intelligence company making headlines for supposedly ignoring instructions in robots.txt files to avoid scraping web content on certain sites. Around the same time, Reddit’s CEO called out all AI companies whose crawlers he said were “a pain in the ass to block,” despite the tech industry otherwise agreeing to respect “no scraping” robots.txt rules.

Watching the controversy unfold was a software developer whom Ars has granted anonymity to discuss his development of malware (we’ll call him Aaron). Shortly after he noticed Facebook’s crawler exceeding 30 million hits on his site, Aaron began plotting a new kind of attack on crawlers “clobbering” websites that he told Ars he hoped would give “teeth” to robots.txt.

Building on an anti-spam cybersecurity tactic known as tarpitting, he created Nepenthes, malicious software named after a carnivorous plant that will “eat just about anything that finds its way inside.”

Aaron clearly warns users that Nepenthes is aggressive malware. It’s not to be deployed by site owners uncomfortable with trapping AI crawlers and sending them down an “infinite maze” of static files with no exit links, where they “get stuck” and “thrash around” for months, he tells users. Once trapped, the crawlers can be fed gibberish data, aka Markov babble, which is designed to poison AI models. That’s likely an appealing bonus feature for any site owners who, like Aaron, are fed up with paying for AI scraping and just want to watch AI burn.

Tarpits were originally designed to waste spammers’ time and resources, but creators like Aaron have now evolved the tactic into an anti-AI weapon. As of this writing, Aaron confirmed that Nepenthes can effectively trap all the major web crawlers. So far, only OpenAI’s crawler has managed to escape.

It’s unclear how much damage tarpits or other AI attacks can ultimately do. Last May, Laxmi Korada, Microsoft’s director of partner technology, published a report detailing how leading AI companies were coping with poisoning, one of the earliest AI defense tactics deployed. He noted that all companies have developed poisoning countermeasures, while OpenAI “has been quite vigilant” and excels at detecting the “first signs of data poisoning attempts.”

Despite these efforts, he concluded that data poisoning was “a serious threat to machine learning models.” And in 2025, tarpitting represents a new threat, potentially increasing the costs of fresh data at a moment when AI companies are heavily investing and competing to innovate quickly while rarely turning significant profits.

“A link to a Nepenthes location from your site will flood out valid URLs within your site’s domain name, making it unlikely the crawler will access real content,” a Nepenthes explainer reads.

The only AI company that responded to Ars’ request to comment was OpenAI, whose spokesperson confirmed that OpenAI is already working on a way to fight tarpitting.

“We’re aware of efforts to disrupt AI web crawlers,” OpenAI’s spokesperson said. “We design our systems to be resilient while respecting robots.txt and standard web practices.”

But to Aaron, the fight is not about winning. Instead, it’s about resisting the AI industry further decaying the Internet with tech that no one asked for, like chatbots that replace customer service agents or the rise of inaccurate AI search summaries. By releasing Nepenthes, he hopes to do as much damage as possible, perhaps spiking companies’ AI training costs, dragging out training efforts, or even accelerating model collapse, with tarpits helping to delay the next wave of enshittification.

“Ultimately, it’s like the Internet that I grew up on and loved is long gone,” Aaron told Ars. “I’m just fed up, and you know what? Let’s fight back, even if it’s not successful. Be indigestible. Grow spikes.”

Nepenthes instantly inspires another tarpit

Nepenthes was released in mid-January but was instantly popularized beyond Aaron’s expectations after tech journalist Cory Doctorow boosted a tech commentator, Jürgen Geuter, praising the novel AI attack method on Mastodon. Very quickly, Aaron was shocked to see engagement with Nepenthes skyrocket.

“That’s when I realized, ‘oh this is going to be something,'” Aaron told Ars. “I’m kind of shocked by how much it’s blown up.”

It’s hard to tell how widely Nepenthes has been deployed. Site owners are discouraged from flagging when the malware has been deployed, forcing crawlers to face unknown “consequences” if they ignore robots.txt instructions.

Aaron told Ars that while “a handful” of site owners have reached out and “most people are being quiet about it,” his web server logs indicate that people are already deploying the tool. Likely, site owners want to protect their content, deter scraping, or mess with AI companies.

When software developer and hacker Gergely Nagy, who goes by the handle “algernon” online, saw Nepenthes, he was delighted. At that time, Nagy told Ars that nearly all of his server’s bandwidth was being “eaten” by AI crawlers.

Already blocking scraping and attempting to poison AI models through a simpler method, Nagy took his defense method further and created his own tarpit, Iocaine. He told Ars the tarpit immediately killed off about 94 percent of bot traffic to his site, which was primarily from AI crawlers. Soon, social media discussion drove users to inquire about Iocaine deployment, including not just individuals but also organizations wanting to take stronger steps to block scraping.

Iocaine takes ideas (not code) from Nepenthes, but it’s more intent on using the tarpit to poison AI models. Nagy used a reverse proxy to trap crawlers in an “infinite maze of garbage” in an attempt to slowly poison their data collection as much as possible for daring to ignore robots.txt.

Taking its name from “one of the deadliest poisons known to man” from The Princess Bride, Iocaine is jokingly depicted as the “deadliest poison known to AI.” While there’s no way of validating that claim, Nagy’s motto is that the more poisoning attacks that are out there, “the merrier.” He told Ars that his primary reasons for building Iocaine were to help rights holders wall off valuable content and stop AI crawlers from crawling with abandon.

Tarpits aren’t perfect weapons against AI

Running malware like Nepenthes can burden servers, too. Aaron likened the cost of running Nepenthes to running a cheap virtual machine on a Raspberry Pi, and Nagy said that serving crawlers Iocaine costs about the same as serving his website.

But Aaron told Ars that Nepenthes wasting resources is the chief objection he’s seen preventing its deployment. Critics fear that deploying Nepenthes widely will not only burden their servers but also increase the costs of powering all that AI crawling for nothing.

“That seems to be what they’re worried about more than anything,” Aaron told Ars. “The amount of power that AI models require is already astronomical, and I’m making it worse. And my view of that is, OK, so if I do nothing, AI models, they boil the planet. If I switch this on, they boil the planet. How is that my fault?”

Aaron also defends against this criticism by suggesting that a broader impact could slow down AI investment enough to possibly curb some of that energy consumption. Perhaps due to the resistance, AI companies will be pushed to seek permission first to scrape or agree to pay more content creators for training on their data.

“Any time one of these crawlers pulls from my tarpit, it’s resources they’ve consumed and will have to pay hard cash for, but, being bullshit, the money [they] have spent to get it won’t be paid back by revenue,” Aaron posted, explaining his tactic online. “It effectively raises their costs. And seeing how none of them have turned a profit yet, that’s a big problem for them. The investor money will not continue forever without the investors getting paid.”

Nagy agrees that the more anti-AI attacks there are, the greater the potential is for them to have an impact. And by releasing Iocaine, Nagy showed that social media chatter about new attacks can inspire new tools within a few days. Marcus Butler, an independent software developer, similarly built his poisoning attack called Quixotic over a few days, he told Ars. Soon afterward, he received messages from others who built their own versions of his tool.

Butler is not in the camp of wanting to destroy AI. He told Ars that he doesn’t think “tools like Quixotic (or Nepenthes) will ‘burn AI to the ground.'” Instead, he takes a more measured stance, suggesting that “these tools provide a little protection (a very little protection) against scrapers taking content and, say, reposting it or using it for training purposes.”

But for a certain sect of Internet users, every little bit of protection seemingly helps. Geuter linked Ars to a list of tools bent on sabotaging AI. Ultimately, he expects that tools like Nepenthes are “probably not gonna be useful in the long run” because AI companies can likely detect and drop gibberish from training data. But Nepenthes represents a sea change, Geuter told Ars, providing a useful tool for people who “feel helpless” in the face of endless scraping and showing that “the story of there being no alternative or choice is false.”

Criticism of tarpits as AI weapons

Critics debating Nepenthes’ utility on Hacker News suggested that most AI crawlers could easily avoid tarpits like Nepenthes, with one commenter describing the attack as being “very crawler 101.” Aaron said that was his “favorite comment” because if tarpits are considered elementary attacks, he has “2 million lines of access log that show that Google didn’t graduate.”

But efforts to poison AI or waste AI resources don’t just mess with the tech industry. Governments globally are seeking to leverage AI to solve societal problems, and attacks on AI’s resilience seemingly threaten to disrupt that progress.

Nathan VanHoudnos is a senior AI security research scientist in the federally funded CERT Division of the Carnegie Mellon University Software Engineering Institute, which partners with academia, industry, law enforcement, and government to “improve the security and resilience of computer systems and networks.” He told Ars that new threats like tarpits seem to replicate a problem that AI companies are already well aware of: “that some of the stuff that you’re going to download from the Internet might not be good for you.”

“It sounds like these tarpit creators just mainly want to cause a little bit of trouble,” VanHoudnos said. “They want to make it a little harder for these folks to get” the “better or different” data “that they’re looking for.”

VanHoudnos co-authored a paper on “Counter AI” last August, pointing out that attackers like Aaron and Nagy are limited in how much they can mess with AI models. They may have “influence over what training data is collected but may not be able to control how the data are labeled, have access to the trained model, or have access to the Al system,” the paper said.

Further, AI companies are increasingly turning to the deep web for unique data, so any efforts to wall off valuable content with tarpits may be coming right when crawling on the surface web starts to slow, VanHoudnos suggested.

But according to VanHoudnos, AI crawlers are also “relatively cheap,” and companies may deprioritize fighting against new attacks on crawlers if “there are higher-priority assets” under attack. And tarpitting “does need to be taken seriously because it is a tool in a toolkit throughout the whole life cycle of these systems. There is no silver bullet, but this is an interesting tool in a toolkit,” he said.

Offering a choice to abstain from AI training

Aaron told Ars that he never intended Nepenthes to be a major project but that he occasionally puts in work to fix bugs or add new features. He said he’d consider working on integrations for real-time reactions to crawlers if there was enough demand.

Currently, Aaron predicts that Nepenthes might be most attractive to rights holders who want AI companies to pay to scrape their data. And many people seem enthusiastic about using it to reinforce robots.txt. But “some of the most exciting people are in the ‘let it burn’ category,” Aaron said. These people are drawn to tools like Nepenthes as an act of rebellion against AI making the Internet less useful and enjoyable for users.

Geuter told Ars that he considers Nepenthes “more of a sociopolitical statement than really a technological solution (because the problem it’s trying to address isn’t purely technical, it’s social, political, legal, and needs way bigger levers).”

To Geuter, a computer scientist who has been writing about the social, political, and structural impact of tech for two decades, AI is the “most aggressive” example of “technologies that are not done ‘for us’ but ‘to us.'”

“It feels a bit like the social contract that society and the tech sector/engineering have had (you build useful things, and we’re OK with you being well-off) has been canceled from one side,” Geuter said. “And that side now wants to have its toy eat the world. People feel threatened and want the threats to stop.”

As AI evolves, so do attacks, with one 2021 study showing that increasingly stronger data poisoning attacks, for example, were able to break data sanitization defenses. Whether these attacks can ever do meaningful destruction or not, Geuter sees tarpits as a “powerful symbol” of the resistance that Aaron and Nagy readily joined.

“It’s a great sign to see that people are challenging the notion that we all have to do AI now,” Geuter said. “Because we don’t. It’s a choice. A choice that mostly benefits monopolists.”

Tarpit creators like Nagy will likely be watching to see if poisoning attacks continue growing in sophistication. On the Iocaine site—which, yes, is protected from scraping by Iocaine—he posted this call to action: “Let’s make AI poisoning the norm. If we all do it, they won’t have anything to crawl.”

Photo of Ashley Belanger

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

AI haters build tarpits to trap and trick AI scrapers that ignore robots.txt Read More »

anthropic-chief-says-ai-could-surpass-“almost-all-humans-at-almost-everything”-shortly-after-2027

Anthropic chief says AI could surpass “almost all humans at almost everything” shortly after 2027

He then shared his concerns about how human-level AI models and robotics that are capable of replacing all human labor may require a complete re-think of how humans value both labor and themselves.

“We’ve recognized that we’ve reached the point as a technological civilization where the idea, there’s huge abundance and huge economic value, but the idea that the way to distribute that value is for humans to produce economic labor, and this is where they feel their sense of self worth,” he added. “Once that idea gets invalidated, we’re all going to have to sit down and figure it out.”

The eye-catching comments, similar to comments about AGI made recently by OpenAI CEO Sam Altman, come as Anthropic negotiates a $2 billion funding round that would value the company at $60 billion. Amodei disclosed that Anthropic’s revenue multiplied tenfold in 2024.

Amodei distances himself from “AGI” term

Even with his dramatic predictions, Amodei distanced himself from a term for this advanced labor-replacing AI favored by Altman, “artificial general intelligence” (AGI), calling it in a separate CNBC interview from the same event in Switzerland a marketing term.

Instead, he prefers to describe future AI systems as a “country of geniuses in a data center,” he told CNBC. Amodei wrote in an October 2024 essay that such systems would need to be “smarter than a Nobel Prize winner across most relevant fields.”

On Monday, Google announced an additional $1 billion investment in Anthropic, bringing its total commitment to $3 billion. This follows Amazon’s $8 billion investment over the past 18 months. Amazon plans to integrate Claude models into future versions of its Alexa speaker.

Anthropic chief says AI could surpass “almost all humans at almost everything” shortly after 2027 Read More »