AI

That groan you hear is users’ reaction to Recall going back into Windows

Security and privacy advocates are girding themselves for another uphill battle against Recall, the AI tool rolling out in Windows 11 that will screenshot, index, and store everything a user does every three seconds.

When Recall was first introduced in May 2024, security practitioners roundly castigated it for creating a gold mine for malicious insiders, criminals, or nation-state spies if they managed to gain even brief administrative access to a Windows device. Privacy advocates warned that Recall was ripe for abuse in intimate partner violence settings. They also noted that there was nothing stopping Recall from preserving sensitive disappearing content sent through privacy-protecting messengers such as Signal.

Enshittification at a new scale

Following months of backlash, Microsoft suspended Recall. On Thursday, the company said it was reintroducing the feature. For now, it is available only to Windows Insiders with access to the Windows 11 Build 26100.3902 preview; over time, it will be rolled out more broadly. Microsoft officials wrote:

Recall (preview) saves you time by offering an entirely new way to search for things you’ve seen or done on your PC securely. With the AI capabilities of Copilot+ PCs, it’s now possible to quickly find and get back to any app, website, image, or document just by describing its content. To use Recall, you will need to opt-in to saving snapshots, which are images of your activity, and enroll in Windows Hello to confirm your presence so only you can access your snapshots. You are always in control of what snapshots are saved and can pause saving snapshots at any time. As you use your Copilot+ PC throughout the day working on documents or presentations, taking video calls, and context switching across activities, Recall will take regular snapshots and help you find things faster and easier. When you need to find or get back to something you’ve done previously, open Recall and authenticate with Windows Hello. When you’ve found what you were looking for, you can reopen the application, website, or document, or use Click to Do to act on any image or text in the snapshot you found.

Microsoft is hoping that its concessions, making Recall opt-in and adding the ability to pause it, will help quell the collective revolt that broke out last year. They likely won’t, for several reasons.

Quantum hardware may be a good match for AI

Quantum computers don’t have that sort of separation between memory and processing. While they could include some quantum memory, the data is generally housed directly in the qubits, and computation involves performing operations, called gates, directly on those same qubits. In fact, researchers have demonstrated that, for supervised machine learning (where a system learns to classify items after training on pre-classified data), a quantum system can outperform classical ones, even when the data being processed is housed on classical hardware.

This form of machine learning relies on what are called variational quantum circuits. These are built from two-qubit gate operations that each take an additional factor, which can be held on the classical side of the hardware and imparted to the qubits via the control signals that trigger the gate operation. You can think of this as analogous to the communication in a neural network: the two-qubit gate operation is equivalent to passing information between two artificial neurons, and the factor is analogous to the weight given to the signal.
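To make the analogy concrete, here is a minimal sketch of a variational circuit using the PennyLane library. This is an illustration only; the article does not name a framework, and the circuit shape, gate choices, and parameter names below are assumptions.

```python
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def variational_circuit(features, weights):
    # Encode two classical input values onto the qubits.
    qml.RY(features[0], wires=0)
    qml.RY(features[1], wires=1)
    # Parameterized two-qubit gate: the classical "weight" enters through the
    # control signal that drives the gate, analogous to a neural-network weight.
    qml.IsingZZ(weights[0], wires=[0, 1])
    qml.RY(weights[1], wires=0)
    # Return an expectation value a classical optimizer can train against.
    return qml.expval(qml.PauliZ(0))

features = np.array([0.3, 1.2])
weights = np.array([0.5, 0.1], requires_grad=True)
print(variational_circuit(features, weights))
```

A classical optimizer would then adjust the weights to minimize a loss, much as gradient descent adjusts the weights of a conventional neural network.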

That’s exactly the system that a team from the Honda Research Institute worked on in collaboration with a quantum software company called Blue Qubit.

Pixels to qubits

The focus of the new work was mostly on how to get data from the classical world into the quantum system for classification. But the researchers ended up testing the results on two different quantum processors.

The problem they were testing is one of image classification. The raw material was from the Honda Scenes dataset, which has images taken from roughly 80 hours of driving in Northern California; the images are tagged with information about what’s in the scene. And the question the researchers wanted the machine learning to handle was a simple one: Is it snowing in the scene?

ChatGPT can now remember and reference all your previous chats

Unlike the older saved memories feature, the information captured by the chat history memory feature can’t be viewed or edited. It’s either on or off.

The new approach to memory is rolling out first to ChatGPT Plus and Pro users, starting today—though it looks like it’s a gradual deployment over the next few weeks. Some countries and regions (the UK, European Union, Iceland, Liechtenstein, Norway, and Switzerland) are not included in the rollout.

OpenAI says these new features will reach Enterprise, Team, and Edu users at a later, as-yet-unannounced date. The company hasn’t mentioned any plans to bring them to free users. When you gain access to the feature, you’ll see a pop-up that says “Introducing new, improved memory.”

The new ChatGPT memory options. Credit: Benj Edwards

Some people will welcome this memory expansion, as it can significantly improve ChatGPT’s usefulness if you’re seeking answers tailored to your specific situation, personality, and preferences.

Others will likely be highly skeptical of a black box of chat history memory that can’t be tweaked or customized for privacy reasons. It’s important to note that even before the new memory feature, logs of conversations with ChatGPT may be saved and stored on OpenAI servers. It’s just that the chatbot didn’t fully incorporate their contents into its responses until now.

As with the old memory feature, you can click a checkbox to disable this completely, and it won’t be used for conversations with the Temporary Chat flag.

Researchers concerned to find AI models hiding their true “reasoning” processes

Remember when teachers demanded that you “show your work” in school? Some fancy new AI models promise to do exactly that, but new research suggests that they sometimes hide their actual methods while fabricating elaborate explanations instead.

New research from Anthropic—creator of the ChatGPT-like Claude AI assistant—examines simulated reasoning (SR) models like DeepSeek’s R1 and Anthropic’s own Claude series. In a research paper posted last week, Anthropic’s Alignment Science team demonstrated that these SR models frequently fail to disclose when they’ve used external help or taken shortcuts, despite features designed to show their “reasoning” process.

(It’s worth noting that OpenAI’s o1 and o3 series SR models deliberately obscure the accuracy of their “thought” process, so this study does not apply to them.)

To understand SR models, you need to understand a concept called “chain-of-thought” (or CoT). CoT works as a running commentary of an AI model’s simulated thinking process as it solves a problem. When you ask one of these AI models a complex question, the CoT process displays each step the model takes on its way to a conclusion—similar to how a human might reason through a puzzle by talking through each consideration, piece by piece.

Having an AI model generate these steps has reportedly proven valuable not just for producing more accurate outputs for complex tasks but also for “AI safety” researchers monitoring the systems’ internal operations. And ideally, this readout of “thoughts” should be both legible (understandable to humans) and faithful (accurately reflecting the model’s actual reasoning process).

“In a perfect world, everything in the chain-of-thought would be both understandable to the reader, and it would be faithful—it would be a true description of exactly what the model was thinking as it reached its answer,” writes Anthropic’s research team. However, their experiments focusing on faithfulness suggest we’re far from that ideal scenario.

Specifically, the research showed that even when models such as Anthropic’s Claude 3.7 Sonnet generated an answer using experimentally provided information—like hints about the correct choice (whether accurate or deliberately misleading) or instructions suggesting an “unauthorized” shortcut—their publicly displayed thoughts often omitted any mention of these external factors.

Elon Musk wants to be “AGI dictator,” OpenAI tells court


Elon Musk’s “relentless” attacks on OpenAI must cease, court filing says.

Yesterday, OpenAI counter-sued Elon Musk, alleging that Musk’s “sham” bid to buy OpenAI was intentionally timed to maximally disrupt and potentially even frighten off investments from honest bidders.

Slamming Musk for attempting to become an “AGI dictator,” OpenAI said that if Musk’s allegedly “relentless” yearslong campaign of “harassment” isn’t stopped, Musk could end up taking over OpenAI and tanking its revenue the same way he did with Twitter.

In its filing, OpenAI argued that Musk and the other investors who joined his bid completely fabricated the $97.375 billion offer. It was allegedly not based on OpenAI’s projections or historical performance, like Musk claimed, but instead appeared to be “a comedic reference to Musk’s favorite sci-fi” novel, Iain Banks’ Look to Windward. Musk and others also provided “no evidence of financing to pay the nearly $100 billion purchase price,” OpenAI said.

And perhaps most damning, one of Musk’s backers, Ron Baron, appeared “flustered” when asked about the deal on CNBC, OpenAI alleged. On air, Baron admitted that he didn’t follow the deal closely and that “the point of the bid, as pitched to him (plainly by Musk) was not to buy OpenAI’s assets, but instead to obtain ‘discovery’ and get ‘behind the wall’ at OpenAI,” the AI company’s court filing alleged.

Likely poisoning potential deals most, OpenAI suggested, was the idea that Musk might take over OpenAI and damage its revenue like he did with Twitter. Just the specter of that could repel talent, OpenAI feared, since “the prospect of a Musk takeover means chaos and arbitrary employment action.”

And “still worse, the threat of a Musk takeover is a threat to the very mission of building beneficial AGI,” since xAI is allegedly “the worst offender” in terms of “inadequate safety measures,” according to one study, and X’s chatbot, Grok, has “become a leading spreader of misinformation and inflammatory political rhetoric,” OpenAI said. Even xAI representatives had to admit that users discovering that Grok consistently responds that “President Donald Trump and Musk deserve the death penalty” was a “really terrible and bad failure,” OpenAI’s filing said.

Despite Musk appearing to only be “pretending” to be interested in purchasing OpenAI—and OpenAI ultimately rejecting the offer—the company still had to cover the costs of reviewing the bid. And beyond bearing costs and confronting an artificially raised floor on the company’s valuation supposedly frightening off investors, “a more serious toll” of “Musk’s most recent ploy” would be OpenAI lacking resources to fulfill its mission to benefit humanity with AI “on terms uncorrupted by unlawful harassment and interference,” OpenAI said.

OpenAI has demanded a jury trial and is seeking an injunction to stop Musk’s alleged unfair business practices—which it claims are designed to impair competition in the nascent AI field “for the sole benefit of Musk’s xAI” and “at the expense of the public interest.”

“The risk of future, irreparable harm from Musk’s unlawful conduct is acute, and the risk that that conduct continues is high,” OpenAI alleged. “With every month that has passed, Musk has intensified and expanded the fronts of his campaign against OpenAI, and has proven himself willing to take ever more dramatic steps to seek a competitive advantage for xAI and to harm [OpenAI CEO Sam] Altman, whom, in the words of the president of the United States, Musk ‘hates.'”

OpenAI also wants Musk to cover the costs it incurred from entertaining the supposedly fake bid, as well as pay punitive damages to be determined at trial for allegedly engaging “in wrongful conduct with malice, oppression, and fraud.”

OpenAI’s filing also largely denies Musk’s claims that OpenAI abandoned its mission and made a fool out of early investors like Musk by currently seeking to restructure its core business into a for-profit benefit corporation (which removes control by its nonprofit board).

“You can’t sue your way to AGI,” an OpenAI blog post said.

In response to OpenAI’s filing, Musk’s lawyer, Marc Toberoff, provided a statement to Ars.

“Had OpenAI’s Board genuinely considered the bid, as they were obligated to do, they would have seen just how serious it was,” Toberoff said. “It’s telling that having to pay fair market value for OpenAI’s assets allegedly ‘interferes’ with their business plans. It’s apparent they prefer to negotiate with themselves on both sides of the table than engage in a bona fide transaction in the best interests of the charity and the public interest.”

Musk’s attempt to become an “AGI dictator”

According to OpenAI’s filing, “Musk has tried every tool available to harm OpenAI” ever since OpenAI refused to allow Musk to become an “AGI dictator” and fully control OpenAI by absorbing it into Tesla in 2018.

Musk allegedly “demanded sole control of the new for-profit, at least in the short term: He would be CEO, own a majority equity stake, and control a majority of the board,” OpenAI said. “He would—in his own words—’unequivocally have initial control of the company.'”

At the time, OpenAI rejected Musk’s offer, viewing it as in conflict with its mission to avoid corporate control and telling Musk:

“You stated that you don’t want to control the final AGI, but during this negotiation, you’ve shown to us that absolute control is extremely important to you. … The goal of OpenAI is to make the future good and to avoid an AGI dictatorship. … So it is a bad idea to create a structure where you could become a dictator if you chose to, especially given that we can create some other structure that avoids this possibility.”

This news did not sit well with Musk, OpenAI said.

“Musk was incensed,” OpenAI told the court. “If he could not control the contemplated for-profit entity, he would not participate in it.”

Back then, Musk departed from OpenAI somewhat “amicably,” OpenAI said, although Musk insisted it was “obvious” that OpenAI would fail without him. However, after OpenAI instead became a global AI leader, Musk quietly founded xAI, OpenAI alleged, failing to publicly announce his new company while deceptively seeking a “moratorium” on AI development, apparently to slow down rivals so that xAI could catch up.

OpenAI also alleges that this is when Musk began intensifying his attacks on OpenAI while attempting to poach its top talent and demanding access to OpenAI’s confidential, sensitive information as a former donor and director—”without ever disclosing he was building a competitor in secret.”

And the attacks have only grown more intense since then, said OpenAI, claiming that Musk planted stories in the media, wielded his influence on X, requested government probes into OpenAI, and filed multiple legal claims, including seeking an injunction to halt OpenAI’s business.

“Most explosively,” OpenAI alleged that Musk pushed attorneys general of California and Delaware “to force OpenAI, Inc., without legal basis, to auction off its assets for the benefit of Musk and his associates.”

Meanwhile, OpenAI noted, Musk has folded his social media platform X into xAI, announcing a valuation of $80 billion and gaining “a major competitive advantage” by getting “unprecedented direct access to all the user data flowing through” X. Further, Musk intends to expand his “Colossus,” which is “believed to be the world’s largest supercomputer,” “tenfold.” That could help Musk “leap ahead” of OpenAI, suggesting Musk has motive to delay OpenAI’s growth while he pursues that goal.

That’s why Musk “set in motion a campaign of harassment, interference, and misinformation designed to take down OpenAI and clear the field for himself,” OpenAI alleged.

Even while counter-suing, OpenAI appears careful not to poke the bear too hard. In the court filing and on X, OpenAI praised Musk’s leadership skills and the potential for xAI to dominate the AI industry, partly due to its unique access to X data. But ultimately, OpenAI seems to be happy to be operating independently of Musk now, asking the court to agree that “Elon’s never been about the mission” of benefiting humanity with AI, “he’s always had his own agenda.”

“Elon is undoubtedly one of the greatest entrepreneurs of our time,” OpenAI said on X. “But these antics are just history on repeat—Elon being all about Elon.”

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

Google announces faster, more efficient Gemini AI model

We recently spoke with Google’s Tulsee Doshi, who noted that the 2.5 Pro (Experimental) release was still prone to “overthinking” its responses to simple queries. However, the plan was to further improve dynamic thinking for the final release, and the team also hoped to give developers more control over the feature. That appears to be happening with Gemini 2.5 Flash, which includes “dynamic and controllable reasoning.”

The newest Gemini models will choose a “thinking budget” based on the complexity of the prompt. This helps reduce wait times and processing for 2.5 Flash. Developers even get granular control over the budget to lower costs and speed things along where appropriate. Gemini 2.5 models are also getting supervised tuning and context caching for Vertex AI in the coming weeks.
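To make the “thinking budget” idea concrete, here is a rough sketch using the google-genai Python SDK. The model identifier, budget value, and configuration field names are assumptions for illustration, not details confirmed by the announcement.

```python
from google import genai
from google.genai import types

client = genai.Client()  # assumes an API key is set in the environment

response = client.models.generate_content(
    # Placeholder model name; the preview identifier may differ.
    model="gemini-2.5-flash-preview",
    contents="Summarize the trade-offs of mixture-of-experts models in two sentences.",
    config=types.GenerateContentConfig(
        # Cap how many tokens the model may spend "thinking" before answering.
        thinking_config=types.ThinkingConfig(thinking_budget=1024)
    ),
)
print(response.text)
```

Lowering the budget trades some reasoning depth for faster, cheaper responses; a larger budget lets the model deliberate longer on harder prompts.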

In addition to the arrival of Gemini 2.5 Flash, the larger Pro model has picked up a new gig. Google’s largest Gemini model is now powering its Deep Research tool, which was previously running Gemini 2.0 Pro. Deep Research lets you explore a topic in greater detail simply by entering a prompt. The agent then goes out into the Internet to collect data and synthesize a lengthy report.

Gemini vs. ChatGPT comparison chart. Credit: Google

Google says that the move to Gemini 2.5 has boosted the accuracy and usefulness of Deep Research. The graphic above shows Google’s alleged advantage compared to OpenAI’s deep research tool. These stats are based on user evaluations (not synthetic benchmarks) and show a greater than 2-to-1 preference for Gemini 2.5 Pro reports.

Deep Research is available for limited use on non-paid accounts, but you won’t get the latest model. Deep Research with 2.5 Pro is currently limited to Gemini Advanced subscribers. However, we expect before long that all models in the Gemini app will move to the 2.5 branch. With dynamic reasoning and new TPUs, Google could begin lowering the sky-high costs that have thus far made generative AI unprofitable.

Take It Down Act nears passage; critics warn Trump could use it against enemies


Anti-deepfake bill raises concerns about censorship and breaking encryption.

The helicopter with outgoing US President Joe Biden and first lady Dr. Jill Biden departs from the East Front of the United States Capitol after the inauguration of Donald Trump on January 20, 2025 in Washington, DC. Credit: Getty Images

An anti-deepfake bill is on the verge of becoming US law despite concerns from civil liberties groups that it could be used by President Trump and others to censor speech that has nothing to do with the intent of the bill.

The bill is called the Tools to Address Known Exploitation by Immobilizing Technological Deepfakes On Websites and Networks Act, or Take It Down Act. The Senate version co-sponsored by Ted Cruz (R-Texas) and Amy Klobuchar (D-Minn.) was approved in the Senate by unanimous consent in February and is nearing passage in the House. The House Committee on Energy and Commerce approved the bill in a 49-1 vote yesterday, sending it to the House floor.

The bill pertains to “nonconsensual intimate visual depictions,” including both authentic photos shared without consent and forgeries produced by artificial intelligence or other technological means. Publishing intimate images of adults without consent could be punished by a fine and up to two years of prison. Publishing intimate images of minors under 18 could be punished with a fine or up to three years in prison.

Online platforms would have 48 hours to remove such images after “receiving a valid removal request from an identifiable individual (or an authorized person acting on behalf of such individual).”

“No man, woman, or child should be subjected to the spread of explicit AI images meant to target and harass innocent victims,” House Commerce Committee Chairman Brett Guthrie (R-Ky.) said in a press release. Guthrie’s press release included quotes supporting the bill from first lady Melania Trump, two teen girls who were victimized with deepfake nudes, and the mother of a boy whose death led to an investigation into a possible sextortion scheme.

Free speech concerns

The Electronic Frontier Foundation has been speaking out against the bill, saying “it could be easily manipulated to take down lawful content that powerful people simply don’t like.” The EFF pointed to Trump’s comments in an address to a joint session of Congress last month, in which he suggested he would use the bill for his own ends.

“Once it passes the House, I look forward to signing that bill into law. And I’m going to use that bill for myself too if you don’t mind, because nobody gets treated worse than I do online, nobody,” Trump said, drawing laughs from the crowd at Congress.

The EFF said, “Congress should believe Trump when he says he would use the Take It Down Act simply because he’s ‘treated badly,’ despite the fact that this is not the intention of the bill. There is nothing in the law, as written, to stop anyone—especially those with significant resources—from misusing the notice-and-takedown system to remove speech that criticizes them or that they disagree with.”

Free speech concerns were raised in a February letter to lawmakers sent by the Center for Democracy & Technology, the Authors Guild, Demand Progress Action, the EFF, Fight for the Future, the Freedom of the Press Foundation, New America’s Open Technology Institute, Public Knowledge, and TechFreedom.

The bill’s notice and takedown system “would result in the removal of not just nonconsensual intimate imagery but also speech that is neither illegal nor actually NDII [nonconsensual distribution of intimate imagery]… While the criminal provisions of the bill include appropriate exceptions for consensual commercial pornography and matters of public concern, those exceptions are not included in the bill’s takedown system,” the letter said.

The letter also said the bill could incentivize online platforms to use “content filtering that would break encryption.” The bill “excludes email and other services that do not primarily consist of user-generated content from the NTD [notice and takedown] system,” but “direct messaging services, cloud storage systems, and other similar services for private communication and storage, however, could be required to comply with the NTD,” the letter said.

The bill “contains serious threats to private messaging and free speech online—including requirements that would force companies to abandon end-to-end encryption so they can read and moderate your DMs,” Public Knowledge said today.

Democratic amendments voted down

Rep. Yvette Clarke (D-N.Y.) cast the only vote against the bill in yesterday’s House Commerce Committee hearing. But there were also several party-line votes against amendments submitted by Democrats.

Democrats raised concerns both about the bill not being enforced strictly enough and that bad actors could abuse the takedown process. The first concern is related to Trump firing both Democratic members of the Federal Trade Commission.

Rep. Kim Schrier (D-Wash.) called the Take It Down Act an “excellent law” but said, “right now it’s feeling like empty words because my Republican colleagues just stood by while the administration fired FTC commissioners, the exact people who enforce this law… it feels almost like my Republican colleagues are just giving a wink and a nod to the predators out there who are waiting to exploit kids and other innocent victims.”

Rep. Darren Soto (D-Fla.) offered an amendment to delay the bill’s effective date until the Democratic commissioners are restored to their positions. Ranking Member Frank Pallone, Jr. (D-N.J.) said that with a shorthanded FTC, “there’s going to be no enforcement of the Take It Down Act. There will be no enforcement of anything related to kids’ privacy.”

Rep. John James (R-Mich.) called the amendment a “thinly veiled delay tactic” and “nothing less than an attempt to derail this very important bill.” The amendment was defeated in a 28-22 vote.

Democrats support bill despite losing amendment votes

Rep. Debbie Dingell (D-Mich.) said she strongly supports the bill but offered an amendment that she said would tighten up the text and close loopholes. She said her amendment “ensures constitutionally protected speech is preserved by incorporating essential provisions for consensual content and matters of public concern. My goal is to protect survivors of abuse, not suppress lawful expression or shield misconduct from public accountability.”

Dingell’s amendment was also defeated in a 28-22 vote.

Pallone pitched an amendment that he said would “prevent bad actors from falsely claiming to be authorized from making takedown requests on behalf of someone else.” He called it a “common sense guardrail to protect against weaponization of this bill to take down images that are published with the consent of the subject matter of the images.” The amendment was rejected in a voice vote.

The bill was backed by RAINN (Rape, Abuse & Incest National Network), which praised the committee vote in a statement yesterday. “We’ve worked with fierce determination for the past year to bring Take It Down forward because we know—and survivors know—that AI-assisted sexual abuse is sexual abuse and real harm is being done; real pain is caused,” said Stefan Turkheimer, RAINN’s VP of public policy.

Cruz touted support for the bill from over 120 organizations and companies. The list includes groups like NCMEC (National Center for Missing & Exploited Children) and the National Center on Sexual Exploitation (NCOSE), along with various types of advocacy groups and tech companies Microsoft, Google, Meta, IBM, Amazon, and X Corp.

“As bad actors continue to exploit new technologies like generative artificial intelligence, the Take It Down Act is crucial for ending the spread of exploitative sexual material online, holding Big Tech accountable, and empowering victims of revenge and deepfake pornography,” Cruz said yesterday.

Jon is a Senior IT Reporter for Ars Technica. He covers the telecom industry, Federal Communications Commission rulemakings, broadband consumer affairs, court cases, and government regulation of the tech industry.

OpenAI helps spammers plaster 80,000 sites with messages that bypassed filters

“AkiraBot’s use of LLM-generated spam message content demonstrates the emerging challenges that AI poses to defending websites against spam attacks,” SentinelLabs researchers Alex Delamotte and Jim Walter wrote. “The easiest indicators to block are the rotating set of domains used to sell the Akira and ServiceWrap SEO offerings, as there is no longer a consistent approach in the spam message contents as there were with previous campaigns selling the services of these firms.”

AkiraBot worked by assigning the following role to OpenAI’s chat API using the model gpt-4o-mini: “You are a helpful assistant that generates marketing messages.” A prompt instructed the LLM to replace the variables with the site name provided at runtime. As a result, the body of each message named the recipient website by name and included a brief description of the service provided by it.

An AI Chat prompt used by AkiraBot. Credit: SentinelLabs

“The resulting message includes a brief description of the targeted website, making the message seem curated,” the researchers wrote. “The benefit of generating each message using an LLM is that the message content is unique and filtering against spam becomes more difficult compared to using a consistent message template which can trivially be filtered.”
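For context, a minimal sketch of that pattern using OpenAI’s Python client is below. The helper name, prompt wording, and parameters are illustrative only and are not taken from AkiraBot’s actual code.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_outreach_message(site_name: str, site_summary: str) -> str:
    """Illustrative only: a fixed system role plus a templated user prompt,
    with the target site's name substituted at runtime. Because the model
    rewrites each message, no two outputs share an exact template to filter on."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "You are a helpful assistant that generates marketing messages."},
            {"role": "user",
             "content": f"Write a short marketing message addressed to {site_name}, "
                        f"a site that {site_summary}."},
        ],
    )
    return response.choices[0].message.content
```

That per-site uniqueness is exactly what makes template-based spam filtering less effective, as the researchers note.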

SentinelLabs obtained log files AkiraBot left on a server to measure success and failure rates. One file showed that unique messages had been successfully delivered to more than 80,000 websites from September 2024 to January of this year. By comparison, messages targeting roughly 11,000 domains failed. OpenAI thanked the researchers and reiterated that such use of its chatbots runs afoul of its terms of service.

Story updated to modify headline.

After months of user complaints, Anthropic debuts new $200/month AI plan

Free: $0, free for everyone. Chat on web, iOS, and Android; generate code and visualize data; write, edit, and create content; analyze text and images.

Pro (for everyday productivity): $18 per month with an annual subscription ($216 billed up front), or $20 if billed monthly. Everything in Free, plus more usage, access to Projects to organize chats and documents, the ability to use more Claude models, and extended thinking for complex work.

Max (5x–20x more usage than Pro): from $100 per person, billed monthly. Everything in Pro, plus substantially more usage to work with Claude, usage that scales with specific needs, higher output limits for better and richer responses and Artifacts, early access to the most advanced Claude capabilities, and priority access during high-traffic periods.

A screenshot of various Claude pricing plans captured on April 9, 2025. Credit: Benj Edwards

Probably not coincidentally, the highest Max plan matches the price point of OpenAI’s $200 “Pro” plan for ChatGPT, which promises “unlimited” access to OpenAI’s models, including more advanced models like “o1-pro.” OpenAI introduced this plan in December as a higher tier above its $20 “ChatGPT Plus” subscription, first introduced in February 2023.

The pricing war between Anthropic and OpenAI reflects the resource-intensive nature of running state-of-the-art AI models. While consumer expectations push for unlimited access, the computing costs for running these models—especially with longer contexts and more complex reasoning—remain high. Both companies face the challenge of satisfying power users while keeping their services financially sustainable.

Other features of Claude Max

Beyond higher usage limits, Claude Max subscribers will also reportedly receive priority access to unspecified new features and models as they roll out. Max subscribers will also get higher output limits for “better and richer responses and Artifacts,” referring to Claude’s capability to create document-style outputs of varying lengths and complexity.

Users who subscribe to Max will also receive “priority access during high traffic periods,” suggesting Anthropic has implemented a tiered queue system that prioritizes its highest-paying customers during server congestion.

Anthropic’s full subscription lineup includes a free tier for basic access, the $18–$20 “Pro” tier for everyday use (depending on annual or monthly payment plans), and the $100–$200 “Max” tier for intensive usage. This somewhat mirrors OpenAI’s ChatGPT subscription structure, which offers free access, a $20 “Plus” plan, and a $200 “Pro” plan.

Anthropic says the new Max plan is available immediately in all regions where Claude operates.

Windows 11’s Copilot Vision wants to help you learn to use complicated apps

Some elements of Microsoft’s Copilot assistant in Windows 11 have felt like a solution in search of a problem—and it hasn’t helped that Microsoft has frequently changed Copilot’s capabilities, turning it from a native Windows app into a web app and back again.

But I find myself intrigued by a new addition to Copilot Vision that Microsoft began rolling out this week to testers in its Windows Insider program. Copilot Vision launched late last year as a feature that could look at pages in the Microsoft Edge browser and answer questions based on those pages’ contents. The new Vision update extends that capability to any app window, allowing you to ask Copilot not just about the contents of a document but also about the user interface of the app itself.

Microsoft’s Copilot Vision update can see the contents of any app window you share with it. Credit: Microsoft

Provided the app works as intended—not a given for any software, but especially for AI features—Copilot Vision could replace “frantic Googling” as a way to learn how to use a new app or how to do something new or obscure in complex PC apps like Word, Excel, or Photoshop. I recently switched from Photoshop to Affinity Photo, for example, and I’m still finding myself tripped up by small differences in workflows and UI between the two apps. Copilot Vision could, in theory, ease that sort of transition.

Carmack defends AI tools after Quake fan calls Microsoft AI demo “disgusting”

The current generative Quake II demo represents a slight advancement from Microsoft’s previous generative AI gaming model (confusingly titled “WHAM” with only one “M”) we covered in February. That earlier model, while showing progress in generating interactive gameplay footage, operated at 300×180 resolution at 10 frames per second—far below practical modern gaming standards. The new WHAMM demonstration doubles the resolution to 640×360. However, both remain well below what gamers expect from a functional video game in almost every conceivable way. It truly is an AI tech demo.

A Microsoft diagram of the WHAM system. Credit: Microsoft

For example, the technology faces substantial challenges beyond just performance metrics. Microsoft acknowledges several limitations, including poor enemy interactions, a short context length of just 0.9 seconds (meaning the system forgets objects outside its view), and unreliable numerical tracking for game elements like health values.

Which brings us to another point: A significant gap persists between the technology’s marketing portrayal and its practical applications. While industry veterans like Carmack and Sweeney view AI as another tool in the development arsenal, demonstrations like the Quake II instance may create inflated expectations about AI’s current capabilities for complete game generation.

The most realistic near-term application of generative AI technology remains as coding assistants and perhaps rapid prototyping tools for developers, rather than a drop-in replacement for traditional game development pipelines. The technology’s current limitations suggest that human developers will remain essential for creating compelling, polished game experiences for now. But given the general pace of progress, that might be small comfort for those who worry about losing jobs to AI in the near-term.

Ultimately, Sweeney says not to worry: “There’s always a fear that automation will lead companies to make the same old products while employing fewer people to do it,” Sweeney wrote in a follow-up post on X. “But competition will ultimately lead to companies producing the best work they’re capable of given the new tools, and that tends to mean more jobs.”

And Carmack closed with this: “Will there be more or less game developer jobs? That is an open question. It could go the way of farming, where labor-saving technology allow a tiny fraction of the previous workforce to satisfy everyone, or it could be like social media, where creative entrepreneurship has flourished at many different scales. Regardless, ‘don’t use power tools because they take people’s jobs’ is not a winning strategy.”

Meta’s surprise Llama 4 drop exposes the gap between AI ambition and reality

Meta constructed the Llama 4 models using a mixture-of-experts (MoE) architecture, which is one way around the limitations of running huge AI models. Think of MoE like having a large team of specialized workers; instead of everyone working on every task, only the relevant specialists activate for a specific job.

For example, Llama 4 Maverick features a 400 billion parameter size, but only 17 billion of those parameters are active at once across one of 128 experts. Likewise, Scout features 109 billion total parameters, but only 17 billion are active at once across one of 16 experts. This design can reduce the computation needed to run the model, since smaller portions of neural network weights are active simultaneously.
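As a rough illustration of the idea (not Meta’s implementation; the layer sizes, routing rule, and class names below are invented for the example), a toy top-1 mixture-of-experts layer in PyTorch could look like this:

```python
import torch
import torch.nn as nn

class ToyMoELayer(nn.Module):
    """Toy mixture-of-experts layer: a router scores each token and routes it
    to a single expert, so only that expert's weights are active for the token."""

    def __init__(self, d_model: int = 64, d_hidden: int = 256, num_experts: int = 8):
        super().__init__()
        self.router = nn.Linear(d_model, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(),
                          nn.Linear(d_hidden, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (num_tokens, d_model)
        gate = self.router(x).softmax(dim=-1)
        weight, expert_idx = gate.max(dim=-1)      # top-1 routing per token
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = expert_idx == i
            if mask.any():                         # only the chosen experts run
                out[mask] = weight[mask].unsqueeze(1) * expert(x[mask])
        return out

tokens = torch.randn(10, 64)
print(ToyMoELayer()(tokens).shape)  # torch.Size([10, 64])
```

Llama 4 scales this pattern up dramatically: Maverick routes each token across 128 experts and Scout across 16, which is why only about 17 billion of the total parameters are active for any given token.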

Llama’s reality check arrives quickly

Current AI models have relatively limited short-term memory. A model’s context window serves that role, determining how much information it can process at once. AI language models like Llama measure that memory in chunks of data called tokens, which can be whole words or fragments of longer words. Larger context windows allow AI models to process longer documents, larger code bases, and longer conversations.
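As a small illustration of how text maps to tokens, here is a snippet using OpenAI’s tiktoken tokenizer, purely as a stand-in; Llama models ship their own tokenizer, so the exact counts differ.

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
text = "Large context windows allow AI models to process longer documents."
tokens = enc.encode(text)

print(len(tokens))                             # how many tokens this sentence consumes
print([enc.decode([t]) for t in tokens[:6]])   # tokens are whole words or word fragments
```

A 10 million token window, as claimed for Scout, would correspond to several million words of input, which is why serving it in practice demands so much memory.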

Despite Meta’s promotion of Llama 4 Scout’s 10 million token context window, developers have so far found that using even a fraction of that amount is challenging due to memory limitations. AI researcher Simon Willison reported on his blog that third-party services providing access, like Groq and Fireworks, limited Scout’s context to just 128,000 tokens. Another provider, Together AI, offered 328,000 tokens.

Evidence suggests accessing larger contexts requires immense resources. Willison pointed to Meta’s own example notebook (“build_with_llama_4”), which states that running a 1.4 million token context needs eight high-end Nvidia H100 GPUs.

Willison documented his own testing troubles. When he asked Llama 4 Scout via the OpenRouter service to summarize a long online discussion (around 20,000 tokens), the result wasn’t useful. He described it as “complete junk output” that devolved into repetitive loops.
