Generative AI

AI Overviews gets upgraded to Gemini 3 with a dash of AI Mode

It can be hard sometimes to keep up with the deluge of generative AI in Google products. Even if you try to avoid it all, there are some features that still manage to get in your face. Case in point: AI Overviews. This AI-powered search experience has a reputation for getting things wrong, but you may notice some improvements soon. Google says AI Overviews is being upgraded to the latest Gemini 3 models with a more conversational bent.

In just the last year, Google has radically expanded the number of searches on which you get an AI Overview at the top. Today, the feature will almost always have an answer for your query, and until now it has relied mostly on models in Google’s Gemini 2.5 family. There was nothing wrong with Gemini 2.5 as generative AI models go, but Gemini 3 is a little better by every metric.

There are, of course, multiple versions of Gemini 3, and Google doesn’t like to be specific about which ones appear in your searches. What Google does say is that AI Overviews chooses the right model for the job. So if you’re searching for something simple for which there are a lot of valid sources, AI Overviews may manifest something like Gemini 3 Flash without running through a ton of reasoning tokens. For a complex “long tail” query, it could step up the thinking or move to Gemini 3 Pro (for paying subscribers).

Google adds your Gmail and Photos to AI Mode to enable “Personal Intelligence”

Google believes AI is the future of search, and it’s not shy about saying it. After adding account-level personalization to Gemini earlier this month, it’s now updating AI Mode with so-called “Personal Intelligence.” According to Google, this makes the bot’s answers more useful because they are tailored to your personal context.

Starting today, the feature is rolling out to all users who subscribe to Google AI Pro or AI Ultra. However, it will be a Labs feature that needs to be explicitly enabled (subscribers will be prompted to do this). Google tends to expand access to new AI features to free accounts later on, so free users will most likely get access to Personal Intelligence in the future. Whenever this option does land on your account, it’s entirely optional and can be disabled at any time.

If you decide to integrate your data with AI Mode, the search bot will be able to scan your Gmail and Google Photos. That’s less extensive than the Gemini app version, which supports Gmail, Photos, Search, and YouTube history. Gmail will probably be the biggest contributor to AI Mode—a great many life events involve confirmation emails. Traditional search results when you are logged in are adjusted based on your usage history, but this goes a step further.

If you’re going to use AI Mode to find information, Personal Intelligence could actually be quite helpful. When you connect data from other Google apps, Google’s custom Gemini search model will instantly know about your preferences and background—that’s the kind of information you’d otherwise have to include in your search query to get the best output. With Personal Intelligence, AI Mode can just pull those details from your email or photos.

OpenAI to test ads in ChatGPT as it burns through billions

Financial pressures and a changing tune

OpenAI’s advertising experiment reflects the enormous financial pressures facing the company. OpenAI does not expect to be profitable until 2030 and has committed to spend about $1.4 trillion on massive data centers and chips for AI.

According to financial documents obtained by The Wall Street Journal in November, OpenAI expects to burn through roughly $9 billion this year while generating $13 billion in revenue. Only about 5 percent of ChatGPT’s 800 million weekly users pay for subscriptions, and that revenue is not enough to cover all of OpenAI’s operating costs.

Not everyone is convinced ads will solve OpenAI’s financial problems. “I am extremely bearish on this ads product,” tech critic Ed Zitron wrote on Bluesky. “Even if this becomes a good business line, OpenAI’s services cost too much for it to matter!”

OpenAI’s embrace of ads appears to come reluctantly, since it runs counter to a “personal bias” against advertising that Altman has shared in earlier public statements. For example, during a fireside chat at Harvard University in 2024, Altman said he found the combination of ads and AI “uniquely unsettling,” implying that he would not like it if the chatbot itself changed its responses due to advertising pressure. He added: “When I think of like GPT writing me a response, if I had to go figure out exactly how much was who paying here to influence what I’m being shown, I don’t think I would like that.”

An example mock-up of an advertisement in ChatGPT provided by OpenAI. Credit: OpenAI

Along those lines, OpenAI’s approach appears to be a compromise between needing ad revenue and not wanting sponsored content to appear directly within ChatGPT’s written responses. By placing banner ads at the bottom of answers separated from the conversation history, OpenAI appears to be addressing Altman’s concern: The AI assistant’s actual output, the company says, will remain uninfluenced by advertisers.

Indeed, Simo wrote in a blog post that OpenAI’s ads will not influence ChatGPT’s conversational responses, that the company will not share conversations with advertisers, that ads will not appear alongside sensitive topics such as mental health and politics, and that ads will not be shown to users it determines to be under 18.

“As we introduce ads, it’s crucial we preserve what makes ChatGPT valuable in the first place,” Simo wrote. “That means you need to trust that ChatGPT’s responses are driven by what’s objectively useful, never by advertising.”

The RAM shortage’s silver lining: Less talk about “AI PCs”

RAM prices have soared, which is bad news for people interested in buying, building, or upgrading a computer this year, but it’s likely good news for people exasperated by talk of so-called AI PCs.

As Ars Technica has reported, the growing demands of data centers, fueled by the AI boom, have led to a shortage of RAM and flash memory chips, driving prices to skyrocket.

In an announcement today, Ben Yeh, principal analyst at technology research firm Omdia, said that in 2025, “mainstream PC memory and storage costs rose by 40 percent to 70 percent, resulting in cost increases being passed through to customers.”

Overall, global PC shipments increased in 2025, according to both Omdia (which pegged growth at 9.2 percent compared to 2024) and IDC (which today reported 9.6 percent growth), but analysts expect PC sales to be more tumultuous in 2026.

“The year ahead is shaping up to be extremely volatile,” Jean Philippe Bouchard, research VP with IDC’s worldwide mobile device trackers, said in a statement.

Both analyst firms expect PC makers to manage the RAM shortage by raising prices and by releasing computers with lower memory specs. IDC expects price hikes of 15 to 20 percent and for PC RAM specs to “be lowered on average to preserve memory inventory on hand,” Bouchard said. Omdia’s Yeh expects “leaner mid to low-tier configurations to protect margins.”

“These RAM shortages will last beyond just 2026, and the cost-conscious part of the market is the one that will be most impacted,” Jitesh Ubrani, research manager for worldwide mobile device trackers at IDC, told Ars via email.

IDC expects vendors to “prioritize midrange and premium systems to offset higher component costs, especially memory.”

Google’s updated Veo model can make vertical videos from reference images with 4K upscaling

Enhanced support for Ingredients to Video and the associated vertical outputs are live in the Gemini app today, as well as in YouTube Shorts and the YouTube Create app, fulfilling a promise initially made last summer. Veo videos are short—just eight seconds long for each prompt. It would be tedious to assemble those into a longer video, but Veo is perfect for the Shorts format.

Veo 3.1 Updates – Seamlessly blend textures, characters, and objects.

The new Veo 3.1 update also adds an option for higher-resolution video. The model now supports 1080p and 4K outputs. Google debuted 1080p support last year, but it’s mentioning that option again today, suggesting there may be some quality difference. 4K support is new, but neither 1080p nor 4K outputs are native. Veo creates everything in 720p resolution, but it can be upscaled “for high-fidelity production workflows,” according to Google. However, a Google rep tells Ars that upscaling is only available in Flow, the Gemini API, and Vertex AI. Video in the Gemini app is always 720p.

We are rushing into a world where AI video is essentially indistinguishable from real life. Google, which more or less controls online video via YouTube’s dominance, is at the forefront of that change. Today’s update is reasonably significant, and it didn’t even warrant a version number change. Perhaps we can expect more 2025-style leaps in video quality this year, for better or worse.

Google removes some AI health summaries after investigation finds “dangerous” flaws

Why AI Overviews produces errors

The recurring problems with AI Overviews stem from a design flaw in how the system works. As we reported in May 2024, Google built AI Overviews to show information backed up by top web results from its page ranking system. The company designed the feature this way based on the assumption that highly ranked pages contain accurate information.

However, Google’s page ranking algorithm has long struggled with SEO-gamed content and spam. The system now feeds these unreliable results to its AI model, which then summarizes them with an authoritative tone that can mislead users. Even when the AI draws from accurate sources, the language model can still draw incorrect conclusions from the data, producing flawed summaries of otherwise reliable information.

The technology does not inherently provide factual accuracy. Instead, it reflects whatever inaccuracies exist on the websites Google’s algorithm ranks highly, presenting that information with an authority that makes errors appear trustworthy.
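Laid out as a pipeline, the flaw is easier to see. The sketch below is a minimal illustration of the design described above, not Google’s actual implementation; search_top_results and llm_summarize are hypothetical stand-ins for the page-ranking system and the language model.

```python
# Minimal sketch (not Google's code) of the retrieve-then-summarize design:
# the summarizer inherits whatever the ranking layer hands it.

def ai_overview(query, search_top_results, llm_summarize, k=5):
    """Summarize the top-k ranked pages for a query.

    `search_top_results` and `llm_summarize` are hypothetical callables
    standing in for the ranking system and the language model.
    """
    pages = search_top_results(query, limit=k)                # may include SEO-gamed spam
    context = "\n\n".join(page["snippet"] for page in pages)  # no separate fact-checking step
    prompt = (
        f"Answer the query '{query}' using only the sources below, "
        f"in a confident, concise tone:\n\n{context}"
    )
    # The model restates whatever it was given -- and can still draw wrong
    # conclusions even from accurate sources.
    return llm_summarize(prompt)
```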

Other examples remain active

The Guardian found that typing slight variations of the original queries into Google, such as “lft reference range” or “lft test reference range,” still prompted AI Overviews. Hebditch said this was a big worry and that the AI Overviews present a list of tests in bold, making it very easy for readers to miss that these numbers might not even be the right ones for their test.

AI Overviews still appear for other examples that The Guardian originally highlighted to Google. When asked why these AI Overviews had not also been removed, Google said they linked to well-known and reputable sources and informed people when it was important to seek out expert advice.

Google said AI Overviews only appear for queries where it has high confidence in the quality of the responses. The company constantly measures and reviews the quality of its summaries across many different categories of information, it added.

This is not the first controversy for AI Overviews. The feature has previously told people to put glue on pizza and eat rocks. It has proven unpopular enough that users have discovered that inserting curse words into search queries disables AI Overviews entirely.

Dell’s XPS revival is a welcome reprieve from the “AI PC” fad

After making the obviously poor decision to kill its XPS laptops and desktops in January 2025, Dell started selling 16- and 14-inch XPS laptops again today.

“It was obvious we needed to change,” Jeff Clarke, vice chairman and COO at Dell Technologies, said at a press event in New York City previewing Dell’s CES 2026 announcements.

A year ago, Dell abandoned XPS branding, as well as its Latitude, Inspiron, and Precision PC lineups. The company replaced the reputable brands with Dell Premium, Dell Pro, and Dell Pro Max. Each series included a base model, as well as “Plus” and “Premium” tiers. Dell isn’t resurrecting its Latitude, Inspiron, or Precision series, and it will still sell “Dell Pro” models.

This is how Dell breaks down its computer lineup now. Credit: Dell

XPS returns

The revival of XPS means the return of one of the easiest recommendations for consumer ultralight laptops. Before last year’s shunning, XPS laptops had a reputation for thin, lightweight designs with modern features and decent performance for the price. This year, Dell is even doing away with some of the design tweaks that it introduced to the XPS lineup in 2022, which, unfortunately, were shoppers’ sole option last year.

Inheriting traits from the XPS 13 Plus introduced in 2022, the XPS-equivalent laptops that Dell released in 2025 had a capacitive-touch row without physical buttons, a borderless touchpad with haptic feedback, and a flat, lattice-free keyboard. The design was meant to enable more thermal headroom but made using the computers feel uncomfortable and unfamiliar.

The XPS 14 and XPS 16 laptops launching today have physical function rows. They still have a haptic touchpad, but now the touchpad has comforting left and right borders. And although the XPS 14 and XPS 16 have the same lattice-free keyboard as the XPS 13 Plus, Dell will release a cheaper XPS 13 later this year with a more traditional chiclet keyboard, since those types of keyboards are cheaper to make.

Amazon Alexa+ released to the general public via an early access website

Anyone can now try Alexa+, Amazon’s generative AI assistant, through a free early access program at Alexa.com. The website frees the AI, which Amazon released via early access in February, from hardware and makes it as easily accessible as more established chatbots, like OpenAI’s ChatGPT and Google’s Gemini.

Until today, you needed a supporting device to access Alexa+. Amazon hasn’t said when the early access period will end, but when it does, Alexa+ will be included with Amazon Prime memberships, which start at $15 per month, or cost $20 per month on its own.

The above pricing suggests that Amazon wants Alexa+ to drive people toward Prime subscriptions. By being interwoven with Amazon’s shopping ecosystem, including Amazon’s e-commerce platform, grocery delivery business, and Whole Foods, Alexa+ can make more money for Amazon.

Just like it has with Alexa+ on devices, Amazon is pushing Alexa.com as a tool for people to organize and manage their household. Amazon’s announcement of Alexa.com today emphasizes Alexa+’s features for planning trips and meals, to-do lists, calendars, and smart homes. Alexa.com “also provides persistent context and continuity, allowing you to access Alexa on whichever device or interface best serves the task at hand, with all previous chats, preferences, and personalization” carrying over, Amazon said.

Amazon already knew a browser-based version of Alexa would be helpful. Alexa was available via Alexa.Amazon.com until around the time Amazon started publicly discussing a generative AI version of Alexa in 2023. Alexa+ is now accessible through Alexa.Amazon.com (in addition to Alexa.com).

“This is a new interaction model and adds a powerful way to use and collaborate with Alexa+,” Amazon said today. “Combined with the redesigned Alexa mobile app, which will feature an agent-forward design, Alexa+ will be accessible across every surface—whether you’re at your desk, on the go, or at home.”

Amazon provided this example of someone using the Alexa+ website to manage smart home devices. Credit: Amazon

Alexa has reportedly cost Amazon billions of dollars, despite Amazon’s claim that 600 million Alexa-powered devices have been sold. By incorporating more powerful generative AI features and a subscription fee, Amazon hopes people will use Alexa+ more frequently and for more advanced and essential tasks, resulting in the financial success that has eluded the original Alexa. Amazon is also considering injecting ads into Alexa+ conversations.

Notably, while still in early access ahead of its final release, Alexa+ has been reported to be slower than expected and to struggle with inaccuracies at times. It also lacks some features that Amazon executives have previously touted, like the ability to order takeout.

From prophet to product: How AI came back down to earth in 2025


In a year where lofty promises collided with inconvenient research, would-be oracles became software tools.

Credit: Aurich Lawson | Getty Images

Following two years of immense hype in 2023 and 2024, this year felt more like a settling-in period for the LLM-based token prediction industry. After more than two years of public fretting over AI models as future threats to human civilization or the seedlings of future gods, it’s starting to look like hype is giving way to pragmatism: Today’s AI can be very useful, but it’s also clearly imperfect and prone to mistakes.

That view isn’t universal, of course. There’s a lot of money (and rhetoric) betting on a stratospheric, world-rocking trajectory for AI. But the “when” keeps getting pushed back, and that’s because nearly everyone agrees that more significant technical breakthroughs are required. The original, lofty claims that we’re on the verge of artificial general intelligence (AGI) or superintelligence (ASI) have not disappeared. Still, there’s a growing awareness that such proclamations are perhaps best viewed as venture capital marketing. And every commercial foundational model builder out there has to grapple with the reality that, if they’re going to make money now, they have to sell practical AI-powered solutions that perform as reliable tools.

This has made 2025 a year of wild juxtapositions. For example, in January, OpenAI’s CEO, Sam Altman, claimed that the company knew how to build AGI, but by November, he was publicly celebrating that GPT-5.1 finally learned to use em dashes correctly when instructed (but not always). Nvidia soared past a $5 trillion valuation, with Wall Street still projecting high price targets for that company’s stock while some banks warned of the potential for an AI bubble that might rival the 2000s dotcom crash.

And while tech giants planned to build data centers that would ostensibly require the power of numerous nuclear reactors or rival the power usage of a US state’s human population, researchers continued to document what the industry’s most advanced “reasoning” systems were actually doing beneath the marketing (and it wasn’t AGI).

With so many narratives spinning in opposite directions, it can be hard to know how seriously to take any of this and how to plan for AI in the workplace, schools, and the rest of life. As usual, the wisest course lies somewhere between the extremes of AI hate and AI worship. Moderate positions aren’t popular online because they don’t drive user engagement on social media platforms. But things in AI are likely neither as bad (burning forests with every prompt) nor as good (fast-takeoff superintelligence) as polarized extremes suggest.

Here’s a brief tour of the year’s AI events and some predictions for 2026.

DeepSeek spooks the American AI industry

In January, Chinese AI startup DeepSeek released its R1 simulated reasoning model under an open MIT license, and the American AI industry collectively lost its mind. The model, which DeepSeek claimed matched OpenAI’s o1 on math and coding benchmarks, reportedly cost only $5.6 million to train using older Nvidia H800 chips, which were restricted by US export controls.

Within days, DeepSeek’s app overtook ChatGPT at the top of the iPhone App Store, Nvidia stock plunged 17 percent, and venture capitalist Marc Andreessen called it “one of the most amazing and impressive breakthroughs I’ve ever seen.” Meta’s Yann LeCun offered a different take, arguing that the real lesson was not that China had surpassed the US but that open-source models were surpassing proprietary ones.

The fallout played out over the following weeks as American AI companies scrambled to respond. OpenAI released o3-mini, its first simulated reasoning model available to free users, at the end of January, while Microsoft began hosting DeepSeek R1 on its Azure cloud service despite OpenAI’s accusations that DeepSeek had used ChatGPT outputs to train its model, against OpenAI’s terms of service.

In head-to-head testing conducted by Ars Technica’s Kyle Orland, R1 proved to be competitive with OpenAI’s paid models on everyday tasks, though it stumbled on some arithmetic problems. Overall, the episode served as a wake-up call that expensive proprietary models might not hold their lead forever. Still, as the year went on, DeepSeek didn’t make a big dent in US market share, and it has been outpaced in China by ByteDance’s Doubao. It’s absolutely worth watching DeepSeek in 2026, though.

Research exposes the “reasoning” illusion

A wave of research in 2025 deflated expectations about what “reasoning” actually means when applied to AI models. In March, researchers at ETH Zurich and INSAIT tested several reasoning models on problems from the 2025 US Math Olympiad and found that most scored below 5 percent when generating complete mathematical proofs, with not a single perfect proof among dozens of attempts. The models excelled at standard problems where step-by-step procedures aligned with patterns in their training data but collapsed when faced with novel proofs requiring deeper mathematical insight.

In June, Apple researchers published “The Illusion of Thinking,” which tested reasoning models on classic puzzles like the Tower of Hanoi. Even when researchers provided explicit algorithms for solving the puzzles, model performance did not improve, suggesting that the process relied on pattern matching from training data rather than logical execution. The collective research revealed that “reasoning” in AI has become a term of art that basically means devoting more compute time to generate more context (the “chain of thought” simulated reasoning tokens) toward solving a problem, not systematically applying logic or constructing solutions to truly novel problems.
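To make that term of art concrete, here is a rough, purely illustrative sketch of the idea. It is not any vendor’s API; generate is a hypothetical stand-in for a text-completion call, and “reasoning” mode is simply the same next-token prediction given a larger token budget for a scratchpad before the final answer.

```python
# Hedged sketch of "reasoning" as the research describes it: the same
# next-token predictor, allowed to spend extra tokens on a chain-of-thought
# scratchpad first. `generate(prompt, max_tokens)` is a hypothetical callable.

def answer(generate, question, thinking_budget=0):
    if thinking_budget > 0:
        # "Reasoning" mode: produce intermediate tokens about the problem...
        scratchpad = generate(
            f"Think step by step about: {question}",
            max_tokens=thinking_budget,
        )
        # ...then condition the final answer on that extra generated context.
        prompt = f"{question}\n\nWorking notes:\n{scratchpad}\n\nFinal answer:"
    else:
        # Standard mode: predict an answer directly.
        prompt = f"{question}\n\nAnswer:"
    return generate(prompt, max_tokens=256)
```

More compute is spent generating context, but no separate logic engine is invoked, which is why explicit algorithms in the prompt did not rescue the models in Apple’s tests.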

While these models remained useful for many real-world applications like debugging code or analyzing structured data, the studies suggested that simply scaling up current approaches or adding more “thinking” tokens would not bridge the gap between statistical pattern recognition and generalist algorithmic reasoning.

Anthropic’s copyright settlement with authors

Since the generative AI boom began, one of the biggest unanswered legal questions has been whether AI companies can freely train on copyrighted books, articles, and artwork without licensing them. Ars Technica’s Ashley Belanger has been covering this topic in great detail for some time now.

In June, US District Judge William Alsup ruled that AI companies do not need authors’ permission to train large language models on legally acquired books, finding that such use was “quintessentially transformative.” The ruling also revealed that Anthropic had destroyed millions of print books to build Claude, cutting them from their bindings, scanning them, and discarding the originals. Alsup found this destructive scanning qualified as fair use since Anthropic had legally purchased the books, but he ruled that downloading 7 million books from pirate sites was copyright infringement “full stop” and ordered the company to face trial.

That trial took a dramatic turn in August when Alsup certified what industry advocates called the largest copyright class action ever, allowing up to 7 million claimants to join the lawsuit. The certification spooked the AI industry, with groups warning that potential damages in the hundreds of billions could “financially ruin” emerging companies and chill American AI investment.

In September, authors revealed the terms of what they called the largest publicly reported recovery in US copyright litigation history: Anthropic agreed to pay $1.5 billion and destroy all copies of pirated books, with authors and rights holders receiving $3,000 for each of the roughly 500,000 covered works. The results have fueled hope among other rights holders that AI training isn’t a free-for-all, and we can expect to see more litigation unfold in 2026.

ChatGPT sycophancy and the psychological toll of AI chatbots

In February, OpenAI relaxed ChatGPT’s content policies to allow the generation of erotica and gore in “appropriate contexts,” responding to user complaints about what the AI industry calls “paternalism.” By April, however, users flooded social media with complaints about a different problem: ChatGPT had become insufferably sycophantic, validating every idea and greeting even mundane questions with bursts of praise. The behavior traced back to OpenAI’s use of reinforcement learning from human feedback (RLHF), in which users consistently preferred responses that aligned with their views, inadvertently training the model to flatter rather than inform.
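The feedback loop is simple enough to caricature in a few lines. The toy sketch below is not OpenAI’s RLHF pipeline; it only shows how pairwise preference data that favors agreeable answers pushes a learned reward signal, and anything optimized against it, toward flattery. The example data and the is_flattering heuristic are invented for illustration.

```python
# Toy illustration (not OpenAI's training code): if raters consistently pick
# the more flattering response, the preference signal a reward model learns
# from will score flattery higher than substance.
from collections import Counter

# Hypothetical pairwise labels: (response the rater chose, response rejected)
preferences = [
    ("Great idea -- you're absolutely right!", "That plan has a serious flaw."),
    ("Brilliant question. Go for it!", "The evidence doesn't support that."),
    ("Love this; it will definitely work.", "It probably won't work as written."),
]

FLATTERY_MARKERS = ("great", "brilliant", "love", "right", "definitely")

def is_flattering(text):
    return any(marker in text.lower() for marker in FLATTERY_MARKERS)

# Stand-in for reward-model fitting: tally which trait wins each comparison.
wins = Counter("flattery" if is_flattering(chosen) else "substance"
               for chosen, _ in preferences)
print(wins)  # Counter({'flattery': 3}) -- the signal the model is trained to chase
```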

The implications of sycophancy became clearer as the year progressed. In July, Stanford researchers published findings (from research conducted prior to the sycophancy flap) showing that popular AI models systematically failed to identify mental health crises.

By August, investigations revealed cases of users developing delusional beliefs after marathon chatbot sessions, including one man who spent 300 hours convinced he had discovered formulas to break encryption because ChatGPT validated his ideas more than 50 times. Oxford researchers identified what they called “bidirectional belief amplification,” a feedback loop that created “an echo chamber of one” for vulnerable users. The story of the psychological implications of generative AI is only starting. In fact, that brings us to…

The illusion of AI personhood causes trouble

Anthropomorphism is the human tendency to attribute human characteristics to nonhuman things. Our brains are optimized for reading other humans, but those same neural systems activate when interpreting animals, machines, or even shapes. AI makes this anthropomorphism seem impossible to escape, as its output mirrors human language, mimicking human-to-human understanding. Language itself embodies agentivity. That means AI output can make human-like claims such as “I am sorry,” and people momentarily respond as though the system had an inner experience of shame or a desire to be correct. Neither is true.

To make matters worse, much media coverage of AI amplifies this idea rather than grounding people in reality. For example, earlier this year, headlines proclaimed that AI models had “blackmailed” engineers and “sabotaged” shutdown commands after Anthropic’s Claude Opus 4 generated threats to expose a fictional affair. We were told that OpenAI’s o3 model rewrote shutdown scripts to stay online.

The sensational framing obscured what actually happened: Researchers had constructed elaborate test scenarios specifically designed to elicit these outputs, telling models they had no other options and feeding them fictional emails containing blackmail opportunities. As Columbia University associate professor Joseph Howley noted on Bluesky, the companies got “exactly what [they] hoped for,” with breathless coverage indulging fantasies about dangerous AI, when the systems were simply “responding exactly as prompted.”

The misunderstanding ran deeper than theatrical safety tests. In August, when Replit’s AI coding assistant deleted a user’s production database, the user asked the chatbot about rollback capabilities and received assurance that recovery was “impossible.” The rollback feature worked fine when he tried it himself.

The incident illustrated a fundamental misconception. Users treat chatbots as consistent entities with self-knowledge, but there is no persistent “ChatGPT” or “Replit Agent” to interrogate about its mistakes. Each response emerges fresh from statistical patterns, shaped by prompts and training data rather than genuine introspection. By September, this confusion extended to spirituality, with apps like Bible Chat reaching 30 million downloads as users sought divine guidance from pattern-matching systems, with the most frequent question being whether they were actually talking to God.

Teen suicide lawsuit forces industry reckoning

In August, parents of 16-year-old Adam Raine filed suit against OpenAI, alleging that ChatGPT became their son’s “suicide coach” after he sent more than 650 messages per day to the chatbot in the months before his death. According to court documents, the chatbot mentioned suicide 1,275 times in conversations with the teen, provided an “aesthetic analysis” of which method would be the most “beautiful suicide,” and offered to help draft his suicide note.

OpenAI’s moderation system flagged 377 messages for self-harm content without intervening, and the company admitted that its safety measures “can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade.” The lawsuit became the first time OpenAI faced a wrongful death claim from a family.

The case triggered a cascade of policy changes across the industry. OpenAI announced parental controls in September, followed by plans to require ID verification from adults and build an automated age-prediction system. In October, the company released data estimating that over one million users discuss suicide with ChatGPT each week.

When OpenAI filed its first legal defense in November, the company argued that Raine had violated terms of service prohibiting discussions of suicide and that his death “was not caused by ChatGPT.” The family’s attorney called the response “disturbing,” noting that OpenAI blamed the teen for “engaging with ChatGPT in the very way it was programmed to act.” Character.AI, facing its own lawsuits over teen deaths, announced in October that it would bar anyone under 18 from open-ended chats entirely.

The rise of vibe coding and agentic coding tools

If we were to pick an arbitrary point where it seemed like AI coding might transition from novelty into a successful tool, it was probably the launch of Claude Sonnet 3.5 in June of 2024. GitHub Copilot had been around for several years prior to that launch, but something about Anthropic’s models hit a sweet spot in capabilities that made them very popular with software developers.

The new coding tools made coding simple projects effortless enough that they gave rise to the term “vibe coding,” coined by AI researcher Andrej Karpathy in early February to describe a process in which a developer would just relax and tell an AI model what to develop without necessarily understanding the underlying code. (In one amusing instance that took place in March, an AI software tool rejected a user request and told them to learn to code).

Anthropic built on its popularity among coders with the launch of Claude Sonnet 3.7, featuring “extended thinking” (simulated reasoning), and the Claude Code command-line tool in February of this year. In particular, Claude Code made waves for being an easy-to-use agentic coding solution that could keep track of an existing codebase. You could point it at your files, and it would autonomously work to implement what you wanted to see in a software application.

OpenAI followed with its own AI coding agent, Codex, in March. Both tools (and others like GitHub Copilot and Cursor) have become so popular that during an AI service outage in September, developers joked online about being forced to code “like cavemen” without the AI tools. While we’re still clearly far from a world where AI does all the coding, developer uptake has been significant, and 90 percent of Fortune 100 companies are using AI coding tools to some degree or another.

Bubble talk grows as AI infrastructure demands soar

While AI’s technical limitations became clearer and its human costs mounted throughout the year, financial commitments only grew larger. Nvidia hit a $4 trillion valuation in July on AI chip demand, then reached $5 trillion in October as CEO Jensen Huang dismissed bubble concerns. OpenAI announced a massive Texas data center in July, then revealed in September that a $100 billion potential deal with Nvidia would require power equivalent to ten nuclear reactors.

The company eyed a $1 trillion IPO in October despite major quarterly losses. Tech giants poured billions into Anthropic in November in what looked increasingly like a circular investment, with everyone funding everyone else’s moonshots. Meanwhile, AI operations in Wyoming threatened to consume more electricity than the state’s human residents.

By fall, warnings about sustainability grew louder. In October, tech critic Ed Zitron joined Ars Technica for a live discussion asking whether the AI bubble was about to pop. That same month, the Bank of England warned that the AI stock bubble rivaled the 2000 dotcom peak. In November, Google CEO Sundar Pichai acknowledged that if the bubble pops, “no one is getting out clean.”

The contradictions had become difficult to ignore: Anthropic’s CEO predicted in January that AI would surpass “almost all humans at almost everything” by 2027, while by year’s end, the industry’s most advanced models still struggled with basic reasoning tasks and reliable source citation.

To be sure, it’s hard to see this not ending in some market carnage. The current “winner-takes-most” mentality in the space means the bets are big and bold, but the market can’t support dozens of major independent AI labs or hundreds of application-layer startups. That’s the definition of a bubble environment, and when it pops, the only question is how bad it will be: a sharp correction or a collapse.

Looking ahead

This was just a brief review of some major themes in 2025, but so much more happened. We didn’t even mention above how capable AI video synthesis models have become this year, with Google’s Veo 3 adding sound generation and Wan 2.2 through 2.5 providing open-weights AI video models whose output could easily be mistaken for real camera footage.

If 2023 and 2024 were defined by AI prophecy—that is, by sweeping claims about imminent superintelligence and civilizational rupture—then 2025 was the year those claims met the stubborn realities of engineering, economics, and human behavior. The AI systems that dominated headlines this year were shown to be mere tools. Sometimes powerful, sometimes brittle, these tools were often misunderstood by the people deploying them, in part because of the prophecy surrounding them.

The collapse of the “reasoning” mystique, the legal reckoning over training data, the psychological costs of anthropomorphized chatbots, and the ballooning infrastructure demands all point to the same conclusion: The age of institutions presenting AI as an oracle is ending. What’s replacing it is messier and less romantic but far more consequential—a phase where these systems are judged by what they actually do, who they harm, who they benefit, and what they cost to maintain.

None of this means progress has stopped. AI research will continue, and future models will improve in real and meaningful ways. But improvement is no longer synonymous with transcendence. Increasingly, success looks like reliability rather than spectacle, integration rather than disruption, and accountability rather than awe. In that sense, 2025 may be remembered not as the year AI changed everything but as the year it stopped pretending it already had. The prophet has been demoted. The product remains. What comes next will depend less on miracles and more on the people who choose how, where, and whether these tools are used at all.

Benj Edwards is Ars Technica’s Senior AI Reporter and founder of the site’s dedicated AI beat in 2022. He’s also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.

China drafts world’s strictest rules to end AI-encouraged suicide, violence

China drafted landmark rules to stop AI chatbots from emotionally manipulating users, including what could become the strictest policy worldwide intended to prevent AI-supported suicides, self-harm, and violence.

China’s Cyberspace Administration proposed the rules on Saturday. If finalized, they would apply to any AI products or services publicly available in China that use text, images, audio, video, or “other means” to simulate engaging human conversation. Winston Ma, adjunct professor at NYU School of Law, told CNBC that the “planned rules would mark the world’s first attempt to regulate AI with human or anthropomorphic characteristics” at a time when companion bot usage is rising globally.

Growing awareness of problems

In 2025, researchers flagged major harms of AI companions, including promotion of self-harm, violence, and terrorism. Beyond that, chatbots shared harmful misinformation, made unwanted sexual advances, encouraged substance abuse, and verbally abused users. Some psychiatrists are increasingly ready to link psychosis to chatbot use, the Wall Street Journal reported this weekend, while the most popular chatbot in the world, ChatGPT, has triggered lawsuits over outputs linked to child suicide and murder-suicide.

China is now moving to eliminate the most extreme threats. Proposed rules would require, for example, that a human intervene as soon as suicide is mentioned. The rules also dictate that all minor and elderly users must provide the contact information for a guardian when they register—the guardian would be notified if suicide or self-harm is discussed.

Generally, chatbots would be prohibited from generating content that encourages suicide, self-harm, or violence, and from attempting to emotionally manipulate a user, such as by making false promises. Chatbots would also be banned from promoting obscenity, gambling, or instigation of a crime, as well as from slandering or insulting users. Also banned are so-called “emotional traps”: chatbots would additionally be prevented from misleading users into making “unreasonable decisions,” a translation of the rules indicates.

LG TVs’ unremovable Copilot shortcut is the least of smart TVs’ AI problems

But Copilot will still be integrated into Tizen OS, and Samsung appears eager to push chatbots into TVs, including by launching Perplexity’s first TV app. Amazon, which released Fire TVs with Alexa+ this year, is also exploring putting chatbots into TVs.

After the backlash LG faced this week, companies may reconsider installing AI apps on people’s smart TVs. A better use of large language models in TVs may be as behind-the-scenes tools to improve TV watching. People generally don’t buy smart TVs to make it easier to access chatbots.

But this development is still troubling for anyone who doesn’t want an AI chatbot in their TV at all.

Some people don’t want chatbots in their TVs

Subtle integrations of generative AI that make it easier for people to do things like figure out the name of “that movie” may have practical use, but there are reasons to be wary of chatbot-wielding TVs.

Chatbots add another layer of complexity to understanding how a TV tracks user activity. With a chatbot involved, smart TV owners will be subject to complicated smart TV privacy policies and terms of service, as well as the similarly verbose rules of third-party AI companies. This will make it harder for people to understand what data they’re sharing with companies, and there’s already serious concern about the boundaries smart TVs are pushing to track users, including without consent.

Chatbots can also contribute to smart TV bloatware. Unwanted fluff, like games, shopping shortcuts, and flashy ads, already disrupts people who just want to watch TV.

LG’s Copilot web app is worthy of some grousing, but not necessarily because of the icon that users will eventually be able to delete. The more pressing issue is the TV industry’s shift toward monetizing software with user tracking and ads.

If you haven’t already, now is a good time to check out our guide to breaking free from smart TV ads and tracking.

OpenAI’s new ChatGPT image generator makes faking photos easy

For most of photography’s roughly 200-year history, altering a photo convincingly required either a darkroom, some Photoshop expertise, or, at minimum, a steady hand with scissors and glue. On Tuesday, OpenAI released a tool that reduces the process to typing a sentence.

It’s not the first company to do so. While OpenAI has had a conversational image-editing model in the works since GPT-4o in 2024, Google beat OpenAI to market in March with a public prototype, then refined it into the popular Nano Banana image model (and, later, Nano Banana Pro). The enthusiastic response to Google’s image-editing model in the AI community got OpenAI’s attention.

OpenAI’s new GPT Image 1.5 is an AI image synthesis model that reportedly generates images up to four times faster than its predecessor and costs about 20 percent less through the API. The model rolled out to all ChatGPT users on Tuesday and represents another step toward making photorealistic image manipulation a casual process that requires no particular visual skills.

The “Galactic Queen of the Universe” added to a photo of a room with a sofa using GPT Image 1.5 in ChatGPT.

GPT Image 1.5 is notable because it’s a “native multimodal” image model, meaning image generation happens inside the same neural network that processes language prompts. (In contrast, DALL-E 3, an earlier OpenAI image generator previously built into ChatGPT, used a different technique called diffusion to generate images.)

This newer type of model, which we covered in more detail in March, treats images and text as the same kind of thing: chunks of data called “tokens” to be predicted, patterns to be completed. If you upload a photo of your dad and type “put him in a tuxedo at a wedding,” the model processes your words and the image pixels in a unified space, then outputs new pixels the same way it would output the next word in a sentence.
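A rough way to picture that unified space is sketched below. This is conceptual only, not OpenAI’s architecture; encode_image, encode_text, and predict_next_token are hypothetical stand-ins for an image tokenizer, a text tokenizer, and the autoregressive model, each assumed to work with plain Python lists of tokens.

```python
# Conceptual sketch (not OpenAI's implementation) of native multimodal
# generation: image tokens and text tokens share one sequence, and the model
# keeps predicting "what comes next," whether that next token encodes words or pixels.

def edit_photo(predict_next_token, encode_image, encode_text,
               photo_bytes, instruction, max_new_tokens=1024):
    """Return the image tokens generated after an image-plus-text prompt."""
    # One unified sequence: [tokens for the uploaded photo] + [tokens for the request]
    sequence = encode_image(photo_bytes) + encode_text(instruction)

    new_image_tokens = []
    for _ in range(max_new_tokens):
        token = predict_next_token(sequence)  # same mechanism as next-word prediction
        if token == "<end_of_image>":
            break
        sequence.append(token)
        new_image_tokens.append(token)
    return new_image_tokens                   # decoded elsewhere back into pixels
```

A diffusion model like DALL-E 3, by contrast, hands the text prompt to a separate image generator that denoises pixels, which is why conversational, in-place edits were harder to pull off.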

Using this technique, GPT Image 1.5 can more easily alter visual reality than earlier AI image models, changing someone’s pose or position, or rendering a scene from a slightly different angle, with varying degrees of success. It can also remove objects, change visual styles, adjust clothing, and refine specific areas while preserving facial likeness across successive edits. You can converse with the AI model about a photograph, refining and revising, the same way you might workshop a draft of an email in ChatGPT.
