AI

samsung-drops-android-16-beta-for-galaxy-s25-with-more-ai-you-probably-don’t-want

Samsung drops Android 16 beta for Galaxy S25 with more AI you probably don’t want

The next version of Android is expected to hit Pixel phones in June, but it’ll take longer for devices from other manufacturers to see the new OS. However, Samsung is making unusually good time this cycle. Owners of the company’s Galaxy S25 phones can get an early look at One UI 8 (based on Android 16) in the new open beta program. Samsung promises a lot of upgrades, but it may not feel that way.

Signing up for the beta is a snap—just open the Samsung Members app, and the beta signup should be right on the main landing page. From there, the OTA update should appear on your device within a few minutes. It’s pretty hefty at 3.4GB, but the installation is quick, and none of your data should be affected. That said, backups are always advisable when using beta software.

You must be in the US, Germany, Korea, or the UK to join the beta, and US phones must be unlocked or the T-Mobile variants. The software is compatible with the Galaxy S25, S25+, and S25 Ultra—the new S25 Edge need not apply (for now).

Samsung’s big pitch here might not resonate with everyone: more AI. It’s a bit vague about what exactly that means, though. The company claims One UI 8 brings “multimodal capabilities, UX tailored to different device form factors, and personalized, proactive suggestions.” Having used the new OS for a few hours, it doesn’t seem like much has changed with Samsung’s AI implementation.

One UI 8 install

The One UI 8 beta is large, but installation doesn’t take very long.

Credit: Ryan Whitwam

The One UI 8 beta is large, but installation doesn’t take very long. Credit: Ryan Whitwam

The Galaxy S25 series launched with a feature called Now Brief, along with a companion widget called Now Bar. The idea is that this interface will assimilate your data, process it privately inside the Samsung Knox enclave, and spit out useful suggestions. For the most part, it doesn’t. While Samsung says One UI 8 will allow Now Brief to deliver more personal, customized insights, it seems to do the same handful of things it did before. It’s plugged into a lot of Samsung apps and data sources, but mostly just shows weather information, calendar events, and news articles you probably won’t care about. There were some hints of an upcoming audio version of Now Brief, but that’s not included in the beta.

Samsung drops Android 16 beta for Galaxy S25 with more AI you probably don’t want Read More »

hidden-ai-instructions-reveal-how-anthropic-controls-claude-4

Hidden AI instructions reveal how Anthropic controls Claude 4

Willison, who coined the term “prompt injection” in 2022, is always on the lookout for LLM vulnerabilities. In his post, he notes that reading system prompts reminds him of warning signs in the real world that hint at past problems. “A system prompt can often be interpreted as a detailed list of all of the things the model used to do before it was told not to do them,” he writes.

Fighting the flattery problem

An illustrated robot holds four red hearts with its four robotic arms.

Willison’s analysis comes as AI companies grapple with sycophantic behavior in their models. As we reported in April, ChatGPT users have complained about GPT-4o’s “relentlessly positive tone” and excessive flattery since OpenAI’s March update. Users described feeling “buttered up” by responses like “Good question! You’re very astute to ask that,” with software engineer Craig Weiss tweeting that “ChatGPT is suddenly the biggest suckup I’ve ever met.”

The issue stems from how companies collect user feedback during training—people tend to prefer responses that make them feel good, creating a feedback loop where models learn that enthusiasm leads to higher ratings from humans. As a response to the feedback, OpenAI later rolled back ChatGPT’s 4o model and altered the system prompt as well, something we reported on and Willison also analyzed at the time.

One of Willison’s most interesting findings about Claude 4 relates to how Anthropic has guided both Claude models to avoid sycophantic behavior. “Claude never starts its response by saying a question or idea or observation was good, great, fascinating, profound, excellent, or any other positive adjective,” Anthropic writes in the prompt. “It skips the flattery and responds directly.”

Other system prompt highlights

The Claude 4 system prompt also includes extensive instructions on when Claude should or shouldn’t use bullet points and lists, with multiple paragraphs dedicated to discouraging frequent list-making in casual conversation. “Claude should not use bullet points or numbered lists for reports, documents, explanations, or unless the user explicitly asks for a list or ranking,” the prompt states.

Hidden AI instructions reveal how Anthropic controls Claude 4 Read More »

researchers-cause-gitlab-ai-developer-assistant-to-turn-safe-code-malicious

Researchers cause GitLab AI developer assistant to turn safe code malicious

Marketers promote AI-assisted developer tools as workhorses that are essential for today’s software engineer. Developer platform GitLab, for instance, claims its Duo chatbot can “instantly generate a to-do list” that eliminates the burden of “wading through weeks of commits.” What these companies don’t say is that these tools are, by temperament if not default, easily tricked by malicious actors into performing hostile actions against their users.

Researchers from security firm Legit on Thursday demonstrated an attack that induced Duo into inserting malicious code into a script it had been instructed to write. The attack could also leak private code and confidential issue data, such as zero-day vulnerability details. All that’s required is for the user to instruct the chatbot to interact with a merge request or similar content from an outside source.

AI assistants’ double-edged blade

The mechanism for triggering the attacks is, of course, prompt injections. Among the most common forms of chatbot exploits, prompt injections are embedded into content a chatbot is asked to work with, such as an email to be answered, a calendar to consult, or a webpage to summarize. Large language model-based assistants are so eager to follow instructions that they’ll take orders from just about anywhere, including sources that can be controlled by malicious actors.

The attacks targeting Duo came from various resources that are commonly used by developers. Examples include merge requests, commits, bug descriptions and comments, and source code. The researchers demonstrated how instructions embedded inside these sources can lead Duo astray.

“This vulnerability highlights the double-edged nature of AI assistants like GitLab Duo: when deeply integrated into development workflows, they inherit not just context—but risk,” Legit researcher Omer Mayraz wrote. “By embedding hidden instructions in seemingly harmless project content, we were able to manipulate Duo’s behavior, exfiltrate private source code, and demonstrate how AI responses can be leveraged for unintended and harmful outcomes.”

Researchers cause GitLab AI developer assistant to turn safe code malicious Read More »

google-home-is-getting-deeper-gemini-integration-and-a-new-widget

Google Home is getting deeper Gemini integration and a new widget

As Google moves the last remaining Nest devices into the Home app, it’s also looking at ways to make this smart home hub easier to use. Naturally, Google is doing that by ramping up Gemini integration. The company has announced new automation capabilities with generative AI, as well as better support for third-party devices via the Home API. Google AI will also plug into a new Android widget that can keep you updated on what the smart parts of your home are up to.

The Google Home app is where you interact with all of Google’s smart home gadgets, like cameras, thermostats, and smoke detectors—some of which have been discontinued, but that’s another story. It also accommodates smart home devices from other companies, which can make managing a mixed setup feasible if not exactly intuitive. A dash of AI might actually help here.

Google began testing Gemini integrations in Home last year, and now it’s opening that up to third-party devices via the Home API. Google has worked with a few partners on API integrations before general availability. The previously announced First Alert smoke/carbon monoxide detector and Yale smart lock that are replacing Google’s Nest devices are among the first, along with Cync lighting, Motorola Tags, and iRobot vacuums.

Google Home is getting deeper Gemini integration and a new widget Read More »

google’s-will-smith-double-is-better-at-eating-ai-spaghetti-…-but-it’s-crunchy?

Google’s Will Smith double is better at eating AI spaghetti … but it’s crunchy?

On Tuesday, Google launched Veo 3, a new AI video synthesis model that can do something no major AI video generator has been able to do before: create a synchronized audio track. While from 2022 to 2024, we saw early steps in AI video generation, each video was silent and usually very short in duration. Now you can hear voices, dialog, and sound effects in eight-second high-definition video clips.

Shortly after the new launch, people began asking the most obvious benchmarking question: How good is Veo 3 at faking Oscar-winning actor Will Smith at eating spaghetti?

First, a brief recap. The spaghetti benchmark in AI video traces its origins back to March 2023, when we first covered an early example of horrific AI-generated video using an open source video synthesis model called ModelScope. The spaghetti example later became well-known enough that Smith parodied it almost a year later in February 2024.

Here’s what the original viral video looked like:

One thing people forget is that at the time, the Smith example wasn’t the best AI video generator out there—a video synthesis model called Gen-2 from Runway had already achieved superior results (though it was not yet publicly accessible). But the ModelScope result was funny and weird enough to stick in people’s memories as an early poor example of video synthesis, handy for future comparisons as AI models progressed.

AI app developer Javi Lopez first came to the rescue for curious spaghetti fans earlier this week with Veo 3, performing the Smith test and posting the results on X. But as you’ll notice below when you watch, the soundtrack has a curious quality: The faux Smith appears to be crunching on the spaghetti.

On X, Javi Lopez ran “Will Smith eating spaghetti” in Google’s Veo 3 AI video generator and received this result.

It’s a glitch in Veo 3’s experimental ability to apply sound effects to video, likely because the training data used to create Google’s AI models featured many examples of chewing mouths with crunching sound effects. Generative AI models are pattern-matching prediction machines, and they need to be shown enough examples of various types of media to generate convincing new outputs. If a concept is over-represented or under-represented in the training data, you’ll see unusual generation results, such as jabberwockies.

Google’s Will Smith double is better at eating AI spaghetti … but it’s crunchy? Read More »

in-35-years,-notepad.exe-has-gone-from-“barely-maintained”-to-“it-writes-for-you”

In 3.5 years, Notepad.exe has gone from “barely maintained” to “it writes for you”

By late 2021, major updates for Windows’ built-in Notepad text editor had been so rare for so long that a gentle redesign and a handful of new settings were rated as a major update. New updates have become much more common since then, but like the rest of Windows, recent additions have been overwhelmingly weighted in the direction of generative AI.

In November, Microsoft began testing an update that allowed users to rewrite or summarize text in Notepad using generative AI. Another preview update today takes it one step further, allowing you to write AI-generated text from scratch with basic instructions (the feature is called Write, to differentiate it from the earlier Rewrite).

Like Rewrite and Summarize, Write requires users to be signed into a Microsoft Account, because using it requires you to use your monthly allotment of Microsoft’s AI credits. Per this support page, users without a paid Microsoft 365 subscription get 15 credits per month. Subscribers with Personal and Family subscriptions get 60 credits per month instead.

Microsoft notes that all AI features in Notepad can be disabled in the app’s settings, and obviously, they won’t be available if you use a local account instead of a Microsoft Account.

Microsoft is also releasing preview updates for Paint and Snipping Tool, two other bedrock Windows apps that hadn’t seen much by way of major updates before the Windows 11 era. Paint’s features are also mostly AI-related, including a “sticker generator” and an AI-powered smart select tool “to help you isolate and edit individual elements in your image.” A new “welcome experience” screen that appears the first time you launch the app will walk you through the (again, mostly AI-related) new features Microsoft has added to Paint in the last couple of years.

In 3.5 years, Notepad.exe has gone from “barely maintained” to “it writes for you” Read More »

musk’s-doge-used-meta’s-llama-2—not-grok—for-gov’t-slashing,-report-says

Musk’s DOGE used Meta’s Llama 2—not Grok—for gov’t slashing, report says

Why didn’t DOGE use Grok?

It seems that Grok, Musk’s AI model, wasn’t available for DOGE’s task because it was only available as a proprietary model in January. Moving forward, DOGE may rely more frequently on Grok, Wired reported, as Microsoft announced it would start hosting xAI’s Grok 3 models in its Azure AI Foundry this week, The Verge reported, which opens the models up for more uses.

In their letter, lawmakers urged Vought to investigate Musk’s conflicts of interest, while warning of potential data breaches and declaring that AI, as DOGE had used it, was not ready for government.

“Without proper protections, feeding sensitive data into an AI system puts it into the possession of a system’s operator—a massive breach of public and employee trust and an increase in cybersecurity risks surrounding that data,” lawmakers argued. “Generative AI models also frequently make errors and show significant biases—the technology simply is not ready for use in high-risk decision-making without proper vetting, transparency, oversight, and guardrails in place.”

Although Wired’s report seems to confirm that DOGE did not send sensitive data from the “Fork in the Road” emails to an external source, lawmakers want much more vetting of AI systems to deter “the risk of sharing personally identifiable or otherwise sensitive information with the AI model deployers.”

A seeming fear is that Musk may start using his own models more, benefiting from government data his competitors cannot access, while potentially putting that data at risk of a breach. They’re hoping that DOGE will be forced to unplug all its AI systems, but Vought seems more aligned with DOGE, writing in his AI guidance for federal use that “agencies must remove barriers to innovation and provide the best value for the taxpayer.”

“While we support the federal government integrating new, approved AI technologies that can improve efficiency or efficacy, we cannot sacrifice security, privacy, and appropriate use standards when interacting with federal data,” their letter said. “We also cannot condone use of AI systems, often known for hallucinations and bias, in decisions regarding termination of federal employment or federal funding without sufficient transparency and oversight of those models—the risk of losing talent and critical research because of flawed technology or flawed uses of such technology is simply too high.”

Musk’s DOGE used Meta’s Llama 2—not Grok—for gov’t slashing, report says Read More »

google-pretends-to-be-in-on-the-joke,-but-its-focus-on-ai-mode-search-is-serious

Google pretends to be in on the joke, but its focus on AI Mode search is serious

AI Mode as Google’s next frontier

Google is the world’s largest advertising entity, but search is what fuels the company. Quarter after quarter, Google crows about increasing search volume—it’s the most important internal metric for the company. Google has made plenty of changes to its search engine results pages (SERPs) over the years, but AI mode throws that all out. It doesn’t have traditional search results no matter how far you scroll.

To hear Google’s leadership tell it, AI Mode is an attempt to simplify finding information. According to Liz Reid, Google’s head of search, the next year in search is about going from information to intelligence. When you are searching for information on a complex issue, you probably have to look at a lot of web sources. It’s rare that you’ll find a single page that answers all your questions, and maybe you should be using AI for that stuff.

Google Elizabeth Reid

Google search head Liz Reid says the team’s search efforts are aimed at understanding the underlying task behind a query. Credit: Ryan Whitwam

The challenge for AI search is to simplify the process of finding information, essentially doing the legwork for you. When speaking about the move to AI search, DeepMind CTO Koray Kavukcuoglu says that search is the greatest product in the world, and if AI makes it easier to search for information, that’s a net positive.

Latency is important in search—people don’t like to wait for things to load, which is why Google has always emphasized the speed of its services. But AI can be slow. The key, says Reid, is to know when users will accept a longer wait. For example, AI Overviews is designed to spit out tokens faster because it’s part of the core search experience. AI Mode, however, has the luxury of taking more time to “think.” If you’re shopping for a new appliance, you might do a few hours of research. So an AI search experience that takes longer to come up with a comprehensive answer with tables, formatting, and background info might be a desirable experience because it still saves you time.

Google pretends to be in on the joke, but its focus on AI Mode search is serious Read More »

apple-legend-jony-ive-takes-control-of-openai’s-design-future

Apple legend Jony Ive takes control of OpenAI’s design future

On Wednesday, OpenAI announced that former Apple design chief Jony Ive and his design firm LoveFrom will take over creative and design control at OpenAI. The deal makes Ive responsible for shaping the future look and feel of AI products at the chatbot creator, extending across all of the company’s ventures, including ChatGPT.

Ive was Apple’s chief design officer for nearly three decades, where he led the design of iconic products including the iPhone, iPad, MacBook, and Apple Watch, earning numerous industry awards and helping transform Apple into the world’s most valuable company through his minimalist design philosophy.

“Thrilled to be partnering with jony, imo the greatest designer in the world,” tweeted OpenAI CEO Sam Altman while sharing a 9-minute promotional video touting the personal and professional relationship between Ive and Altman.

A screenshot of the Jony Ive / Sam Altman collaboration website captured on May 21, 2025.

A screenshot of the Jony Ive/Sam Altman collaboration website captured on May 21, 2025. Credit: OpenAI

Ive left Apple in 2019 to found LoveFrom, a design firm that has worked with companies including Ferrari, Airbnb, and luxury Italian fashion firm Moncler.

The mechanics of the Ive-OpenAI deal are slightly convoluted. At its core, OpenAI will acquire Ive’s company “io” in an all-equity deal valued at $6.5 billion—Ive founded io last year to design and develop AI-powered products. Meanwhile, io’s staff of approximately 55 engineers, scientists, researchers, physicists, and product development specialists will become part of OpenAI.

Meanwhile, Ive’s design firm LoveFrom will continue to operate independently, with OpenAI becoming a customer of LoveFrom, while LoveFrom will receive a stake in OpenAI. The companies expect the transaction to close this summer pending regulatory approval.

Apple legend Jony Ive takes control of OpenAI’s design future Read More »

gemini-2.5-is-leaving-preview-just-in-time-for-google’s-new-$250-ai-subscription

Gemini 2.5 is leaving preview just in time for Google’s new $250 AI subscription

Deep Think graphs I/O

Deep Think is more capable of complex math and coding. Credit: Ryan Whitwam

Both 2.5 models have adjustable thinking budgets when used in Vertex AI and via the API, and now the models will also include summaries of the “thinking” process for each output. This makes a little progress toward making generative AI less overwhelmingly expensive to run. Gemini 2.5 Pro will also appear in some of Google’s dev products, including Gemini Code Assist.

Gemini Live, previously known as Project Astra, started to appear on mobile devices over the last few months. Initially, you needed to have a Gemini subscription or a Pixel phone to access Gemini Live, but now it’s coming to all Android and iOS devices immediately. Google demoed a future “agentic” capability in the Gemini app that can actually control your phone, search the web for files, open apps, and make calls. It’s perhaps a little aspirational, just like the Astra demo from last year. The version of Gemini Live we got wasn’t as good, but as a glimpse of the future, it was impressive.

There are also some developments in Chrome, and you guessed it, it’s getting Gemini. It’s not dissimilar from what you get in Edge with Copilot. There’s a little Gemini icon in the corner of the browser, which you can click to access Google’s chatbot. You can ask it about the pages you’re browsing, have it summarize those pages, and ask follow-up questions.

Google AI Ultra is ultra-expensive

Since launching Gemini, Google has only had a single $20 monthly plan for AI features. That plan granted you access to the Pro models and early versions of Google’s upcoming AI. At I/O, Google is catching up to AI firms like OpenAI, which have offered sky-high AI plans. Google’s new Google AI Ultra plan will cost $250 per month, more than the $200 plan for ChatGPT Pro.

Gemini 2.5 is leaving preview just in time for Google’s new $250 AI subscription Read More »

new-portal-calls-out-ai-content-with-google’s-watermark

New portal calls out AI content with Google’s watermark

Last year, Google open-sourced its SynthID AI watermarking system, allowing other developers access to a toolkit for imperceptibly marking content as AI-generated. Now, Google is rolling out a web-based portal to let people easily test if a piece of media has been watermarked with SynthID.

After uploading a piece of media to the SynthID Detector, users will get back results that “highlight which parts of the content are more likely to have been watermarked,” Google said. That watermarked, AI-generated content should remain detectable by the portal “even when the content is shared or undergoes a range of transformations,” the company said.

The detector will be available to beta testers starting today, and Google says journalists, media professionals, and researchers can apply for a waitlist to get access themselves. To start, users will be able to upload images and audio to the portal for verification, but Google says video and text detection will be added in the coming weeks.

New portal calls out AI content with Google’s watermark Read More »

zero-click-searches:-google’s-ai-tools-are-the-culmination-of-its-hubris

Zero-click searches: Google’s AI tools are the culmination of its hubris


Google’s first year with AI search was a wild ride. It will get wilder.

Google is constantly making changes to its search rankings, but not all updates are equal. Every few months, the company bundles up changes into a larger “core update.” These updates make rapid and profound changes to search, so website operators watch them closely.

The March 2024 update was unique. It was one of Google’s largest core updates ever, and it took over a month to fully roll out. Nothing has felt quite the same since. Whether the update was good or bad depends on who you ask—and maybe who you are.

It’s common for websites to see traffic changes after a core update, but the impact of the March 2024 update marked a seismic shift. Google says the update aimed to address spam and AI-generated content in a meaningful way. Still, many publishers say they saw clicks on legitimate sites evaporate, while others have had to cope with unprecedented volatility in their traffic. Because Google owns almost the entire search market, changes in its algorithm can move the Internet itself.

In hindsight, the March 2024 update looks like the first major Google algorithm update for the AI era. Not only did it (supposedly) veer away from ranking AI-authored content online, but it also laid the groundwork for Google’s ambitious—and often annoying—desire to fuse AI with search.

A year ago, this ambition surfaced with AI Overviews, but now the company is taking an even more audacious route, layering in a new chat-based answer service called “AI Mode.” Both of these technologies do at least two things: They aim to keep you on Google properties longer, and they remix publisher content without always giving prominent citations.

Smaller publishers appear to have borne the brunt of the changes caused by these updates. “Google got all this flak for crushing the small publishers, and it’s true that when they make these changes, they do crush a lot of publishers,” says Jim Yu, CEO of enterprise SEO platform BrightEdge. Yu explains that Google is the only search engine likely to surface niche content in the first place, and there are bound to be changes to sites at the fringes during a major core update.

Google’s own view on the impact of the March 2024 update is unsurprisingly positive. The company said it was hoping to reduce the appearance of unhelpful content in its search engine results pages (SERPs) by 40 percent. After the update, the company claimed an actual reduction of closer to 45 percent. But does it feel like Google’s results have improved by that much? Most people don’t think so.

What causes this disconnect? According to Michael King, founder of SEO firm iPullRank, we’re not speaking the same language as Google. “Google’s internal success metrics differ from user perceptions,” he says. “Google measures user satisfaction through quantifiable metrics, while external observers rely on subjective experiences.”

Google evaluates algorithm changes with various tests, including human search quality testers and running A/B tests on live searches. But more than anything else, success is about the total number of searches (5 trillion of them per year). Google often makes this number a centerpiece of its business updates to show investors that it can still grow.

However, using search quantity to measure quality has obvious problems. For instance, more engagement with a search engine might mean that quality has decreased, so people try new queries (e.g., the old trick of adding “Reddit” to the end of your search string). In other words, people could be searching more because they don’t like the results.

Jim Yu suggests that Google is moving fast and breaking things, but it may not be as bad as we think. “I think they rolled things out faster because they had to move a lot faster than they’ve historically had to move, and it ends up that they do make some real mistakes,” says Yu. “[Google] is held to a higher standard, but by and large, I think their search quality is improving.”

According to King, Google’s current search behavior still favors big names, but other sites have started to see a rebound. “Larger brands are performing better in the top three positions, while lesser-known websites have gained ground in positions 4 through 10,” says King. “Although some websites have indeed lost traffic due to reduced organic visibility, the bigger issue seems tied to increased usage of AI Overviews”—and now the launch of AI Mode.

Yes, the specter of AI hangs over every SERP. The unhelpful vibe many people now get from Google searches, regardless of the internal metrics the company may use, may come from a fundamental shift in how Google surfaces information in the age of AI.

The AI Overview hangover

In 2025, you can’t talk about Google’s changes to search without acknowledging the AI-generated elephant in the room. As it wrapped up that hefty core update in March 2024, Google also announced a major expansion of AI in search, moving the “Search Generative Experience” out of labs and onto Google.com. The feature was dubbed “AI Overviews.”

The AI Overview box has been a fixture on Google’s search results page ever since its debut a year ago. The feature uses the same foundational AI model as Google’s Gemini chatbot to formulate answers to your search queries by ingesting the top 100 (!) search results. It sits at the top of the page, pushing so-called blue link content even farther down below the ads and knowledge graph content. It doesn’t launch on every query, and sometimes it answers questions you didn’t ask—or even hallucinates a totally wrong answer.

And it’s not without some irony that Google’s laudable decision to de-rank synthetic AI slop comes at the same time that Google heavily promotes its own AI-generated content right at the top of SERPs.

AI Overview on phone

AI Overviews appear right at the top of many search results.

Credit: Google

AI Overviews appear right at the top of many search results. Credit: Google

What is Google getting for all of this AI work? More eyeballs, it would seem. “AI is driving more engagement than ever before on Google,” says Yu. BrightEdge data shows that impressions on Google are up nearly 50 percent since AI Overviews launched. Many of the opinions you hear about AI Overviews online are strongly negative, but that doesn’t mean people aren’t paying attention to the feature. In its Q1 2025 earnings report, Google announced that AI Overviews is being “used” by 1.5 billion people every month. (Since you can’t easily opt in or opt out of AI Overviews, this “usage” claim should be taken with a grain of salt.)

Interestingly, the impact of AI Overviews has varied across the web. In October 2024, Google was so pleased with AI Overviews that it expanded them to appear in more queries. And as AI crept into more queries, publishers saw a corresponding traffic drop. Yu estimates this drop to be around 30 percent on average for those with high AI query coverage. For searches that are less supported in AI Overviews—things like restaurants and financial services—the traffic change has been negligible. And there are always exceptions. Yu suggests that some large businesses with high AI Overview query coverage have seen much smaller drops in traffic because they rank extremely well as both AI citations and organic results.

Lower traffic isn’t the end of the world for some businesses. Last May, AI Overviews were largely absent from B2B queries, but that turned around in a big way in recent months. BrightEdge estimates that 70 percent of B2B searches now have AI answers, which has reduced traffic for many companies. Yu doesn’t think it’s all bad, though. “People don’t click through as much—they engage a lot more on the AI—but when they do click, the conversion rate for the business goes up,” Yu says. In theory, serious buyers click and window shoppers don’t.

But the Internet is not a giant mall that exists only for shoppers. It is, first and foremost, a place to share and find information, and AI Overviews have hit some purveyors of information quite hard. At launch, AI Overviews were heavily focused on “What is” and “How to” queries. Such “service content” is a staple of bloggers and big media alike, and these types of publishers aren’t looking for sales conversions—it’s traffic that matters. And they’re getting less of it because AI Overviews “helpfully” repackages and remixes their content, eliminating the need to click through to the site. Some publishers are righteously indignant, asking how it’s fair for Google to remix content it doesn’t own, and to do so without compensation.

But Google’s intentions don’t end with AI Overviews. Last week, the company started an expanded public test of so-called “AI Mode,” right from the front page. AI Mode doesn’t even bother with those blue links. It’s a chatbot experience that, at present, tries to answer your query without clearly citing sources inline. (On some occasions, it will mention Reddit or Wikipedia.) On the right side of the screen, Google provides a little box with three sites linked, which you can expand to see more options. To the end user, it’s utterly unclear if those are “sources,” “recommendations,” or “partner deals.”

Perhaps more surprisingly, in our testing, not a single AI Mode “sites box” listed a site that ranked on the first page for the same query on a regular search. That is, the links in AI Mode for “best foods to eat for a cold” don’t overlap at all with the SERP for the same query in Google Search. In fairness, AI Mode is very new, and its behavior will undoubtedly change. But the direction the company is headed in seems clear.

Google’s real goal is to keep you on Google or other Alphabet properties. In 2019, Rand Fishkin noticed that Google’s evolution from search engine to walled garden was at a tipping point. At that time—and for the first time—more than half of Google searches resulted in zero click-throughs to other sites. But data did show large numbers of clicks to Google’s own properties, like YouTube and Maps. If Google doesn’t intend to deliver a “zero-click” search experience, you wouldn’t know it from historical performance data or the new features the company develops.

You also wouldn’t know it from the way AI Overviews work. They do cite some of the sources used in building each output, and data suggests people click on those links. But are the citations accurate? Is every source used for constructing an AI Overview cited? We don’t really know, as Google is famously opaque about how its search works. We do know that Google uses a customized version of Gemini to support AI Overviews and that Gemini has been trained on billions and billions of webpages.

When AI Overviews do cite a source, it’s not clear how those sources came to be the ones cited. There’s good reason to be suspicious here: AI Overview’s output is not great, as witnessed by the numerous hallucinations we all know and love (telling people to eat rocks, for instance). The only thing we know for sure is that Google isn’t transparent about any of this.

No signs of slowing

Despite all of that, Google is not slowing down on AI in search. More recent core updates have only solidified this new arrangement with an ever-increasing number of AI-answered queries. The company appears OK with its current accuracy problems, or at the very least, it’s comfortable enough to push out AI updates anyway. Google appears to have been caught entirely off guard by the public launch of ChatGPT, and it’s now utilizing its search dominance to play catch-up.

To make matters even more dicey, Google isn’t even trying to address the biggest issue in all this: The company’s quest for zero-click search harms the very content creators upon which the company has built its empire.

For its part, Google has been celebrating its AI developments, insisting that content producers don’t know what’s best for them, refuting any concerns with comments about search volume increases and ever-more-complex search query strings. The changes must be working!

Google has been building toward this moment for years. The company started with a list of ten blue links and nothing else, but little by little, it pushed the links down the page and added more content that keeps people in the Google ecosystem. Way back in 2007, Google added Universal Search, which allowed it to insert content from Google Maps, YouTube, and other services. In 2009, Rich Snippets began displaying more data from search results on SERPs. In 2012, the Knowledge Graph began extracting data from search results to display answers in the search results. Each change kept people on Google longer and reduced click-throughs, all the while pushing the search results down the page.

AI Overviews, and especially AI Mode, are the logical outcome of Google’s years-long transformation from an indexer of information to an insular web portal built on scraping content from around the web. Earlier in Google’s evolution, the implicit agreement was that websites would allow Google to crawl their pages in exchange for sending them traffic. That relationship has become strained as the company has kept more traffic for itself, reducing click-throughs to websites even as search volume continues to increase. And locking Google out isn’t a realistic option when the company controls almost the entire search market.

Even when Google has taken a friendlier approach, business concerns could get in the way. During the search antitrust trial, documents showed that Google initially intended to let sites opt out of being used for AI training for its search-based AI features—but these sites would still be included in search results. The company ultimately canned that idea, leaving site operators with the Pyrrhic choice of participating in the AI “revolution” or becoming invisible on the web. Google now competes with, rather than supports, the open web.

When many of us look at Google’s search results today, the vibe feels off. Maybe it’s the AI, maybe it’s Google’s algorithm, or maybe the Internet just isn’t what it once was. Whatever the cause, the shift toward zero-click search that began more than a decade ago was made clear by the March 2024 core update, and it has only accelerated with the launch of AI Mode. Even businesses that have escaped major traffic drops from AI Overviews could soon find that Google’s AI-only search can get much more overbearing.

The AI slop will continue until morale improves.

Photo of Ryan Whitwam

Ryan Whitwam is a senior technology reporter at Ars Technica, covering the ways Google, AI, and mobile technology continue to change the world. Over his 20-year career, he’s written for Android Police, ExtremeTech, Wirecutter, NY Times, and more. He has reviewed more phones than most people will ever own. You can follow him on Bluesky, where you will see photos of his dozens of mechanical keyboards.

Zero-click searches: Google’s AI tools are the culmination of its hubris Read More »