
Google Search’s “udm=14” trick lets you kill AI search for good

Kill AI search “for good” or until Google changes the parameter —

The power of URL parameters lets you unofficially turn off Google’s AI Overview.

The now normal “AI” results versus the old school “Web” results.

Ron Amadeo / Google

If you’re tired of Google’s AI Overview extracting all value from the web while also telling people to eat glue or run with scissors, you can turn it off—sort of. Google has been telling people its AI box at the top of search results is the future, and you can’t turn it off, but that ignores how Google search works: A lot of options are powered by URL parameters. That means you can turn off AI search with this one simple trick! (Sorry.)

Our method for killing AI search is defaulting to the new “web” search filter, which Google recently launched as a way to search the web without Google’s alpha-quality AI junk. It’s actually pretty nice, showing only the traditional 10 blue links, giving you a clean (well, other than the ads), uncluttered results page that looks like it’s from 2011. Sadly, Google’s UI doesn’t have a way to make “web” search the default, and switching to it means digging through the “more” options drop-down after you do a search, so it’s a few clicks deep.

Check out the URL after you do a search, and you’ll see a mile-long URL full of esoteric tracking information and mode information. We’ll put each search result URL parameter on a new line so the URL is somewhat readable:

https://www.google.com/search

?sca_esv=2d1299fed1ffcbfc

&sca_upv=1

&sxsrf=ADLYWIKXaYE8rQyTMcGnzqLZRvjlreRhkw:1716566104389

&q=how+do+I+turn+off+ai+overview

&uds=ADvngMiH6OrNXu9iaW3w… [truncated]

&udm=14

&prmd=vnisbmt

&sa=X

&ved=2ahUKEwixo4qH06aGAxW5MlkFHQupBdkQs6gLegQITBAB

&biw=1918

&bih=953

&dpr=1

Most of these only mean something to Google’s internal tracking systems, but that “&udm=14” line is the one that puts you in “web” search. Tack it onto the end of a normal search, and you’ll be booted into the clean 10-blue-links interface. While Google might not let you set this as a default, if you have a way to automatically edit the Google search URL, you can create your own defaults. One way to edit the search URL is a proxy site like udm14.com, which is probably the biggest site popularizing this technique. A proxy site could, if it wanted to, read all your search queries, though (your query is also in the URL), so whether you trust such a site is up to you.
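If you’d rather script this yourself than rely on a proxy site, here’s a minimal sketch using only Python’s standard library (nothing Google provides; the example query is just the one from above) that tacks the parameter onto an existing search URL:

from urllib.parse import urlencode, urlsplit, urlunsplit, parse_qsl

def add_web_filter(search_url: str) -> str:
    # Append udm=14, which selects the plain "web" results filter.
    parts = urlsplit(search_url)
    params = dict(parse_qsl(parts.query))
    params["udm"] = "14"
    return urlunsplit(parts._replace(query=urlencode(params)))

print(add_web_filter("https://www.google.com/search?q=how+do+I+turn+off+ai+overview"))
# https://www.google.com/search?q=how+do+I+turn+off+ai+overview&udm=14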

Searching from your browser’s address bar is a good way to make “web” search the default without involving a third party. Chrome and Firefox have essentially the same UI for search settings. In Chrome, you get there by right-clicking the address bar and hitting “manage search engines.” Firefox takes a bit more work, since you’ll first need to enable custom search engines: type “about:config” into the address bar and hit Enter, search for “browser.urlbar.update2.engineAliasRefresh,” and hit the “plus” button. Then go to Settings -> Search, scroll down to the search engine section, and hit “Add.”

On both browsers, you probably can’t edit the existing Google listing, so you’ll need to create a new search shortcut, call it Google Web, and use https://www.google.com/search?q=%s&udm=14 as the URL.

AI-free Google search.

Ron Amadeo

The third box in the search engine settings is named either “shortcut” or “alias,” depending on your browser, and it does not matter at all if you plan on making this your new default search engine. If you don’t want it to be the default, the shortcut/alias lets you selectively launch this search from the address bar by starting your query with the shortcut text. The udm14.com instructions suggest “gw,” so typing “gw should I eat rocks” will launch “web” search. Omitting “gw” will still launch Google’s AI idiot box, which will probably tell you that rocks are delicious. To use this search engine all the time, find it in the list again after creating it, click the menu button next to the listing, and hit “make default.” Then the shortcut is no longer needed—anything typed into the address bar/search box will go right to web search.

While you’re in here messing around with Google’s URL parameters, another one you might want to add is “&tbs=li:1”. This automatically triggers “verbatim” search, which makes Google use your exact search terms instead of fuzzy-matching everything, ignoring some words, replacing words with synonyms, and generally doing whatever it can to water down your input. If you’re a Google novice, the default fuzzy search is fine, but if you’re an expert who has honed your Google-fu since the good old days, the fuzzy search is just annoying. It’s only a default, so if you ever find yourself with zero results, hitting the “tools” button will still let you switch between “verbatim” and “all results.”
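Combining the two parameters is just more of the same URL editing. Here is a small sketch, again purely illustrative and building on the earlier example, that produces an AI-free, verbatim search URL for any query:

from urllib.parse import urlencode

def google_web_verbatim(query: str) -> str:
    params = {
        "q": query,      # the search terms
        "udm": "14",     # "web" filter: plain blue links, no AI Overview
        "tbs": "li:1",   # "verbatim" mode: match the query literally
    }
    return "https://www.google.com/search?" + urlencode(params)

print(google_web_verbatim("should I eat rocks"))

The equivalent browser search-engine template would be https://www.google.com/search?q=%s&udm=14&tbs=li:1, which is the same URL from earlier with the verbatim flag appended.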

Defaulting to “web” search will let you use Google with only the 10 blue links, and while that feels like rolling the interface back to 2011, keep in mind you’re still not rolling back Google’s search result quality to 2011. You’re still going to be using a search engine that feels like it has completely surrendered to SEO spammers. So, while this Band-Aid solution is interesting, things are getting bad enough that the real recommendation is probably to switch to something other than Google at this point. We all need to find another search engine that values the web and tries to search it, as opposed to Google, which increasingly seems to be sacrificing the web at the altar of AI.

Listing image by Aurich Lawson


AI in space: Karpathy suggests AI chatbots as interstellar messengers to alien civilizations

The new golden record —

Andrej Karpathy muses about sending an LLM binary that could “wake up” and answer questions.

A cosmonaut in a gold jumpsuit and helmet, illuminated by blue and red lights, holding a laptop and looking up.

On Thursday, renowned AI researcher Andrej Karpathy, formerly of OpenAI and Tesla, tweeted a lighthearted proposal that large language models (LLMs) like the one that runs ChatGPT could one day be modified to operate in or be transmitted to space, potentially to communicate with extraterrestrial life. He said the idea was “just for fun,” but given his influential profile in the field, it may inspire others in the future.

Karpathy’s bona fides in AI almost speak for themselves: he received a PhD from Stanford under computer scientist Dr. Fei-Fei Li in 2015, became one of the founding members of OpenAI as a research scientist, and then served as senior director of AI at Tesla between 2017 and 2022. In 2023, Karpathy rejoined OpenAI for a year, leaving this past February. He’s posted several highly regarded tutorials covering AI concepts on YouTube, and whenever he talks about AI, people listen.

Most recently, Karpathy has been working on a project called “llm.c” that implements the training process for OpenAI’s 2019 GPT-2 LLM in pure C, dramatically speeding up the process and demonstrating that working with LLMs doesn’t necessarily require complex development environments. The project’s streamlined approach and concise codebase sparked Karpathy’s imagination.

“My library llm.c is written in pure C, a very well-known, low-level systems language where you have direct control over the program,” Karpathy told Ars. “This is in contrast to typical deep learning libraries for training these models, which are written in large, complex code bases. So it is an advantage of llm.c that it is very small and simple, and hence much easier to certify as Space-safe.”

Our AI ambassador

In his playful thought experiment (titled “Clearly LLMs must one day run in Space”), Karpathy suggested a two-step plan where, initially, the code for LLMs would be adapted to meet rigorous safety standards, akin to “The Power of 10 Rules” adopted by NASA for space-bound software.

This first part he deemed serious: “We harden llm.c to pass the NASA code standards and style guides, certifying that the code is super safe, safe enough to run in Space,” he wrote in his X post. “LLM training/inference in principle should be super safe – it is just one fixed array of floats, and a single, bounded, well-defined loop of dynamics over it. There is no need for memory to grow or shrink in undefined ways, for recursion, or anything like that.”

That’s important because when software is sent into space, it must operate under strict safety and reliability standards. Karpathy suggests that his code, llm.c, likely meets these requirements because it is designed with simplicity and predictability at its core.
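To make the claim concrete, here is a toy sketch of the property Karpathy is describing: every buffer is allocated up front, the loop bounds are fixed, and nothing recurses or grows. It is written in Python with NumPy purely for illustration; llm.c itself is written in C, and this is not its code.

import numpy as np

N_LAYERS, D_MODEL, MAX_TOKENS = 4, 64, 32                   # fixed sizes, known before the loop starts
weights = np.random.randn(N_LAYERS, D_MODEL, D_MODEL).astype(np.float32)
state = np.zeros((MAX_TOKENS, D_MODEL), dtype=np.float32)   # "one fixed array of floats"
scratch = np.zeros_like(state)                               # preallocated; nothing is allocated inside the loop

for step in range(MAX_TOKENS):                               # single, bounded, well-defined loop
    for layer in range(N_LAYERS):                            # inner bound is also fixed
        np.matmul(state, weights[layer], out=scratch)        # write into the existing buffer
        np.maximum(scratch, 0.0, out=state)                  # simple nonlinearity, also in place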

In step 2, once this LLM was deemed safe for space conditions, it could theoretically be used as our AI ambassador in space, similar to historic initiatives like the Arecibo message (a radio message sent from Earth to the Messier 13 globular cluster in 1974) and Voyager’s Golden Record (two identical gold records sent on the two Voyager spacecraft in 1977). The idea is to package the “weights” of an LLM—essentially the model’s learned parameters—into a binary file that could then “wake up” and interact with any potential alien technology that might decipher it.

“I envision it as a sci-fi possibility and something interesting to think about,” he told Ars. “The idea that it is not us that might travel to stars but our AI representatives. Or that the same could be true of other species.”


US gov’t announces arrest of former Google engineer for alleged AI trade secret theft

Don’t trade the secrets dept. —

Linwei Ding faces four counts of trade secret theft, each with a potential 10-year prison term.

A Google sign at the opening of the new Google Cloud data center in Hanau, Hesse, in October 2023.

On Wednesday, authorities arrested former Google software engineer Linwei Ding in Newark, California, on charges of stealing AI trade secrets from the company. The US Department of Justice alleges that Ding, a Chinese national, committed the theft while secretly working with two China-based companies.

According to the indictment, Ding, who was hired by Google in 2019 and had access to confidential information about the company’s data centers, began uploading hundreds of files into a personal Google Cloud account two years ago.

The trade secrets Ding allegedly copied contained “detailed information about the architecture and functionality of GPU and TPU chips and systems, the software that allows the chips to communicate and execute tasks, and the software that orchestrates thousands of chips into a supercomputer capable of executing at the cutting edge of machine learning and AI technology,” according to the indictment.

Shortly after the alleged theft began, Ding was offered the position of chief technology officer at an early-stage technology company in China that touted its use of AI technology. The company offered him a monthly salary of about $14,800, plus an annual bonus and company stock. Ding reportedly traveled to China, participated in investor meetings, and sought to raise capital for the company.

Investigators reviewed surveillance camera footage showing another employee scanning Ding’s name badge at the entrance of the Google building where Ding worked, making it appear that Ding was in his office when he was actually traveling.

Ding also founded and served as the chief executive of a separate China-based startup company that aspired to train “large AI models powered by supercomputing chips,” according to the indictment. Prosecutors say Ding did not disclose either affiliation to Google, which described him as a junior employee. He resigned from Google on December 26 of last year.

The FBI served a search warrant at Ding’s home in January, seizing his electronic devices and later executing an additional warrant for the contents of his personal accounts. Authorities found more than 500 unique files of confidential information that Ding allegedly stole from Google. The indictment says that Ding copied the files into the Apple Notes application on his Google-issued Apple MacBook, then converted the Apple Notes into PDF files and uploaded them to an external account to evade detection.

“We have strict safeguards to prevent the theft of our confidential commercial information and trade secrets,” Google spokesperson José Castañeda told Ars Technica. “After an investigation, we found that this employee stole numerous documents, and we quickly referred the case to law enforcement. We are grateful to the FBI for helping protect our information and will continue cooperating with them closely.”

Attorney General Merrick Garland announced the case against the 38-year-old at an American Bar Association conference in San Francisco. Ding faces four counts of federal trade secret theft, each carrying a potential sentence of up to 10 years in prison.


Google launches “Gemini Business” AI, adds $20 to the $6 Workspace bill

$6 for apps like Gmail and Docs, and $20 for an AI bot? —

Google’s AI add-on alone costs more than 3x the usual Workspace bill.


Google went ahead with plans to launch Gemini for Workspace today. The big news is the pricing information, and you can see the Workspace pricing page is new, with every plan offering a “Gemini add-on.” Google’s old AI-for-Business plan, “Duet AI for Google Workspace,” is dead, though it never really launched anyway.

Google has a blog post explaining the changes. Google Workspace starts at $6 per user per month for the “Starter” package, and the AI “Add-on,” as Google is calling it, is an extra $20 per user per month (all of these prices require an annual commitment). That is a massive price increase over the normal Workspace bill, but AI processing is expensive. Google says this business package will get you “Help me write in Docs and Gmail, Enhanced Smart Fill in Sheets and image generation in Slides.” It also includes the “1.0 Ultra” model for the Gemini chatbot—there’s a full feature list here. The $20 plan caps usage of Gemini AI features at “1,000 times per month.”

The new Workspace pricing page, with a “Gemini Add-On” for every plan.

Google

Gemini for Google Workspace represents a total rebrand of the AI business product and brings some consistency to Google’s hard-to-follow, constantly changing AI branding. Duet AI never really launched to the general public. The product, announced in August, only ever had a “Try” link that led to a survey; after you filled it out, Google would presumably contact some businesses and allow them to pay for Duet AI. Gemini Business now has a checkout page, and any Workspace business customer can buy the product today with just a few clicks.

Google’s second plan is “Gemini Enterprise,” which doesn’t come with any usage limits, but it’s also only available through a “contact us” link and not a normal checkout procedure. Enterprise is $30 per user per month, and it “includes additional capabilities for AI-powered meetings, where Gemini can translate closed captions in more than 100 language pairs, and soon even take meeting notes.”


Google plans “Gemini Business” AI for Workspace users

We’ve got to pay for all those Nvidia cards somehow —

Google’s first swing at this idea, “Duet AI,” was an extra $30 per user per month.

The Google Gemini logo.

Google

One of Google’s most lucrative businesses consists of packaging its free consumer apps with a few custom features and extra security and then selling them to companies. That’s usually called “Google Workspace,” and today it offers email, calendar, docs, storage, and video chat. Soon, it sounds like Google is gearing up to offer an AI chatbot for businesses. Google’s latest chatbot is called “Gemini” (it used to be “Bard”), and the latest early patch notes spotted by Dylan Roussei of 9to5Google and TestingCatalog.eth show descriptions for new “Gemini Business” and “Gemini Enterprise” products.

The patch notes say that Workspace customers will get “enterprise-grade data protections” and Gemini settings in the Google Workspace Admin console and that Workspace users can “use Gemini confidently at work” while “trusting that your conversations aren’t used to train Gemini models.”

These “early patch notes” for Bard/Gemini have been a thing for a while now. Apparently, some people have ways of making the site spit out early patch notes, and in this case, they were independently confirmed by two different people. I’m not sure the date (scheduled for February 21) is trustworthy, though.

Normally, you would expect a Google app to be included in the “Business Standard” version of Workspace, which is $12 per user per month, but it sounds like Gemini won’t be included. Google describes the products as “new Gemini Business and Gemini Enterprise plans” [emphasis ours] and implores existing paying Google Workspace users to “upgrade today to Gemini Business or Gemini Enterprise.” Roussei says the “upgrade today” link goes to the Duet AI Workspace page, Google’s first attempt at “AI for business,” which hasn’t been updated with any new plans just yet.

It’s unclear how much of the Duet AI business plan will survive the Gemini rollout. Duet was announced in August 2023 as a few “help me write” buttons in Gmail, Docs, and other Workspace apps, which would all open chatbots that could control the various apps. Duet AI was supposed to have an “initial offering” price of an additional $30 per user per month, but it has been six months now, and Duet AI still isn’t generally available to businesses. The “try Duet AI” link goes to a “request a trial” contact form. Six months is an eternity in Google’s rapidly evolving AI plans; it’s a good bet Duet will be replaced by all this Gemini stuff. Will it still be an extra $30, or did everyone scoff at that price?

If this $30-extra-for-AI plan ever ships, that would mean a typical AI-infused Workspace account would be a total of $45 per user per month. That sounds like a lot, but generative AI products currently take a huge amount of processing, which means they cost a lot. Right now, everyone is in land-grab mode, trying to get as many users as possible, but generally, the big players are all losing money. Nvidia’s market-leading AI cards can cost around $10,000 to $40,000 for a single card, and that’s not even counting the ongoing electricity costs.


Google upstages itself with Gemini 1.5 AI launch, one week after Ultra 1.0

Gemini’s Twin —

Google confusingly overshadows its own premium AI product a week after its last major AI launch.

The Gemini 1.5 logo, released by Google.

Google

One week after its last major AI announcement, Google appears to have upstaged itself. Last Thursday, Google launched Gemini Ultra 1.0, which supposedly represented the best AI language model Google could muster—available as part of the renamed “Gemini” AI assistant (formerly Bard). Today, Google announced Gemini Pro 1.5, which it says “achieves comparable quality to 1.0 Ultra, while using less compute.”

Congratulations, Google, you’ve done it. You’ve undercut your own premier AI product. While Ultra 1.0 is possibly still better than Pro 1.5 (what even are we saying here), Ultra was presented as the key selling point of the “Gemini Advanced” tier of the Google One subscription service. And now it’s looking a lot less advanced than it did seven days ago. All this is on top of the confusing name-shuffling Google has been doing recently. (Just to be clear—although it’s not really clarifying at all—the free version of Bard/Gemini currently uses the Pro 1.0 model. Got it?)

Google claims that Gemini 1.5 represents a new generation of LLMs that “delivers a breakthrough in long-context understanding,” and that it can process up to 1 million tokens, “achieving the longest context window of any large-scale foundation model yet.” Tokens are fragments of a word. The first part of the claim about “understanding” is contentious and subjective, but the second part is probably correct. OpenAI’s GPT-4 Turbo can reportedly handle 128,000 tokens in some circumstances, and 1 million is quite a bit more—about 700,000 words. A larger context window allows for processing longer documents and having longer conversations. (The Gemini 1.0 model family handles 32,000 tokens max.)

But any technical breakthroughs are almost beside the point. What should we make of a company that just trumpeted to the world about its AI supremacy last week, only to partially supersede that a week later? Is it a testament to the rapid rate of AI technical progress in Google’s labs, a sign that red tape was holding back Ultra 1.0 for too long, or merely a sign of poor coordination between research and marketing? We honestly don’t know.

So back to Gemini 1.5. What is it, really, and how will it be available? Google implies that like 1.0 (which had Nano, Pro, and Ultra flavors), it will be available in multiple sizes. Right now, Pro 1.5 is the only model Google is unveiling. Google says that 1.5 uses a new mixture-of-experts (MoE) architecture, which means the system selectively activates different “experts” or specialized sub-models within a larger neural network for specific tasks based on the input data.
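For readers unfamiliar with the term, here is a minimal, generic sketch of what that routing looks like in code. It is not Gemini’s implementation (Google hasn’t published one); it is just the textbook idea of a router picking a few experts per input, written in Python with NumPy:

import numpy as np

rng = np.random.default_rng(0)
D, N_EXPERTS, TOP_K = 16, 8, 2

router_w = rng.standard_normal((D, N_EXPERTS)).astype(np.float32)                      # the router's scoring weights
experts = [rng.standard_normal((D, D)).astype(np.float32) for _ in range(N_EXPERTS)]   # one weight matrix per "expert"

def moe_layer(x):
    scores = x @ router_w                        # score every expert for this input
    top = np.argsort(scores)[-TOP_K:]            # keep only the top-k experts
    gates = np.exp(scores[top]) / np.exp(scores[top]).sum()   # softmax weights over the chosen experts
    # Only TOP_K of the N_EXPERTS matrices are evaluated, which is where the compute savings come from.
    return sum(g * (x @ experts[i]) for g, i in zip(gates, top))

print(moe_layer(rng.standard_normal(D).astype(np.float32)).shape)   # (16,)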

Google says that Gemini 1.5 can perform “complex reasoning about vast amounts of information” and gives an example of analyzing a 402-page transcript of Apollo 11’s mission to the Moon. It’s impressive to process documents that large, but the model, like every large language model, is highly likely to confabulate interpretations across large contexts. We wouldn’t trust it to soundly analyze 1 million tokens without mistakes, so that’s putting a lot of faith in poorly understood LLM hands.

For those interested in diving into technical details, Google has released a technical report on Gemini 1.5 that appears to show Gemini performing favorably versus GPT-4 Turbo on various tasks, but it’s also important to note that the selection and interpretation of those benchmarks can be subjective. The report does give some numbers on how much better 1.5 is compared to 1.0, saying it’s 28.9 percent better than 1.0 Pro at “Math, Science & Reasoning” and 5.2 percent better at those subjects than 1.0 Ultra.

A table from the Gemini 1.5 technical document showing comparisons to Gemini 1.0.

Google

But for now, we’re still kind of shocked that Google would launch this particular model at this particular moment in time. Is it trying to get ahead of something that it knows might be just around the corner, like OpenAI’s unreleased GPT-5, for instance? We’ll keep digging and let you know what we find.

Google says that a limited preview of 1.5 Pro is available now for developers via AI Studio and Vertex AI with a 128,000-token context window, scaling up to 1 million tokens later. Gemini 1.5 apparently has not come to the Gemini chatbot (formerly Bard) yet.


Google might already be replacing some ad sales jobs with AI

Better click-through rates than Cyberdyne Systems —

When AI can make assets and text for ads, you don’t need humans to do it anymore.

A large Google logo is displayed amidst foliage.

Google is wrapping its head around the idea of being a generative AI company. The “code red” called in response to ChatGPT has had Googlers scrambling to come up with AI features and ideas. Once all the dust settles on that work, Google might turn inward and try to “optimize” the company with some of its new AI capabilities. With artificial intelligence being the hot new thing, how much of Google’s, uh, natural intelligence needs to be there?

A report at The Information says that AI might already be taking people’s jobs at Google. The report cites people briefed on the plans and says Google intends to “consolidate staff, including through possible layoffs, by reassigning employees at its large customer sales unit who oversee relationships with major advertisers.” According to the report, the jobs are being vacated because Google’s new AI tools have automated them. The report says a future restructuring was apparently already announced at a department-wide Google Ads meeting last week.

Google announced a “new era of AI-powered ads” in May, featuring a “natural-language conversational experience within Google Ads, designed to jump-start campaign creation and simplify Search ads.” Google said its new AI could scan your website and “generate relevant and effective keywords, headlines, descriptions, images, and other assets,” making the Google Ads chatbot one part designer and one part sales expert.

One ad tool, Google’s Performance Max (or “PMax” for short), got a generative AI boost after May’s announcement and can now “create custom assets and scale them in a few clicks.” First, it helps advertisers decide if an ad should be in places like YouTube, Search, Discover, Gmail, Maps, or banner ads on third-party sites. Then, it can just make the ad content, thanks to generative AI that can scan your website for material. (A human advertiser is still in the loop approving content—for now.) It’s called “Performance Max” because variations of your ad are still left up to the machines, which can constantly remix your ads in real time using click-through rates as feedback. Google’s official description is that “Assets are automatically mixed and matched to find the top performing combinations based on which Google Ads channel your ad is appearing on.”

Changing ads on the fly with immediate click-through-rate validation and A/B testing is a task that no person would have the time to do. Also, no one would want to pay a human to do this much work, so having an AI monitor your ad performance sounds like a smart solution. The report also notes another benefit of making AI do this work: “Because these tools don’t require much employee attention, they carry relatively few expenses, so the ad revenue carries a high-profit margin.”
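What “remixing ads using click-through rates as feedback” amounts to, at its simplest, is a bandit-style loop: serve the combination that gets clicked the most, but keep trying the others occasionally. The sketch below is a generic epsilon-greedy version of that idea in Python, not Google’s actual Performance Max logic (which isn’t public), and the variant names are made up:

import random

variants = ["headline A + image 1", "headline A + image 2", "headline B + image 1"]
shows = {v: 0 for v in variants}      # how many times each combination was served
clicks = {v: 0 for v in variants}     # how many times it was clicked

def pick_variant(epsilon=0.1):
    # Occasionally explore a random combination; otherwise exploit the best CTR so far.
    if random.random() < epsilon:
        return random.choice(variants)
    return max(variants, key=lambda v: clicks[v] / shows[v] if shows[v] else 1.0)  # unseen variants get tried first

def record(variant, clicked):
    shows[variant] += 1
    clicks[variant] += int(clicked)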

The Information report says, “A growing number of advertisers have adopted PMax since [launch], eliminating the need for some employees who specialized in selling ads for a particular Google service, like search, working together to design ad campaigns for big customers.”

According to the report, as of a year ago, Google had about 13,500 people devoted to this kind of sales work, a huge chunk of the 30,000-strong ad division. These 13,500 people aren’t necessarily all going to be affected, and those who are won’t necessarily be laid off—they could be reassigned to other areas in Google. We should know the scale of Google Ads’ big reorg soon. The report says, “Some employees expect the changes to be announced next month.”


The Pixel 9 might come with exclusive “Pixie” AI assistant

Things are looking dark for Google Assistant —

What will happen to the Google Assistant when the new AI assistant comes out?


Move over, Google Assistant: Google is apparently working on a new AI. The Information reports that Google is working on a new “Pixie” AI assistant that will be exclusive to Pixel devices. Pixie will reportedly be powered by Google’s new “Gemini” AI model. The report says Pixie would launch first on the Pixel 9: “Eventually, Google wants to bring the features to its lower-end phones and devices like its watch.”

So far, both Google and Amazon reportedly have plans to reboot their voice assistants with the new wave of large language models. Both projects are only at the rumor stage, so neither company has explained how a large language model will help a voice assistant. Today, the typical complaints are about voice recognition accuracy and response time, neither of which a language model seems likely to help with. Presumably, large language models would allow longer-form, more in-depth responses to questions, but whether consumers want to hear a synthetic robot voice read out a paragraph-long response is something the market will figure out.

Another feature listed in the report is that Google might build “glasses that could make use of the AI’s ability to recognize the objects a wearer is seeing.” Between Google Glass and Project Iris, Google has started and stopped a lot of eyewear projects.

The move shows how Google has changed its thinking around AI assistants over the past decade. It used to view Google Assistant as the future of Google Search, so it wanted Assistant to be available everywhere. Google Assistant was a good product for a time, available on all Android phones, on iOS via the Google app, and via lots of purpose-built hardware like the Google Home/Nest Audio speakers and smart displays. Google Assistant never made any money, though. The hardware was all sold at cost, the software was given away to partners, and the ongoing costs of voice processing piled up. There was never any additional revenue to pay for the Google Assistant in the form of ads. Amazon is in the same boat with its Alexa: No one has figured out how to make voice assistants profitable.

Since Google Assistant is a money pit, The Information previously reported that Google plans to “invest less in developing its Google Assistant voice-assisted search for cars and for devices not made by Google, including TVs, headphones, smart-home speakers, smart glasses and smartwatches that use Google’s Wear.” The idea is for Google to double down on its own hardware, which, according to the previous report, is what Google thinks will provide the best protection against regulators threatening the company’s search deals on the iPhone and Android partner devices. “We’re going to take on the iPhone” is apparently the hard-to-believe mindset at Google right now, according to this report.

Making the next-gen Assistant exclusive to the Pixel 9 would fall into this category. Presumably, the ongoing money problem would then be solved, or at least accounted for, in the sales of phone hardware. The current Google Assistant was originally exclusive to the first Pixel and spread out to Google’s partners, but The Information’s reporting makes it seem like that isn’t the plan this time (though that could always change). No one knows what will happen to Google AI assistant No. 1 (Google Assistant) when AI assistant No. 2 launches, but killing it off sounds like a likely outcome. It would also be a way to cut costs and get Google Assistant off people’s devices.

The problem with doubling down on hardware is that Google Hardware is a small division that has previously been unable to support this kind of ambition. Going back to that quote about third-party devices: there are no Google cars, TVs, or smart glasses (though the report says smart glasses are being worked on). Some years, Google’s hardware isn’t very good; in other years, long stretches go by without Google updating some product lines, leaving them for dead (laptops, tablets). Google Hardware is also usually only available in about 13 countries, a tiny sliver of the world. Being on third-party devices protects you from all of this. Previously, Google’s strength was the availability of its ecosystem, and you give that up if you make everything exclusive to your own hardware.
