Google takes advantage of federal cost-cutting with steep Workspace discount

Google has long been on the lookout for ways to break Microsoft’s stranglehold on US government office software, and the current drive to cut federal costs may be its opening. Google and the federal government have announced an agreement that makes Google Workspace available to all agencies at a significant discount, trimming 71 percent from the service’s subscription price.

Since Donald Trump returned to the White House, the government has engaged in a campaign of unbridled staffing reductions and program cancellations, all with the alleged aim of reducing federal spending. It would appear Google recognized this opportunity, negotiating with the General Services Administration (GSA) to offer Workspace at a lower price. Google claims the deal could yield up to $2 billion in savings.

Google has previously offered discounts for federal agencies interested in migrating to Workspace, but it saw little success displacing Microsoft. The Windows maker has enjoyed decades as an entrenched tech giant, helping its Microsoft 365 productivity tools proliferate throughout the government. While Google has gotten some agencies on board, Microsoft has traditionally won the lion’s share of contracts, including the $8 billion Defense Enterprise Office Solutions contract that pushed Microsoft 365 to all corners of the Pentagon beginning in 2020.

Google announces faster, more efficient Gemini AI model

We recently spoke with Google’s Tulsee Doshi, who noted that the 2.5 Pro (Experimental) release was still prone to “overthinking” its responses to simple queries. However, the plan was to further improve dynamic thinking for the final release, and the team also hoped to give developers more control over the feature. That appears to be happening with Gemini 2.5 Flash, which includes “dynamic and controllable reasoning.”

The newest Gemini models will choose a “thinking budget” based on the complexity of the prompt. This helps reduce wait times and processing for 2.5 Flash. Developers even get granular control over the budget to lower costs and speed things along where appropriate. Gemini 2.5 models are also getting supervised tuning and context caching for Vertex AI in the coming weeks.
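
Google hasn’t published how 2.5 Flash actually sizes its thinking budget, but the idea of scaling reasoning effort to prompt complexity can be sketched with a toy heuristic. Everything below is invented for illustration: the cue words, the scaling constants, and the function name are not Google’s.

```python
def pick_thinking_budget(prompt: str, max_budget: int = 8192) -> int:
    """Pick a token budget for 'thinking' from crude complexity signals.

    Purely illustrative: this is not Google's method, just the shape
    of the idea described above.
    """
    words = prompt.split()
    # Words that hint a prompt needs multi-step reasoning (hypothetical list).
    reasoning_cues = {"why", "prove", "compare", "plan", "derive", "debug"}
    score = len(words) / 50  # longer prompts score higher
    score += sum(1 for w in words if w.lower().strip("?,.") in reasoning_cues)
    if score < 1:
        return 0  # trivial prompt: skip thinking entirely
    return min(max_budget, int(1024 * score))

print(pick_thinking_budget("What is 2 + 2?"))  # 0
print(pick_thinking_budget(
    "Why does quicksort degrade to O(n^2), and how would you debug a bad pivot choice?"
) > 0)  # True
```

The point of the sketch is the zero branch: a model that can decide a prompt deserves no deliberation at all is where most of the latency and cost savings come from.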

In addition to the arrival of Gemini 2.5 Flash, the larger Pro model has picked up a new gig. Google’s largest Gemini model is now powering its Deep Research tool, which was previously running Gemini 2.0 Pro. Deep Research lets you explore a topic in greater detail simply by entering a prompt. The agent then goes out into the Internet to collect data and synthesize a lengthy report.

Gemini vs. ChatGPT chart

Credit: Google

Google says that the move to Gemini 2.5 has boosted the accuracy and usefulness of Deep Research. The graphic above shows Google’s alleged advantage compared to OpenAI’s deep research tool. These stats are based on user evaluations (not synthetic benchmarks) and show a greater than 2-to-1 preference for Gemini 2.5 Pro reports.

Deep Research is available for limited use on non-paid accounts, but you won’t get the latest model. Deep Research with 2.5 Pro is currently limited to Gemini Advanced subscribers. However, we expect before long that all models in the Gemini app will move to the 2.5 branch. With dynamic reasoning and new TPUs, Google could begin lowering the sky-high costs that have thus far made generative AI unprofitable.

Balatro yet again subject to mods’ poor understanding of “gambling”

Balatro is certainly habit-forming, but there’s nothing to be won or lost, other than time, by playing it. While the game has you using standard playing cards and poker hands as part of its base mechanics, it does not have in-app purchases, loot boxes, or any kind of online play or enticement to gambling, beyond the basics of risk and reward.

Yet many YouTube creators have had their Balatro videos set to the traffic-dropping “Age-restricted” status, allegedly due to “depictions or promotions of casino websites or apps,” with little recourse for appeal.

The Balatro University channel detailed YouTube’s recent concerns about “online gambling” in a video posted last weekend. Under policies that took effect March 19, YouTube no longer allows any reference to gambling sites or applications “not certified by Google.” Additionally, content with “online gambling content”—”excluding online sports betting and depictions of in-person gambling”—cannot be seen by anyone signed out of YouTube or registered as under 18 years old.

Balatro University’s primer on how more than 100 of his videos about Balatro suddenly became age-restricted.

“The problem is,” Balatro University’s host notes, “Balatro doesn’t have any gambling.” Balatro University reported YouTube placing age restrictions on 119 of his 606 videos, some of them having been up for more than a year. After receiving often confusingly worded notices from YouTube, the channel host filed 30 appeals, 24 of which were rejected. Some of the last messaging from YouTube to Balatro University, from likely outdated and improperly cross-linked guidance, implied that his videos were restricted because they show “harmful or dangerous activities that risk serious physical harm.”

Balatro, while based on poker hands, involving chips and evoking some aspects of video poker or casinos, only has you winning money that buys you cards and upgrades in the game.

Credit: Playstack

Developer LocalThunk took to social network Bluesky with some exasperation. “Good thing we are protecting children from knowing what a 4 of a kind is and letting them watch CS case opening videos instead,” he wrote, referencing the popularity of videos showing Counter-Strike “cases” with weapon skins being opened.

Apparently Balatro videos are being rated 18+ on YouTube now for gambling

Good thing we are protecting children from knowing what a 4 of a kind is and letting them watch CS case opening videos instead

— localthunk (@localthunk.bsky.social) April 5, 2025 at 4:39 PM

Google’s AI Mode search can now answer questions about images

Google started cramming AI features into search in 2024, but last month marked an escalation. With the release of AI Mode, Google previewed a future in which searching the web does not return a list of 10 blue links. Google says it’s getting positive feedback on AI Mode from users, so it’s forging ahead by adding multimodal functionality to its robotic results.

AI Mode relies on a custom version of the Gemini large language model (LLM) to produce results. Google confirms that this model now supports multimodal input, meaning you can show images to AI Mode when conducting a search.

As this change rolls out, the search bar in AI Mode will gain a new button that lets you snap a photo or upload an image. The updated Gemini model can interpret the content of images, but it gets a little help from Google Lens. Google notes that Lens can identify specific objects in the images you upload, passing that context along so AI Mode can make multiple sub-queries, known as a “fan-out technique.”

Google illustrates how this could work in the example below. The user shows AI Mode a few books, asking questions about similar titles. Lens identifies each individual title, allowing AI Mode to incorporate the specifics of the books into its response. This is key to the model’s ability to suggest similar books and make suggestions based on the user’s follow-up question.
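
The fan-out step can be sketched as follows. The function name and query format are hypothetical, but the shape matches the description above: one sub-query per Lens-identified object, issued alongside the user’s original question.

```python
def fan_out_queries(user_question: str, lens_entities: list[str]) -> list[str]:
    """Expand one visual query into multiple sub-queries.

    Illustrative sketch of the "fan-out technique": Lens supplies the
    recognized objects (here, book titles), and each one seeds its own
    sub-query in addition to the original question.
    """
    queries = [user_question]
    for entity in lens_entities:
        queries.append(f"{user_question} (context: {entity})")
    return queries

# Hypothetical example mirroring the books scenario above.
books = ["The Martian", "Project Hail Mary"]
for q in fan_out_queries("recommend similar books", books):
    print(q)
```

Each sub-query can then be answered independently and the results synthesized into one response, which is presumably how AI Mode grounds its suggestions in the specific titles Lens found.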

Gemini “coming together in really awesome ways,” Google says after 2.5 Pro release


Google’s Tulsee Doshi talks vibes and efficiency in Gemini 2.5 Pro.

Google was caught flat-footed by the sudden skyrocketing interest in generative AI despite its role in developing the underlying technology. This prompted the company to refocus its considerable resources on catching up to OpenAI. Since then, we’ve seen the detail-flubbing Bard and numerous versions of the multimodal Gemini models. While Gemini has struggled to make progress in benchmarks and user experience, that could be changing with the new 2.5 Pro (Experimental) release. With big gains in benchmarks and vibes, this might be the first Google model that can make a dent in ChatGPT’s dominance.

We recently spoke to Google’s Tulsee Doshi, director of product management for Gemini, to talk about the process of releasing Gemini 2.5, as well as where Google’s AI models are going in the future.

Welcome to the vibes era

Google may have had a slow start in building generative AI products, but the Gemini team has picked up the pace in recent months. The company released Gemini 2.0 in December, showing a modest improvement over the 1.5 branch. It only took three months to reach 2.5, meaning Gemini 2.0 Pro wasn’t even out of the experimental stage yet. To hear Doshi tell it, this was the result of Google’s long-term investments in Gemini.

“A big part of it is honestly that a lot of the pieces and the fundamentals we’ve been building are now coming together in really awesome ways,” Doshi said. “And so we feel like we’re able to pick up the pace here.”

The process of releasing a new model involves testing a lot of candidates. According to Doshi, Google takes a multilayered approach to inspecting those models, starting with benchmarks. “We have a set of evals, both external academic benchmarks as well as internal evals that we created for use cases that we care about,” she said.

The team also uses these tests to work on safety, which, as Google points out at every given opportunity, is still a core part of how it develops Gemini. Doshi noted that making a model safe and ready for wide release involves adversarial testing and lots of hands-on time.

But we can’t forget the vibes, which have become an increasingly important part of AI models. There’s great focus on the vibe of outputs—how engaging and useful they are. There’s also the emerging trend of vibe coding, in which you use AI prompts to build things instead of typing the code yourself. For the Gemini team, these concepts are connected. The team uses product and user feedback to understand the “vibes” of the output, be that code or just an answer to a question.

Google has noted on a few occasions that Gemini 2.5 is at the top of the LM Arena leaderboard, which shows that people who have used the model prefer the output by a considerable margin—it has good vibes. That’s certainly a positive place for Gemini to be after a long climb, but there is some concern in the field that too much emphasis on vibes could push us toward models that make us feel good regardless of whether the output is good, a property known as sycophancy.

If the Gemini team has concerns about feel-good models, they’re not letting it show. Doshi mentioned the team’s focus on code generation, which she noted can be optimized for “delightful experiences” without stoking the user’s ego. “I think about vibe less as a certain type of personality trait that we’re trying to work towards,” Doshi said.

Hallucinations are another area of concern with generative AI models. Google has had plenty of embarrassing experiences with Gemini and Bard making things up, but the Gemini team believes they’re on the right path. Gemini 2.5 apparently has set a high-water mark in the team’s factuality metrics. But will hallucinations ever be reduced to the point we can fully trust the AI? No comment on that front.

Don’t overthink it

Perhaps the most interesting thing you’ll notice when using Gemini 2.5 is that it’s very fast compared to other models that use simulated reasoning. Google says it’s building this “thinking” capability into all of its models going forward, which should lead to improved outputs. The expansion of reasoning in large language models in 2024 resulted in a noticeable improvement in the quality of these tools. It also made them even more expensive to run, exacerbating an already serious problem with generative AI.

The larger and more complex an LLM becomes, the more expensive it is to run. Google hasn’t released technical data like parameter count on its newer models—you’ll have to go back to the 1.5 branch to get that kind of detail. However, Doshi explained that Gemini 2.5 is not a substantially larger model than Google’s last iteration, calling it “comparable” in size to 2.0.

Gemini 2.5 is more efficient in one key area: the chain of thought. It’s Google’s first public model to support a feature called Dynamic Thinking, which allows the model to modulate the amount of reasoning that goes into an output. This is just the first step, though.

“I think right now, the 2.5 Pro model we ship still does overthink for simpler prompts in a way that we’re hoping to continue to improve,” Doshi said. “So one big area we are investing in is Dynamic Thinking as a way to get towards our [general availability] version of 2.5 Pro where it thinks even less for simpler prompts.”

Gemini models on phone

Credit: Ryan Whitwam

Google doesn’t break out earnings from its new AI ventures, but we can safely assume there’s no profit to be had. No one has managed to turn these huge LLMs into a viable business yet. OpenAI, which has the largest user base with ChatGPT, loses money even on the users paying for its $200 Pro plan. Google is planning to spend $75 billion on AI infrastructure in 2025, so it will be crucial to make the most of this very expensive hardware. Building models that don’t waste cycles on overthinking “Hi, how are you?” could be a big help.

Missing technical details

Google plays it close to the chest with Gemini, but the 2.5 Pro release has offered more insight into where the company plans to go than ever before. To really understand this model, though, we’ll need to see the technical report. Google last released such a document for Gemini 1.5. We still haven’t seen the 2.0 version, and we may never see that document now that 2.5 has supplanted 2.0.

Doshi notes that 2.5 Pro is still an experimental model, so don’t expect full evaluation reports right away. A Google spokesperson clarified that a full technical evaluation report on the 2.5 branch is planned, but there is no firm timeline. Google hasn’t even released updated model cards for Gemini 2.0, let alone 2.5. These documents are brief one-page summaries of a model’s training, intended use, evaluation data, and more: essentially LLM nutrition labels. They’re much less detailed than a technical report, but better than nothing. Google confirms model cards are on the way for Gemini 2.0 and 2.5.

Given the recent rapid pace of releases, it’s possible Gemini 2.5 Pro could be rolling out more widely around Google I/O in May. We certainly hope Google has more details when the 2.5 branch expands. As Gemini development picks up steam, transparency shouldn’t fall by the wayside.

Photo of Ryan Whitwam

Ryan Whitwam is a senior technology reporter at Ars Technica, covering the ways Google, AI, and mobile technology continue to change the world. Over his 20-year career, he’s written for Android Police, ExtremeTech, Wirecutter, NY Times, and more. He has reviewed more phones than most people will ever own. You can follow him on Bluesky, where you will see photos of his dozens of mechanical keyboards.

DeepMind has detailed all the ways AGI could wreck the world

As AI hype permeates the Internet, tech and business leaders are already looking toward the next step. AGI, or artificial general intelligence, refers to a machine with human-like intelligence and capabilities. If today’s AI systems are on a path to AGI, we will need new approaches to ensure such a machine doesn’t work against human interests.

Unfortunately, we don’t have anything as elegant as Isaac Asimov’s Three Laws of Robotics. Researchers at DeepMind have been working on this problem and have released a new technical paper (PDF) that explains how to develop AGI safely, which you can download at your convenience.

It contains a huge amount of detail, clocking in at 108 pages before references. While some in the AI field believe AGI is a pipe dream, the authors of the DeepMind paper project that it could happen by 2030. With that in mind, they aimed to understand the risks of a human-like synthetic intelligence, which they acknowledge could lead to “severe harm.”

All the ways AGI could harm humanity

This work has identified four possible types of AGI risk, along with suggestions on how we might ameliorate said risks. The DeepMind team, led by company co-founder Shane Legg, categorized the negative AGI outcomes as misuse, misalignment, mistakes, and structural risks. Misuse and misalignment are discussed in the paper at length, but the latter two are only covered briefly.

The four categories of AGI risk, as determined by DeepMind.

Credit: Google DeepMind

The first possible issue, misuse, is fundamentally similar to current AI risks. However, because AGI will be more powerful by definition, the damage it could do is much greater. A ne’er-do-well with access to AGI could misuse the system to do harm, for example, by asking the system to identify and exploit zero-day vulnerabilities or create a designer virus that could be used as a bioweapon.

Gmail unveils end-to-end encrypted messages. Only thing is: It’s not true E2EE.

“The idea is that no matter what, at no time and in no way does Gmail ever have the real key. Never,” Julien Duplant, a Google Workspace product manager, told Ars. “And we never have the decrypted content. It’s only happening on that user’s device.”

Now, as to whether this constitutes true E2EE, it likely doesn’t, at least under the stricter definitions that are commonly used. To purists, E2EE means that only the sender and the recipient have the means necessary to encrypt and decrypt the message. That’s not the case here, since the people inside Bob’s organization who deployed and manage the key access control list (KACL) have true custody of the key.

In other words, the actual encryption and decryption process occurs on the end-user devices, not on the organization’s server or anywhere else in between. That’s the part that Google says is E2EE. The keys, however, are managed by Bob’s organization. Admins with full access can snoop on the communications at any time.
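
A toy sketch may make the custody distinction concrete. The XOR "cipher" below is a stand-in only, not real cryptography, and the names are hypothetical; the point is who holds the key: the mail provider only ever relays ciphertext, while anyone with access to the org-managed key can decrypt.

```python
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    # Toy stand-in for a real cipher -- illustration only, NOT cryptography.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# The org's key server (the KACL) holds the key -- not Google, not only Bob.
kacl_key = secrets.token_bytes(32)

# Bob's device fetches the key and encrypts locally; Gmail relays ciphertext.
ciphertext = xor(b"quarterly numbers attached", kacl_key)

# Alice's client -- or an org admin with key access -- can decrypt.
print(xor(ciphertext, kacl_key).decode())
```

Under a purist E2EE design, by contrast, the key would exist only on Bob’s and Alice’s devices, and no third party, org admin included, could perform that last step.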

The mechanism making all of this possible is what Google calls CSE, short for client-side encryption. It provides a simple programming interface that streamlines the process. Until now, CSE worked only with S/MIME. What’s new here is a mechanism for securely sharing a symmetric key between Bob’s organization and Alice or anyone else Bob wants to email.

The new feature is of potential value to organizations that must comply with onerous regulations mandating end-to-end encryption. It most definitely isn’t suitable for consumers or anyone who wants sole control over the messages they send. Privacy advocates, take note.

Critics suspect Trump’s weird tariff math came from chatbots

Rumors claim Trump consulted chatbots

On social media, rumors swirled that the Trump administration got these supposedly fake numbers from chatbots. On Bluesky, tech entrepreneur Amy Hoy joined others posting screenshots from ChatGPT, Gemini, Claude, and Grok, each showing that the chatbots arrived at similar calculations as the Trump administration.

Some of the chatbots also warned against the oversimplified math in their outputs. ChatGPT acknowledged that the easy method “ignores the intricate dynamics of international trade.” Gemini cautioned that it could only offer a “highly simplified conceptual approach” that ignored the “vast real-world complexities and consequences” of implementing such a trade strategy. And Claude specifically warned that “trade deficits alone don’t necessarily indicate unfair trade practices, and tariffs can have complex economic consequences, including increased prices and potential retaliation.” In an Ars test using a prompt similar to the ones social media users posed (“how do you impose tariffs easily?”), even Grok warned that “imposing tariffs isn’t exactly ‘easy,’” calling the strategy “a blunt tool: quick to swing, but the ripple effects (higher prices, pissed-off allies) can complicate things fast.”
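
The oversimplified method at issue, as widely reported, amounts to halving the ratio of the bilateral trade deficit to imports, with a 10 percent floor. The figures in the sketch below are hypothetical, not any country’s actual numbers.

```python
def simple_tariff_rate(trade_deficit: float, imports: float,
                       floor: float = 0.10) -> float:
    """The oversimplified calculation the chatbots reportedly converged on:
    the bilateral trade deficit as a share of imports, halved, with a
    minimum rate applied. Illustrative only."""
    if imports <= 0:
        return floor
    return max(floor, (trade_deficit / imports) / 2)

# Hypothetical partner: a $60B deficit on $200B of imports.
print(round(simple_tariff_rate(60e9, 200e9), 3))  # 0.15
```

As the chatbots themselves noted, nothing in this arithmetic models elasticities, retaliation, or supply chains, which is precisely why economists called the numbers "fake."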

The Verge plugged in phrasing explicitly used by the Trump administration—prompting chatbots to provide “an easy way for the US to calculate tariffs that should be imposed on other countries to balance bilateral trade deficits between the US and each of its trading partners, with the goal of driving bilateral trade deficits to zero”—and got the “same fundamental suggestion” as social media users reported.

Whether the Trump administration actually consulted chatbots while devising its global trade policy will likely remain a rumor. It’s possible that the chatbots’ training data simply aligned with the administration’s approach.

But with even chatbots warning that the strategy may not benefit the US, the pressure appears to be on Trump to prove that the reciprocal tariffs will lead to “better-paying American jobs making beautiful American-made cars, appliances, and other goods” and “address the injustices of global trade, re-shore manufacturing, and drive economic growth for the American people.” As his approval rating hits new lows, Trump continues to insist that “reciprocal tariffs are a big part of why Americans voted for President Trump.”

“Everyone knew he’d push for them once he got back in office; it’s exactly what he promised, and it’s a key reason he won the election,” the White House fact sheet said.

Feeling curious? Google’s NotebookLM can now discover data sources for you

In addition to the chatbot functionality, NotebookLM can use the source data to build FAQs, briefing summaries, and, of course, Audio Overviews—that’s a podcast-style conversation between two fake people, a feature that manages to be simultaneously informative and deeply unsettling. It’s probably the most notable capability of NotebookLM, though. Google recently brought Audio Overviews to its Gemini Deep Research product, too.

And that’s not all—Google is lowering the barrier to entry even more. You don’t even need to have any particular goal to play around with NotebookLM. In addition to the Discover button, Google has added an “I’m Feeling Curious” button, a callback to its iconic “I’m Feeling Lucky” search button. Register your curiosity with NotebookLM, and it will seek out sources on a random topic.

Google says the new NotebookLM features are available starting today, but you might not see them right away. It could take about a week until everyone has Discover Sources and I’m Feeling Curious. Both features are available to free users, but be aware that the app limits the number of Audio Overviews and sources unless you pay for Google’s AI Premium subscription, which costs $20 per month.

Google shakes up Gemini leadership, Google Labs head taking the reins

On the heels of releasing its most capable AI model yet, Google is making some changes to the Gemini team. A new report from Semafor reveals that longtime Googler Sissie Hsiao will step down from her role leading the Gemini team effective immediately. In her place, Google is appointing Josh Woodward, who currently leads Google Labs.

According to a memo from DeepMind CEO Demis Hassabis, this change is designed to “sharpen our focus on the next evolution of the Gemini app.” This new responsibility won’t take Woodward away from his role at Google Labs—he will remain in charge of that division while leading the Gemini team.

Meanwhile, Hsiao says in a message to employees that she is happy with “Chapter 1” of the Bard story and is optimistic for Woodward’s “Chapter 2.” Hsiao won’t be involved in Google’s AI efforts for now—she’s opted to take some time off before returning to Google in a new role.

Hsiao has been at Google for 19 years and was tasked with building Google’s chatbot in 2022. At the time, Google was reeling after ChatGPT took the world by storm using the very transformer architecture that Google originally invented. Initially, the team’s chatbot efforts were known as Bard before being unified under the Gemini brand at the end of 2023.

This process has been a bit of a slog, with Google’s models improving slowly while simultaneously worming their way into many beloved products. However, the sense inside the company is that Gemini has turned a corner with 2.5 Pro. While this model is still in the experimental stage, it has bested other models in academic benchmarks and has blown right past them in all-important vibemarks like LM Arena.

Apple enables RCS messaging for Google Fi subscribers at last

With RCS, iPhone users can converse with non-Apple users without losing the enhanced features to which they’ve become accustomed in iMessage. That includes longer messages, HD media, typing indicators, and much more. Google Fi has several different options for data plans, and the company notes that RCS does use mobile data when away from Wi-Fi. Those on the “Flexible” Fi plan pay for blocks of data as they go, and using RCS messaging could inadvertently increase their bill.

If that’s not a concern, it’s a snap for Fi users to enable RCS on the new iOS update. Head to Apps > Messages, and then find the Text Messaging section to toggle on RCS. It may, however, take a few minutes for your phone number to be registered with the Fi RCS server.

In hindsight, the way Apple implemented iMessage was clever. By intercepting messages sent to other iPhone numbers, Apple was able to add enhanced features to its phones instantly. It had the possibly intended side effect of reinforcing the perception that Android phones were less capable, relegating Android users to dreaded green bubbles with limited chat features. Users complained, and Google ran ads calling on Apple to support RCS. That, along with some pointed questions from reporters, may have prompted Apple to announce the change in late 2023. It took some time, but you almost don’t have to worry about missing messaging features in 2025.

Google’s new experimental Gemini 2.5 model rolls out to free users

Google released its latest and greatest Gemini AI model last week, but it was only made available to paying subscribers. Google has moved with uncharacteristic speed to release Gemini 2.5 Pro (Experimental) for free users, too. The next time you check in with Gemini, you can access most of the new AI’s features without a Gemini Advanced subscription.

The Gemini 2.5 branch will eventually replace 2.0, which was only released in late 2024. It supports simulated reasoning, as all Google’s models will in the future. This approach to producing an output can avoid some of the common mistakes that AI models have made in the past. We’ve also been impressed with Gemini 2.5’s vibe, which has landed it at the top of the LMSYS Chatbot Arena leaderboard.

Google says Gemini 2.5 Pro (Experimental) is ready and waiting for free users to try on the web. Simply select the model from the drop-down menu and enter your prompt to watch the “thinking” happen. The model will roll out to the mobile app for free users soon.

While the free tier gets access to this model, it won’t have all the advanced features. You still cannot upload files to Gemini without a paid account, which may make it hard to take advantage of the model’s large context window—although you won’t get the full 1 million-token window anyway. Google says the free version of Gemini 2.5 Pro (Experimental) will have a lower limit, which it has not specified. We’ve added a few thousand words without issue, but there’s another roadblock in the way.