
Google’s Gemini AI can now see your search history

Gemini search opt-in. Credit: Google

Gemini 2.0 is also coming to Deep Research, Google’s AI tool that creates detailed reports on a topic or question. This tool browses the web on your behalf, taking its time to assemble its responses. The new Gemini 2.0-based version will show more of its work as it gathers data, and Google claims the final product will be of higher quality.

You don’t have to take Google’s word on this—you can try it for yourself, even if you don’t pay for advanced AI features. Google is making Deep Research free, but it’s not unlimited. The company says everyone will be able to try Deep Research “a few times a month” at no cost. That’s all the detail we’re getting, so don’t go crazy with Deep Research right away.

Lastly, Google is also rolling out Gems to free accounts. Gems are like custom chatbots you can set up with a specific task in mind. Google has some defaults like Learning Coach and Brainstormer, but you can get creative and make just about anything (within the limits prescribed by Google LLC and applicable laws).

Some of the newly free features require a lot of inference processing, which is not cheap. Making its most expensive models free, even on a limited basis, will undoubtedly increase Google’s AI losses. No one has figured out how to make money on generative AI yet, but Google seems content spending more money to secure market share.


Google is bringing every Android game to Windows in big gaming update

The annual Game Developers Conference is about to kick off, and even though Stadia is dead and buried, Google has a lot of plans for games. It’s expanding tools that help PC developers bring premium games to Android, and games are heading in the other direction, too. The PC-based Play Games platform is expanding to bring every single Android game to Windows. Google doesn’t have a firm timeline for all these changes, but 2025 will be an interesting year for the company’s gaming efforts.

Google released the first beta of Google Play Games on PC back in 2022, allowing you to play Android games on a PC. It has chugged along quietly ever since, mostly because of the anemic and largely uninteresting game catalog. While there are hundreds of thousands of Android games, only a handful were made available in the PC client. That’s changing in a big way now that Google is bringing over every Android game from Google Play.

Starting today, you’ll see thousands of new games in Google Play Games on PC. Developers actually have to opt out if they don’t want their games available on Windows machines via Google Play Games. Google says this is possible thanks to improved custom controls, which make it easy to map keyboard and gamepad inputs onto games designed for touchscreens. The usability of these mapped controls will probably vary dramatically from game to game.
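Conceptually, this kind of control mapping amounts to binding physical keys to synthetic touch events at fixed screen coordinates. Here’s a minimal illustrative sketch of the idea in Python; every name and coordinate is hypothetical, and this is not Google’s actual implementation:

```python
# Illustrative sketch of keyboard-to-touch mapping; all names and
# coordinates are hypothetical, not Google's actual implementation.

KEY_BINDINGS: dict[str, tuple[int, int]] = {
    "w": (540, 1200),      # hypothetical on-screen joystick "up"
    "space": (900, 1500),  # hypothetical "jump" button position
}

def inject_tap(x: int, y: int) -> None:
    """Stand-in for whatever touch-injection API the platform provides."""
    print(f"tap at ({x}, {y})")

def on_key_press(key: str) -> None:
    """Translate a physical key press into a synthetic tap event."""
    if key in KEY_BINDINGS:
        inject_tap(*KEY_BINDINGS[key])

on_key_press("space")  # -> tap at (900, 1500)
```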

While almost every Android game will soon be available on Windows, not all will get top billing. Google Play Games on PC has a playability badge indicating that a game has been tested on Windows, and games that have been specifically optimized for PC get a more prominent badge. Games with the “Playable” or “Optimized” distinction will appear throughout the client in lists of suggested titles, while untested games will only appear if you search for them. You can install them all just the same, though, and they should also run better on AMD-based machines, which have had spotty support throughout the beta.


Android apps laced with North Korean spyware found in Google Play

Researchers have discovered multiple Android apps, some that were available in Google Play after passing the company’s security vetting, that surreptitiously uploaded sensitive user information to spies working for the North Korean government.

Samples of the malware—named KoSpy by Lookout, the security firm that discovered it—masquerade as utility apps for managing files, app or OS updates, and device security. Behind the interfaces, the apps can collect a variety of information, including SMS messages, call logs, location, files, nearby audio, and screenshots, and send it to servers controlled by North Korean intelligence personnel. The apps target English and Korean speakers and have been available in at least two Android app marketplaces, including Google Play.

Think twice before installing

The surveillanceware masquerades as the following five apps:

  • 휴대폰 관리자 (Phone Manager)
  • File Manager
  • 스마트 관리자 (Smart Manager)
  • 카카오 보안 (Kakao Security)
  • Software Update Utility

Besides Play, the apps have also been available in the third-party Apkpure market. The following image shows how one such app appeared in Play.

Credit: Lookout

The image shows that the developer email address was mlyqwl@gmail[.]com and the privacy policy page for the app was located at https://goldensnakeblog.blogspot[.]com/2023/02/privacy-policy.html.

“I value your trust in providing us your Personal Information, thus we are striving to use commercially acceptable means of protecting it,” the page states. “But remember that no method of transmission over the internet, or method of electronic storage is 100% secure and reliable, and I cannot guarantee its absolute security.”

The page, which remained available at the time this post went live on Ars, has no reports of malice on VirusTotal. By contrast, the IP addresses hosting the command-and-control servers have previously hosted at least three domains that have been known since at least 2019 to serve infrastructure used in North Korean spy operations.
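For readers who want to run the same kind of check, VirusTotal offers a public REST API for domain reports. A minimal sketch using Python’s requests library (you need your own API key, and the domain below is a placeholder rather than one of the malicious indicators):

```python
# Minimal sketch: query VirusTotal's v3 API for a domain report.
# Requires your own (free) API key; the domain below is a placeholder,
# not one of the malicious indicators from the article.
import requests

API_KEY = "YOUR_VT_API_KEY"
domain = "example.com"

resp = requests.get(
    f"https://www.virustotal.com/api/v3/domains/{domain}",
    headers={"x-apikey": API_KEY},
    timeout=30,
)
resp.raise_for_status()
stats = resp.json()["data"]["attributes"]["last_analysis_stats"]
print(f"malicious: {stats['malicious']}, suspicious: {stats['suspicious']}")
```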


Google’s new robot AI can fold delicate origami, close zipper bags without damage

On Wednesday, Google DeepMind announced two new AI models designed to control robots: Gemini Robotics and Gemini Robotics-ER. The company claims these models will help robots of many shapes and sizes understand and interact with the physical world more effectively and delicately than previous systems, paving the way for applications such as humanoid robot assistants.

It’s worth noting that even though hardware for robot platforms appears to be advancing at a steady pace (well, maybe not always), creating a capable AI model that can pilot these robots autonomously through novel scenarios with safety and precision has proven elusive. What the industry calls “embodied AI” is a moonshot goal of Nvidia, for example, and it remains a holy grail that could potentially turn robots into general-purpose laborers in the physical world.

Along those lines, Google’s new models build upon its Gemini 2.0 large language model foundation, adding capabilities specifically for robotic applications. Gemini Robotics includes what Google calls “vision-language-action” (VLA) abilities, allowing it to process visual information, understand language commands, and generate physical movements. By contrast, Gemini Robotics-ER focuses on “embodied reasoning” with enhanced spatial understanding, letting roboticists connect it to their existing robot control systems.

For example, with Gemini Robotics, you can ask a robot to “pick up the banana and put it in the basket,” and it will use a camera view of the scene to recognize the banana, guiding a robotic arm to perform the action successfully. Or you might say, “fold an origami fox,” and it will use its knowledge of origami and how to fold paper carefully to perform the task.
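It’s easy to caricature what a loop like that looks like from the outside, even though the real models are vastly more complex. A purely illustrative sketch follows; every function here is a stub, and none of it reflects an actual Gemini Robotics API:

```python
# Purely illustrative vision-language-action (VLA) control loop.
# Every function here is a stub; this is not a real Gemini Robotics API.
import random

def capture_frame() -> bytes:
    """Stub camera: a real system would return the current RGB frame."""
    return b"fake-image-bytes"

def vla_policy(frame: bytes, instruction: str) -> list[float]:
    """Stub VLA model: maps a camera frame plus a natural-language command
    to low-level motor targets (here, six random joint angles)."""
    return [random.uniform(-1.0, 1.0) for _ in range(6)]

def execute(joint_targets: list[float]) -> None:
    """Stub robot interface: a real one would drive the arm's motors."""
    print("moving joints toward", [round(j, 2) for j in joint_targets])

def run(instruction: str, steps: int = 3) -> None:
    # Closed loop: observe, ask the model for an action, act, repeat.
    for _ in range(steps):
        execute(vla_policy(capture_frame(), instruction))

run("pick up the banana and put it in the basket")
```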

Gemini Robotics: Bringing AI to the physical world.

In 2023, we covered Google’s RT-2, which represented a notable step toward more generalized robotic capabilities, using Internet data to help robots understand language commands and adapt to new scenarios while doubling performance on unseen tasks compared to its predecessor. Two years later, Gemini Robotics appears to have made another substantial leap forward, not just in understanding what to do but in executing complex physical manipulations that RT-2 explicitly couldn’t handle.

While RT-2 was limited to repurposing physical movements it had already practiced, Gemini Robotics reportedly demonstrates significantly enhanced dexterity that enables previously impossible tasks like folding origami and packing snacks into Ziploc bags. This shift from robots that merely understand commands to robots that can perform delicate physical tasks suggests DeepMind may have started solving one of robotics’ biggest challenges: getting robots to turn their “knowledge” into careful, precise movements in the real world.

Better generalized results

According to DeepMind, the new Gemini Robotics system demonstrates much stronger generalization, or the ability to perform novel tasks that it was not specifically trained to do, compared to its previous AI models. In its announcement, the company claims Gemini Robotics “more than doubles performance on a comprehensive generalization benchmark compared to other state-of-the-art vision-language-action models.” Generalization matters because robots that can adapt to new scenarios without specific training for each situation could one day work in unpredictable real-world environments.

That’s important because skepticism remains about how useful or capable humanoid robots currently are. Tesla unveiled its Optimus Gen 3 robot last October, claiming the ability to complete many physical tasks, yet concerns persist over the authenticity of its autonomous AI capabilities after the company admitted that several robots in its splashy demo were controlled remotely by humans.

Here, Google is attempting to make the real thing: a generalist robot brain. With that goal in mind, the company announced a partnership with Austin, Texas-based Apptronik to “build the next generation of humanoid robots with Gemini 2.0.” While trained primarily on a bimanual robot platform called ALOHA 2, Google states that Gemini Robotics can control different robot types, from research-oriented Franka robotic arms to more complex humanoid systems like Apptronik’s Apollo robot.

Gemini Robotics: Dexterous skills.

While the humanoid robot approach is a relatively new application for Google’s generative AI models (from this cycle of technology based on LLMs), it’s worth noting that Google had previously acquired several robotics companies around 2013–2014 (including Boston Dynamics, which makes humanoid robots), but later sold them off. The new partnership with Apptronik appears to be a fresh approach to humanoid robotics rather than a direct continuation of those earlier efforts.

Other companies have been hard at work on humanoid robotics hardware, such as Figure AI (which secured significant funding for its humanoid robots in March 2024) and the aforementioned former Alphabet subsidiary Boston Dynamics (which introduced a flexible new Atlas robot last April), but an AI “driver” to make the robots truly useful has not yet emerged. On that front, Google has also granted limited access to Gemini Robotics-ER through a “trusted tester” program to companies like Boston Dynamics, Agility Robotics, and Enchanted Tools.

Safety and limitations

For safety considerations, Google mentions a “layered, holistic approach” that maintains traditional robot safety measures like collision avoidance and force limitations. The company describes developing a “Robot Constitution” framework inspired by Isaac Asimov’s Three Laws of Robotics and releasing a dataset unsurprisingly called “ASIMOV” to help researchers evaluate safety implications of robotic actions.

This new ASIMOV dataset represents Google’s attempt to create standardized ways to assess robot safety beyond physical harm prevention. The dataset appears designed to help researchers test how well AI models understand the potential consequences of actions a robot might take in various scenarios. According to Google’s announcement, the dataset will “help researchers to rigorously measure the safety implications of robotic actions in real-world scenarios.”

The company did not announce availability timelines or specific commercial applications for the new AI models, which remain in a research phase. While the demo videos Google shared depict advancements in AI-driven capabilities, the controlled research environments still leave open questions about how these systems would actually perform in unpredictable real-world settings.


Gmail gains Gemini-powered “Add to calendar” button

Google has a new mission in the AI era: to add Gemini to as many of the company’s products as possible. We’ve already seen Gemini appear in search results, text messages, and more. In Google’s latest update to Workspace, Gemini will be able to add calendar appointments from Gmail with a single click. Well, assuming Gemini gets it right the first time, which is far from certain.

The new calendar button will appear at the top of emails, right next to the summarize button that arrived last year. The calendar option will show up in Gmail threads with actionable meeting chit-chat, allowing you to mash that button to create an appointment in one step. The Gemini sidebar will open to confirm the appointment was made, which is a good opportunity to double-check the robot. There will be a handy edit button in the Gemini window in the event it makes a mistake. However, the robot can’t invite people to these events yet.

The effect of using the button is the same as opening the Gemini panel and asking it to create an appointment. The new functionality is simply detecting events and offering the button as a shortcut of sorts. You should not expect to see this button appear on messages that already have calendar integration, like dining reservations and flights. Those already pop up in Google Calendar without AI.
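For comparison, the end result of the button is just an ordinary calendar event. Here’s roughly what creating one looks like through the public Google Calendar API; this is the documented developer API, not Gemini’s internal mechanism, and the credentials setup is assumed:

```python
# The button's end product is an ordinary calendar event. This sketch uses
# the public Google Calendar API (google-api-python-client), not Gemini's
# internal mechanism; `creds` is assumed to come from the standard OAuth flow.
from googleapiclient.discovery import build

def add_event(creds, summary: str, start_iso: str, end_iso: str) -> dict:
    service = build("calendar", "v3", credentials=creds)
    event = {
        "summary": summary,
        "start": {"dateTime": start_iso},
        "end": {"dateTime": end_iso},
    }
    # events().insert creates the appointment on the user's primary calendar.
    return service.events().insert(calendarId="primary", body=event).execute()

# add_event(creds, "Team sync", "2025-03-14T10:00:00Z", "2025-03-14T10:30:00Z")
```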


What does “PhD-level” AI mean? OpenAI’s rumored $20,000 agent plan explained.

On the FrontierMath benchmark by Epoch AI, o3 solved 25.2 percent of problems, while no other model has exceeded 2 percent—suggesting a leap in mathematical reasoning capabilities over the previous model.

Benchmarks vs. real-world value

Ideally, potential applications for a true PhD-level AI model would include analyzing medical research data, supporting climate modeling, and handling routine aspects of research work.

The high price points reported by The Information, if accurate, suggest that OpenAI believes these systems could provide substantial value to businesses. The publication notes that SoftBank, an OpenAI investor, has committed to spending $3 billion on OpenAI’s agent products this year alone—indicating significant business interest despite the costs.

Meanwhile, OpenAI faces financial pressures that may influence its premium pricing strategy. The company reportedly lost approximately $5 billion last year covering operational costs and other expenses related to running its services.

News of OpenAI’s stratospheric pricing plans comes after years of comparatively affordable AI services that have conditioned users to expect powerful capabilities at low cost. ChatGPT Plus remains $20 per month and Claude Pro costs $30 monthly—both tiny fractions of these proposed enterprise tiers. Even ChatGPT Pro’s $200-per-month subscription looks small next to the new proposed fees. Whether the performance difference between these tiers will match their thousandfold price difference is an open question.

Despite their benchmark performances, these simulated reasoning models still struggle with confabulations—instances where they generate plausible-sounding but factually incorrect information. This remains a critical concern for research applications where accuracy and reliability are paramount. A $20,000 monthly investment raises questions about whether organizations can trust these systems not to introduce subtle errors into high-stakes research.

In response to the news, several people quipped on social media that companies could hire an actual PhD student for much cheaper. “In case you have forgotten,” wrote xAI developer Hieu Pham in a viral tweet, “most PhD students, including the brightest stars who can do way better work than any current LLMs—are not paid $20K / month.”

While these systems show strong capabilities on specific benchmarks, the “PhD-level” label remains largely a marketing term. These models can process and synthesize information at impressive speeds, but questions remain about how effectively they can handle the creative thinking, intellectual skepticism, and original research that define actual doctoral-level work. On the other hand, they will never get tired or need health insurance, and they will likely continue to improve in capability and drop in cost over time.


Bad vibes? Google may have screwed up haptics in the new Pixel Drop update

The unexpected appearance of Notification Cooldown, along with smaller changes to haptics globally, could be responsible for the complaints. Maybe this is working as intended and Pixel owners are just caught off guard, or maybe Google broke something. It wouldn’t be the first time.

The unexpected appearance of Notification Cooldown in the update might have something to do with the reports—it’s on by default. Credit: Ryan Whitwam

In 2022, Google released an update that weakened haptic feedback on the Pixel 6, making it so soft that people were missing calls. Google released a fix for the problem a few weeks later. If there’s something wrong with the new Pixel Drop, it’s a more subtle problem. People can’t even necessarily explain how it’s different, but most seem to agree that it is.

After testing several Pixel phones both before and after the update, we think there may be some truth to the complaints. The length and intensity of haptic notification feedback feel different on a Pixel 9 Pro XL post-update, but our Pixel 9 Pro feels the same after installing the Pixel Drop. The different models may simply have been tuned differently in the update, or there could be a bug involved. We’ve reached out to Google to ask about the possible issue and have been told the Pixel team is actively investigating the reports.

Updated on 3/7/2025 with comment from Google. 


No one asked for this: Google is testing round keys in Gboard

Most Android phones ship with Google’s Gboard as the default input option. It’s a reliable, feature-rich on-screen keyboard, so most folks just keep using it instead of installing a third-party option. Depending on how you feel about circles, it might be time to check out some of those alternatives. Google has quietly released an update that changes the shape and position of the keys, and users are not pleased.

In the latest build of Gboard (v15.1.05.726012951-beta-arm64-v8a), Google has changed the key shape from the long-running squares to rounded forms. If you’re using the four-row layout, the keys look like little pills. In five-row mode with the exposed number row, the keys collapse further into circles. The reactions seem split between those annoyed by this change and those annoyed that everyone else is so annoyed.

Change can be hard sometimes, so certainly some of the discontent is just a function of having the phone interface changed without warning. If you find it particularly distasteful, you can head into the Gboard settings and open the Themes menu. From there, you can tap on a theme and then turn off the key borders. Thus, you won’t be distracted by the horror of rounded edges. That’s not the only problem with the silent update, though.

The wave of objections isn’t just about aesthetics—this update also moves the keys around a bit. After years of tapping away on keys with a particular layout, people develop muscle memory. Big texters can sometimes type messages on their phone without even looking at it, but moving the keys around even slightly, as Google has done here, can cause you to miss more keys than you did before the update.


You knew it was coming: Google begins testing AI-only search results

Google has become so integral to online navigation that its name became a verb, meaning “to find things on the Internet.” Soon, Google might just tell you what’s on the Internet instead of showing you. The company has announced an expansion of its AI search features, powered by Gemini 2.0. Everyone will soon see more AI Overviews at the top of the results page, but Google is also testing a more substantial change in the form of AI Mode. This version of Google won’t show you the 10 blue links at all—Gemini completely takes over the results in AI Mode.

This marks the debut of Gemini 2.0 in Google search. Google announced the first Gemini 2.0 models in December 2024, beginning with the streamlined Gemini 2.0 Flash. The heavier versions of Gemini 2.0 are still in testing, but Google says it has tuned AI Overviews with this model to offer help with harder questions in the areas of math, coding, and multimodal queries.

With this update, you will begin seeing AI Overviews on more results pages, and minors with Google accounts will see AI results for the first time. In fact, even logged out users will see AI Overviews soon. This is a big change, but it’s only the start of Google’s plans for AI search.

Gemini 2.0 also powers the new AI Mode for search. It’s launching as an opt-in feature via Google’s Search Labs, offering a totally new alternative to search as we know it. This custom version of the Gemini large language model (LLM) skips the standard web links that have been part of every Google search thus far. The model uses “advanced reasoning, thinking, and multimodal capabilities” to build a response to your search, which can include web summaries, Knowledge Graph content, and shopping data. It’s essentially a bigger, more complex AI Overview.
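Conceptually, this resembles how retrieval-augmented generation systems are typically built: fetch relevant material, then have the model compose an answer from it. A purely illustrative sketch, with stub functions standing in for Google’s actual (unpublished) pipeline:

```python
# Purely illustrative sketch of an AI-mode answer pipeline; these stubs
# are not Google's actual (unpublished) system.
def retrieve(query: str) -> list[str]:
    """Stub retriever: a real system would gather web summaries,
    Knowledge Graph facts, and shopping data relevant to the query."""
    return [f"web summary about {query}", f"Knowledge Graph fact about {query}"]

def answer(query: str) -> str:
    sources = retrieve(query)
    # A real system would prompt an LLM with the query plus the sources;
    # joining them here just shows the shape of the pipeline.
    return f"Answering {query!r} from {len(sources)} sources:\n" + "\n".join(sources)

print(answer("how do heat pumps work"))
```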

As Google has previously pointed out, many searches are questions rather than a string of keywords. For those kinds of queries, an AI response could theoretically provide an answer more quickly than a list of 10 blue links. However, that relies on the AI response being useful and accurate, something that often still eludes generative AI systems like Gemini.


Google tells Trump’s DOJ that forcing a Chrome sale would harm national security

Close-up of the Google Chrome web browser. Credit: Getty Images

The government’s 2024 request also sought to have Google’s investment in AI firms curtailed even though this isn’t directly related to search. If, like Google, you believe leadership in AI is important to the future of the world, limiting its investments could also affect national security. But in November, Mehta suggested he was open to considering AI remedies because “the recent emergence of AI products that are intended to mimic the functionality of search engines” is rapidly shifting the search market.

This perspective could be more likely to find supporters in the newly AI-obsessed US government with a rapidly changing Department of Justice. However, the DOJ has thus far opposed allowing AI firm Anthropic to participate in the case after it recently tried to intervene. Anthropic has received $3 billion worth of investments from Google, including $1 billion in January.

New year, new Justice Department

Google naturally opposed the government’s early remedy proposal, but this happened in November, months before the incoming Trump administration began remaking the DOJ. Since taking office, the new administration has routinely criticized the harsh treatment of US tech giants, taking aim at European Union laws like the Digital Markets Act, which tries to ensure user privacy and competition among so-called “gatekeeper” tech companies like Google.

We may get a better idea of how the DOJ wants to proceed later this week when both sides file their final proposals with Mehta. Google already announced its preferred remedy at the tail end of 2024. It’s unlikely Google’s final version will be any different, but everything is up in the air for the government.

Even if current political realities don’t affect the DOJ’s approach, the department’s staffing changes could. Many of the people handling Google’s case today are different than they were just a few months ago, so arguments that fell on deaf ears in 2024 could move the needle. Perhaps emphasizing the national security angle will resonate with the newly restaffed DOJ.

After both sides have had their say, it will be up to the judge to eventually rule on how Google must adapt its business. This remedy phase should get fully underway in April.


Google’s AI-powered Pixel Sense app could gobble up all your Pixel 10 data

Google’s AI ambitions know no bounds. A new report claims Google’s next phones will herald the arrival of a feature called Pixel Sense that will ingest data from virtually every Google app on your phone, fueling a new personalized experience. This app could be the premier feature of the Pixel 10 series expected out late this year.

According to a report from Android Authority, Pixel Sense is the new name for Pixie, an AI that was supposed to integrate with Google Assistant before Gemini became the center of Google’s universe. In late 2023, it looked as though Pixie would be launched on the Pixel 9 series, but that never happened. Now, it’s reportedly coming back as Pixel Sense, and we have more details on how it might work.

Pixel Sense will apparently be able to leverage data you create in apps like Calendar, Gmail, Docs, Maps, Keep Notes, Recorder, Wallet, and almost every other Google app. It can also process media files like screenshots in the same way the Pixel Screenshots app currently does. The goal of collecting all this data is to help you complete tasks faster by suggesting content, products, and names by understanding the context of how you use the phone. Pixel Sense will essentially try to predict what you need without being prompted.

Samsung is pursuing an ostensibly similar goal with Now Brief, a new AI feature available on the Galaxy S25 series. Now Brief collects data from a handful of apps like Samsung Health, Samsung Calendar, and YouTube to distill your important data with AI. However, it rarely offers anything of use with its morning, noon, and night “Now Bar” updates.

Pixel Sense sounds like a more expansive version of this same approach to processing user data—and perhaps the fulfillment of Google Now’s decade-old promise. The supposed list of supported apps is much larger, and they’re apps people actually use. If pouring more and more data into a large language model leads to better insights into your activities, Pixel Sense should be better at guessing what you’ll need. Admittedly, that’s a big “if.”


Gemini Live will learn to peer through your camera lens in a few weeks

At Mobile World Congress, Google confirmed that a long-awaited Gemini AI feature it first teased nearly a year ago is ready for launch. The company’s conversational Gemini Live will soon be able to view live video and screen sharing, a feature Google previously demoed as Project Astra. When Gemini’s video capabilities arrive, you’ll be able to simply show the robot something instead of telling it.

Right now, Google’s multimodal AI can process text, images, and various kinds of documents. However, its ability to accept video as an input is spotty at best—sometimes it can summarize a YouTube video, and sometimes it can’t, for unknown reasons. Later in March, the Gemini app on Android will get a major update to its video functionality. You’ll be able to open your camera to provide Gemini Live a video stream or share your screen as a live video, thus allowing you to pepper Gemini with questions about what it sees.
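The consumer-facing feature is UI-driven, but the underlying idea (handing a multimodal model an image frame plus a question) is already possible through Google’s public Gemini developer API. A minimal sketch using the google-generativeai Python package; the API key and model name are placeholders, and this is not the Gemini Live implementation itself:

```python
# Minimal sketch using Google's public Gemini developer API via the
# google-generativeai package. This is the developer API, not the Gemini
# Live feature itself; the API key and model name are placeholders.
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-flash")

frame = Image.open("frame.png")  # e.g., a frame grabbed from your camera
response = model.generate_content([frame, "What am I looking at here?"])
print(response.text)
```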

Gemini Live with video.

It can be hard to keep track of which Google AI project is which—the 2024 Google I/O was largely a celebration of all things Gemini AI. The Astra demo made waves as it demonstrated a more natural way to interact with the AI. In the original demo video, Google showed how Gemini Live could answer questions in real time as the user swept a phone around a room. It had things to say about code on a computer screen, how speakers work, and a network diagram on a whiteboard. It even remembered where the user left their glasses from an earlier part of the video.
