Artificial Intelligence

google’s-ai-videos-get-a-big-upgrade-with-veo-3.1

Google’s AI videos get a big upgrade with Veo 3.1

It’s getting harder to know what’s real on the Internet, and Google is not helping one bit with the announcement of Veo 3.1. The company’s new video model supposedly offers better audio and realism, along with greater prompt accuracy. The updated video AI will be available throughout the Google ecosystem, including the Flow filmmaking tool, where the new model will unlock additional features. And if you’re worried about the cost of conjuring all these AI videos, Google is also adding a “Fast” variant of Veo.

Veo made waves when it debuted earlier this year, demonstrating a staggering improvement in AI video quality just a few months after Veo 2’s release. It turns out that having all that video on YouTube is very useful for training AI models, so Google is already moving on to Veo 3.1 with a raft of new features.

Google says Veo 3.1 offers stronger prompt adherence, which results in better video outputs and fewer wasted compute cycles. Audio, which was a hallmark feature of the Veo 3 release, has reportedly improved, too. Veo 3’s text-to-video was limited to 720p landscape output, but there’s an ever-increasing volume of vertical video on the Internet. So Veo 3.1 can produce both landscape and portrait 16:9 video.

Google previously said it would bring Veo video tools to YouTube Shorts, which use a vertical video format like TikTok. The release of Veo 3.1 probably opens the door to fulfilling that promise. You can bet Veo videos will show up more frequently on TikTok as well now that it fits the format. This release also keeps Google in its race with OpenAI, which recently released a Sora iPhone app with an impressive new version of its video-generating AI.

Google’s AI videos get a big upgrade with Veo 3.1 Read More »

google-will-let-gemini-schedule-meetings-for-you-in-gmail

Google will let Gemini schedule meetings for you in Gmail

Meetings can be a real drain on productivity, but a new Gmail feature might at least cut down on the time you spend scheduling them. The company has announced “Help Me Schedule” is coming to Gmail, leveraging Gemini AI to recognize when you want to schedule a meeting and offering possible meeting times for the email recipient to choose.

The new meeting feature is reminiscent of Magic Cue on Google’s latest Pixel phones. As you type emails, Gmail will be able to recognize when you are planning a meeting. A Help Me Schedule button will appear in the toolbar. Upon clicking, Google’s AI will swing into action and find possible meeting times that match the context of your message and are available in your calendar.

When you engage with Help me schedule, the AI generates an in-line meeting widget for your message. The recipient can select the time that works for them, and that’s it—the meeting is scheduled for both parties. What about meetings with more than one invitee? Google says the feature won’t support groups at launch.

Google has been on a Gemini-fueled tear lately, expanding access to AI features across a range of products. The company’s nano banana image model is coming to multiple products, and the Veo video model is popping up in Photos and YouTube. Gemini has also rolled out to Google Home to offer AI-assisted notifications and activity summaries.

Google will let Gemini schedule meetings for you in Gmail Read More »

google’s-photoshop-killer-ai-model-is-coming-to-search,-photos,-and-notebooklm

Google’s Photoshop-killer AI model is coming to search, Photos, and NotebookLM

NotebookLM added a video overview feature several months back, which uses AI to generate a video summary of the content you’ve added to the notebook. The addition of Nano Banana to NotebookLM is much less open-ended. Instead of entering prompts to edit images, NotebookLM has a new set of video styles powered by Nano Banana, including whiteboard, anime, retro print, and more. The original style is still available as “Classic.”

My favorite video.

NotebookLM’s videos are still somewhat limited, but this update adds a second general format. You can now choose “Brief” in addition to “Explainer,” with the option to add prompts that steer the video in the right direction. Although, that’s not a guarantee, as this is still generative AI. At least the style should be more consistent with the addition of Nano Banana.

The updated image editor is also coming to Google Photos, but Google doesn’t have a firm timeline. Google claims that its Nano Banana model is a “major upgrade” over its previous image-editing model. Conversational editing was added to Photos last month, but it’s not the Nano Banana model that has impressed testers over the summer. Google says that Nano Banana will arrive in the Photos app in the next few weeks, which should make those conversational edits much less frustrating.

Google’s Photoshop-killer AI model is coming to search, Photos, and NotebookLM Read More »

vandals-deface-ads-for-ai-necklaces-that-listen-to-all-your-conversations

Vandals deface ads for AI necklaces that listen to all your conversations

In addition to backlash over feared surveillance capitalism, critics have accused Schiffman of taking advantage of the loneliness epidemic. Conducting a survey last year, researchers with Harvard Graduate School of Education’s Making Caring Common found that people between “30-44 years of age were the loneliest group.” Overall, 73 percent of those surveyed “selected technology as contributing to loneliness in the country.”

But Schiffman rejects these criticisms, telling the NYT that his AI Friend pendant is intended to supplement human friends, not replace them, supposedly helping to raise the “average emotional intelligence” of users “significantly.”

“I don’t view this as dystopian,” Schiffman said, suggesting that “the AI friend is a new category of companionship, one that will coexist alongside traditional friends rather than replace them,” the NYT reported. “We have a cat and a dog and a child and an adult in the same room,” the Friend founder said. “Why not an AI?”

The MTA has not commented on the controversy, but Victoria Mottesheard—a vice president at Outfront Media, which manages MTA advertising—told the NYT that the Friend campaign blew up because AI “is the conversation of 2025.”

Website lets anyone deface Friend ads

So far, the Friend ads have not yielded significant sales, Schiffman confirmed, telling the NYT that only 3,100 have sold. He expects that society isn’t ready for AI companions to be promoted at such a large scale and that his ad campaign will help normalize AI friends.

In the meantime, critics have rushed to attack Friend on social media, inspiring a website where anyone can vandalize a Friend ad and share it online. That website has received close to 6,000 submissions so far, its creator, Marc Mueller, told the NYT, and visitors can take a tour of these submissions by choosing “ride train to see more” after creating their own vandalized version.

For visitors to Mueller’s site, riding the train displays a carousel documenting backlash to Friend, as well as “performance art” by visitors poking fun at the ads in less serious ways. One example showed a vandalized ad changing “Friend” to “Fries,” with a crude illustration of McDonald’s French fries, while another transformed the ad into a campaign for “fried chicken.”

Others were seemingly more serious about turning the ad into a warning. One vandal drew a bunch of arrows pointing to the “end” in Friend while turning the pendant into a cry-face emoji, seemingly drawing attention to research on the mental health risks of relying on AI companions—including the alleged suicide risks of products like Character.AI and ChatGPT, which have spawned lawsuits and prompted a Senate hearing.

Vandals deface ads for AI necklaces that listen to all your conversations Read More »

google’s-gemini-powered-smart-home-revamp-is-here-with-a-new-app-and-cameras

Google’s Gemini-powered smart home revamp is here with a new app and cameras


Google promises a better smart home experience thanks to Gemini.

Google’s new Nest cameras keep the same look. Credit: Google

Google’s products and services have been flooded with AI features over the past couple of years, but smart home has been largely spared until now. The company’s plans to replace Assistant are moving forward with a big Google Home reset. We’ve been told over and over that generative AI will do incredible things when given enough data, and here’s the test.

There’s a new Home app with Gemini intelligence throughout the experience, updated subscriptions, and even some new hardware. The revamped Home app will allegedly gain deeper insights into what happens in your home, unlocking advanced video features and conversational commands. It demos well, but will it make smart home tech less or more frustrating?

A new Home

You may have already seen some elements of the revamped Home experience percolating to the surface, but that process begins in earnest today. The new app apparently boosts speed and reliability considerably, with camera feeds loading 70 percent faster and with 80 percent fewer app crashes. The app will also bring new Gemini features, some of which are free. Google’s new Home subscription retains the same price as the old Nest subs, but naturally, there’s a lot more AI.

Google claims that Gemini will make your smart home easier to monitor and manage. All that video streaming from your cameras churns through the AI, which interprets the goings on. As a result, you get features like AI-enhanced notifications that give you more context about what your cameras saw. For instance, your notifications will include descriptions of activity, and Home Brief will summarize everything that happens each day.

Home app

The new Home app has a simpler three-tab layout.

Credit: Google

The new Home app has a simpler three-tab layout. Credit: Google

Conversational interaction is also a big part of this update. In the home app, subscribers will see a new Ask Home bar where you can input natural language queries. For example, you could ask if a certain person has left or returned home, or whether or not your package showed up. At least, that’s what’s supposed to happen—generative AI can get things wrong.

The new app comes with new subscriptions based around AI, but the tiers don’t cost any more than the old Nest plans, and they include all the same video features. The base $10 subscription, now known as Standard, includes 30 days of video event history, along with Gemini automation features and the “intelligent alerts” Home has used for a while that can alert you to packages, familiar faces, and so on. The $20 subscription is becoming Home Advanced, which adds the conversational Ask Home feature in the app, AI notifications, AI event descriptions, and a new “Home Brief.” It also still offers 60 days of events and 10 days of 24/7 video history.

Home app and notification

Gemini is supposed to help you keep tabs on what’s happening at home.

Credit: Google

Gemini is supposed to help you keep tabs on what’s happening at home. Credit: Google

Free users still get saved event video history, and it’s been boosted from three hours to six. If you are not subscribing to Gemini Home or using the $10 plan, the Ask Home bar that is persistent across the app will become a quick search, which surfaces devices and settings.

If you’re already subscribing to Google’s AI services, this change could actually save you some cash. Anyone with Google AI Pro (a $20 sub) will get Home Standard for free. If you’re paying for the lavish $250 per month AI Ultra plan, you get Home Advanced at no additional cost.

A proving ground for AI

You may have gotten used to Assistant over the past decade in spite of its frequent feature gaps, but you’ll have to leave it behind. Gemini for Home will be taking over beginning this month in early access. The full release will come later, but Google intends to deliver the Gemini-powered smart home experience to as many users as possible.

Gemini will replace Assistant on every first-party Google Home device, going all the way back to the original 2016 Google Home. You’ll be able to have live chats with Gemini via your smart speakers and make more complex smart home queries. Google is making some big claims about contextual understanding here.

Gemini Home

If Google’s embrace of generative AI pays off, we’ll see it here.

Credit: Google

If Google’s embrace of generative AI pays off, we’ll see it here. Credit: Google

If you’ve used Gemini Live, the new Home interactions will seem familiar. You can ask Gemini anything you want via your smart speakers, perhaps getting help with a recipe or an appliance issue. However, the robot will sometimes just keep talking long past the point it’s helpful. Like Gemini Live, you just have to interrupt the robot sometimes. Google also promises a selection of improved voices to interrupt.

If you want to get early access to the new Gemini Home features, you can sign up in the Home app settings. Just look for the “Early access” option. Google doesn’t guarantee access on a specific timeline, but the first people will be allowed to try the new Gemini Home this month.

New AI-first hardware

It has been four years since Google released new smart home devices, but the era of Gemini brings some new hardware. There are three new cameras, all with 2K image sensors. The new Nest Indoor camera will retail for $100, and the Nest Outdoor Camera will cost $150 (or $250 in a two-pack). There’s also a new Nest Doorbell, which requires a wired connection, for $180.

Google says these cameras were designed with generative AI in mind. The sensor choice allows for good detail even if you need to digitally zoom in, but the video feed is still small enough to be ingested by Google’s AI models as it’s created. This is what gives the new Home app the ability to provide rich updates on your smart home.

Nest Doorbell 3

The new Nest Doorbell looks familiar.

Credit: Google

The new Nest Doorbell looks familiar. Credit: Google

You may also notice there are no battery-powered models in the new batch. Again, that’s because of AI. A battery-powered camera wakes up only momentarily when the system logs an event, but this approach isn’t as useful for generative AI. Providing the model with an ongoing video stream gives it better insights into the scene and, theoretically, produces better insights for the user.

All the new cameras are available for order today, but Google has one more device queued up for a later release. The “Google Home Speaker” is Google’s first smart speaker release since 2020’s Nest Audio. This device is smaller than the Nest Audio but larger than the Nest Mini speakers. It supports 260-degree audio with custom on-device processing that reportedly makes conversing with Gemini smoother. It can also be paired with the Google TV Streamer for home theater audio. It will be available this coming spring for $99.

Google Home Speaker

The new Google Home Speaker comes out next spring.

Credit: Ryan Whitwam

The new Google Home Speaker comes out next spring. Credit: Ryan Whitwam

Google Home will continue to support a wide range of devices, but most of them won’t connect to all the advanced Gemini AI features. However, that could change. Google has also announced a new program for partners to build devices that work with Gemini alongside the Nest cameras. Devices built with the new Google Camera embedded SDK will begin appearing in the coming months, but Walmart’s Onn brand has two ready to go. The Onn Indoor camera retails for $22.96 and the Onn Video Doorbell is $49.86. Both cameras are 1080p resolution and will talk to Gemini just like Google’s cameras. So you may have more options to experience Google’s vision for the AI home of the future.

Photo of Ryan Whitwam

Ryan Whitwam is a senior technology reporter at Ars Technica, covering the ways Google, AI, and mobile technology continue to change the world. Over his 20-year career, he’s written for Android Police, ExtremeTech, Wirecutter, NY Times, and more. He has reviewed more phones than most people will ever own. You can follow him on Bluesky, where you will see photos of his dozens of mechanical keyboards.

Google’s Gemini-powered smart home revamp is here with a new app and cameras Read More »

youtube-music-is-testing-ai-hosts-that-will-interrupt-your-tunes

YouTube Music is testing AI hosts that will interrupt your tunes

YouTube has a new Labs program, allowing listeners to “discover the next generation of YouTube.” In case you were wondering, that generation is apparently all about AI. The streaming site says Labs will offer a glimpse of the AI features it’s developing for YouTube Music, and it starts with AI “hosts” that will chime in while you’re listening to music. Yes, really.

The new AI music hosts are supposed to provide a richer listening experience, according to YouTube. As you’re listening to tunes, the AI will generate audio snippets similar to, but shorter than, the fake podcasts you can create in NotebookLM. The “Beyond the Beat” host will break in every so often with relevant stories, trivia, and commentary about your musical tastes. YouTube says this feature will appear when you are listening to mixes and radio stations.

The experimental feature is intended to be a bit like having a radio host drop some playful banter while cueing up the next song. It sounds a bit like Spotify’s AI DJ, but the YouTube AI doesn’t create playlists like Spotify’s robot. This is still generative AI, which comes with the risk of hallucinations and low-quality slop, neither of which belongs in your music. That said, Google’s Audio Overviews are often surprisingly good in small doses.

YouTube Music is testing AI hosts that will interrupt your tunes Read More »

google-deepmind-unveils-its-first-“thinking”-robotics-ai

Google DeepMind unveils its first “thinking” robotics AI

Imagine that you want a robot to sort a pile of laundry into whites and colors. Gemini Robotics-ER 1.5 would process the request along with images of the physical environment (a pile of clothing). This AI can also call tools like Google search to gather more data. The ER model then generates natural language instructions, specific steps that the robot should follow to complete the given task.

Gemin iRobotics thinking

The two new models work together to “think” about how to complete a task.

Credit: Google

The two new models work together to “think” about how to complete a task. Credit: Google

Gemini Robotics 1.5 (the action model) takes these instructions from the ER model and generates robot actions while using visual input to guide its movements. But it also goes through its own thinking process to consider how to approach each step. “There are all these kinds of intuitive thoughts that help [a person] guide this task, but robots don’t have this intuition,” said DeepMind’s Kanishka Rao. “One of the major advancements that we’ve made with 1.5 in the VLA is its ability to think before it acts.”

Both of DeepMind’s new robotic AIs are built on the Gemini foundation models but have been fine-tuned with data that adapts them to operating in a physical space. This approach, the team says, gives robots the ability to undertake more complex multi-stage tasks, bringing agentic capabilities to robotics.

The DeepMind team tests Gemini robotics with a few different machines, like the two-armed Aloha 2 and the humanoid Apollo. In the past, AI researchers had to create customized models for each robot, but that’s no longer necessary. DeepMind says that Gemini Robotics 1.5 can learn across different embodiments, transferring skills learned from Aloha 2’s grippers to the more intricate hands on Apollo with no specialized tuning.

All this talk of physical agents powered by AI is fun, but we’re still a long way from a robot you can order to do your laundry. Gemini Robotics 1.5, the model that actually controls robots, is still only available to trusted testers. However, the thinking ER model is now rolling out in Google AI Studio, allowing developers to generate robotic instructions for their own physically embodied robotic experiments.

Google DeepMind unveils its first “thinking” robotics AI Read More »

deepmind-ai-safety-report-explores-the-perils-of-“misaligned”-ai

DeepMind AI safety report explores the perils of “misaligned” AI

DeepMind also addresses something of a meta-concern about AI. The researchers say that a powerful AI in the wrong hands could be dangerous if it is used to accelerate machine learning research, resulting in the creation of more capable and unrestricted AI models. DeepMind says this could “have a significant effect on society’s ability to adapt to and govern powerful AI models.” DeepMind ranks this as a more severe threat than most other CCLs.

The misaligned AI

Most AI security mitigations follow from the assumption that the model is at least trying to follow instructions. Despite years of hallucination, researchers have not managed to make these models completely trustworthy or accurate, but it’s possible that a model’s incentives could be warped, either accidentally or on purpose. If a misaligned AI begins to actively work against humans or ignore instructions, that’s a new kind of problem that goes beyond simple hallucination.

Version 3 of the Frontier Safety Framework introduces an “exploratory approach” to understanding the risks of a misaligned AI. There have already been documented instances of generative AI models engaging in deception and defiant behavior, and DeepMind researchers express concern that it may be difficult to monitor for this kind of behavior in the future.

A misaligned AI might ignore human instructions, produce fraudulent outputs, or refuse to stop operating when requested. For the time being, there’s a fairly straightforward way to combat this outcome. Today’s most advanced simulated reasoning models produce “scratchpad” outputs during the thinking process. Devs are advised to use an automated monitor to double-check the model’s chain-of-thought output for evidence misalignment or deception.

Google says this CCL could become more severe in the future. The team believes models in the coming years may evolve to have effective simulated reasoning without producing a verifiable chain of thought. So your overseer guardrail wouldn’t be able to peer into the reasoning process of such a model. For this theoretical advanced AI, it may be impossible to completely rule out that the model is working against the interests of its human operator.

The framework doesn’t have a good solution to this problem just yet. DeepMind says it is researching possible mitigations for a misaligned AI, but it’s hard to know when or if this problem will become a reality. These “thinking” models have only been common for about a year, and there’s still a lot we don’t know about how they arrive at a given output.

DeepMind AI safety report explores the perils of “misaligned” AI Read More »

google-gemini-earns-gold-medal-in-icpc-world-finals-coding-competition

Google Gemini earns gold medal in ICPC World Finals coding competition

More than human

At the ICPC, only correct solutions earn points, and the time it takes to come up with the solution affects the final score. Gemini reached the upper rankings quickly, completing eight problems correctly in just 45 minutes. After 677 minutes, Gemini 2.5 Deep Think had 10 correct answers, securing a second-place finish among the university teams.

You can take a look at all of Gemini’s solutions on GitHub, but Google points to Problem C as especially impressive. This question, a multi-dimensional optimization problem revolving around fictitious “flubber” storage and drainage rates, stumped every human team. But not Gemini.

According to Google, there are an infinite combination of possible configurations for the flubber reservoirs, making it challenging to find the optimal setup. Gemini tackled the problem by assuming that each reservoir had a priority value, which allowed the model to find the most efficient configuration using a dynamic programming algorithm. After 30 minutes of churning on this problem, Deep Think used nested ternary search to pin down the correct values.

Credit: Google

Gemini’s solutions for this year’s ICPC were scored by the event coordinators, but Google also turned Gemini 2.5 loose on previous ICPC problems. The company reports that its internal analysis showed Gemini also reached gold medal status for the 2023 and 2024 question sets.

Google believes Gemini’s ability to perform well in these kinds of advanced academic competitions portends AI’s future in industries like semiconductor engineering and biotechnology. The ability to tackle a complex problem with multi-step logic could make AI models like Gemini 2.5 invaluable to the people working in those fields. The company points out that if you combine the intelligence of the top-ranking university teams and Gemini, you get correct answers to all 12 ICPC problems.

Of course, five hours of screaming-fast inference processing doesn’t come cheap. Google isn’t saying how much power it took for an AI model to compete in the ICPC, but we can safely assume it was a lot. Even simpler consumer-facing models are too expensive to turn a profit right now, but AI that can solve previously unsolvable problems could justify the technology’s high cost.

Google Gemini earns gold medal in ICPC World Finals coding competition Read More »

google-releases-vaultgemma,-its-first-privacy-preserving-llm

Google releases VaultGemma, its first privacy-preserving LLM

The companies seeking to build larger AI models have been increasingly stymied by a lack of high-quality training data. As tech firms scour the web for more data to feed their models, they could increasingly rely on potentially sensitive user data. A team at Google Research is exploring new techniques to make the resulting large language models (LLMs) less likely to “memorize” any of that content.

LLMs have non-deterministic outputs, meaning you can’t exactly predict what they’ll say. While the output varies even for identical inputs, models do sometimes regurgitate something from their training data—if trained with personal data, the output could be a violation of user privacy. In the event copyrighted data makes it into training data (either accidentally or on purpose), its appearance in outputs can cause a different kind of headache for devs. Differential privacy can prevent such memorization by introducing calibrated noise during the training phase.

Adding differential privacy to a model comes with drawbacks in terms of accuracy and compute requirements. No one has bothered to figure out the degree to which that alters the scaling laws of AI models until now. The team worked from the assumption that model performance would be primarily affected by the noise-batch ratio, which compares the volume of randomized noise to the size of the original training data.

By running experiments with varying model sizes and noise-batch ratios, the team established a basic understanding of differential privacy scaling laws, which is a balance between the compute budget, privacy budget, and data budget. In short, more noise leads to lower-quality outputs unless offset with a higher compute budget (FLOPs) or data budget (tokens). The paper details the scaling laws for private LLMs, which could help developers find an ideal noise-batch ratio to make a model more private.

Google releases VaultGemma, its first privacy-preserving LLM Read More »

openai-and-microsoft-sign-preliminary-deal-to-revise-partnership-terms

OpenAI and Microsoft sign preliminary deal to revise partnership terms

On Thursday, OpenAI and Microsoft announced they have signed a non-binding agreement to revise their partnership, marking the latest development in a relationship that has grown increasingly complex as both companies compete for customers in the AI market and seek new partnerships for growing infrastructure needs.

“Microsoft and OpenAI have signed a non-binding memorandum of understanding (MOU) for the next phase of our partnership,” the companies wrote in a joint statement. “We are actively working to finalize contractual terms in a definitive agreement. Together, we remain focused on delivering the best AI tools for everyone, grounded in our shared commitment to safety.”

The announcement comes as OpenAI seeks to restructure from a nonprofit to a for-profit entity, a transition that requires Microsoft’s approval, as the company is OpenAI’s largest investor, with more than $13 billion committed since 2019.

The partnership has shown increasing strain as OpenAI has grown from a research lab into a company valued at $500 billion. Both companies now compete for customers, and OpenAI seeks more compute capacity than Microsoft can provide. The relationship has also faced complications over contract terms, including provisions that would limit Microsoft’s access to OpenAI technology once the company reaches so-called AGI (artificial general intelligence)—a nebulous milestone both companies now economically define as AI systems capable of generating at least $100 billion in profit.

In May, OpenAI abandoned its original plan to fully convert to a for-profit company after pressure from former employees, regulators, and critics, including Elon Musk. Musk has sued to block the conversion, arguing it betrays OpenAI’s founding mission as a nonprofit dedicated to benefiting humanity.

OpenAI and Microsoft sign preliminary deal to revise partnership terms Read More »

one-of-google’s-new-pixel-10-ai-features-has-already-been-removed

One of Google’s new Pixel 10 AI features has already been removed

Google is one of the most ardent proponents of generative AI technology, as evidenced by the recent launch of the Pixel 10 series. The phones were announced with more than 20 new AI experiences, according to Google. However, one of them is already being pulled from the company’s phones. If you go looking for your Daily Hub, you may be disappointed. Not that disappointed, though, as it has been pulled because it didn’t do very much.

Many of Google’s new AI features only make themselves known in specific circumstances, for example when Magic Cue finds an opportunity to suggest an address or calendar appointment based on your screen context. The Daily Hub, on the other hand, asserted itself multiple times throughout the day. It appeared at the top of the Google Discover feed, as well as in the At a Glance widget right at the top of the home screen.

Just a few weeks after release, Google has pulled the Daily Hub preview from Pixel 10 devices. You will no longer see it in Google Discover nor in the home screen widget. After being spotted by 9to5Google, the company has issued a statement explaining its plans.

“To ensure the best possible experience on Pixel, we’re temporarily pausing the public preview of Daily Hub for users. Our teams are actively working to enhance its performance and refine the personalized experience. We look forward to reintroducing an improved Daily Hub when it’s ready,” a Google spokesperson said.

One of Google’s new Pixel 10 AI features has already been removed Read More »