Google


Leaker reveals which Pixels are vulnerable to Cellebrite phone hacking

This blurry screenshot appears to list which Pixel phones Cellebrite devices can hack. Credit: rogueFed

At least according to Cellebrite, GrapheneOS is more secure than what Google offers out of the box. The company is telling law enforcement in these briefings that its technology can extract data from Pixel 6, 7, 8, and 9 phones running stock software in the unlocked, AFU (after first unlock), and BFU (before first unlock) states. However, it cannot brute-force passcodes to gain full control of a device. The leaker also notes that law enforcement is still unable to copy an eSIM from Pixel devices. Notably, the Pixel 10 series is moving away from physical SIM cards.

For those same phones running GrapheneOS, police can expect to have a much harder time. The Cellebrite table says that Pixels with GrapheneOS are only accessible when running software from before late 2022—both the Pixel 8 and Pixel 9 were launched after that. Phones in both BFU and AFU states are safe from Cellebrite on updated builds, and as of late 2024, even a fully unlocked GrapheneOS device is immune from having its data copied. An unlocked phone can be inspected in plenty of other ways, but data extraction in this case is limited to what the user can access.

The original leaker claims to have dialed into two calls so far without detection. However, rogueFed also called out the meeting organizer by name (the second screenshot, which we are not reposting). Odds are that Cellebrite will be screening meeting attendees more carefully now.

We’ve reached out to Google to inquire about why a custom ROM created by volunteers is more resistant to industrial phone hacking than the official Pixel OS. We’ll update this article if Google has anything to say.



TV-focused YouTube update brings AI upscaling, shopping QR codes

YouTube has been streaming for 20 years, but it was only in the last couple of years that it came to dominate TV streaming. Google’s video platform attracts more TV viewers than Netflix, Disney+, and all the other apps, and Google is looking to further beef up its big-screen appeal with a new raft of features, including shopping, immersive channel surfing, and an official version of the AI upscaling that had creators miffed a few months back.

According to Google, YouTube’s growth has translated into higher payouts. The number of channels earning more than $100,000 annually is up 45 percent in 2025 versus 2024. YouTube is now giving creators some tools to boost their appeal (and hopefully their income) on TV screens. Those elaborate video thumbnails featuring surprised, angry, smiley hosts are about to get even prettier with the new 50MB file size limit. That’s up from a measly 2MB.

Video upscaling is also coming to YouTube, and creators will be opted in automatically. To start, YouTube will be upscaling lower-quality videos to 1080p. In the near future, Google plans to support “super resolution” up to 4K.

The site stresses that it’s not modifying original files—creators will have access to both the original and upscaled files, and they can opt out of upscaling. In addition, super resolution videos will be clearly labeled on the user side, allowing viewers to select the original upload if they prefer. The lack of transparency was a sticking point for creators, some of whom complained about the sudden artificial look of their videos during YouTube’s testing earlier this year.



AI-powered search engines rely on “less popular” sources, researchers find

OK, but which one is better?

These differences don’t necessarily mean the AI-generated results are “worse,” of course. The researchers found that GPT-based searches were more likely to cite sources like corporate entities and encyclopedias for their information, for instance, while almost never citing social media websites.

An LLM-based analysis tool found that AI-powered search results also tended to cover a similar number of identifiable “concepts” as the traditional top 10 links, suggesting a similar level of detail, diversity, and novelty in the results. At the same time, the researchers found that “generative engines tend to compress information, sometimes omitting secondary or ambiguous aspects that traditional search retains.” That was especially true for more ambiguous search terms (such as names shared by different people), for which “organic search results provide better coverage,” the researchers found.

Google Gemini search in particular was more likely to cite low-popularity domains. Credit: Kirsten et al

The AI search engines also arguably have an advantage in being able to weave pre-trained “internal knowledge” in with data culled from cited websites. That was especially true for GPT-4o with Search Tool, which often didn’t cite any web sources and simply provided a direct response based on its training.

But this reliance on pre-trained data can become a limitation when searching for timely information. For search terms pulled from Google’s list of Trending Queries for September 15, the researchers found GPT-4o with Search Tool often responded with messages along the lines of “could you please provide more information” rather than actually searching the web for up-to-date information.

While the researchers didn’t determine whether AI-based search engines were overall “better” or “worse” than traditional search engine links, they did urge future research on “new evaluation methods that jointly consider source diversity, conceptual coverage, and synthesis behavior in generative search systems.”



The Android-powered Boox Palma 2 Pro fits in your pocket, but it’s not a phone

Softly talking about the Boox Palma 2 Pro

For years, color E Ink was seen as a desirable feature that would make it easier to read magazines and comics on low-power devices—Boox even has an E Ink monitor. However, the quality of the displays has been lacking. These screens do show colors, but they’re not as vibrant as what you get on an LCD or OLED. In the case of the Palma 2 Pro, the screen is also less sharp in color mode. The touchscreen display is 824 × 1648 in monochrome, but turning on color halves the resolution in each dimension, to 412 × 824.

In addition to the new screen, the second-gen Palma adds a SIM card slot. It’s not for phone calls, though. The SIM slot allows the device to get 5G mobile data in addition to Wi-Fi.

Credit: Boox

The Palma 2 Pro runs Android 15 out of the box. That’s a solid showing for Boox, which often uses much older builds of Google’s mobile OS. Upgrades aren’t guaranteed, and there’s no official support for Google services. However, Boox has a workaround for its devices so the Play Store can be installed.

The new Boox pocket reader is available for pre-order now at $400. It’s expected to ship around November 14.



Reddit sues to block Perplexity from scraping Google search results

“Unable to scrape Reddit directly, they mask their identities, hide their locations, and disguise their web scrapers to steal Reddit content from Google Search,” Lee said. “Perplexity is a willing customer of at least one of these scrapers, choosing to buy stolen data rather than enter into a lawful agreement with Reddit itself.”

Posting on Reddit itself, Perplexity pushed back on the claim that it had ignored requests to license Reddit content.

“Untrue. Whenever anyone asks us about content licensing, we explain that Perplexity, as an application-layer company, does not train AI models on content,” Perplexity said. “Never has. So, it is impossible for us to sign a license agreement to do so.”

Reddit supposedly “insisted we pay anyway, despite lawfully accessing Reddit data,” Perplexity said. “Bowing to strong arm tactics just isn’t how we do business.”

Perplexity’s spokesperson, Jesse Dwyer, told Ars the company chose to post its statement on Reddit “to illustrate a simple point.”

“It is a public Reddit link accessible to anyone, yet by the logic of Reddit’s lawsuit, if you mention it or cite it in any way (which is your job as a reporter), they might just sue you,” Dwyer said.

But Reddit claimed that its business and reputation have been “damaged” by “misappropriation of Reddit data and circumvention of technological control measures.” Without a licensing deal ensuring that Perplexity and others respect Reddit policies, Reddit cannot control who has access to its data, how they’re using it, and whether that use conflicts with Reddit’s privacy policy and user agreement, the complaint said.

Further, Reddit’s worried that Perplexity’s workaround could catch on, potentially messing up Reddit’s other licensing deals. All the while, Reddit noted, it has to invest “significant resources” in anti-scraping technology, with Reddit ultimately suffering damages, including “lost profits and business opportunities, reputational harm, and loss of user trust.”

Reddit’s hoping the court will grant an injunction barring companies from scraping Reddit content from Google SERPs. It also wants companies blocked from both selling Reddit data and “developing or distributing any technology or product that is used for the unauthorized circumvention of technological control measures and scraping of Reddit data.”

If Reddit wins, companies could be required to pay substantial damages or to disgorge profits from the sale of Reddit content.

Advance Publications, which owns Ars Technica parent Condé Nast, is the largest shareholder in Reddit.



Samsung Galaxy XR is the first Android XR headset, now on sale for $1,800

Credit: Google

Android XR is Google’s most ambitious take on Android as a virtual environment. The company calls it an “infinite screen” that lets you organize floating apps to create a custom workspace. The software includes 3D versions of popular Google apps like Google Maps, Google Photos, and YouTube, along with streaming apps, games, and custom XR experiences from the likes of Calm and Adobe.

Google says that its support of open standards for immersive experiences means more content is coming. However, more than anything else, Android XR is a vehicle for Gemini. The Gemini Live feature from phones is available in Android XR, and it’s more aware of your surroundings thanks to all the cameras and orientation sensors in Galaxy XR. For example, you can ask Gemini questions about what’s on the screen—that includes app content or real objects that appear in passthrough video when you look around. Gemini can also help organize your floating windows.

While more Android XR hardware is planned, Galaxy XR is the only way to experience it right now, and it’s not cheap. Samsung’s headset is available for purchase at $1,800. If hand gesture control isn’t enough, you’ll have to pay another $175 for wireless controllers (discounted from the $250 retail price). Galaxy XR also supports corrective lenses if you need them, but that’s another $99.

Buyers get a collection of freebies to help justify the price. The headset comes with a full year of Google AI Pro, YouTube Premium, and Google Play Pass, which would collectively cost about $370. Owners can also get three months of YouTube TV for $3, and everyone with Galaxy XR will get access to the 2025–2026 season of NBA League Pass in the US.



Google has a useful quantum algorithm that outperforms a supercomputer


An approach it calls “quantum echoes” takes 13,000 times longer on a supercomputer.

The work relied on Google’s current-generation quantum hardware, the Willow chip. Credit: Google

A few years back, Google made waves when it claimed that some of its hardware had achieved quantum supremacy, performing operations that would be effectively impossible to simulate on a classical computer. That claim didn’t hold up especially well, as mathematicians later developed methods to help classical computers catch up, leading the company to repeat the work on an improved processor.

While this back-and-forth was unfolding, the field became less focused on quantum supremacy and more on two additional measures of success. The first is quantum utility, in which a quantum computer performs computations that are useful in some practical way. The second is quantum advantage, in which a quantum system completes calculations in a fraction of the time it would take a typical computer. (IBM and a startup called Pasqal have published a useful discussion about what would be required to verifiably demonstrate a quantum advantage.)

Today, Google and a large collection of academic collaborators are publishing a paper describing a computational approach that demonstrates a quantum advantage compared to current algorithms—and may actually help us achieve something useful.

Out of time

Google’s latest effort centers on something it’s calling “quantum echoes.” The approach could be described as a series of operations on the hardware qubits that make up its machine. These qubits hold a single bit of quantum information in a superposition between two values, with probabilities of finding the qubit in one value or the other when it’s measured. Each qubit is entangled with its neighbors, allowing its probability to influence those of all the qubits around it. The operations that allow computation, called gates, are ways of manipulating these probabilities. Most current hardware, including Google’s, performs manipulations on one or two qubits at a time (termed one- and two-qubit gates, respectively).

For quantum echoes, the operations involved performing a set of two-qubit gates, altering the state of the system, and later performing the reverse set of gates. On its own, this would return the system to its original state. But for quantum echoes, Google inserts single-qubit gates performed with a randomized parameter. This alters the state of the system before the reverse operations take place, ensuring that the system won’t return to exactly where it started. That explains the “echoes” portion of the name: You’re sending an imperfect copy back toward where things began, much like an echo involves the imperfect reversal of sound waves.
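To make that forward-perturb-reverse structure concrete, here is a minimal toy version of a quantum echo, simulated with plain numpy state vectors. Everything about it is invented for illustration (three qubits, random two-qubit layers, a random Z rotation as the perturbation); it is a sketch of the general pattern, not Google’s actual circuit.

import numpy as np

rng = np.random.default_rng(0)
n = 3                                  # qubits in the toy register
dim = 2 ** n
psi0 = np.zeros(dim, dtype=complex)
psi0[0] = 1.0                          # start in |000>

def random_two_qubit_gate():
    # Haar-ish random 4x4 unitary via QR decomposition
    m = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
    q, r = np.linalg.qr(m)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def embed(gate, pair):
    # Lift a 4x4 gate acting on neighboring qubits (pair, pair+1) onto the full register
    return np.kron(np.kron(np.eye(2 ** pair), gate), np.eye(2 ** (n - pair - 2)))

# "Forward evolution": a few layers of two-qubit gates on neighboring pairs
U = np.eye(dim, dtype=complex)
for _layer in range(4):
    for pair in (0, 1):
        U = embed(random_two_qubit_gate(), pair) @ U

# "Butterfly" perturbation: one single-qubit rotation with a random angle on the last qubit
theta = rng.uniform(0, 2 * np.pi)
rz = np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])
B = np.kron(np.eye(dim // 2), rz)

# Echo: forward, perturb, then the exact reverse of forward.
# Without B, U.conj().T @ U @ psi0 would return |000> exactly.
psi_echo = U.conj().T @ (B @ (U @ psi0))
print("return probability |<000|echo>|^2 =", abs(psi_echo[0]) ** 2)

Drop the perturbation and the reversal is perfect; with it, the return probability falls below 1, which is the imperfect echo the name refers to.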

That’s what the process looks like in terms of operations performed on the quantum hardware. But it’s probably more informative to think of it in terms of a quantum system’s behavior. As Google’s Tim O’Brien explained, “You evolve the system forward in time, then you apply a small butterfly perturbation, and then you evolve the system backward in time.” The forward evolution is the first set of two-qubit gates, the small perturbation is the randomized one-qubit gate, and the second set of two-qubit gates is the equivalent of sending the system backward in time.

Because this is a quantum system, however, strange things happen. “On a quantum computer, these forward and backward evolutions, they interfere with each other,” O’Brien said. One way to think about that interference is in terms of probabilities. The system has multiple paths between its start point and the point of reflection—where it goes from evolving forward in time to evolving backward—and from that reflection point back to a final state. Each of those paths has a probability associated with it. And since we’re talking about quantum mechanics, those paths can interfere with each other, increasing some probabilities at the expense of others. That interference ultimately determines where the system ends up.

(Technically, these are termed “out of time order correlations,” or OTOCs. If you read the Nature paper describing this work, prepare to see that term a lot.)

Demonstrating advantage

So how do you turn quantum echoes into an algorithm? On its own, a single “echo” can’t tell you much about the system—the probabilities ensure that any two runs might show different behaviors. But if you repeat the operations multiple times, you can begin to understand the details of this quantum interference. And performing the operations on a quantum computer ensures that it’s easy to simply rerun the operations with different random one-qubit gates and get many instances of the initial and final states—and thus a sense of the probability distributions involved.
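Continuing the toy sketch above (same caveat: this is illustrative only, not the out-of-time-order correlator the paper actually measures), repeated runs with fresh random perturbations are what turn a single noisy echo into usable statistics:

# Rerun the echo many times with fresh random single-qubit perturbations and average,
# the way repeated sampling builds up a picture of the underlying probability distribution.
samples = []
for _ in range(2000):
    theta = rng.uniform(0, 2 * np.pi)
    rz = np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])
    B = np.kron(np.eye(dim // 2), rz)
    echo = U.conj().T @ (B @ (U @ psi0))     # same forward-perturb-reverse sequence
    samples.append(abs(echo[0]) ** 2)        # overlap with the |000> starting state
print("mean return probability over random perturbations:", np.mean(samples))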

This is also where Google’s quantum advantage comes from. Everyone involved agrees that the precise behavior of a quantum echo of moderate complexity can be modeled using any leading supercomputer. But doing so is very time-consuming, so repeating those simulations a few times becomes unrealistic. The paper estimates that a measurement that took its quantum computer 2.1 hours to perform would take the Frontier supercomputer approximately 3.2 years. Unless someone devises a far better classical algorithm than what we have today, this represents a pretty solid quantum advantage.
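A quick back-of-the-envelope check of those numbers (assuming a 365.25-day year) lands on roughly the 13,000-fold gap quoted at the top of this article:

frontier_hours = 3.2 * 365.25 * 24   # about 28,000 hours on Frontier
print(frontier_hours / 2.1)          # roughly 13,400, in line with "13,000 times longer"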

But is it a useful algorithm? The repeated sampling can act a bit like the Monte Carlo sampling done to explore the behavior of a wide variety of physical systems. Typically, however, we don’t view algorithms as modeling the behavior of the underlying hardware they’re being run on; instead, they’re meant to model some other physical system we’re interested in. That’s where Google’s announcement stands apart from its earlier work—the company believes it has identified an interesting real-world physical system with behaviors that the quantum echoes can help us understand.

That system is a small molecule in a nuclear magnetic resonance (NMR) machine. In a second draft paper, being published on the arXiv later today, Google collaborated with a large collection of NMR experts to explore that use.

From computers to molecules

NMR is based on the fact that the nucleus of every atom has a quantum property called spin. When nuclei are held near to each other, such as when they’re in the same molecule, these spins can influence one another. NMR uses magnetic fields and photons to manipulate these spins and can be used to infer structural details, like how far apart two given atoms are. But as molecules get larger, these spin networks can extend for greater distances and become increasingly complicated to model. So NMR has been limited to focusing on the interactions of relatively nearby spins.

For this work, though, the researchers figured out how to use an NMR machine to create the physical equivalent of a quantum echo in a molecule. The work involved synthesizing the molecule with a specific isotope of carbon (carbon-13) in a known location in the molecule. That isotope could be used as the source of a signal that propagates through the network of spins formed by the molecule’s atoms.

“The OTOC experiment is based on a many-body echo, in which polarization initially localized on a target spin migrates through the spin network, before a Hamiltonian-engineered time-reversal refocuses to the initial state,” the team wrote. “This refocusing is sensitive to perturbations on distant butterfly spins, which allows one to measure the extent of polarization propagation through the spin network.”
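As a rough cartoon of what that quote describes (the Hamiltonian, couplings, and five-spin chain below are invented for illustration; they are not the dipolar physics or pulse sequence from the paper), a tiny simulation shows how the refocused polarization on the source spin is sensitive to a perturbation on a distant “butterfly” spin:

import numpy as np
from scipy.linalg import expm

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def op(single, site, n):
    # Embed a single-spin operator at `site` in an n-spin register
    mats = [single if k == site else I2 for k in range(n)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

n = 5
# Toy Hamiltonian: nearest-neighbor ZZ couplings plus transverse fields so polarization can spread
H = sum(0.8 * op(Z, k, n) @ op(Z, k + 1, n) for k in range(n - 1))
H += sum(0.5 * op(X, k, n) for k in range(n))

t = 3.0
U_fwd = expm(-1j * H * t)            # forward evolution
U_bwd = expm(+1j * H * t)            # engineered "time reversal"

psi0 = np.zeros(2 ** n, dtype=complex)
psi0[0] = 1.0                        # all spins up; polarization starts on spin 0
Z0 = op(Z, 0, n)                     # readout on the source spin

for label, butterfly in [("no perturbation", np.eye(2 ** n)),
                         ("flip distant spin 4", op(X, 4, n))]:
    psi = U_bwd @ (butterfly @ (U_fwd @ psi0))
    print(label, "-> <Z_0> =", np.real(psi.conj() @ (Z0 @ psi)))

Without the perturbation, the backward evolution undoes the forward one exactly and the source spin’s polarization refocuses to 1; disturbing the distant spin degrades the refocused signal, which is the sensing effect the quote describes.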

Naturally, something this complicated needed a catchy nickname. The team came up with TARDIS, or Time-Accurate Reversal of Dipolar InteractionS. While that name captures the “out of time order” aspect of OTOC, it’s simply a set of control pulses sent to the NMR sample that starts a perturbation of the molecule’s network of nuclear spins. A second set of pulses then reflects an echo back to the source.

The reflections that return are imperfect, with noise coming from two sources. The first is simply imperfections in the control sequence, a limitation of the NMR hardware. The second is the influence of fluctuations happening in distant atoms along the spin network. These fluctuations occur at random at a characteristic frequency, or the researchers can insert one deliberately by targeting a specific part of the molecule with randomized control signals.

The influence of what’s going on in these distant spins could allow us to use quantum echoes to tease out structural information at greater distances than we currently do with NMR. But to do so, we need an accurate model of how the echoes will propagate through the molecule. And again, that’s difficult to do with classical computations. But it’s very much within the capabilities of quantum computing, which the paper demonstrates.

Where things stand

For now, the team stuck to demonstrations on very simple molecules, making this work mostly a proof of concept. But the researchers are optimistic that there are many ways the system could be used to extract structural information from molecules at distances that are currently unobtainable using NMR. The discussion section of the paper lists a lot of potential upsides that should be explored, and there are plenty of smart people who would love to find new ways of using their NMR machines, so the field is likely to figure out pretty quickly which of these approaches turns out to be practically useful.

The fact that the demonstrations were done with small molecules, however, means that the modeling run on the quantum computer could also have been done on classical hardware (it only required 15 hardware qubits). So Google is claiming both quantum advantage and quantum utility, but not at the same time. The sorts of complex, long-distance interactions that would be out of range of classical simulation are still a bit beyond the reach of the current quantum hardware. O’Brien estimated that the hardware’s fidelity would have to improve by a factor of three or four to model molecules that are beyond classical simulation.

The quantum advantage issue should also be seen as a work in progress. Google has collaborated with enough researchers at enough institutions that there’s unlikely to be a major improvement in algorithms that could allow classical computers to catch up. Until the community as a whole has some time to digest the announcement, though, we shouldn’t take that as a given.

The other issue is verifiability. Some quantum algorithms will produce results that can be easily verified on classical hardware—situations where it’s hard to calculate the right result but easy to confirm a correct answer. Quantum echoes isn’t one of those, so we’ll need another quantum computer to verify the behavior Google has described.

But Google told Ars nothing is up to the task yet. “No other quantum processor currently matches both the error rates and number of qubits of our system, so our quantum computer is the only one capable of doing this at present,” the company said. (For context, Google says that the algorithm was run on up to 65 qubits, but the chip has 105 qubits total.)

There’s a good chance that other companies would disagree with that contention, but it hasn’t been possible to ask them ahead of the paper’s release.

In any case, even if this claim proves controversial, Google’s Michel Devoret, a recent Nobel winner, hinted that we shouldn’t have long to wait for additional ones. “We have other algorithms in the pipeline, so we will hopefully see other interesting quantum algorithms,” Devoret said.

Nature, 2025. DOI: 10.1038/s41586-025-09526-6




YouTube’s likeness detection has arrived to help stop AI doppelgängers

AI content has proliferated across the Internet over the past few years, but those early confabulations with mutated hands have evolved into synthetic images and videos that can be hard to differentiate from reality. Having helped to create this problem, Google has some responsibility to keep AI video in check on YouTube. To that end, the company has started rolling out its promised likeness detection system for creators.

Google’s powerful and freely available AI models have helped fuel the rise of AI content, some of which is aimed at spreading misinformation and harassing individuals. Creators and influencers fear their brands could be tainted by a flood of AI videos that show them saying and doing things that never happened—even lawmakers are fretting about this. Google has placed a large bet on the value of AI content, so banning AI from YouTube, as many want, simply isn’t happening.

Earlier this year, YouTube promised tools that would flag face-stealing AI content on the platform. The likeness detection tool, which is similar to the site’s copyright detection system, has now expanded beyond the initial small group of testers. YouTube says the first batch of eligible creators have been notified that they can use likeness detection, but interested parties will need to hand Google even more personal information to get protection from AI fakes.

Sneak Peek: Likeness Detection on YouTube.

Currently, likeness detection is a beta feature in limited testing, so not all creators will see it as an option in YouTube Studio. When it does appear, it will be tucked into the existing “Content detection” menu. In YouTube’s demo video, the setup flow appears to assume the channel has only a single host whose likeness needs protection. That person must verify their identity, which requires a photo of a government ID and a video of their face. It’s unclear why YouTube needs this data in addition to the videos people have already posted with their oh-so stealable faces, but rules are rules.



Google reportedly searching for 15 Pixel “Superfans” to test unreleased phones

The Pixel Superfans program has offered freebies and special events in the past, but access to unreleased phones would be the biggest benefit yet. The document explains that those chosen for the program will have to sign a strict non-disclosure agreement and agree to keep the phones in a protective case that disguises their physical appearance at all times.

Left to right: Pixel 10, Pixel 10 Pro, Pixel 10 Pro XL. Credit: Ryan Whitwam

Pixels are among the most-leaked phones every year. In the past, devices have been photographed straight off the manufacturing line or “fallen off a truck” and ended up for sale months before release. However, it’s unlikely the 15 Superfan testers will lead to more leaks. Like most brands, Google marks its pre-release sample phones with physical identifiers to help trace their origin if they end up in a leak or auction listing. The NDA will probably be sufficiently scary to keep the testers in line, as they won’t want to lose their Superfan privileges. Google has also started sharing details on its phones earlier—its Superfan testers may even be involved in these previews.

You can sign up to be a Superfan at any time, but it can take a few weeks to get access after applying. Google has not confirmed when its selection process will begin, but it would have to be soon if Google intends to gather feedback for the Pixel 11. Google’s next phone has probably been in the works for over a year, and it will have to lock down the features at least several months ahead of the expected August 2026 release window. The Pixel 10a is expected to launch even sooner, in spring 2026.



OnePlus unveils OxygenOS 16 update with deep Gemini integration

The updated Android software expands what you can add to Mind Space and uses Gemini. For starters, you can add scrolling screenshots and voice memos up to 60 seconds in length. This provides more data for the AI to generate content. For example, if you take screenshots of hotel listings and airline flights, you can tell Gemini to use your Mind Space content to create a trip itinerary. This will be fully integrated with the phone and won’t require a separate subscription to Google’s AI tools.

Credit: OnePlus

Mind Space isn’t a totally new idea—it’s quite similar to AI features like Nothing’s Essential Space and Google’s Pixel Screenshots and Journal. The idea is that if you give an AI model enough data on your thoughts and plans, it can provide useful insights. That’s still hypothetical based on what we’ve seen from other smartphone OEMs, but that’s not stopping OnePlus from fully embracing AI in Android 16.

In addition to beefing up Mind Space, OxygenOS 16 will also add system-wide AI writing tools, which is another common AI add-on. Like the systems from Apple, Google, and Samsung, you will be able to use the OnePlus writing tools to adjust text, proofread, and generate summaries.

OnePlus will make OxygenOS 16 available starting October 17 as an open beta. You’ll need a OnePlus device from the past three years to run the software, both in the beta phase and when it’s finally released. As for the final release date, OnePlus hasn’t offered specifics. The initial OxygenOS 16 release will arrive with the OnePlus 15, with updates for other supported phones and tablets coming later.



Google’s AI videos get a big upgrade with Veo 3.1

It’s getting harder to know what’s real on the Internet, and Google is not helping one bit with the announcement of Veo 3.1. The company’s new video model supposedly offers better audio and realism, along with greater prompt accuracy. The updated video AI will be available throughout the Google ecosystem, including the Flow filmmaking tool, where the new model will unlock additional features. And if you’re worried about the cost of conjuring all these AI videos, Google is also adding a “Fast” variant of Veo.

Veo made waves when it debuted earlier this year, demonstrating a staggering improvement in AI video quality just a few months after Veo 2’s release. It turns out that having all that video on YouTube is very useful for training AI models, so Google is already moving on to Veo 3.1 with a raft of new features.

Google says Veo 3.1 offers stronger prompt adherence, which results in better video outputs and fewer wasted compute cycles. Audio, which was a hallmark feature of the Veo 3 release, has reportedly improved, too. Veo 3’s text-to-video was limited to 720p landscape output, but there’s an ever-increasing volume of vertical video on the Internet. So Veo 3.1 can produce both landscape (16:9) and portrait (9:16) video.

Google previously said it would bring Veo video tools to YouTube Shorts, which use a vertical video format like TikTok. The release of Veo 3.1 probably opens the door to fulfilling that promise. You can bet Veo videos will show up more frequently on TikTok as well now that it fits the format. This release also keeps Google in its race with OpenAI, which recently released a Sora iPhone app with an impressive new version of its video-generating AI.



Google will let Gemini schedule meetings for you in Gmail

Meetings can be a real drain on productivity, but a new Gmail feature might at least cut down on the time you spend scheduling them. Google has announced that “Help Me Schedule” is coming to Gmail, leveraging Gemini AI to recognize when you want to set up a meeting and to offer possible meeting times for the email recipient to choose from.

The new meeting feature is reminiscent of Magic Cue on Google’s latest Pixel phones. As you type emails, Gmail will be able to recognize when you are planning a meeting. A Help Me Schedule button will appear in the toolbar. Upon clicking, Google’s AI will swing into action and find possible meeting times that match the context of your message and are available in your calendar.

When you engage with Help Me Schedule, the AI generates an in-line meeting widget for your message. The recipient can select the time that works for them, and that’s it—the meeting is scheduled for both parties. What about meetings with more than one invitee? Google says the feature won’t support groups at launch.

Google has been on a Gemini-fueled tear lately, expanding access to AI features across a range of products. The company’s nano banana image model is coming to multiple products, and the Veo video model is popping up in Photos and YouTube. Gemini has also rolled out to Google Home to offer AI-assisted notifications and activity summaries.
