YouTube’s likeness detection has arrived to help stop AI doppelgängers

AI content has proliferated across the Internet over the past few years, but those early confabulations with mutated hands have evolved into synthetic images and videos that can be hard to differentiate from reality. Having helped to create this problem, Google has some responsibility to keep AI video in check on YouTube. To that end, the company has started rolling out its promised likeness detection system for creators.

Google’s powerful and freely available AI models have helped fuel the rise of AI content, some of which is aimed at spreading misinformation and harassing individuals. Creators and influencers fear their brands could be tainted by a flood of AI videos that show them saying and doing things that never happened—even lawmakers are fretting about this. Google has placed a large bet on the value of AI content, so banning AI from YouTube, as many want, simply isn’t happening.

Earlier this year, YouTube promised tools that would flag face-stealing AI content on the platform. The likeness detection tool, which is similar to the site’s copyright detection system, has now expanded beyond the initial small group of testers. YouTube says the first batch of eligible creators have been notified that they can use likeness detection, but interested parties will need to hand Google even more personal information to get protection from AI fakes.

Sneak Peek: Likeness Detection on YouTube.

Currently, likeness detection is a beta feature in limited testing, so not all creators will see it as an option in YouTube Studio. When it does appear, it will be tucked into the existing “Content detection” menu. In YouTube’s demo video, the setup flow appears to assume the channel has only a single host whose likeness needs protection. That person must verify their identity, which requires a photo of a government ID and a video of their face. It’s unclear why YouTube needs this data in addition to the videos people have already posted with their oh-so-stealable faces, but rules are rules.

Google reportedly searching for 15 Pixel “Superfans” to test unreleased phones

The Pixel Superfans program has offered freebies and special events in the past, but access to unreleased phones would be the biggest benefit yet. The document explains that those chosen for the program will have to sign a strict non-disclosure agreement and agree to keep the phones in protective cases that disguise their physical appearance at all times.

Left to right: Pixel 10, Pixel 10 Pro, Pixel 10 Pro XL. Credit: Ryan Whitwam

Pixels are among the most-leaked phones every year. In the past, devices have been photographed straight off the manufacturing line or “fallen off a truck” and ended up for sale months before release. However, it’s unlikely the 15 Superfan testers will lead to more leaks. Like most brands, Google marks its pre-release sample phones with physical identifiers to help trace their origin if they end up in a leak or an auction listing. The NDA will probably be sufficiently scary to keep the testers in line, as they won’t want to lose their Superfan privileges. Google has also started sharing details on its phones earlier—its Superfan testers may even be involved in these previews.

You can sign up to be a Superfan at any time, but it can take a few weeks to get access after applying. Google has not confirmed when its selection process will begin, but it would have to be soon if Google intends to gather feedback for the Pixel 11. Google’s next phone has probably been in the works for over a year, and it will have to lock down the features at least several months ahead of the expected August 2026 release window. The Pixel 10a is expected to launch even sooner, in spring 2026.

OnePlus unveils OxygenOS 16 update with deep Gemini integration

The updated Android software expands what you can add to Mind Space and puts Gemini to work on that content. For starters, you can add scrolling screenshots and voice memos up to 60 seconds in length, giving the AI more data to draw on. For example, if you take screenshots of hotel listings and airline flights, you can tell Gemini to use your Mind Space content to create a trip itinerary. This will be fully integrated with the phone and won’t require a separate subscription to Google’s AI tools.

Credit: OnePlus

Mind Space isn’t a totally new idea—it’s quite similar to AI features like Nothing’s Essential Space and Google’s Pixel Screenshots and Journal. The idea is that if you give an AI model enough data on your thoughts and plans, it can provide useful insights. That’s still hypothetical based on what we’ve seen from other smartphone OEMs, but that’s not stopping OnePlus from fully embracing AI in Android 16.

In addition to beefing up Mind Space, OxygenOS 16 will also add system-wide AI writing tools, which is another common AI add-on. Like the systems from Apple, Google, and Samsung, you will be able to use the OnePlus writing tools to adjust text, proofread, and generate summaries.

OnePlus will make OxygenOS 16 available starting October 17 as an open beta. You’ll need a OnePlus device from the past three years to run the software, both in the beta phase and in the final release. OnePlus hasn’t offered a specific date for that release, but the stable OxygenOS 16 build will debut with the OnePlus 15 devices, with releases for other supported phones and tablets coming later.

Google’s AI videos get a big upgrade with Veo 3.1

It’s getting harder to know what’s real on the Internet, and Google is not helping one bit with the announcement of Veo 3.1. The company’s new video model supposedly offers better audio and realism, along with greater prompt accuracy. The updated video AI will be available throughout the Google ecosystem, including the Flow filmmaking tool, where the new model will unlock additional features. And if you’re worried about the cost of conjuring all these AI videos, Google is also adding a “Fast” variant of Veo.

Veo 3 made waves when it debuted earlier this year, demonstrating a staggering improvement in AI video quality just a few months after Veo 2’s release. It turns out that having all that video on YouTube is very useful for training AI models, so Google is already moving on to Veo 3.1 with a raft of new features.

Google says Veo 3.1 offers stronger prompt adherence, which results in better video outputs and fewer wasted compute cycles. Audio, which was a hallmark feature of the Veo 3 release, has reportedly improved, too. Veo 3’s text-to-video was limited to 720p landscape output, but there’s an ever-increasing volume of vertical video on the Internet. So Veo 3.1 can produce both landscape (16:9) and portrait (9:16) video.

Google previously said it would bring Veo video tools to YouTube Shorts, which uses a vertical video format like TikTok. The release of Veo 3.1 probably opens the door to fulfilling that promise. You can bet Veo videos will show up more frequently on TikTok as well now that the model supports the format. This release also keeps Google in its race with OpenAI, which recently released a Sora iPhone app with an impressive new version of its video-generating AI.

Google will let Gemini schedule meetings for you in Gmail

Meetings can be a real drain on productivity, but a new Gmail feature might at least cut down on the time you spend scheduling them. Google has announced that “Help Me Schedule” is coming to Gmail; the feature leverages Gemini to recognize when you want to schedule a meeting and offers possible meeting times for the email recipient to choose from.

The new meeting feature is reminiscent of Magic Cue on Google’s latest Pixel phones. As you type an email, Gmail will be able to recognize when you are planning a meeting, and a Help Me Schedule button will appear in the toolbar. When you click it, Google’s AI will swing into action and find possible meeting times that match the context of your message and are open on your calendar.

When you engage with Help Me Schedule, the AI generates an in-line meeting widget for your message. The recipient can select the time that works for them, and that’s it—the meeting is scheduled for both parties. What about meetings with more than one invitee? Google says the feature won’t support groups at launch.

Google has been on a Gemini-fueled tear lately, expanding access to AI features across a range of products. The company’s nano banana image model is coming to multiple products, and the Veo video model is popping up in Photos and YouTube. Gemini has also rolled out to Google Home to offer AI-assisted notifications and activity summaries.

Nvidia sells tiny new computer that puts big AI on your desktop

On Tuesday, Nvidia announced it will begin taking orders for the DGX Spark, a $4,000 desktop AI computer that wraps one petaflop of computing performance and 128GB of unified memory into a form factor small enough to sit on a desk. Its biggest selling point is likely that large pool of integrated memory, which can hold AI models too big for consumer GPUs.

Nvidia will begin taking orders for the DGX Spark on Wednesday, October 15, through its website, with systems also available from manufacturing partners and select US retail stores.

The DGX Spark, which Nvidia previewed as “Project DIGITS” in January and formally named in May, represents Nvidia’s attempt to create a new category of desktop computer workstation specifically for AI development.

With the Spark, Nvidia seeks to address a problem facing some AI developers: Many AI tasks exceed the memory and software capabilities of standard PCs and workstations (more on that below), forcing them to shift their work to cloud services or data centers. However, the actual market for a desktop AI workstation remains uncertain, particularly given the upfront cost versus cloud alternatives, which allow developers to pay as they go.

Nvidia’s Spark reportedly includes enough memory to locally run larger-than-typical AI models with up to 200 billion parameters, and to fine-tune models containing up to 70 billion parameters, without requiring remote infrastructure. Potential uses include running larger open-weights language models and media synthesis models such as AI image generators.

According to Nvidia, users can customize Black Forest Labs’ Flux.1 models for image generation, build vision search and summarization agents using Nvidia’s Cosmos Reason vision language model, or create chatbots using the Qwen3 model optimized for the DGX Spark platform.

Big memory in a tiny box

Nvidia has squeezed a lot into a 2.65-pound box that measures 5.91 x 5.91 x 1.99 inches and uses 240 watts of power. The system runs on Nvidia’s GB10 Grace Blackwell Superchip, includes ConnectX-7 200Gb/s networking, and uses NVLink-C2C technology that provides five times the bandwidth of PCIe Gen 5. It also includes the aforementioned 128GB of unified memory that is shared between system and GPU tasks.
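The 200-billion-parameter figure only works out with aggressive quantization. A back-of-the-envelope check (this is our arithmetic, not a configuration Nvidia has stated; the 4-bit assumption is ours):

```python
# Rough memory footprint of model weights at different precisions.
# Assumes weights dominate memory use; activations and KV cache add overhead.
def weight_memory_gb(params_billions: float, bits_per_weight: int) -> float:
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9  # decimal gigabytes

# A 200B-parameter model at 4-bit precision needs ~100 GB of weights,
# which fits in the Spark's 128GB of unified memory.
print(weight_memory_gb(200, 4))   # 100.0
# The same model at FP16 would need ~400 GB, far beyond the Spark.
print(weight_memory_gb(200, 16))  # 400.0
```

By the same math, a 70-billion-parameter model at 16-bit precision takes about 140GB, so even fine-tuning at that scale likely depends on reduced-precision techniques.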

Hackers can steal 2FA codes and private messages from Android phones

In the second step, Pixnapping performs graphical operations on individual pixels that the targeted app sent to the rendering pipeline. These operations select the coordinates of the target pixels the attacker wants to steal and check whether the color at each of those coordinates is white or non-white.

“Suppose, for example, [the attacker] wants to steal a pixel that is part of the screen region where a 2FA character is known to be rendered by Google Authenticator,” Wang said. “This pixel is either white (if nothing was rendered there) or non-white (if part of a 2FA digit was rendered there). Then, conceptually, the attacker wants to cause some graphical operations whose rendering time is long if the target victim pixel is non-white and short if it is white. The malicious app does this by opening some malicious activities (i.e., windows) in front of the victim app that was opened in Step 1.”

The third step measures the amount of time required at each coordinate. By combining the times for each one, the attack can rebuild the images sent to the rendering pipeline one pixel at a time.
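The classification at the heart of this step can be illustrated with a toy sketch. This is a simulation of the principle only, not the actual GPU side channel, and the threshold value is invented for illustration:

```python
# Toy illustration of the Pixnapping principle: classify each target pixel
# as white or non-white based on how long a graphical operation took.
# A real attack measures GPU rendering time; here we just threshold numbers.
THRESHOLD_MS = 5.0  # hypothetical cutoff between "fast" (white) and "slow"

def classify_pixels(render_times_ms):
    """Return True for non-white (slow-to-render) pixels, False for white."""
    return [t > THRESHOLD_MS for t in render_times_ms]

# Simulated timings for a row of pixels: the slow ones trace digit strokes.
times = [1.2, 7.8, 8.1, 1.0, 7.5]
print(classify_pixels(times))  # [False, True, True, False, True]
```

Repeating this over every coordinate of interest yields a one-bit-per-pixel image, which is enough to read back rendered text such as a 2FA digit.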

The amount of time required to perform the attack depends on several variables, including how many coordinates need to be measured. In some cases, there’s no hard deadline for obtaining the information the attacker wants to steal. In other cases—such as stealing a 2FA code—every second counts, since each one is valid for only 30 seconds. In the paper, the researchers explained:

To meet the strict 30-second deadline for the attack, we also reduce the number of samples per target pixel to 16 (compared to the 34 or 64 used in earlier attacks) and decrease the idle time between pixel leaks from 1.5 seconds to 70 milliseconds. To ensure that the attacker has the full 30 seconds to leak the 2FA code, our implementation waits for the beginning of a new 30-second global time interval, determined using the system clock.

… We use our end-to-end attack to leak 100 different 2FA codes from Google Authenticator on each of our Google Pixel phones. Our attack correctly recovers the full 6-digit 2FA code in 73%, 53%, 29%, and 53% of the trials on the Pixel 6, 7, 8, and 9, respectively. The average time to recover each 2FA code is 14.3, 25.8, 24.9, and 25.3 seconds for the Pixel 6, Pixel 7, Pixel 8, and Pixel 9, respectively. We are unable to leak 2FA codes within 30 seconds using our implementation on the Samsung Galaxy S25 device due to significant noise. We leave further investigation of how to tune our attack to work on this device to future work.
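The interval alignment the researchers describe, waiting for the start of the next 30-second TOTP window, is simple clock arithmetic. A minimal sketch (the function name is ours, not the researchers’ code):

```python
def seconds_until_next_window(now: float, period: int = 30) -> float:
    """Seconds remaining until the next TOTP window boundary.

    TOTP codes change at fixed multiples of the period (30 s by default),
    so waiting this long guarantees a full window for the attack.
    """
    return period - (now % period)

# At t = 65 s, the next 30-second boundary is at t = 90 s, i.e. 25 s away.
print(seconds_until_next_window(65))  # 25.0
```

An attacker (or any TOTP-synchronized tool) would then sleep for that duration before starting, ensuring the code being leaked remains valid for the entire measurement run.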

In an email, a Google representative wrote, “We issued a patch for CVE-2025-48561 in the September Android security bulletin, which partially mitigates this behavior. We are issuing an additional patch for this vulnerability in the December Android security bulletin. We have not seen any evidence of in-the-wild exploitation.”

Google’s Photoshop-killer AI model is coming to search, Photos, and NotebookLM

NotebookLM added a video overview feature several months back, which uses AI to generate a video summary of the content you’ve added to the notebook. The addition of Nano Banana to NotebookLM is much less open-ended. Instead of entering prompts to edit images, NotebookLM has a new set of video styles powered by Nano Banana, including whiteboard, anime, retro print, and more. The original style is still available as “Classic.”

My favorite video.

NotebookLM’s videos are still somewhat limited, but this update adds a second general format. You can now choose “Brief” in addition to “Explainer,” with the option to add prompts that steer the video in the right direction. That’s not a guarantee, though, as this is still generative AI. At least the style should be more consistent with the addition of Nano Banana.

The updated image editor is also coming to Google Photos, though Google doesn’t have a firm timeline. Google claims that its Nano Banana model is a “major upgrade” over its previous image-editing model. Conversational editing was added to Photos last month, but it doesn’t yet use the Nano Banana model that impressed testers over the summer. Google says Nano Banana will arrive in the Photos app in the next few weeks, which should make those conversational edits much less frustrating.

UK regulators plan to force Google changes under new competition law

Google is facing multiple antitrust actions in the US, and European regulators have been similarly tightening the screws. You can now add the UK to the list of Google’s governmental worries. The country’s antitrust regulator, known as the Competition and Markets Authority (CMA), has confirmed that Google has “strategic market status,” paving the way to more limits on how Google does business in the UK. Naturally, Google objects to this course of action.

The designation is connected to the UK’s new digital markets competition regime, which was enacted at the beginning of the year. Shortly after, the CMA announced it was conducting an investigation into whether Google should be designated with strategic market status. The outcome of that process is a resounding “yes.”

This label does not mean Google has done anything illegal or that it is subject to immediate regulation. It simply means the company has “substantial and entrenched market power” in one or more areas under the purview of the CMA. Specifically, the agency has found that Google is dominant in search and search advertising, holding a greater than 90 percent share of Internet searches in the UK.

In Google’s US antitrust trials, the rapid rise of generative AI has muddied the waters. Google has claimed on numerous occasions that the proliferation of AI firms offering search services means there is ample competition. In the UK, regulators note that Google’s Gemini AI assistant is not in the scope of the strategic market status designation. However, some AI features connected to search, like AI Overviews and AI Mode, are included.

According to the CMA, consultations on possible interventions to ensure effective competition will begin later this year. The agency’s first set of antitrust measures will likely expand on solutions that Google has introduced in other regions or has offered on a voluntary basis in the UK. This could include giving publishers more control over how their data is used in search and “choice screens” that suggest Google alternatives to users. Measures that require new action from Google could be announced in the first half of 2026.

YouTube prepares to welcome back banned creators with “second chance” program

A few weeks ago, Google told US Rep. Jim Jordan (R-Ohio) that it would allow creators banned for COVID and election misinformation to rejoin the platform. It didn’t offer many details in the letter, but now YouTube has explained the restoration process. YouTube’s “second chances” are actually more expansive than the letter made it seem. Going forward, almost anyone banned from YouTube will have an opportunity to request a new channel. The company doesn’t guarantee approval, but you can expect to see plenty of banned creators back on Google’s video platform in the coming months.

YouTube will now allow banned creators to request reinstatement, but this is separate from appealing a ban. Creators can still appeal a termination, and if the appeal succeeds, their channel comes back as if nothing happened. One year after a ban, creators will now also have the “second chance” option.

“We know many terminated creators deserve a second chance,” the blog post reads, noting that YouTube itself doesn’t always get things right the first time. The option for getting a new channel will appear in YouTube Studio on the desktop, and Google expects to begin sending out these notices in the coming months. However, anyone terminated for copyright violations is out of luck—Google does not forgive such infringement as easily as it does claiming that COVID is a hoax.

The readmission process will still come with a review by YouTube staff, and the company says it will take multiple factors into consideration, including whether the behavior that got the channel banned is still against the rules. This is clearly a reference to COVID and election misinformation, which Google did not allow on YouTube for several years but has since stopped policing. The site will also consider factors like how severe or persistent the violations were and whether the creator’s actions “harmed or may continue to harm the YouTube community.”

Apple and Google reluctantly comply with Texas age verification law

Apple yesterday announced a plan to comply with a Texas age verification law and warned that changes required by the law will reduce privacy for app users.

“Beginning January 1, 2026, a new state law in Texas—SB2420—introduces age assurance requirements for app marketplaces and developers,” Apple said yesterday in a post for developers. “While we share the goal of strengthening kids’ online safety, we are concerned that SB2420 impacts the privacy of users by requiring the collection of sensitive, personally identifiable information to download any app, even if a user simply wants to check the weather or sports scores.”

The Texas App Store Accountability Act requires app stores to verify users’ ages and imposes restrictions on those under 18. Apple said that developers will have “to adopt new capabilities and modify behavior within their apps to meet their obligations under the law.”

Apple’s post noted that similar laws will take effect later in 2026 in Utah and Louisiana. Google also recently announced plans for complying with the three state laws and said the new requirements reduce user privacy.

“While we have user privacy and trust concerns with these new verification laws, Google Play is designing APIs, systems, and tools to help you meet your obligations,” Google told developers in an undated post.

The Utah law is scheduled to take effect May 7, 2026, while the Louisiana law will take effect July 1, 2026. The Texas, Utah, and Louisiana “laws impose significant new requirements on many apps that may need to provide age appropriate experiences to users in these states,” Google said. “These requirements include ingesting users’ age ranges and parental approval status for significant changes from app stores and notifying app stores of significant changes.”

New features for Texas

Apple and Google both announced new features to help developers comply.

“Once this law goes into effect, users located in Texas who create a new Apple Account will be required to confirm whether they are 18 years or older,” Apple said. “All new Apple Accounts for users under the age of 18 will be required to join a Family Sharing group, and parents or guardians will need to provide consent for all App Store downloads, app purchases, and transactions using Apple’s In-App Purchase system by the minor.”

Bank of England warns AI stock bubble rivals 2000 dotcom peak

Share valuations based on past earnings have also reached their highest levels since the dotcom bubble 25 years ago, though the BoE noted they appear less extreme when based on investors’ expectations for future profits. “This, when combined with increasing concentration within market indices, leaves equity markets particularly exposed should expectations around the impact of AI become less optimistic,” the central bank said.

Toil and trouble?

The dotcom bubble offers a potentially instructive parallel to our current era. In the late 1990s, investors poured money into Internet companies based on the promise of a transformed economy, seemingly ignoring whether individual businesses had viable paths to profitability. Between 1995 and March 2000, the Nasdaq index rose 600 percent. When sentiment shifted, the correction was severe: the Nasdaq fell 78 percent from its peak, reaching a low point in October 2002.
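Those two figures combine in a way worth spelling out: a 600 percent rise followed by a 78 percent fall still left the index above its 1995 starting point. Simple arithmetic on the numbers in the text:

```python
# A 600% rise means the index reached 7x its starting level (1 + 6.0).
peak_multiple = 1 + 6.00

# A 78% fall from the peak leaves 22% of the peak value.
trough_multiple = peak_multiple * (1 - 0.78)

# The October 2002 trough was still ~1.54x the 1995 level.
print(round(trough_multiple, 2))  # 1.54
```

In other words, even a crash of that magnitude didn’t erase the entire run-up; it erased the gains of anyone who bought in near the peak.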

Whether we’ll see the same thing or worse if an AI bubble pops is mere speculation at this point. But as in the early 2000s, the question about today’s market isn’t necessarily about the utility of AI tools themselves (the Internet was useful, after all, despite the bubble), but whether the amount of money being poured into the companies that sell them is out of proportion with the profits those tools might eventually generate.

We don’t have a crystal ball to determine when such a bubble might pop, or even whether it’s guaranteed to do so, but we’ll likely see more warning signs if AI-related deals keep growing larger.
