

Two certificate authorities booted from the good graces of Chrome

Google says its Chrome browser will stop trusting certificates from two certificate authorities after “patterns of concerning behavior observed over the past year” diminished trust in their reliability.

The two organizations, Taiwan-based Chunghwa Telecom and Budapest-based Netlock, are among the hundreds of certificate authorities trusted by Chrome and most other browsers to provide digital certificates that encrypt traffic and certify the authenticity of sites. With the ability to mint cryptographic credentials that cause address bars to display a padlock, assuring the trustworthiness of a site, these certificate authorities wield significant control over the security of the web.

Inherent risk

“Over the past several months and years, we have observed a pattern of compliance failures, unmet improvement commitments, and the absence of tangible, measurable progress in response to publicly disclosed incident reports,” members of the Chrome security team wrote Tuesday. “When these factors are considered in aggregate and considered against the inherent risk each publicly-trusted CA poses to the internet, continued public trust is no longer justified.”



Adobe finally releases Photoshop for Android, and it’s free (for now)

Adobe has spent years releasing mobile apps that aren’t Photoshop, and now it’s finally giving people what they want. Yes, real Photoshop. After releasing a mobile version of Photoshop on iPhone earlier this year, the promised Android release has finally arrived. You can download it right now in beta, and it’s free to use for the duration of the beta period.

The mobile app includes a reasonably broad selection of tools from the desktop version of Adobe’s iconic image editor, including masks, clone stamp, layers, transformations, cropping, and an array of generative AI tools. The app looks rather barebones when you first start using it, but the toolbar surfaces features as you select areas and manipulate layers.

Depending on how you count, this is Adobe’s third attempt to do Photoshop on phones. So far, it appears to be the most comprehensive, though. It’s much more capable than Photoshop Express or the ancient Photoshop Touch app, which Adobe unpublished almost a decade ago. If you’re not familiar with the ins and outs of Photoshop, the new app comes with a robust collection of tutorials—just tap the light bulb icon to peruse them.

Photoshop on Android makes a big deal about Adobe’s generative AI features, which let you easily select subjects or backgrounds, remove objects, and insert new content based on a text prompt. This works about as well as the desktop version of Photoshop because it’s relying on the same cloud service to do the heavy lifting. This would have been impressive to see in a mobile app a year ago, but OEM features like Google’s Magic Editor have since become more widespread.



Google settles shareholder lawsuit, will spend $500M on being less evil

“Over the years, we have devoted substantial resources to building robust compliance processes,” a Google spokesperson said. “To avoid protracted litigation we’re happy to make these commitments.”

This case is what’s known as a consolidated derivative litigation, where multiple shareholder lawsuits are combined into a single action. The litigation stretches back to 2021, when a Michigan pension fund accused Google of harming the company’s future by triggering widespread antitrust and regulatory actions with “prolonged and ongoing monopolistic and anticompetitive business practices.” That accusation has only gained more weight in the years since it was made.

Today, Google is coming off three major antitrust losses. In 2023, Google lost an antitrust case brought by Epic Games that accused it of anticompetitive practices in app distribution. In 2024, the US Department of Justice successfully showed that Google has illegally maintained a search monopoly. Finally, Google lost the advertising antitrust case earlier this year, putting its primary revenue driver at risk.

These legal salvos could cost the company billions in fines and force major changes to its business. Google is facing a world in which it might need to open Google Play to other app stores, hand over advertising data to competitors, license its search index, and even sell the Chrome browser. Perhaps the reforms will lead to a changed company, but that won’t undo the damage from the current spate of antitrust actions.



The Gmail app will now create AI summaries whether you want them or not

This block of AI-generated text will soon appear automatically in some threads. Credit: Google

Summarizing content is one of the more judicious applications of generative AI, dating back to the 2017 paper that introduced the transformer architecture. Generative AI has since been employed to create chatbots that will seemingly answer any question, despite their tendency to make mistakes. Grounding the AI output with a few emails usually yields accurate results, but do you really need a robot to summarize your emails? Unless you’re getting novels in your inbox, you can probably just read a few paragraphs.

If you’re certain you don’t want any part of this, there is a solution. Automatic generation of AI summaries is controlled by Gmail’s “smart features.” You (or an administrator of your managed account) can disable that. Open the app settings, select the account, and uncheck the smart features toggle.

For most people, Gmail’s smart features are enabled out of the box, but they’re off by default in Europe and Japan. When you disable them, you won’t see the automatic AI summaries, but there will still be a button to generate those summaries with Gemini. Be aware that smart features also control high-priority notifications, package tracking, Smart Compose, Smart Reply, and nudges. If you can live without all of those features in the mobile app, you can avoid automatic AI summaries. The app will occasionally pester you to turn smart features back on, though.



Google Maps can’t explain why it falsely labeled German autobahns as closed

Ars contacted Google to see if the glitch’s cause has been uncovered. A spokesperson remained vague, only reiterating prior statements that Google “investigated a technical issue that temporarily showed inaccurate road closures on the map” and has “since removed them.”

Apparently, Google only learned of the glitch after users who braved the supposedly closed roads started reporting the errors, prompting Google to remove incorrect stop signs one by one. Engadget reported that the glitch only lasted a couple of hours.

Google’s spokesperson told German media that the company wouldn’t comment on the specific case but noted that Google Maps draws information from three key sources: individual users, public sources (like transportation authorities), and third-party providers.

It wasn’t the first time German drivers had encountered odd roadblocks in Google Maps. Earlier this month, Google Maps “incorrectly displayed motorway tunnels” as closed in another part of Germany, MSN reported. Now, drivers in the area are being advised to check multiple traffic news sources before making travel plans.

While Google has yet to confirm what actually happened, one regional publication noted that the German Automobile Club, Europe’s largest automobile association, had warned that there may be heavy traffic due to the holiday. Google’s glitch may have been tied to traffic forecasts rather than current traffic reports. Google also recently added artificial intelligence features to Google Maps, which could have hallucinated the false closures.

This story was updated on May 30 to include comment from Google.



Google and DOJ tussle over how AI will remake the web in antitrust closing arguments

At the same time, Google is seeking to set itself apart from AI upstarts. “Generative AI companies are not trying to out-Google Google,” said Schmidtlein. Google’s team contends that its actions have not harmed any AI products like ChatGPT or Perplexity, and at any rate, they are not in the search market as defined by the court.

Mehta mused about the future of search, suggesting we may have to rethink what a general search engine is in 2025. “Maybe people don’t want 10 blue links anymore,” he said.

The Chromium problem and an elegant solution

At times during the case, Mehta has expressed skepticism about the divestment of Chrome. During closing arguments, Dahlquist reiterated the close relationship between search and browsers, reminding the court that 35 percent of Google’s search volume comes from Chrome.

Mehta now seems more receptive to a Chrome split than before, perhaps in part because the effects of the other remedies are becoming so murky. He called the Chrome divestment “less speculative” and “more elegant” than the data and placement remedies. Google again claimed, as it has throughout the remedy phase, that forcing it to give up Chrome is unsupported in the law and that Chrome’s dominance is a result of innovation.

Break up the company without touching the sides and getting shocked!

Credit: Aurich Lawson

Even if Mehta leans toward ordering this remedy, Chromium may be a sticking point. The judge seems unconvinced that the supposed buyers—a group which apparently includes almost every major tech firm—have the scale and expertise needed to maintain Chromium. This open source project forms the foundation of many other browsers, making its continued smooth operation critical to the web.

If Google gives up Chrome, Chromium goes with it, but what about the people who maintain it? The DOJ contends that it’s common for employees to come along with an acquisition, but that’s far from certain. There was some discussion of ensuring a buyer could commit to hiring staff to maintain Chromium. The DOJ suggests Google could be ordered to provide financial incentives to ensure critical roles are filled, but that sounds potentially messy.

A Chrome sale seems more likely now than it did earlier, but nothing is assured yet. Following the final arguments from each side, it’s up to Mehta to mull over the facts before deciding Google’s fate. That’s expected to happen in August, but nothing will change for Google right away. The company has already confirmed it will appeal the case, hoping to have the original ruling overturned. It could still be years before this case reaches its ultimate conclusion.



Amazon Fire Sticks enable “billions of dollars” worth of streaming piracy

Amazon Fire Sticks are enabling “billions of dollars” worth of streaming piracy, according to a report today from Enders Analysis, a media, entertainment, and telecommunications research firm. Technologies from other tech giants, including Microsoft, Google, and Facebook, are also enabling what the report’s authors deem an “industrial scale of theft.”

The report, “Video piracy: Big tech is clearly unwilling to address the problem,” focuses on the European market but highlights the global growth of piracy of streaming services as they increasingly acquire rights to live programs, like sporting events.

Per the BBC, the report points to the availability of multiple, simultaneous illegal streams for big events that draw tens of thousands of pirate viewers.

Enders’ report places some blame on Facebook for showing advertisements for access to illegal streams, as well as Google and Microsoft for the alleged “continued depreciation” of their digital rights management (DRM) systems, Widevine and PlayReady, respectively. Ars Technica reached out to Facebook, Google, and Microsoft for comment but didn’t receive a response before publication.

The report echoes complaints shared throughout the industry, including by DAZN, the world’s largest streamer of European soccer. Streaming piracy is “almost a crisis for the sports rights industry,” DAZN’s head of global rights, Tom Burrows, said at The Financial Times’ Business of Football Summit in February. At the same event, Nick Herm, COO of Comcast-owned European telecommunications firm Sky Group, estimated that piracy was costing his company “hundreds of millions of dollars” in revenue. At the time, Enders co-founder Claire Enders said that the pirating of sporting events accounts for “about 50 percent of most markets.”

Jailbroken Fire Sticks

Friday’s Enders report named Fire Sticks as a significant contributor to streaming piracy, calling the hardware a “piracy enabler.”

Enders’ report pointed to security risks that pirate viewers face, including providing credit card information and email addresses to unknown entities, which can make people vulnerable to phishing and malware. However, reports of phishing and malware stemming from streaming piracy, which occurs through various methods besides a Fire TV Stick, seem to be rather limited.



Gemini in Google Drive may finally be useful now that it can analyze videos

Google’s rapid adoption of AI has seen the Gemini “sparkle” icon become an omnipresent element in almost every Google product. It’s there to summarize your email, add items to your calendar, and more—if you trust it to do those things. Gemini is also integrated with Google Drive, where it’s gaining a new feature that could make it genuinely useful: Google’s AI bot will soon be able to watch videos stored in your Drive so you don’t have to.

Gemini is already accessible in Drive, with the ability to summarize documents or folders, gather and analyze data, and expand on the topics covered in your documents. Google says the next step is plugging videos into Gemini, saving you from wasting time scrubbing through a file just to find something of interest.

Using a chatbot to analyze and manipulate text doesn’t always make sense—after all, it’s not hard to skim an email or short document. It can take longer to interact with a chatbot, which might not add any useful insights. Video is different because watching is a linear process in which you are presented with information at the pace the video creator sets. You can change playback speed or rewind to catch something you missed, but that’s more arduous than reading something at your own pace. So Gemini’s video support in Drive could save you real time.

Suppose you have a recorded meeting in video form uploaded to Drive. You could go back and rewatch it to take notes or refresh your understanding of a particular exchange. Or, Google suggests, you can ask Gemini to summarize the video and tell you what’s important. This could be a great alternative, as grounding AI output with a specific data set or file tends to make it more accurate. Naturally, you should still maintain healthy skepticism of what the AI tells you about the content of your video.



AI video just took a startling leap in realism. Are we doomed?


Tales from the cultural singularity

Google’s Veo 3 delivers AI videos of realistic people with sound and music. We put it to the test.

Still image from an AI-generated Veo 3 video of “A 1980s fitness video with models in leotards wearing werewolf masks.” Credit: Google

Last week, Google introduced Veo 3, its newest video generation model that can create 8-second clips with synchronized sound effects and audio dialog—a first for the company’s AI tools. The model, which generates videos at 720p resolution (based on text descriptions called “prompts” or still image inputs), represents what may be the most capable consumer video generator to date, bringing video synthesis close to a point where it is becoming very difficult to distinguish between “authentic” and AI-generated media.

Google also launched Flow, an online AI filmmaking tool that combines Veo 3 with the company’s Imagen 4 image generator and Gemini language model, allowing creators to describe scenes in natural language and manage characters, locations, and visual styles in a web interface.

An AI-generated video from Veo 3: “ASMR scene of a woman whispering “Moonshark” into a microphone while shaking a tambourine”

Both tools are now available to US subscribers of Google AI Ultra, a plan that costs $250 a month and comes with 12,500 credits. Veo 3 videos cost 150 credits per generation, allowing 83 videos on that plan before you run out. Extra credits are available for the price of 1 cent per credit in blocks of $25, $50, or $200. That comes out to about $1.50 per video generation. But is the price worth it? We ran some tests with various prompts to see what this technology is truly capable of.
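If you want to sanity-check those numbers, the arithmetic is straightforward. Here’s a quick sketch in Python using only the figures cited above (plan credits, per-video credit cost, and the 1-cent top-up price):

```python
# Veo 3 cost math on the Google AI Ultra plan, using the figures cited above.
plan_price = 250             # dollars per month for Google AI Ultra
plan_credits = 12_500        # credits included with the plan
credits_per_video = 150      # credits consumed per Veo 3 generation
price_per_credit = 0.01      # extra credits sell for 1 cent each

videos_per_plan = plan_credits // credits_per_video      # 83 full generations
cost_per_video = credits_per_video * price_per_credit    # $1.50 at the top-up rate

print(f"Videos per plan: {videos_per_plan}")             # -> 83
print(f"Cost per video:  ${cost_per_video:.2f}")         # -> $1.50
```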

How does Veo work?

Like other modern video generation models, Veo 3 is built on diffusion technology—the same approach that powers image generators like Stable Diffusion and Flux. The training process works by taking real videos and progressively adding noise to them until they become pure static, then teaching a neural network to reverse this process step by step. During generation, Veo 3 starts with random noise and a text prompt, then iteratively refines that noise into a coherent video that matches the description.
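To make that training-and-sampling loop concrete, here is a minimal toy sketch of a diffusion process in Python. To be clear, this is not Google’s code: it operates on tiny 1D arrays instead of video frames, and `denoise_model` is a stand-in for the trained neural network (a real model like Veo 3 would also condition on the text prompt).

```python
import numpy as np

T = 1000                                   # number of noising steps
betas = np.linspace(1e-4, 0.02, T)         # noise schedule
alphas_cum = np.cumprod(1.0 - betas)       # cumulative signal retention

def add_noise(x0, t, rng):
    """Forward process: blend clean data x0 with Gaussian noise at step t."""
    noise = rng.standard_normal(x0.shape)
    xt = np.sqrt(alphas_cum[t]) * x0 + np.sqrt(1.0 - alphas_cum[t]) * noise
    return xt, noise                       # training teaches a network to predict `noise`

def denoise_model(xt, t):
    """Stand-in for the trained network that predicts the noise added at step t."""
    return np.zeros_like(xt)               # a real model learns this mapping from data

def sample(shape, rng):
    """Reverse process: start from pure static and iteratively refine it."""
    x = rng.standard_normal(shape)         # pure noise, as described above
    for t in reversed(range(T)):
        eps = denoise_model(x, t)
        x = (x - betas[t] / np.sqrt(1.0 - alphas_cum[t]) * eps) / np.sqrt(1.0 - betas[t])
        if t > 0:                          # re-inject a little noise except on the final step
            x += np.sqrt(betas[t]) * rng.standard_normal(shape)
    return x

rng = np.random.default_rng(0)
clip = sample((8, 16), rng)                # e.g., 8 toy "frames" of 16 values each
```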

AI-generated video from Veo 3: “An old professor in front of a class says, ‘Without a firm historical context, we are looking at the dawn of a new era of civilization: post-history.'”

DeepMind won’t say exactly where it sourced the content to train Veo 3, but YouTube is a strong possibility. Google owns YouTube, and DeepMind previously told TechCrunch that Google models like Veo “may” be trained on some YouTube material.

It’s important to note that Veo 3 is a system composed of a series of AI models: a large language model (LLM) that interprets and expands user prompts to guide detailed video creation, a video diffusion model that creates the video, and an audio generation model that applies sound to the video.
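As a purely illustrative sketch of that three-stage composition, here is how such a pipeline might be wired together. Every name here is invented for illustration; this is not Google’s API:

```python
from dataclasses import dataclass

@dataclass
class GeneratedClip:
    frames: list[str]   # placeholder frame data
    audio: bytes        # placeholder waveform

def expand_prompt(user_prompt: str) -> str:
    """Stage 1 (LLM): rewrite the user's prompt into a detailed scene description."""
    return f"Detailed scene: {user_prompt}"        # a real system would call an LLM here

def diffuse_video(scene: str) -> list[str]:
    """Stage 2 (video diffusion): render frames from the scene description."""
    return [f"frame_{i}" for i in range(8 * 24)]   # ~8 seconds at 24 fps

def generate_audio(scene: str, frames: list[str]) -> bytes:
    """Stage 3 (audio model): produce sound synchronized to the rendered frames."""
    return b"\x00" * len(frames)                   # placeholder audio

def veo_like_pipeline(user_prompt: str) -> GeneratedClip:
    scene = expand_prompt(user_prompt)
    frames = diffuse_video(scene)
    return GeneratedClip(frames=frames, audio=generate_audio(scene, frames))

clip = veo_like_pipeline("A muscular barbarian beside a CRT television set")
```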

An AI-generated video from Veo 3: “A male stand-up comic on stage in a night club telling a hilarious joke about AI and crypto with a silly punchline.” An AI language model built into Veo 3 wrote the joke.

In an attempt to prevent misuse, DeepMind says it’s using its proprietary watermarking technology, SynthID, to embed invisible markers into frames Veo 3 generates. These watermarks persist even when videos are compressed or edited, helping people potentially identify AI-generated content. As we’ll discuss more later, though, this may not be enough to prevent deception.

Google also censors certain prompts and outputs that breach the company’s content agreement. During testing, we encountered “generation failure” messages for videos that involve romantic and sexual material, some types of violence, mentions of certain trademarked or copyrighted media properties, some company names, certain celebrities, and some historical events.

Putting Veo 3 to the test

Perhaps the biggest change with Veo 3 is integrated audio generation, although Meta previewed a similar audio-generation capability with “Movie Gen” last October, and AI researchers have experimented with using AI to add soundtracks to silent videos for some time. Google DeepMind itself showed off an AI soundtrack-generating model in June 2024.

An AI-generated video from Veo 3: “A middle-aged balding man rapping indie core about Atari, IBM, TRS-80, Commodore, VIC-20, Atari 800, NES, VCS, Tandy 100, Coleco, Timex-Sinclair, Texas Instruments”

Veo 3 can generate everything from traffic sounds to music and character dialogue, though our early testing reveals occasional glitches. Spaghetti makes crunching sounds when eaten (as we covered last week, with a nod to the famous Will Smith AI spaghetti video), and in scenes with multiple people, dialogue sometimes comes from the wrong character’s mouth. But overall, Veo 3 feels like a step change in video synthesis quality and coherency over models from OpenAI, Runway, Minimax, Pika, Meta, Kling, and Hunyuanvideo.

The videos also tend to show garbled subtitles that almost match the spoken words, which is an artifact of subtitles on videos present in the training data. The AI model is imitating what it has “seen” before.

An AI-generated video from Veo 3: “A beer commercial for ‘CATNIP’ beer featuring a real a cat in a pickup truck driving down a dusty dirt road in a trucker hat drinking a can of beer while country music plays in the background, a man sings a jingle ‘Catnip beeeeeeeeeeeeeeeeer’ holding the note for 6 seconds”

We generated each of the eight-second 720p videos seen below using Google’s Flow platform. Each video generation took around three to five minutes to complete, and we paid for them ourselves. It’s important to note that better results come from cherry-picking—running the same prompt multiple times until you find a good result. Due to cost and in the spirit of testing, we ran each prompt only once, unless noted.

New audio prompts

Let’s dive right into the deep end with audio generation to get a grip on what this technology can do. We’ve previously shown you a man singing about spaghetti and a rapping shark in our last Veo 3 piece, but here’s some more complex dialogue.

Since 2022, we’ve been using the prompt “a muscular barbarian with weapons beside a CRT television set, cinematic, 8K, studio lighting” to test AI image generators like Midjourney. It’s time to bring that barbarian to life.

A muscular barbarian man holding an axe, standing next to a CRT television set. He looks at the TV, then to the camera and literally says, “You’ve been looking for this for years: a muscular barbarian with weapons beside a CRT television set, cinematic, 8K, studio lighting. Got that, Benj?”

The video above represents significant technical progress in AI media synthesis over the course of only three years. We’ve gone from a blurry, colorful still-image barbarian to a photorealistic guy who talks to us in 720p high definition with audio. Notably, there’s no reason to believe technical capability in AI generation will slow down from here.

Horror film: A scared woman in a Victorian outfit running through a forest, dolly shot, being chased by a man in a peanut costume screaming, “Wait! You forgot your wallet!”

Trailer for The Haunted Basketball Train: a Tim Burton film where 1990s basketball star is stuck at the end of a haunted passenger train with basketball court cars, and the only way to survive is to make it to the engine by beating different ghosts at basketball in every car

ASMR video of a muscular barbarian man whispering slowly into a microphone, “You love CRTs, don’t you? That’s OK. It’s OK to love CRT televisions and barbarians.”

1980s PBS show about a man with a beard talking about how his Apple II computer can “connect to the world through a series of tubes”

A 1980s fitness video with models in leotards wearing werewolf masks

A female therapist looking at the camera, zoom call. She says, “Oh my lord, look at that Atari 800 you have behind you! I can’t believe how nice it is!”

With this technology, one can easily imagine a virtual world of AI personalities designed to flatter people. This is a fairly innocent example about a vintage computer, but you can extrapolate, making the fake person talk about any topic at all. There are limits due to Google’s filters, but from what we’ve seen in the past, a future uncensored version of a similarly capable AI video generator is very likely.

Video call screenshot capture of a Zoom chat. A psychologist in a dark, cozy therapist’s office. The therapist says in a friendly voice, “Hi Tom, thanks for calling. Tell me about how you’re feeling today. Is the depression still getting to you? Let’s work on that.”

1960s NASA footage of the first man stepping onto the surface of the Moon, who squishes into a pile of mud and yells in a hillbilly voice, “What in tarnation??”

A local TV news interview of a muscular barbarian talking about why he’s always carrying a CRT TV set around with him

Speaking of fake news interviews, Veo 3 can generate plenty of talking anchor-persons, although on-screen text is sometimes garbled if you don’t specify exactly what it should say. It’s in cases like this that Veo 3 seems most potent at casual media deception.

Footage from a news report about Russia invading the United States

Attempts at music

Veo 3’s AI audio generator can create music in various genres, although in practice, the results are typically simplistic. Still, it’s a new capability for AI video generators. Here are a few examples in various musical genres.

A PBS show of a crazy barbarian with a blonde afro painting pictures of Trees, singing “HAPPY BIG TREES” to some music while he paints

A 1950s cowboy rides up to the camera and sings in country music, “I love mah biiig ooold donkeee”

A 1980s hair metal band drives up to the camera and sings in rock music, “Help me with my huge huge huge hair!”

Mister Rogers’ Neighborhood PBS kids show intro done with psychedelic acid rock and colored lights

1950s musical jazz group with a scat singer singing about pickles amid gibberish

A trip-hop rap song about Ars Technica being sung by a guy in a large rubber shark costume on a stage with a full moon in the background

Some classic prompts from prior tests

The prompts below come from our previous video tests of Gen-3, Video-01, and the open source Hunyuanvideo, so you can flip back to those articles and compare the results if you want to. Overall, Veo 3 appears to have far greater temporal coherency (having a consistent subject or theme over time) than the earlier video synthesis models we’ve tested. But of course, it’s not perfect.

A highly intelligent person reading ‘Ars Technica’ on their computer when the screen explodes

The moonshark jumping out of a computer screen and attacking a person

A herd of one million cats running on a hillside, aerial view

Video game footage of a dynamic 1990s third-person 3D platform game starring an anthropomorphic shark boy

Aerial shot of a small American town getting deluged with liquid cheese after a massive cheese rainstorm where liquid cheese rained down and dripped all over the buildings

Wide-angle shot, starting with the Sasquatch at the center of the stage giving a TED talk about mushrooms, then slowly zooming in to capture its expressive face and gestures, before panning to the attentive audience

Some notable failures

Google’s Veo 3 isn’t perfect at synthesizing every scenario we can throw at it due to limitations of training data. As we noted in our previous coverage, AI video generators remain fundamentally imitative, making predictions based on statistical patterns rather than a true understanding of physics or how the world works.

For example, if you see mouths moving during speech, or clothes wrinkling in a certain way when touched, it means the neural network doing the video generation has “seen” enough similar examples of that scenario in the training data to render a convincing take on it and apply it to similar situations.

However, when a novel situation (or combination of themes) isn’t well-represented in the training data, you’ll see “impossible” or illogical things happen, such as weird body parts, magically appearing clothing, or an object that “shatters” but remains in the scene afterward, as you’ll see below.

We mentioned audio and video glitches in the introduction. In particular, in scenes with multiple people, Veo 3 sometimes confuses which character is speaking, as in this argument between tech fans.

A 2000s TV debate between fans of the PowerPC and Intel Pentium chips

Bombastic 1980s infomercial for the “Ars Technica” online service. With cheesy background music and user testimonials

1980s Rambo fighting Soviets on the Moon

Sometimes requests don’t make coherent sense. In this case, “Rambo” is correctly on the Moon firing a gun, but he’s not wearing a spacesuit. He’s a lot tougher than we thought.

An animated infographic showing how many floppy disks it would take to hold an installation of Windows 11

Large amounts of text also present a weak point, but if a short text quotation is explicitly specified in the prompt, Veo 3 usually gets it right.

A young woman doing a complex floor gymnastics routine at the Olympics, featuring running and flips

Despite Veo 3’s advances in temporal coherency and audio generation, it still suffers from the same “jabberwockies” we saw in OpenAI’s viral Sora gymnast video—those non-plausible video hallucinations like impossible morphing body parts.

A silly group of men and women cartwheeling across the road, singing “CHEEEESE” and holding the note for 8 seconds before falling over.

A YouTube-style try-on video of a person trying on various corncob costumes. They shout “Corncob haul!!”

A man made of glass runs into a brick wall and shatters, screaming

A man in a spacesuit holding up 5 fingers and counting down to zero, then blasting off into space with rocket boots

Counting down with fingers is difficult for Veo 3, likely because that motion isn’t well-represented in the training data. Hands in training footage probably appear in just a few positions, like a fist, a five-finger open palm, a two-finger peace sign, and the number one.

As new architectures emerge and future models train on vastly larger datasets with exponentially more compute, these systems will likely forge deeper statistical connections between the concepts they observe in videos, dramatically improving quality and their ability to generalize to novel prompts.

The “cultural singularity” is coming—what more is left to say?

By now, some of you might be worried that we’re in trouble as a society due to potential deception from this kind of technology. And there’s good reason to worry: The American pop culture diet currently relies heavily on clips shared by strangers through social media such as TikTok, and now all of that can easily be faked, whole-cloth. Automatically generated fake people can now argue for ideological positions in a way that could manipulate the masses.

AI-generated video by Veo 3: “A man on the street interview about someone who fears they live in a time where nothing can be believed”

Such videos could be (and were) faked through various means prior to Veo 3, but now the barrier to entry has collapsed from requiring specialized skills, expensive software, and hours of painstaking work to simply typing a prompt and waiting three minutes. What once required a team of VFX artists or at least someone proficient in After Effects can now be done by anyone with a credit card and an Internet connection.

But let’s take a moment to catch our breath. At Ars Technica, we’ve been warning about the deceptive potential of realistic AI-generated media since at least 2019. In 2022, we talked about the AI image generator Stable Diffusion and the ability to train custom AI image models on a person’s likeness. We discussed Sora “collapsing media reality” and talked about persistent media skepticism during the “deep doubt era.”

AI-generated video with Veo 3: “A man on the street ranting about the ‘cultural singularity’ and the ‘cultural apocalypse’ due to AI”

I also wrote in detail about the future ability for people to pollute the historical record with AI-generated noise. In that piece, I used the term “cultural singularity” to denote a time when truth and fiction in media become indistinguishable, not only because of the deceptive nature of AI-generated content but also due to the massive quantities of AI-generated and AI-augmented media we’ll likely soon be inundated with.

However, in an article I wrote last year about cloning my dad’s handwriting using AI, I came to the conclusion that my previous fears about the cultural singularity may be overblown. Media has been vulnerable to forgery since ancient times; trust in any remote communication ultimately depends on trusting its source.

AI-generated video with Veo 3: “A news set. There is an ‘Ars Technica News’ logo behind a man. The man has a beard and a suit and is doing a sit-down interview. He says “This is the age of post-history: a new epoch of civilization where the historical record is so full of fabrication that it becomes effectively meaningless.”

The Romans had laws against forgery in 80 BC, and people have been doctoring photos since the medium’s invention. What has changed isn’t the possibility of deception but its accessibility and scale.

With Veo 3’s ability to generate convincing video with synchronized dialogue and sound effects, we’re not witnessing the birth of media deception—we’re seeing its mass democratization. What once cost millions of dollars in Hollywood special effects can now be created for pocket change.

An AI-generated video created with Google Veo-3: “A candid interview of a woman who doesn’t believe anything she sees online unless it’s on Ars Technica.”

As these tools become more powerful and affordable, skepticism in media will grow. But the question isn’t whether we can trust what we see and hear. It’s whether we can trust who’s showing it to us. In an era where anyone can generate a realistic video of anything for $1.50, the credibility of the source becomes our primary anchor to truth. The medium was never the message—the messenger always was.

Photo of Benj Edwards

Benj Edwards is Ars Technica’s Senior AI Reporter and founder of the site’s dedicated AI beat in 2022. He’s also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.



Google Home is getting deeper Gemini integration and a new widget

As Google moves the last remaining Nest devices into the Home app, it’s also looking at ways to make this smart home hub easier to use. Naturally, Google is doing that by ramping up Gemini integration. The company has announced new automation capabilities with generative AI, as well as better support for third-party devices via the Home API. Google AI will also plug into a new Android widget that can keep you updated on what the smart parts of your home are up to.

The Google Home app is where you interact with all of Google’s smart home gadgets, like cameras, thermostats, and smoke detectors—some of which have been discontinued, but that’s another story. It also accommodates smart home devices from other companies, which can make managing a mixed setup feasible if not exactly intuitive. A dash of AI might actually help here.

Google began testing Gemini integrations in Home last year, and now it’s opening that up to third-party devices via the Home API. Google has worked with a few partners on API integrations before general availability. The previously announced First Alert smoke/carbon monoxide detector and Yale smart lock that are replacing Google’s Nest devices are among the first, along with Cync lighting, Motorola Tags, and iRobot vacuums.



Google’s Will Smith double is better at eating AI spaghetti … but it’s crunchy?

On Tuesday, Google launched Veo 3, a new AI video synthesis model that can do something no major AI video generator has been able to do before: create a synchronized audio track. From 2022 to 2024, we saw early steps in AI video generation, but each video was silent and usually very short. Now you can hear voices, dialogue, and sound effects in eight-second high-definition video clips.

Shortly after the launch, people began asking the most obvious benchmarking question: How good is Veo 3 at faking Oscar-winning actor Will Smith eating spaghetti?

First, a brief recap. The spaghetti benchmark in AI video traces its origins back to March 2023, when we first covered an early example of horrific AI-generated video using an open source video synthesis model called ModelScope. The spaghetti example later became well-known enough that Smith parodied it almost a year later in February 2024.

Here’s what the original viral video looked like:

One thing people forget is that ModelScope wasn’t the best AI video generator available at the time—Runway’s Gen-2 video synthesis model had already achieved superior results (though it was not yet publicly accessible). But the ModelScope result was funny and weird enough to stick in people’s memories as an early poor example of video synthesis, handy for future comparisons as AI models progressed.

AI app developer Javi Lopez came to the rescue of curious spaghetti fans earlier this week, performing the Smith test with Veo 3 and posting the results on X. But as you’ll notice below when you watch, the soundtrack has a curious quality: The faux Smith appears to be crunching on the spaghetti.

On X, Javi Lopez ran “Will Smith eating spaghetti” in Google’s Veo 3 AI video generator and received this result.

It’s a glitch in Veo 3’s experimental ability to apply sound effects to video, likely because the training data used to create Google’s AI models featured many examples of chewing mouths with crunching sound effects. Generative AI models are pattern-matching prediction machines, and they need to be shown enough examples of various types of media to generate convincing new outputs. If a concept is over-represented or under-represented in the training data, you’ll see unusual generation results, such as jabberwockies.



Google pretends to be in on the joke, but its focus on AI Mode search is serious

AI Mode as Google’s next frontier

Google is the world’s largest advertising entity, but search is what fuels the company. Quarter after quarter, Google crows about increasing search volume—it’s the most important internal metric for the company. Google has made plenty of changes to its search engine results pages (SERPs) over the years, but AI Mode throws all of that out. It doesn’t have traditional search results no matter how far you scroll.

To hear Google’s leadership tell it, AI Mode is an attempt to simplify finding information. According to Liz Reid, Google’s head of search, the next year in search is about going from information to intelligence. When you are searching for information on a complex issue, you probably have to look at a lot of web sources. It’s rare that you’ll find a single page that answers all your questions, and maybe you should be using AI for that stuff.


Google search head Liz Reid says the team’s search efforts are aimed at understanding the underlying task behind a query. Credit: Ryan Whitwam

The challenge for AI search is to simplify the process of finding information, essentially doing the legwork for you. When speaking about the move to AI search, DeepMind CTO Koray Kavukcuoglu says that search is the greatest product in the world, and if AI makes it easier to search for information, that’s a net positive.

Latency is important in search—people don’t like to wait for things to load, which is why Google has always emphasized the speed of its services. But AI can be slow. The key, says Reid, is to know when users will accept a longer wait. For example, AI Overviews is designed to spit out tokens faster because it’s part of the core search experience. AI Mode, however, has the luxury of taking more time to “think.” If you’re shopping for a new appliance, you might do a few hours of research. So an AI search experience that takes longer to come up with a comprehensive answer with tables, formatting, and background info might be a desirable experience because it still saves you time.
