Artificial Intelligence


AI Overviews gets upgraded to Gemini 3 with a dash of AI Mode

It can be hard sometimes to keep up with the deluge of generative AI in Google products. Even if you try to avoid it all, there are some features that still manage to get in your face. Case in point: AI Overviews. This AI-powered search experience has a reputation for getting things wrong, but you may notice some improvements soon. Google says AI Overviews is being upgraded to the latest Gemini 3 models with a more conversational bent.

In just the last year, Google has radically expanded the number of searches on which you get an AI Overview at the top. Today, the feature almost always has an answer for your query, and until now it has relied mostly on models in Google’s Gemini 2.5 family. There was nothing wrong with Gemini 2.5 as generative AI models go, but Gemini 3 is a little better by every metric.

There are, of course, multiple versions of Gemini 3, and Google doesn’t like to be specific about which ones appear in your searches. What Google does say is that AI Overviews chooses the right model for the job. So if you’re searching for something simple for which there are a lot of valid sources, AI Overviews may manifest something like Gemini 3 Flash without running through a ton of reasoning tokens. For a complex “long tail” query, it could step up the thinking or move to Gemini 3 Pro (for paying subscribers).
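
Google hasn’t published how that routing decision is made, but conceptually it amounts to a classifier sitting in front of a model pool. Below is a minimal, purely illustrative Python sketch of complexity-based routing; the scoring heuristic, thresholds, and model names are all assumptions, not Google’s actual system.

```python
# Hypothetical sketch of query-based model routing. Nothing here reflects
# Google's real implementation; names and thresholds are invented.

def estimate_complexity(query: str) -> float:
    """Crude stand-in for a real query classifier: longer queries with
    rarer terms ("long tail") score higher."""
    words = query.split()
    rare_terms = sum(1 for w in words if len(w) > 10)
    return min(1.0, 0.1 * len(words) + 0.2 * rare_terms)

def pick_model(query: str, is_subscriber: bool) -> tuple[str, str]:
    """Return a (model, reasoning_effort) pair for a search query."""
    score = estimate_complexity(query)
    if score < 0.3:
        return ("gemini-3-flash", "low")   # simple query, many valid sources
    if score < 0.7 or not is_subscriber:
        return ("gemini-3-flash", "high")  # step up the thinking
    return ("gemini-3-pro", "high")        # complex query, paid tier

print(pick_model("weather today", is_subscriber=False))
print(pick_model("comparative pharmacokinetics of semaglutide analogues",
                 is_subscriber=True))
```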


“Wildly irresponsible”: DOT’s use of AI to draft safety rules sparks concerns

At DOT, Trump likely hopes to see many rules quickly updated to modernize airways and roadways. In a report highlighting the Office of Science and Technology Policy’s biggest “wins” in 2025, the White House credited DOT with “replacing decades-old rules with flexible, innovation-friendly frameworks,” including fast-tracking rules to allow for more automated vehicles on the roads.

Right now, DOT expects that Gemini can be relied on to “handle 80 to 90 percent of the work of writing regulations,” ProPublica reported. Eventually all federal workers who rely on AI tools like Gemini to draft rules “would fall back into merely an oversight role, monitoring ‘AI-to-AI interactions,’” ProPublica reported.

Google silent on AI drafting safety rules

Google did not respond to Ars’ request to comment on this use case for Gemini, which could spread across government under Trump’s direction.

Instead, the tech giant posted a blog on Monday pitching Gemini for government more broadly, promising federal workers that AI would bring “creative problem-solving to the most critical aspects of their work.”

Google has been competing with AI rivals for government contracts, undercutting OpenAI and Anthropic’s $1 deals by offering a year of access to Gemini for $0.47.

The DOT contract seems important to Google. In a December blog, the company celebrated that DOT was “the first cabinet-level agency to fully transition its workforce away from legacy providers to Google Workspace with Gemini.”

At that time, Google suggested this move would help DOT “ensure the United States has the safest, most efficient, and modern transportation system in the world.”

Immediately, Google encouraged other federal leaders to launch their own efforts using Gemini.

“We are committed to supporting the DOT’s digital transformation and stand ready to help other federal leaders across the government adopt this blueprint for their own mission successes,” Google’s blog said.

DOT did not immediately respond to Ars’ request for comment.


Google begins offering free SAT practice tests powered by Gemini

It’s no secret that students worldwide use AI chatbots to do their homework and avoid learning things. On the flip side, students can also use AI as a tool to beef up their knowledge and plan for the future with flashcards or study guides. Google hopes its latest Gemini feature will help with the latter. The company has announced that Gemini can now create free SAT practice tests and coach students to help them get higher scores.

As a standardized test, the content of the SAT follows a predictable pattern. So there’s no need to use a lengthy, personalized prompt to get Gemini going. Just say something like, “I want to take a practice SAT test,” and the chatbot will generate one complete with clickable buttons, graphs, and score analysis.

Of course, generative AI can go off the rails and provide incorrect information, which is a problem when you’re trying to learn things. However, Google says it has worked with education firms like The Princeton Review to ensure the AI-generated tests resemble what students will see in the real deal.

The interface for Gemini’s practice tests includes scoring and the ability to review previous answers. If you are unclear on why a particular answer is right or wrong, the questions have an “Explain answer” button right at the bottom. After you finish the practice exam, the custom interface (which looks a bit like Gemini’s Canvas coding tool) can help you follow up on areas that need improvement.


Google adds your Gmail and Photos to AI Mode to enable “Personal Intelligence”

Google believes AI is the future of search, and it’s not shy about saying it. After adding account-level personalization to Gemini earlier this month, it’s now updating AI Mode with so-called “Personal Intelligence.” According to Google, this makes the bot’s answers more useful because they are tailored to your personal context.

Starting today, the feature is rolling out to all users who subscribe to Google AI Pro or AI Ultra. However, it will be a Labs feature that needs to be explicitly enabled (subscribers will be prompted to do this). Google tends to expand access to new AI features to free accounts later on, so free users will most likely get access to Personal Intelligence in the future. Whenever this option does land on your account, it’s entirely optional and can be disabled at any time.

If you decide to integrate your data with AI Mode, the search bot will be able to scan your Gmail and Google Photos. That’s less extensive than the Gemini app version, which supports Gmail, Photos, Search, and YouTube history. Gmail will probably be the biggest contributor to AI Mode—a great many life events involve confirmation emails. Traditional search results when you are logged in are adjusted based on your usage history, but this goes a step further.

If you’re going to use AI Mode to find information, Personal Intelligence could actually be quite helpful. When you connect data from other Google apps, Google’s custom Gemini search model will instantly know about your preferences and background—that’s the kind of information you’d otherwise have to include in your search query to get the best output. With Personal Intelligence, AI Mode can just pull those details from your email or photos.


Google: Don’t make “bite-sized” content for LLMs if you care about search rank

Signal in the noise

Google only provides general SEO recommendations, leaving the Internet’s SEO experts to cast bones and read tea leaves to gauge how the search algorithm works. This approach has borne fruit in the past, but not every SEO suggestion is a hit.

The tumultuous current state of the Internet, defined by inconsistent traffic and rapidly expanding use of AI, may entice struggling publishers to try more SEO snake oil like content chunking. When traffic is scarce, people will watch for any uptick and attribute that to the changes they have made. When the opposite happens, well, it’s just a bad day.

The new content superstition may appear to work at first, but at best, that’s an artifact of Google’s current quirks—the company isn’t building LLMs to like split-up content. Sullivan admits there may be “edge cases” where content chunking appears to work.

“Great. That’s what’s happening now, but tomorrow the systems may change,” he said. “You’ve made all these things that you did specifically for a ranking system, not for a human being because you were trying to be more successful in the ranking system, not staying focused on the human being. And then the systems improve, probably the way the systems always try to improve, to reward content written for humans. All that stuff that you did to please this LLM system that may or may not have worked, may not carry through for the long term.”

We probably won’t see chunking go away as long as publishers can point to a positive effect. However, Google seems to feel that chopping up content for LLMs is not a viable future for SEO.


China drafts world’s strictest rules to end AI-encouraged suicide, violence

China drafted landmark rules to stop AI chatbots from emotionally manipulating users, including what could become the strictest policy worldwide intended to prevent AI-supported suicides, self-harm, and violence.

China’s Cyberspace Administration proposed the rules on Saturday. If finalized, they would apply to any AI products or services publicly available in China that use text, images, audio, video, or “other means” to simulate engaging human conversation. Winston Ma, adjunct professor at NYU School of Law, told CNBC that the “planned rules would mark the world’s first attempt to regulate AI with human or anthropomorphic characteristics” at a time when companion bot usage is rising globally.

Growing awareness of problems

In 2025, researchers flagged major harms of AI companions, including promotion of self-harm, violence, and terrorism. Beyond that, chatbots shared harmful misinformation, made unwanted sexual advances, encouraged substance abuse, and verbally abused users. Some psychiatrists are increasingly ready to link psychosis to chatbot use, the Wall Street Journal reported this weekend, while the most popular chatbot in the world, ChatGPT, has triggered lawsuits over outputs linked to child suicide and murder-suicide.

China is now moving to eliminate the most extreme threats. Proposed rules would require, for example, that a human intervene as soon as suicide is mentioned. The rules also dictate that all minor and elderly users must provide the contact information for a guardian when they register—the guardian would be notified if suicide or self-harm is discussed.

Generally, chatbots would be prohibited from generating content that encourages suicide, self-harm, or violence, as well as from attempting to emotionally manipulate a user, such as by making false promises. Chatbots would also be banned from promoting obscenity, gambling, or instigation of a crime, as well as from slandering or insulting users. Also banned is what the rules term “emotional traps”: chatbots would additionally be prevented from misleading users into making “unreasonable decisions,” a translation of the rules indicates.


Google lobs lawsuit at search result scraping firm SerpApi

Google has filed a lawsuit to protect its search results, targeting a firm called SerpApi that has turned Google’s 10 blue links into a business. According to Google, SerpApi ignores established law and Google’s terms to scrape and resell its search engine results pages (SERPs). This is not the first action against SerpApi, but Google’s decision to go after a scraper could signal a new, more aggressive stance on protecting its search data.

SerpApi and similar firms do fulfill a need, but they sit in a legal gray area. Google does not provide an API for its search results, which are based on the world’s largest and most comprehensive web index. That makes Google’s SERPs especially valuable in the age of AI. A chatbot can’t summarize web links if it can’t find them, which has led companies like Perplexity to pay for SerpApi’s second-hand Google data. That prompted Reddit to file a lawsuit against SerpApi and Perplexity for grabbing its data from Google results.
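
For context on what Google is objecting to, SerpApi openly sells this access through a documented API. Its public Python client (the google-search-results package) is used roughly like the sketch below; the query and key are placeholders.

```python
# Rough illustration of how customers consume SerpApi's resold Google
# results, using SerpApi's public Python client
# (pip install google-search-results). Query and API key are placeholders.
from serpapi import GoogleSearch

search = GoogleSearch({
    "q": "best budget mirrorless camera",  # the Google query to fetch
    "api_key": "YOUR_SERPAPI_KEY",
})
results = search.get_dict()  # structured JSON version of the results page

# Each entry mirrors one of Google's "10 blue links."
for item in results.get("organic_results", []):
    print(item.get("position"), item.get("title"), item.get("link"))
```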

Google is echoing many of the arguments Reddit made when it filed its own lawsuit earlier this year. The search giant claims it’s not just doing this to protect itself—it’s also about protecting the websites it indexes. In Google’s blog post on the legal action, it says SerpApi “violates the choices of websites and rightsholders about who should have access to their content.”

It’s worth noting that Google has a partnership with Reddit that pipes data directly into Gemini. As a result, you’ll often see Reddit pages cited in the chatbot’s outputs. As Google points out, it abides by “industry-standard crawling protocols” to collect the data that appears on its SERPs, but those sites didn’t agree to let SerpApi scrape their data from Google. So while you could reasonably argue that Google’s lawsuit helps protect the rights of web publishers, it also explicitly protects Google’s business interests.


YouTube bans two popular channels that created fake AI movie trailers

Deadline reports that the behavior of these creators ran afoul of YouTube’s spam and misleading-metadata policies. At the same time, Google loves generative AI—YouTube has added more ways for creators to use generative AI, and the company says more gen AI tools are coming in the future. It’s quite a tightrope for Google to walk.

AI movie trailers

A selection of videos from the now-defunct Screen Culture channel. Credit: Ryan Whitwam

While passing off AI videos as authentic movie trailers is definitely spammy conduct, the recent changes to the legal landscape could be a factor, too. Disney recently entered into a partnership with OpenAI, bringing its massive library of characters to the company’s Sora AI video app. At the same time, Disney sent a cease-and-desist letter to Google demanding the removal of Disney content from Google AI. The letter specifically cited AI content on YouTube as a concern.

Both the banned trailer channels made heavy use of Disney properties, sometimes even incorporating snippets of real trailers. For example, Screen Culture created 23 AI trailers for The Fantastic Four: First Steps, some of which outranked the official trailer in searches. It’s unclear if either account used Google’s Veo models to create the trailers, but Google’s AI will recreate Disney characters without issue.

While Screen Culture and KH Studio were the largest purveyors of AI movie trailers, they are far from alone. There are others with five- and six-digit subscriber counts, some of which include disclosures about fan-made content. Is that enough to save them from the ban hammer? Many YouTube viewers probably hope not.


School security AI flagged clarinet as a gun. Exec says it wasn’t an error.


Human review didn’t stop AI from triggering lockdown at panicked middle school.

A Florida middle school was locked down last week after an AI security system called ZeroEyes mistook a clarinet for a gun, reviving criticism that AI may not be worth the high price schools pay for peace of mind.

Human review of the AI-generated false flag did not stop police from rushing to Lawton Chiles Middle School. Cops expected to find “a man in the building, dressed in camouflage with a ‘suspected weapon pointed down the hallway, being held in the position of a shouldered rifle,’” a Washington Post review of the police report said.

Instead, after finding no evidence of a shooter, cops double-checked with dispatchers who confirmed that a closer look at the images indicated that “the suspected rifle might have been a band instrument.” Among panicked students hiding in the band room, police eventually found the suspect, a student “dressed as a military character from the Christmas movie Red One for the school’s Christmas-themed dress-up day,” the Post reported.

ZeroEyes cofounder Sam Alaimo told the Post that the AI performed exactly as it should have in this case, adopting a “better safe than sorry” outlook. A ZeroEyes spokesperson told Ars that “school resource officers, security directors and superintendents consistently ask us to be proactive and forward them an alert if there is any fraction of a doubt that the threat might be real.”

“We don’t think we made an error, nor does the school,” Alaimo said. “That was better to dispatch [police] than not dispatch.”

Cops left after the confused student confirmed he was “unaware” that the way he was holding his clarinet could have triggered that alert, the Post reported. But ZeroEyes’ spokesperson claimed he was “intentionally holding the instrument in the position of a shouldered rifle.” And seemingly rather than probe why the images weren’t more carefully reviewed to prevent a false alarm on campus, the school appeared to agree with ZeroEyes and blame the student.

“We did not make an error, and the school was pleased with the detection and their response,” ZeroEyes’ spokesperson said.

School warns students not to trigger AI

In a letter to parents, the principal, Melissa Laudani, reportedly wrote that “while there was no threat to campus, I’d like to ask you to speak with your student about the dangers of pretending to have a weapon on a school campus.” Along similar lines, Seminole County Public Schools (SCPS) communications officer Katherine Crnkovich emphasized in an email to Ars: “please make sure it is noted that this student wasn’t simply carrying a clarinet. This individual was holding it as if it were a weapon.”

However, warning students against holding ordinary objects as if they were weapons isn’t a perfect solution. Video footage from a Texas high school in 2023 showed that ZeroEyes can sometimes mistake shadows for guns, accidentally flagging a student simply walking into school as a potential threat. The advice also ignores that ZeroEyes last year reportedly triggered a lockdown and police response after detecting two theater kids using prop guns to rehearse a play. And a similar AI tool called Omnilert made national headlines after confusing an empty Doritos bag with a gun, which led to a 14-year-old Baltimore sophomore’s arrest. In that case, the student told the American Civil Liberties Union that he was just holding the chips when AI sent “like eight cop cars” to detain him.

For years, school safety experts have warned that AI tools like ZeroEyes take up substantial resources even though they are “unproven,” the Post reported. ZeroEyes’ spokesperson told Ars that “in most cases, ZeroEyes customers will never receive a ‘false positive,’” but the company is not transparent about how many false positives it receives or how many guns have been detected. An FAQ only notes that “we are always looking to minimize false positives and are constantly improving our learning models based on data collected.” In March, as some students began questioning ZeroEyes after it flagged a Nerf gun at a Pennsylvania university, a nearby K-12 private school, Germantown Academy, confirmed that its “system often makes ‘non-lethal’ detections.”

One critic, school safety consultant Kenneth Trump, suggested in October that these tools are “security theater,” with firms like ZeroEyes lobbying for taxpayer dollars by relying on what the ACLU called “misleading” marketing to convince schools that tools are proactive solutions to school shootings. Seemingly in response to this backlash, StateScoop reported that days after it began probing ZeroEyes in 2024, the company scrubbed a claim from its FAQ that said ZeroEyes “can prevent active shooter and mass shooting incidents.”

At Lawton Chiles Middle School, “the children were never in any danger,” police confirmed, but experts question whether false positives cause students undue stress and suspicion, perhaps doing more harm than good in the absence of efficacy studies. Schools may be better off dedicating resources to mental health services proven to benefit kids, some critics have suggested.

Laudani’s letter encouraged parents to submit any questions they have about the incident, but it’s hard to gauge if anyone’s upset. Asked if parents were concerned or if ZeroEyes has ever triggered lockdown at other SCPS schools, Crnkovich told Ars that SCPS does not “provide details regarding the specific school safety systems we utilize.”

It’s clear, however, that SCPS hopes to expand its use of ZeroEyes. In November, Florida state Senator Keith Truenow submitted a request to install “significantly more cameras”—about 850—equipped with ZeroEyes across the school district. Truenow backed up his request for $500,000 in funding over the next year by claiming that “the more [ZeroEyes] coverage there is, the more protected students will be from potential gun violence.”

AI false alarms pose dangers to students

ZeroEyes is among the most popular tools attracting heavy investments from schools in 48 states, which hope that AI gun detection will help prevent school shootings. The AI technology is embedded in security cameras, trained on images of people holding guns, and can supposedly “detect as little as an eighth of an inch of a gun,” an ABC affiliate in New York reported.

Humans continually monitor these systems, reviewing AI flags and then texting any concerning images to school superintendents. Police are alerted when human review determines images may constitute actual threats. ZeroEyes’ spokesperson told Ars that “it has detected more than 1,000 weapons in the last three years.” Perhaps most notably, ZeroEyes “detected a minor armed with an AK-47 rifle on an elementary school campus in Texas,” where no shots were fired, StateScoop reported last year.
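
Based on that description, the pipeline is an AI filter followed by a human gate. The hedged Python sketch below captures that flow; every name and threshold is hypothetical, since ZeroEyes has not published implementation details.

```python
# Hypothetical sketch of the detect -> human review -> alert flow
# described above. ZeroEyes has not published its code; all names,
# thresholds, and signatures here are invented for illustration.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Detection:
    camera_id: str
    image: bytes
    confidence: float  # model's belief that the frame shows a gun

def handle_frame(det: Detection,
                 reviewer_confirms: Callable[[Detection], bool]) -> None:
    if det.confidence < 0.5:  # assumed AI flagging threshold
        return                # frame never reaches a human
    # Trained human analysts review every AI flag around the clock.
    if reviewer_confirms(det):
        alert_school(det)     # text the image to school officials
        alert_police(det)     # dispatch, per the policy quoted above

def alert_school(det: Detection) -> None:
    print(f"Texting school officials: camera {det.camera_id}")

def alert_police(det: Detection) -> None:
    print(f"Dispatching police: camera {det.camera_id}")
```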

Schools invest tens or, as the SCPS case shows, even hundreds of thousands of dollars annually, with the exact amount depending on the number of cameras they want to equip and other variables impacting pricing. ZeroEyes estimates that most schools pay $60 per camera monthly, though bigger contracts can discount costs. In Kansas, a statewide initiative equipping 25 cameras at each of 1,300 schools with ZeroEyes was reportedly estimated to cost $8.5 million annually. Doubling the number of cameras didn’t provide much savings, though, with ZeroEyes looking to charge $15.2 million annually for the expanded coverage.
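
A quick back-of-the-envelope check, using only the figures above, shows how modest the per-camera discount from doubling coverage actually is:

```python
# Per-camera monthly cost under the Kansas figures reported above.
LIST_PRICE = 60.0                  # ZeroEyes' estimate, $/camera/month

cameras = 25 * 1300                # 25 cameras at each of 1,300 schools
initial = 8_500_000 / cameras / 12             # $8.5M/year contract
doubled = 15_200_000 / (2 * cameras) / 12      # $15.2M/year for 2x cameras

print(f"{cameras:,} cameras")                            # 32,500 cameras
print(f"initial: ${initial:.2f}/camera/month")           # ~$21.79
print(f"doubled coverage: ${doubled:.2f}/camera/month")  # ~$19.49
```

Both contract rates come in well under the $60 list price, but doubling the camera count only shaves a couple of dollars off the per-camera monthly rate.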

To critics, it appears that ZeroEyes is attempting to corner the market on AI school security, standing to profit off schools’ fears of shootings, while showing little proof of the true value of its systems. Last year, ZeroEyes reported its revenue grew 300 percent year over year from 2023 to 2024, after assisting in “more than ten arrests through its thousands of detections, verifications, and notifications to end users and law enforcement.”

Curt Lavarello, the executive director of the School Safety Advocacy Council, told the ABC News affiliate that “all of this technology is very, very expensive,” considering that “a lot of products … may not necessarily do what they’re being sold to do.”

Another problem, according to experts who have responded to some of the country’s deadliest school shootings, is that while ZeroEyes’ human reviewers can alert police in “seconds,” police response can often take “several minutes.” That delay could diminish ZeroEyes’ impact, one expert suggested, noting that at an Oregon school he responded to, there was a shooter who “shot 25 people in 60 seconds,” StateScoop reported.

In Seminole County, where the clarinet incident happened, ZeroEyes has been used since 2021, but SCPS would not confirm if any guns have ever been detected to justify next year’s desired expansion. It’s possible that SCPS has this information, as Sen. Truenow noted in his funding request that ZeroEyes can share reports with schools “to measure the effectiveness of the ZeroEyes deployment” by reporting on “how many guns were detected and alerted on campus.”

ZeroEyes’ spokesperson told Ars that “trained former law enforcement and military make split-second, life-or-death decisions about whether the threat is real,” which is supposed to help reduce false positives that could become more common as SCPS adds ZeroEyes to many more cameras.

Amanda Klinger, the director of operations at the Educator’s School Safety Network, told the Post that too many false alarms could carry two risks. First, more students could be put in dangerous situations when police descend on schools where they anticipate confronting an active shooter. And second, cops may become fatigued by false alarms, perhaps failing to respond with urgency over time. For students, when AI labels them as suspects, it can also be invasive and humiliating, reports noted.

“We have to be really clear-eyed about what are the limitations of these technologies,” Klinger said.



Google releases Gemini 3 Flash, promising improved intelligence and efficiency

Google began its transition to Gemini 3 a few weeks ago with the launch of the Pro model, and the arrival of Gemini 3 Flash kicks it into high gear. The new, faster Gemini 3 model is coming to the Gemini app and search, and developers will be able to access it immediately via the Gemini API, Vertex AI, AI Studio, and Antigravity. Google’s bigger gen AI model is also picking up steam, with both Gemini 3 Pro and its image component (Nano Banana Pro) expanding in search.
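
For developers, access goes through the same google-genai SDK used for earlier Gemini models. A minimal call might look like the sketch below; note that the exact model identifier string is an assumption, so check Google’s published model list before using it.

```python
# Minimal example of calling the new model through the Gemini API with
# Google's google-genai Python SDK (pip install google-genai). The model
# string "gemini-3-flash" is an assumption; consult the official model
# list for the exact identifier.
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")
response = client.models.generate_content(
    model="gemini-3-flash",  # assumed identifier
    contents="Summarize the difference between a mutex and a semaphore.",
)
print(response.text)
```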

This may come as a shock, but Google says Gemini 3 Flash is faster and more capable than its previous base model. As usual, Google has a raft of benchmark numbers to back that up. The new model bests the old 2.5 Flash in basic academic and reasoning tests like GPQA Diamond and MMMU Pro (where it even beats 3 Pro). It gets a larger boost in Humanity’s Last Exam (HLE), which tests advanced domain-specific knowledge. Gemini 3 Flash has tripled the old model’s score on HLE, landing at 33.7 percent without tool use. That’s just a few points behind the Gemini 3 Pro model.

Gemini HLE test results. Credit: Google

Google is talking up Gemini 3 Flash’s coding skills, and the provided benchmarks seem to back that talk up. Over the past year, Google has mostly pushed its Pro models as the best for generating code, but 3 Flash has done a lot of catching up. In the popular SWE-Bench Verified test, Gemini 3 Flash has gained almost 20 points on the 2.5 branch.

The new model is also a lot less likely to get general-knowledge questions wrong. In the SimpleQA Verified test, Gemini 3 Flash scored 68.7 percent, which is only a little below Gemini 3 Pro. The last Flash model scored just 28.1 percent on that test. At least as far as the evaluation scores go, Gemini 3 Flash performs much closer to Google’s Pro model than to the older 2.5 family. At the same time, it’s considerably more efficient, according to Google.

One of Gemini 3 Pro’s defining advances was its ability to generate interactive simulations and multimodal content, and Gemini 3 Flash reportedly retains that underlying capability. Gemini 3 Flash offers better performance than Gemini 2.5 Pro did while running workloads three times faster. It’s also a lot cheaper than the Pro models if you’re paying per token: One million input tokens for 3 Flash will run devs $0.50, and a million output tokens will cost $3. That is an increase over Gemini 2.5 Flash, which charged $0.30 and $2.50 for input and output, respectively. The Pro model’s tokens are $2 (1M input) and $12 (1M output).
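
Those per-million-token prices make workload costs easy to estimate. Here is a quick comparison using only the figures quoted above, applied to a hypothetical job:

```python
# Cost comparison using the per-million-token prices quoted above.
PRICES = {  # model: (input $/1M tokens, output $/1M tokens)
    "gemini-3-flash": (0.50, 3.00),
    "gemini-2.5-flash": (0.30, 2.50),
    "gemini-3-pro": (2.00, 12.00),
}

def job_cost(model: str, in_tokens: int, out_tokens: int) -> float:
    p_in, p_out = PRICES[model]
    return in_tokens / 1e6 * p_in + out_tokens / 1e6 * p_out

# A hypothetical batch job: 10M input tokens, 2M output tokens.
for model in PRICES:
    print(f"{model}: ${job_cost(model, 10_000_000, 2_000_000):.2f}")
# gemini-3-flash: $11.00, gemini-2.5-flash: $8.00, gemini-3-pro: $44.00
```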


Google Translate expands live translation to all earbuds on Android

Translate can now use Gemini to interpret the meaning of a phrase rather than simply translating each word. Credit: Google

Regardless of whether you’re using live translate or just checking a single phrase, Google claims the Gemini-powered upgrade will serve you well. Google Translate is now apparently better at understanding the nuance of languages, with an awareness of idioms and local slang. Google uses the example of “stealing my thunder,” which wouldn’t make a lick of sense when translated literally into other languages. The new translation model, which is also available in the search-based translation interface, supports over 70 languages.

Google also debuted language-learning features earlier this year, borrowing a page from educational apps like Duolingo. You can tell the app your skill level with a language, as well as whether you need help with travel-oriented conversations or more everyday interactions. The app uses this to create tailored listening and speaking exercises.

The Translate app’s learning tools are getting better. Credit: Google

With this big update, Translate will be more of a stickler about your pronunciation. Google promises more feedback and tips based on your spoken replies in the learning modules. The app will also now keep track of how often you complete language practice, showing your daily streak in the app.

If “number go up” will help you learn more, then this update is for you. Practice mode is also launching in almost 20 new countries, including Germany, India, Sweden, and Taiwan.


Disney says Google AI infringes copyright “on a massive scale”

While Disney wants its characters out of Google AI generally, the letter specifically cited the AI tools in YouTube. Google has started adding its Veo AI video model to YouTube, allowing creators to more easily create and publish videos. That seems to be a greater concern for Disney than image models like Nano Banana.

Google has said little about Disney’s warning—a warning Google must have known was coming. A Google spokesperson issued the following brief statement on the matter.

“We have a longstanding and mutually beneficial relationship with Disney, and will continue to engage with them,” Google says. “More generally, we use public data from the open web to build our AI and have built additional innovative copyright controls like Google-Extended and Content ID for YouTube, which give sites and copyright holders control over their content.”

Perhaps this is previewing Google’s argument in a theoretical lawsuit. That copyrighted Disney content was all over the open internet, so is it really Google’s fault it ended up baked into the AI?

Content silos for AI

The generative AI boom has treated copyright as a mere suggestion as companies race to gobble up training data and remix it as “new” content. A cavalcade of companies, including The New York Times and Getty Images, have sued over how their material has been used and replicated by AI. Disney itself threatened a lawsuit against Character.AI earlier this year, leading to the removal of Disney content from the service.

Google isn’t Character.AI, though. It’s probably no coincidence that Disney is challenging Google at the same time it is entering into a content deal with OpenAI. Disney has invested $1 billion in the AI firm and agreed to a three-year licensing deal that officially brings Disney characters to OpenAI’s Sora video app. The specifics of that arrangement are still subject to negotiations.
