AI


Google’s new Gemini 2.5 Pro release aims to fix past “regressions” in the model

It seems like hardly a day goes by anymore without a new version of Google’s Gemini AI landing, and sure enough, Google is rolling out a major update to its most powerful 2.5 Pro model. This release is aimed at fixing some problems that cropped up in an earlier Gemini Pro update, and the word is, this version will become a stable release that comes to the Gemini app for everyone to use.

The previous Gemini 2.5 Pro release, known as the I/O Edition, or simply 05-06, was focused on coding upgrades. Google claims the new version is even better at generating code, with a new high score of 82.2 percent in the Aider Polyglot test. That beats the best from OpenAI, Anthropic, and DeepSeek by a comfortable margin.

While the general-purpose Gemini 2.5 Flash has left preview, the Pro version is lagging behind. In fact, the several updates since the big 03-25 release have attracted some valid criticism of 2.5 Pro’s performance outside of coding tasks. Google’s Logan Kilpatrick says the team has taken that feedback to heart and that the new model “closes [the] gap on 03-25 regressions.” For example, users will supposedly see more creative responses with better formatting.

Kilpatrick also notes that the 06-05 release now supports configurable thinking budgets for developers, and the team expects this build to become a “long term stable release.” So, Gemini 2.5 Pro should finally drop its “Preview” disclaimer when this version rolls out to the consumer-facing app and web interface in the coming weeks.
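
For developers, a thinking budget is essentially a knob in the generation config. As an illustrative (not official) example, a request with a capped reasoning budget might look something like this using the google-genai Python SDK, with the model ID and token figure as placeholders:

```python
# Hedged sketch: capping Gemini 2.5 Pro's reasoning ("thinking") tokens via
# the google-genai Python SDK. The model ID and budget value are illustrative.
from google import genai
from google.genai import types

client = genai.Client()  # expects an API key in the environment

response = client.models.generate_content(
    model="gemini-2.5-pro-preview-06-05",  # placeholder ID for the 06-05 build
    contents="Explain the difference between the 03-25 and 06-05 releases.",
    config=types.GenerateContentConfig(
        # thinking_budget limits how many tokens the model may spend on
        # internal reasoning before producing its visible answer.
        thinking_config=types.ThinkingConfig(thinking_budget=2048),
    ),
)
print(response.text)
```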

Google’s new Gemini 2.5 Pro release aims to fix past “regressions” in the model Read More »


“In 10 years, all bets are off”—Anthropic CEO opposes decadelong freeze on state AI laws

On Thursday, Anthropic CEO Dario Amodei argued against a proposed 10-year moratorium on state AI regulation in a New York Times opinion piece, calling the measure shortsighted and overbroad as Congress considers including it in President Trump’s tax policy bill. Anthropic makes Claude, an AI assistant similar to ChatGPT.

Amodei warned that AI is advancing too fast for such a long freeze, predicting these systems “could change the world, fundamentally, within two years; in 10 years, all bets are off.”

As we covered in May, the moratorium would prevent states from regulating AI for a decade. A bipartisan group of state attorneys general has opposed the measure, which would preempt AI laws and regulations recently passed in dozens of states.

In his op-ed piece, Amodei said the proposed moratorium aims to prevent inconsistent state laws that could burden companies or compromise America’s competitive position against China. “I am sympathetic to these concerns,” Amodei wrote. “But a 10-year moratorium is far too blunt an instrument. A.I. is advancing too head-spinningly fast.”

Instead of a blanket moratorium, Amodei proposed that the White House and Congress create a federal transparency standard requiring frontier AI developers to publicly disclose their testing policies and safety measures. Under this framework, companies working on the most capable AI models would need to publish on their websites how they test for various risks and what steps they take before release.

“Without a clear plan for a federal response, a moratorium would give us the worst of both worlds—no ability for states to act and no national policy as a backstop,” Amodei wrote.

Transparency as the middle ground

Amodei emphasized AI’s transformative potential throughout his op-ed, citing examples of pharmaceutical companies drafting clinical study reports in minutes instead of weeks and of AI helping to diagnose medical conditions that might otherwise be missed. He wrote that AI “could accelerate economic growth to an extent not seen for a century, improving everyone’s quality of life,” a claim that some skeptics believe may be overhyped.

“In 10 years, all bets are off”—Anthropic CEO opposes decadelong freeze on state AI laws Read More »


FDA rushed out agency-wide AI tool—it’s not going well

FDA staffers who spoke with Stat News, meanwhile, called the tool “rushed” and said its capabilities were overinflated by officials, including Makary and those at the Department of Government Efficiency (DOGE), which was headed by controversial billionaire Elon Musk. In its current form, it should only be used for administrative tasks, not scientific ones, the staffers said.

“Makary and DOGE think AI can replace staff and cut review times, but it decidedly cannot,” one employee said. The staffer also said that the FDA has failed to set up guardrails for the tool’s use. “I’m not sure in their rush to get it out that anyone is thinking through policy and use,” the FDA employee said.

According to Stat, Elsa is based on Anthropic’s Claude LLM and is being developed by consulting firm Deloitte. Since 2020, Deloitte has been paid $13.8 million to develop the original database of FDA documents that Elsa’s training data is derived from. In April, the firm was awarded a $14.7 million contract to scale the tech across the agency. The FDA said that Elsa was built within a high-security GovCloud environment and offers a “secure platform for FDA employees to access internal documents while ensuring all information remains within the agency.”

Previously, each center within the FDA was working on its own AI pilot. However, after cost-cutting in May, the AI pilot originally developed by the FDA’s Center for Drug Evaluation and Research, called CDER-GPT, was selected to be scaled up to an FDA-wide version and rebranded as Elsa.

FDA staffers in the Center for Devices and Radiological Health told NBC News that their AI pilot, CDRH-GPT, is buggy, isn’t connected to the Internet or the FDA’s internal system, and has problems uploading documents and allowing users to submit questions.

FDA rushed out agency-wide AI tool—it’s not going well Read More »


OpenAI slams court order to save all ChatGPT logs, including deleted chats


OpenAI defends privacy of hundreds of millions of ChatGPT users.

OpenAI is now fighting a court order to preserve all ChatGPT user logs—including deleted chats and sensitive chats logged through its API business offering—after news organizations suing over copyright claims accused the AI company of destroying evidence.

“Before OpenAI had an opportunity to respond to those unfounded accusations, the court ordered OpenAI to ‘preserve and segregate all output log data that would otherwise be deleted on a going forward basis until further order of the Court (in essence, the output log data that OpenAI has been destroying),’” OpenAI explained in a court filing demanding oral arguments in a bid to block the controversial order.

In the filing, OpenAI alleged that the court rushed the order based only on a hunch raised by The New York Times and other news plaintiffs. And now, without “any just cause,” OpenAI argued, the order “continues to prevent OpenAI from respecting its users’ privacy decisions.” That risk extended to users of ChatGPT Free, Plus, and Pro, as well as users of OpenAI’s application programming interface (API), OpenAI said.

The court order came after news organizations expressed concern that people using ChatGPT to skirt paywalls “might be more likely to ‘delete all [their] searches’ to cover their tracks,” OpenAI explained. Evidence to support that claim, news plaintiffs argued, was missing from the record because so far, OpenAI had only shared samples of chat logs that users had agreed that the company could retain. Sharing the news plaintiffs’ concerns, the judge, Ona Wang, ultimately agreed that OpenAI likely would never stop deleting that alleged evidence absent a court order, granting news plaintiffs’ request to preserve all chats.

OpenAI argued the May 13 order was premature and should be vacated, until, “at a minimum,” news organizations can establish a substantial need for OpenAI to preserve all chat logs. They warned that the privacy of hundreds of millions of ChatGPT users globally is at risk every day that the “sweeping, unprecedented” order continues to be enforced.

“As a result, OpenAI is forced to jettison its commitment to allow users to control when and how their ChatGPT conversation data is used, and whether it is retained,” OpenAI argued.

Meanwhile, there is no evidence beyond speculation yet supporting claims that “OpenAI had intentionally deleted data,” OpenAI alleged. And supposedly there is not “a single piece of evidence supporting” claims that copyright-infringing ChatGPT users are more likely to delete their chats.

“OpenAI did not ‘destroy’ any data, and certainly did not delete any data in response to litigation events,” OpenAI argued. “The Order appears to have incorrectly assumed the contrary.”

At a conference in January, Wang raised a hypothetical in line with her thinking on the subsequent order. She asked OpenAI’s legal team to consider a ChatGPT user who “found some way to get around the pay wall” and “was getting The New York Times content somehow as the output.” If that user “then hears about this case and says, ‘Oh, whoa, you know I’m going to ask them to delete all of my searches and not retain any of my searches going forward,'” the judge asked, wouldn’t that be “directly the problem” that the order would address?

OpenAI does not plan to give up this fight, alleging that news plaintiffs have “fallen silent” on claims of intentional evidence destruction, and the order should be deemed unlawful.

For OpenAI, risks of breaching its own privacy agreements could not only “damage” relationships with users but could also risk putting the company in breach of contracts and global privacy regulations. Further, the order imposes “significant” burdens on OpenAI, supposedly forcing the ChatGPT maker to dedicate months of engineering hours at substantial costs to comply, OpenAI claimed. It follows then that OpenAI’s potential for harm “far outweighs News Plaintiffs’ speculative need for such data,” OpenAI argued.

“While OpenAI appreciates the court’s efforts to manage discovery in this complex set of cases, it has no choice but to protect the interests of its users by objecting to the Preservation Order and requesting its immediate vacatur,” OpenAI said.

Users panicked over sweeping order

Millions of people use ChatGPT daily, OpenAI noted, for purposes “ranging from the mundane to profoundly personal.”

People may choose to delete chat logs that contain their private thoughts, OpenAI said, as well as sensitive information, like financial data from balancing the house budget or intimate details from workshopping wedding vows. And for business users connecting to OpenAI’s API, the stakes may be even higher, as their logs may contain their companies’ most confidential data, including trade secrets and privileged business information.

“Given that array of highly confidential and personal use cases, OpenAI goes to great lengths to protect its users’ data and privacy,” OpenAI argued.

It does this partly by “honoring its privacy policies and contractual commitments to users”—which the preservation order allegedly “jettisoned” in “one fell swoop.”

Before the order was in place mid-May, OpenAI only retained “chat history” for users of ChatGPT Free, Plus, and Pro who did not opt out of data retention. But now, OpenAI has been forced to preserve chat history even when users “elect to not retain particular conversations by manually deleting specific conversations or by starting a ‘Temporary Chat,’ which disappears once closed,” OpenAI said. Previously, users could also request to “delete their OpenAI accounts entirely, including all prior conversation history,” which was then purged within 30 days.

While OpenAI rejects claims that ordinary users use ChatGPT to access news articles, the company noted that including OpenAI’s business customers in the order made “even less sense,” since API conversation data “is subject to standard retention policies.” That means API customers couldn’t delete all their searches based on their customers’ activity, which is the supposed basis for requiring OpenAI to retain sensitive data.

“The court nevertheless required OpenAI to continue preserving API Conversation Data as well,” OpenAI argued, in support of lifting the order on the API chat logs.

Users who found out about the preservation order panicked, OpenAI noted. In court filings, they cited social media posts sounding alarms on LinkedIn and X (formerly Twitter). They further argued that the court should have weighed those user concerns before issuing a preservation order, but “that did not happen here.”

One tech worker on LinkedIn suggested the order created “a serious breach of contract for every company that uses OpenAI,” while privacy advocates on X warned, “every single AI service ‘powered by’ OpenAI should be concerned.”

Also on LinkedIn, a consultant rushed to warn clients to be “extra careful” sharing sensitive data “with ChatGPT or through OpenAI’s API for now,” warning, “your outputs could eventually be read by others, even if you opted out of training data sharing or used ‘temporary chat’!”

People on both platforms recommended using alternative tools to avoid privacy concerns, like Mistral AI or Google Gemini, with one cybersecurity professional on LinkedIn describing the ordered chat log retention as “an unacceptable security risk.”

On X, an account with tens of thousands of followers summed up the controversy by suggesting that “Wang apparently thinks the NY Times’ boomer copyright concerns trump the privacy of EVERY @OpenAI USER—insane!!!”

The reason for the alarm is “simple,” OpenAI said. “Users feel more free to use ChatGPT when they know that they are in control of their personal information, including which conversations are retained and which are not.”

It’s unclear if OpenAI will be able to get the judge to waver if oral arguments are scheduled.

Wang previously justified the broad order partly due to the news organizations’ claim that “the volume of deleted conversations is significant.” She suggested that OpenAI could have taken steps to anonymize the chat logs but chose not to, only making an argument for why it “would not” be able to segregate data, rather than explaining why it “can’t.”

Spokespersons for OpenAI and The New York Times’ legal team declined Ars’ request to comment on the ongoing multi-district litigation.


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

OpenAI slams court order to save all ChatGPT logs, including deleted chats Read More »


“Godfather” of AI calls out latest models for lying to users

One of the “godfathers” of artificial intelligence has attacked a multibillion-dollar race to develop the cutting-edge technology, saying the latest models are displaying dangerous characteristics such as lying to users.

Yoshua Bengio, a Canadian academic whose work has informed techniques used by top AI groups such as OpenAI and Google, said: “There’s unfortunately a very competitive race between the leading labs, which pushes them towards focusing on capability to make the AI more and more intelligent, but not necessarily put enough emphasis and investment on research on safety.”

The Turing Award winner issued his warning in an interview with the Financial Times, while launching a new non-profit called LawZero. He said the group would focus on building safer systems, vowing to “insulate our research from those commercial pressures.”

LawZero has so far raised nearly $30 million in philanthropic contributions from donors including Skype founding engineer Jaan Tallinn, former Google chief Eric Schmidt’s philanthropic initiative, as well as Open Philanthropy and the Future of Life Institute.

Many of Bengio’s funders subscribe to the “effective altruism” movement, whose supporters tend to focus on catastrophic risks surrounding AI models. Critics argue the movement highlights hypothetical scenarios while ignoring current harms, such as bias and inaccuracies.

Bengio said his not-for-profit group was founded in response to growing evidence over the past six months that today’s leading models were developing dangerous capabilities. This includes showing “evidence of deception, cheating, lying and self-preservation,” he said.

Anthropic’s Claude Opus model blackmailed engineers in a fictitious scenario where it was at risk of being replaced by another system. Research from AI testers Palisade last month showed that OpenAI’s o3 model refused explicit instructions to shut down.

Bengio said such incidents were “very scary, because we don’t want to create a competitor to human beings on this planet, especially if they’re smarter than us.”

The AI pioneer added: “Right now, these are controlled experiments [but] my concern is that any time in the future, the next version might be strategically intelligent enough to see us coming from far away and defeat us with deceptions that we don’t anticipate. So I think we’re playing with fire right now.”

“Godfather” of AI calls out latest models for lying to users Read More »


The Gmail app will now create AI summaries whether you want them or not

This block of AI-generated text will soon appear automatically in some threads. Credit: Google

Summarizing content is one of the more judicious applications of generative AI technology, dating back to the 2017 paper on the transformer architecture. Generative AI has since been employed to create chatbots that will seemingly answer any question, despite their tendency to make mistakes. Grounding the AI output with a few emails usually yields accurate results, but do you really need a robot to summarize your emails? Unless you’re getting novels in your inbox, you can probably just read a few paragraphs.

If you’re certain you don’t want any part of this, there is a solution. Automatic generation of AI summaries is controlled by Gmail’s “smart features.” You (or an administrator of your managed account) can disable that. Open the app settings, select the account, and uncheck the smart features toggle.

For most people, Gmail’s smart features are enabled out of the box, but they’re off by default in Europe and Japan. When you disable them, you won’t see the automatic AI summaries, but there will still be a button to generate those summaries with Gemini. Be aware that smart features also control high-priority notifications, package tracking, Smart Compose, Smart Reply, and nudges. If you can live without all of those features in the mobile app, you can avoid automatic AI summaries. The app will occasionally pester you to turn smart features back on, though.

The Gmail app will now create AI summaries whether you want them or not Read More »


Real TikTokers are pretending to be Veo 3 AI creations for fun, attention


The Turing test in reverse

From music videos to “Are you a prompt?” stunts, “real” videos are presenting as AI

Of course I’m an AI creation! Why would you even doubt it? Credit: Getty Images

Since Google released its Veo 3 AI model last week, social media users have been having fun with its ability to quickly generate highly realistic eight-second clips complete with sound and lip-synced dialogue. TikTok’s algorithm has been serving me plenty of Veo-generated videos featuring impossible challenges, fake news reports, and even surreal short narrative films, to name just a few popular archetypes.

However, among all the AI-generated video experiments spreading around, I’ve also noticed a surprising counter-trend on my TikTok feed. Amid all the videos of Veo-generated avatars pretending to be real people, there are now also a bunch of videos of real people pretending to be Veo-generated avatars.

“This has to be real. There’s no way it’s AI.”

I stumbled on this trend when the TikTok algorithm fed me this video topped with the extra-large caption “Google VEO 3 THIS IS 100% AI.” As I watched and listened to the purported AI-generated band that appeared to be playing in the crowded corner of someone’s living room, I read the caption containing the supposed prompt that had generated the clip: “a band of brothers with beards playing rock music in 6/8 with an accordion.”

@kongosmusic We are so cooked. This took 3 mins to generate. Simple prompt: “a band of brothers playing rock music in 6/8 with an accordion” ♬ original sound – KONGOS

After a few seconds of taking those captions at face value, something started to feel a little off. After a few more seconds, I finally noticed the video was posted by Kongos, an indie band that you might recognize from their minor 2012 hit “Come With Me Now.” And after a little digging, I discovered the band in the video was actually just Kongos, and the tune was a 9-year-old song that the band had dressed up as an AI creation to get attention.

Here’s the sad thing: It worked! Without the “Look what Veo 3 did!” hook, I might have quickly scrolled by this video before I took the time to listen to the (pretty good!) song. The novel AI angle made me stop just long enough to pay attention to a Kongos song for the first time in over a decade.

Kongos isn’t the only musical act trying to grab attention by claiming their real performances are AI creations. Darden Bela posted that Veo 3 had “created a realistic AI music video” over a clip from what is actually a 2-year-old music video with some unremarkable special effects. Rapper GameBoi Pat dressed up an 11-month-old song with a new TikTok clip captioned “Google’s Veo 3 created a realistic sounding rapper… This has to be real. There’s no way it’s AI” (that last part is true, at least). I could go on, but you get the idea.

@gameboi_pat This has got to be real. There’s no way it’s AI 😩 #google #veo3 #googleveo3 #AI #prompts #areweprompts? ♬ original sound – GameBoi_pat

I know it’s tough to get noticed on TikTok, and that creators will go to great lengths to gain attention from the fickle algorithm. Still, there’s something more than a little off-putting about flesh-and-blood musicians pretending to be AI creations just to make social media users pause their scrolling for a few extra seconds before they catch on to the joke (or don’t, based on some of the comments).

The whole thing evokes last year’s stunt where a couple of podcast hosts released a posthumous “AI-generated” George Carlin routine before admitting that it had been written by a human after legal threats started flying. As an attention-grabbing stunt, the conceit still works. You want AI-generated content? I can pretend to be that!

Are we just prompts?

Some of the most existentially troubling Veo-generated videos floating around TikTok these days center around a gag known as “the prompt theory.” These clips focus on various AI-generated people reacting to the idea that they are “just prompts” with various levels of skepticism, fear, or even conspiratorial paranoia.

On the other side of that gag, some humans are making joke videos playing off the idea that they’re merely prompts. RedondoKid used the conceit in a basketball trick shot video, saying “of course I’m going to make this. This is AI, you put that I’m going to make this in the prompt.” User thisisamurica thanked his faux prompters for putting him in “a world with such delicious food” before theatrically choking on a forkful of meat. And comedian Drake Cummings developed TikTok skits pretending that it was actually AI video prompts forcing him to indulge in vices like shots of alcohol or online gambling (“Goolgle’s [sic] New A.I. Veo 3 is at it again!! When will the prompts end?!” Cummings jokes in the caption).

@justdrakenaround Goolgle’s New A.I. Veo 3 is at it again!! When will the prompts end?! #veo3 #google #ai #aivideo #skit ♬ original sound – Drake Cummings

Beyond the obvious jokes, though, I’ve also seen a growing trend of TikTok creators approaching friends or strangers and asking them to react to the idea that “we’re all just prompts.” The reactions run the gamut from “get the fuck away from me” to “I blame that [prompter], I now have to pay taxes” to solipsistic philosophical musings from convenience store employees.

I’m loath to call this a full-blown TikTok trend based on a few stray examples. Still, these attempts to exploit the confusion between real and AI-generated video are interesting to see. As one commenter on an “Are you a prompt?” ambush video put it: “New trend: Do normal videos and write ‘Google Veo 3’ on top of the video.”

Which one is real?

The best Veo-related TikTok engagement hack I’ve stumbled on so far, though, might be the videos that show multiple short clips and ask the viewer to decide which are real and which are fake. One video I stumbled on shows an increasing number of “Veo 3 Goth Girls” across four clips, challenging in the caption that “one of these videos is real… can you guess which one?” In another example, two similar sets of kids are shown hanging out in cars while the caption asks, “Are you able to identify which scene is real and which one is from veo3?”

@spongibobbu2 One of these videos is real… can you guess which one? #veo3 ♬ original sound – Jett

After watching both of these videos on loop a few times, I’m relatively (but not entirely) convinced that every single clip in them is a Veo creation. The fact that I watched these videos multiple times shows how effective the “Real or Veo” challenge framing is at grabbing my attention. Additionally, I’m still not 100 percent confident in my assessments, which is a testament to just how good Google’s new model is at creating convincing videos.

There are still some telltale signs for distinguishing a real video from a Veo creation, though. For one, Veo clips are still limited to just eight seconds, so any video that runs longer (without an apparent change in camera angle) is almost certainly not generated by Google’s AI. Looking back at a creator’s other videos can also provide some clues—if the same person was appearing in “normal” videos two weeks ago, it’s unlikely they would be appearing in Veo creations suddenly.

There’s also a subtle but distinctive style to most Veo creations that can distinguish them from the kind of candid handheld smartphone videos that usually fill TikTok. The lighting in a Veo video tends to be too bright, the camera movements a bit too smooth, and the edges of people and objects a little too polished. After you watch enough “genuine” Veo creations, you can start to pick out the patterns.
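
Those rules of thumb are loose enough to fold into a toy checklist. The sketch below encodes only the cues mentioned above (the eight-second cap, the creator's posting history, the over-polished look); the function and its scoring are invented purely for illustration:

```python
# Toy heuristic built from the telltale signs described above. The weighting
# is invented; only the individual cues come from the article.
def looks_like_veo(clip_seconds: float, has_camera_cut: bool,
                   creator_has_recent_normal_videos: bool,
                   overly_polished_style: bool) -> bool:
    # Veo clips are capped at eight seconds per shot, so a longer
    # uninterrupted take is almost certainly not AI-generated.
    if clip_seconds > 8 and not has_camera_cut:
        return False
    score = 0
    if overly_polished_style:             # too-bright lighting, too-smooth camera moves
        score += 1
    if creator_has_recent_normal_videos:  # real people rarely become prompts overnight
        score -= 1
    return score > 0

print(looks_like_veo(8.0, False, False, True))  # -> True
```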

Regardless, TikTokers trying to pass off real videos as fakes—even as a joke or engagement hack—is a recognition that video sites are now deep in the “deep doubt” era, where you have to be extra skeptical of even legitimate-looking video footage. And the mere existence of convincing AI fakes makes it easier than ever to claim real events captured on video didn’t really happen, a problem that political scientists call the liar’s dividend. We saw this when then-candidate Trump accused Democratic nominee Kamala Harris of “A.I.’d” crowds in real photos of her Detroit airport rally.

For now, TikTokers of all stripes are having fun playing with that idea to gain social media attention. In the long term, though, the implications for discerning truth from reality are more troubling.


Kyle Orland has been the Senior Gaming Editor at Ars Technica since 2012, writing primarily about the business, tech, and culture behind video games. He has journalism and computer science degrees from the University of Maryland. He once wrote a whole book about Minesweeper.

Real TikTokers are pretending to be Veo 3 AI creations for fun, attention Read More »


Want a humanoid, open source robot for just $3,000? Hugging Face is on it.

You may have noticed he said “robots” plural—that’s because there’s a second one. It’s called Reachy Mini, and it looks like a cute, Wall-E-esque statue bust that can turn its head and talk to the user. Among other things, it’s meant to be used to test AI applications, and it’ll run between $250 and $300.

You can sort of think of these products as the equivalent to a Raspberry Pi, but in robot form and for AI developers—Hugging Face’s main customer base.

Hugging Face has previously released AI models meant for robots, as well as a 3D-printable robotic arm. This year, it announced an acquisition of Pollen Robotics, a company that was working on humanoid robots. Hugging Face’s Cadene came to the company by way of Tesla.

For context on the pricing, Tesla’s Optimus Gen 2 humanoid robot (while admittedly much more advanced, at least in theory) is expected to cost at least $20,000.

There is a lot of investment in robotics like this, but there are still big barriers—and price isn’t the only one. There’s battery life, for example; Unitree’s G1 only runs for about two hours on a single charge.

Want a humanoid, open source robot for just $3,000? Hugging Face is on it. Read More »


Gemini in Google Drive may finally be useful now that it can analyze videos

Google’s rapid adoption of AI has seen the Gemini “sparkle” icon become an omnipresent element in almost every Google product. It’s there to summarize your email, add items to your calendar, and more—if you trust it to do those things. Gemini is also integrated with Google Drive, where it’s gaining a new feature that could make it genuinely useful: Google’s AI bot will soon be able to watch videos stored in your Drive so you don’t have to.

Gemini is already accessible in Drive, with the ability to summarize documents or folders, gather and analyze data, and expand on the topics covered in your documents. Google says the next step is plugging videos into Gemini, saving you from wasting time scrubbing through a file just to find something of interest.

Using a chatbot to analyze and manipulate text doesn’t always make sense—after all, it’s not hard to skim an email or short document. It can take longer to interact with a chatbot, which might not add any useful insights. Video is different because watching is a linear process in which you are presented with information at the pace the video creator sets. You can change playback speed or rewind to catch something you missed, but that’s more arduous than reading something at your own pace. So Gemini’s video support in Drive could save you real time.

Suppose you have a recorded meeting in video form uploaded to Drive. You could go back and rewatch it to take notes or refresh your understanding of a particular exchange. Or, Google suggests, you can ask Gemini to summarize the video and tell you what’s important. This could be a great alternative, as grounding AI output with a specific data set or file tends to make it more accurate. Naturally, you should still maintain healthy skepticism of what the AI tells you about the content of your video.

Gemini in Google Drive may finally be useful now that it can analyze videos Read More »


AI video just took a startling leap in realism. Are we doomed?


Tales from the cultural singularity

Google’s Veo 3 delivers AI videos of realistic people with sound and music. We put it to the test.

Still image from an AI-generated Veo 3 video of “A 1980s fitness video with models in leotards wearing werewolf masks.” Credit: Google

Last week, Google introduced Veo 3, its newest video generation model that can create 8-second clips with synchronized sound effects and audio dialog—a first for the company’s AI tools. The model, which generates videos at 720p resolution (based on text descriptions called “prompts” or still image inputs), represents what may be the most capable consumer video generator to date, bringing video synthesis close to a point where it is becoming very difficult to distinguish between “authentic” and AI-generated media.

Google also launched Flow, an online AI filmmaking tool that combines Veo 3 with the company’s Imagen 4 image generator and Gemini language model, allowing creators to describe scenes in natural language and manage characters, locations, and visual styles in a web interface.

An AI-generated video from Veo 3: “ASMR scene of a woman whispering “Moonshark” into a microphone while shaking a tambourine”

Both tools are now available to US subscribers of Google AI Ultra, a plan that costs $250 a month and comes with 12,500 credits. Veo 3 videos cost 150 credits per generation, allowing 83 videos on that plan before you run out. Extra credits are available for the price of 1 cent per credit in blocks of $25, $50, or $200. That comes out to about $1.50 per video generation. But is the price worth it? We ran some tests with various prompts to see what this technology is truly capable of.
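
For the curious, the math behind those figures is easy to reproduce; here is a quick back-of-the-envelope check using only the numbers quoted above:

```python
# Back-of-the-envelope check of the Veo 3 pricing figures quoted above.
plan_credits = 12_500        # included with the $250/month Google AI Ultra plan
credits_per_video = 150      # cost of a single Veo 3 generation

videos_per_plan = plan_credits // credits_per_video
print(videos_per_plan)       # 83 generations before the included credits run out

extra_credit_price = 0.01    # add-on credits are sold at 1 cent per credit
print(credits_per_video * extra_credit_price)  # ~$1.50 per additional video
```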

How does Veo work?

Like other modern video generation models, Veo 3 is built on diffusion technology—the same approach that powers image generators like Stable Diffusion and Flux. The training process works by taking real videos and progressively adding noise to them until they become pure static, then teaching a neural network to reverse this process step by step. During generation, Veo 3 starts with random noise and a text prompt, then iteratively refines that noise into a coherent video that matches the description.
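
Google hasn't published Veo 3's internals, but the general diffusion recipe described above can be sketched in a few lines of Python. Everything below (the tensor shape, the step count, the placeholder denoiser) is an illustrative stand-in, not Google's implementation:

```python
# Minimal sketch of diffusion-style generation as described above: start from
# pure noise and iteratively denoise it, conditioned on a text prompt.
import torch

def dummy_denoiser(x, step, prompt_embedding):
    # Placeholder for the trained network; a real model predicts the noise
    # that was added at this step, conditioned on the prompt.
    return 0.1 * x

def generate_clip(denoiser, prompt_embedding, steps=50, shape=(16, 3, 64, 64)):
    # Tiny stand-in shape; Veo 3 actually produces ~8 seconds of 720p frames.
    x = torch.randn(shape)                # begin with pure static
    for step in reversed(range(steps)):   # walk the noise schedule backward
        x = x - denoiser(x, step, prompt_embedding)  # crude denoising update
    return x                              # the "video" frames

frames = generate_clip(dummy_denoiser, prompt_embedding=None)
```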

AI-generated video from Veo 3: “An old professor in front of a class says, ‘Without a firm historical context, we are looking at the dawn of a new era of civilization: post-history.'”

DeepMind won’t say exactly where it sourced the content to train Veo 3, but YouTube is a strong possibility. Google owns YouTube, and DeepMind previously told TechCrunch that Google models like Veo “may” be trained on some YouTube material.

It’s important to note that Veo 3 is a system composed of a series of AI models, including a large language model (LLM) to interpret user prompts to assist with detailed video creation, a video diffusion model to create the video, and an audio generation model that applies sound to the video.
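
In other words, a prompt passes through a chain of models rather than a single network. Here is a rough sketch of that hand-off, with every stage stubbed out since the actual components aren't public:

```python
# Rough, purely illustrative sketch of the multi-model hand-off described
# above; every stage is a stub, since Veo 3's internals aren't public.
def expand_prompt_with_llm(user_prompt: str) -> str:
    return f"{user_prompt}, 8-second 720p clip with dialogue"   # LLM stand-in

def run_video_diffusion(detailed_prompt: str) -> str:
    return f"<frames for: {detailed_prompt}>"                   # diffusion stand-in

def generate_audio(detailed_prompt: str, video: str) -> str:
    return f"<soundtrack for: {video}>"                         # audio-model stand-in

def generate_veo_style_clip(user_prompt: str) -> tuple[str, str]:
    detailed = expand_prompt_with_llm(user_prompt)
    video = run_video_diffusion(detailed)
    audio = generate_audio(detailed, video)
    return video, audio

print(generate_veo_style_clip("a muscular barbarian beside a CRT television"))
```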

An AI-generated video from Veo 3: “A male stand-up comic on stage in a night club telling a hilarious joke about AI and crypto with a silly punchline.” An AI language model built into Veo 3 wrote the joke.

In an attempt to prevent misuse, DeepMind says it’s using its proprietary watermarking technology, SynthID, to embed invisible markers into frames Veo 3 generates. These watermarks persist even when videos are compressed or edited, helping people potentially identify AI-generated content. As we’ll discuss more later, though, this may not be enough to prevent deception.
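
SynthID itself is proprietary and built to survive compression and edits, so the toy sketch below is emphatically not Google's method; it only illustrates the basic idea of hiding a marker in pixel data, using a fragile least-significant-bit trick:

```python
# Toy illustration of hiding a marker in pixel data. This simple LSB trick is
# NOT SynthID: Google's watermark survives compression and editing; this won't.
import numpy as np

def embed_bit(frame: np.ndarray, bit: int) -> np.ndarray:
    marked = frame.copy()
    marked[0, 0] = (marked[0, 0] & 0xFE) | bit   # stash one bit in a pixel's LSB
    return marked

def read_bit(frame: np.ndarray) -> int:
    return int(frame[0, 0] & 1)

frame = np.random.randint(0, 256, size=(720, 1280), dtype=np.uint8)
print(read_bit(embed_bit(frame, 1)))  # -> 1
```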

Google also censors certain prompts and outputs that breach the company’s content agreement. During testing, we encountered “generation failure” messages for videos that involve romantic and sexual material, some types of violence, mentions of certain trademarked or copyrighted media properties, some company names, certain celebrities, and some historical events.

Putting Veo 3 to the test

Perhaps the biggest change with Veo 3 is integrated audio generation, although Meta previewed a similar audio-generation capability with “Movie Gen” last October, and AI researchers have experimented with using AI to add soundtracks to silent videos for some time. Google DeepMind itself showed off an AI soundtrack-generating model in June 2024.

An AI-generated video from Veo 3: “A middle-aged balding man rapping indie core about Atari, IBM, TRS-80, Commodore, VIC-20, Atari 800, NES, VCS, Tandy 100, Coleco, Timex-Sinclair, Texas Instruments”

Veo 3 can generate everything from traffic sounds to music and character dialogue, though our early testing reveals occasional glitches. Spaghetti makes crunching sounds when eaten (as we covered last week, with a nod to the famous Will Smith AI spaghetti video), and in scenes with multiple people, dialogue sometimes comes from the wrong character’s mouth. But overall, Veo 3 feels like a step change in video synthesis quality and coherency over models from OpenAI, Runway, Minimax, Pika, Meta, Kling, and Hunyuanvideo.

The videos also tend to show garbled subtitles that almost match the spoken words, which is an artifact of subtitles on videos present in the training data. The AI model is imitating what it has “seen” before.

An AI-generated video from Veo 3: “A beer commercial for ‘CATNIP’ beer featuring a real a cat in a pickup truck driving down a dusty dirt road in a trucker hat drinking a can of beer while country music plays in the background, a man sings a jingle ‘Catnip beeeeeeeeeeeeeeeeer’ holding the note for 6 seconds”

We generated each of the eight-second-long 720p videos seen below using Google’s Flow platform. Each video generation took around three to five minutes to complete, and we paid for them ourselves. It’s important to note that better results come from cherry-picking—running the same prompt multiple times until you find a good result. Due to cost and in the spirit of testing, we only ran every prompt once, unless noted.

New audio prompts

Let’s dive right into the deep end with audio generation to get a grip on what this technology can do. We’ve previously shown you a man singing about spaghetti and a rapping shark in our last Veo 3 piece, but here’s some more complex dialogue.

Since 2022, we’ve been using the prompt “a muscular barbarian with weapons beside a CRT television set, cinematic, 8K, studio lighting” to test AI image generators like Midjourney. It’s time to bring that barbarian to life.

A muscular barbarian man holding an axe, standing next to a CRT television set. He looks at the TV, then to the camera and literally says, “You’ve been looking for this for years: a muscular barbarian with weapons beside a CRT television set, cinematic, 8K, studio lighting. Got that, Benj?”

The video above represents significant technical progress in AI media synthesis over the course of only three years. We’ve gone from a blurry colorful still-image barbarian to a photorealistic guy that talks to us in 720p high definition with audio. Most notably, there’s no reason to believe technical capability in AI generation will slow down from here.

Horror film: A scared woman in a Victorian outfit running through a forest, dolly shot, being chased by a man in a peanut costume screaming, “Wait! You forgot your wallet!”

Trailer for The Haunted Basketball Train: a Tim Burton film where 1990s basketball star is stuck at the end of a haunted passenger train with basketball court cars, and the only way to survive is to make it to the engine by beating different ghosts at basketball in every car

ASMR video of a muscular barbarian man whispering slowly into a microphone, “You love CRTs, don’t you? That’s OK. It’s OK to love CRT televisions and barbarians.”

1980s PBS show about a man with a beard talking about how his Apple II computer can “connect to the world through a series of tubes”

A 1980s fitness video with models in leotards wearing werewolf masks

A female therapist looking at the camera, zoom call. She says, “Oh my lord, look at that Atari 800 you have behind you! I can’t believe how nice it is!”

With this technology, one can easily imagine a virtual world of AI personalities designed to flatter people. This is a fairly innocent example about a vintage computer, but you can extrapolate, making the fake person talk about any topic at all. There are limits due to Google’s filters, but from what we’ve seen in the past, a future uncensored version of a similarly capable AI video generator is very likely.

Video call screenshot capture of a Zoom chat. A psychologist in a dark, cozy therapist’s office. The therapist says in a friendly voice, “Hi Tom, thanks for calling. Tell me about how you’re feeling today. Is the depression still getting to you? Let’s work on that.”

1960s NASA footage of the first man stepping onto the surface of the Moon, who squishes into a pile of mud and yells in a hillbilly voice, “What in tarnation??”

A local TV news interview of a muscular barbarian talking about why he’s always carrying a CRT TV set around with him

Speaking of fake news interviews, Veo 3 can generate plenty of talking anchor-persons, although sometimes on-screen text is garbled if you don’t specify exactly what it should say. It’s in cases like this where it seems Veo 3 might be most potent at casual media deception.

Footage from a news report about Russia invading the United States

Attempts at music

Veo 3’s AI audio generator can create music in various genres, although in practice, the results are typically simplistic. Still, it’s a new capability for AI video generators. Here are a few examples in various musical genres.

A PBS show of a crazy barbarian with a blonde afro painting pictures of Trees, singing “HAPPY BIG TREES” to some music while he paints

A 1950s cowboy rides up to the camera and sings in country music, “I love mah biiig ooold donkeee”

A 1980s hair metal band drives up to the camera and sings in rock music, “Help me with my huge huge huge hair!”

Mister Rogers’ Neighborhood PBS kids show intro done with psychedelic acid rock and colored lights

1950s musical jazz group with a scat singer singing about pickles amid gibberish

A trip-hop rap song about Ars Technica being sung by a guy in a large rubber shark costume on a stage with a full moon in the background

Some classic prompts from prior tests

The prompts below come from our previous video tests of Gen-3, Video-01, and the open source Hunyuanvideo, so you can flip back to those articles and compare the results if you want to. Overall, Veo 3 appears to have far greater temporal coherency (having a consistent subject or theme over time) than the earlier video synthesis models we’ve tested. But of course, it’s not perfect.

A highly intelligent person reading ‘Ars Technica’ on their computer when the screen explodes

The moonshark jumping out of a computer screen and attacking a person

A herd of one million cats running on a hillside, aerial view

Video game footage of a dynamic 1990s third-person 3D platform game starring an anthropomorphic shark boy

Aerial shot of a small American town getting deluged with liquid cheese after a massive cheese rainstorm where liquid cheese rained down and dripped all over the buildings

Wide-angle shot, starting with the Sasquatch at the center of the stage giving a TED talk about mushrooms, then slowly zooming in to capture its expressive face and gestures, before panning to the attentive audience

Some notable failures

Google’s Veo 3 isn’t perfect at synthesizing every scenario we can throw at it due to limitations of training data. As we noted in our previous coverage, AI video generators remain fundamentally imitative, making predictions based on statistical patterns rather than a true understanding of physics or how the world works.

For example, if you see mouths moving during speech, or clothes wrinkling in a certain way when touched, it means the neural network doing the video generation has “seen” enough similar examples of that scenario in the training data to render a convincing take on it and apply it to similar situations.

However, when a novel situation (or combination of themes) isn’t well-represented in the training data, you’ll see “impossible” or illogical things happen, such as weird body parts, magically appearing clothing, or an object that “shatters” but remains in the scene afterward, as you’ll see below.

We mentioned audio and video glitches in the introduction. In particular, scenes with multiple people sometimes confuse which character is speaking, such as this argument between tech fans.

A 2000s TV debate between fans of the PowerPC and Intel Pentium chips

Bombastic 1980s infomercial for the “Ars Technica” online service. With cheesy background music and user testimonials

1980s Rambo fighting Soviets on the Moon

Sometimes requests don’t make coherent sense. In this case, “Rambo” is correctly on the Moon firing a gun, but he’s not wearing a spacesuit. He’s a lot tougher than we thought.

An animated infographic showing how many floppy disks it would take to hold an installation of Windows 11

Large amounts of text also present a weak point, but if a short text quotation is explicitly specified in the prompt, Veo 3 usually gets it right.

A young woman doing a complex floor gymnastics routine at the Olympics, featuring running and flips

Despite Veo 3’s advances in temporal coherency and audio generation, it still suffers from the same “jabberwockies” we saw in OpenAI’s viral Sora gymnast video—those non-plausible video hallucinations like impossible morphing body parts.

A silly group of men and women cartwheeling across the road, singing “CHEEEESE” and holding the note for 8 seconds before falling over.

A YouTube-style try-on video of a person trying on various corncob costumes. They shout “Corncob haul!!”

A man made of glass runs into a brick wall and shatters, screaming

A man in a spacesuit holding up 5 fingers and counting down to zero, then blasting off into space with rocket boots

Counting down with fingers is difficult for Veo 3, likely because it’s not well-represented in the training data. Training footage likely shows hands in just a few common positions, like a fist, a five-finger open palm, a two-finger peace sign, and the number one.

As new architectures emerge and future models train on vastly larger datasets with exponentially more compute, these systems will likely forge deeper statistical connections between the concepts they observe in videos, dramatically improving quality and also the ability to generalize more with novel prompts.

The “cultural singularity” is coming—what more is left to say?

By now, some of you might be worried that we’re in trouble as a society due to potential deception from this kind of technology. And there’s a good reason to worry: The American pop culture diet currently relies heavily on clips shared by strangers through social media such as TikTok, and now all of that can easily be faked, whole cloth. Automatically generated fake people can now argue for ideological positions in ways that could manipulate the masses.

AI-generated video by Veo 3: “A man on the street interview about someone who fears they live in a time where nothing can be believed”

Such videos could be (and were) manipulated before through various means prior to Veo 3, but now the barrier to entry has collapsed from requiring specialized skills, expensive software, and hours of painstaking work to simply typing a prompt and waiting three minutes. What once required a team of VFX artists or at least someone proficient in After Effects can now be done by anyone with a credit card and an Internet connection.

But let’s take a moment to catch our breath. At Ars Technica, we’ve been warning about the deceptive potential of realistic AI-generated media since at least 2019. In 2022, we talked about AI image generator Stable Diffusion and the ability to train people into custom AI image models. We discussed Sora “collapsing media reality” and talked about persistent media skepticism during the “deep doubt era.”

AI-generated video with Veo 3: “A man on the street ranting about the ‘cultural singularity’ and the ‘cultural apocalypse’ due to AI”

I also wrote in detail about the future ability for people to pollute the historical record with AI-generated noise. In that piece, I used the term “cultural singularity” to denote a time when truth and fiction in media become indistinguishable, not only because of the deceptive nature of AI-generated content but also due to the massive quantities of AI-generated and AI-augmented media we’ll likely soon be inundated with.

However, in an article I wrote last year about cloning my dad’s handwriting using AI, I came to the conclusion that my previous fears about the cultural singularity may be overblown. Media has been vulnerable to forgery since ancient times; trust in any remote communication ultimately depends on trusting its source.

AI-generated video with Veo 3: “A news set. There is an ‘Ars Technica News’ logo behind a man. The man has a beard and a suit and is doing a sit-down interview. He says, ‘This is the age of post-history: a new epoch of civilization where the historical record is so full of fabrication that it becomes effectively meaningless.’”

The Romans had laws against forgery in 80 BC, and people have been doctoring photos since the medium’s invention. What has changed isn’t the possibility of deception but its accessibility and scale.

With Veo 3’s ability to generate convincing video with synchronized dialogue and sound effects, we’re not witnessing the birth of media deception—we’re seeing its mass democratization. What once cost millions of dollars in Hollywood special effects can now be created for pocket change.

An AI-generated video created with Google Veo-3: “A candid interview of a woman who doesn’t believe anything she sees online unless it’s on Ars Technica.”

As these tools become more powerful and affordable, skepticism in media will grow. But the question isn’t whether we can trust what we see and hear. It’s whether we can trust who’s showing it to us. In an era where anyone can generate a realistic video of anything for $1.50, the credibility of the source becomes our primary anchor to truth. The medium was never the message—the messenger always was.


Benj Edwards is Ars Technica’s Senior AI Reporter and founder of the site’s dedicated AI beat in 2022. He’s also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.

AI video just took a startling leap in realism. Are we doomed? Read More »


Trump bans sales of chip design software to China

Johnson, who heads China Strategies Group, a risk consultancy, said that China had successfully leveraged its stranglehold on rare earths to bring the US to the negotiating table in Geneva, which “left the Trump administration’s China hawks eager to demonstrate their export control weapons still have purchase.”

While it accounts for a relatively small share of the overall semiconductor industry, EDA software allows chip designers and manufacturers to develop and test the next generation of chips, making it a critical part of the supply chain.

Synopsys, Cadence Design Systems, and Siemens EDA—part of Siemens Digital Industries Software, a subsidiary of Germany’s Siemens AG—account for about 80 percent of China’s EDA market. Synopsys and Cadence did not immediately respond to requests for comment.

In fiscal year 2024, Synopsys reported almost $1 billion in China sales, roughly 16 percent of its revenue. Cadence said China accounted for $550 million or 12 percent of its revenue.

Synopsys shares fell 9.6 percent on Wednesday, while those of Cadence lost 10.7 percent.

Siemens said in a statement the EDA industry had been informed last Friday about new export controls. It said it had supported customers in China “for more than 150 years” and would “continue to work with our customers globally to mitigate the impact of these new restrictions while operating in compliance with applicable national export control regimes.”

In 2022, the Biden administration introduced restrictions on sales of the most sophisticated chip design software to China, but the companies continued to sell export control-compliant products to the country.

In his first term as president, Donald Trump banned China’s Huawei from using American EDA tools. Huawei is seen as an emerging competitor to Nvidia with its “Ascend” AI chips.

Nvidia chief executive Jensen Huang recently warned that successive attempts by American administrations to hamstring China’s AI ecosystem with export controls had failed.

Last year Synopsys entered into an agreement to buy Ansys, a US simulation software company, for $35 billion. The deal still requires approval from Chinese regulators. Ansys shares fell 5.3 percent on Wednesday.

On Wednesday the US Federal Trade Commission announced that both companies would need to divest certain software tools to receive its approval for the deal.

The export restrictions have encouraged Chinese competitors, with three leading EDA companies—Empyrean Technology, Primarius, and Semitronix—significantly growing their market share in recent years.

Shares of Empyrean, Primarius, and Semitronix rose more than 10 percent in early trading in China on Thursday.

© 2025 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.

Trump bans sales of chip design software to China Read More »


It’s too expensive to fight every AI copyright battle, Getty CEO says


Getty dumped “millions and millions” into just one AI copyright fight, CEO says.

In some ways, Getty Images has emerged as one of the most steadfast defenders of artists’ rights in AI copyright fights. Starting in 2022, when some of today’s most sophisticated image generators first started testing new models offering better compositions, Getty banned AI-generated uploads to its service. By the next year, Getty had released a “socially responsible” image generator to prove it was possible to build such a tool while rewarding artists, even as it sued an AI firm that refused to pay them.

But in the years since, the media company has discovered that it’s simply too expensive to fight every AI copyright battle, Getty Images CEO Craig Peters recently told CNBC.

According to Peters, Getty has dumped millions into just one copyright fight against Stability AI.

It’s “extraordinarily expensive,” Peters told CNBC. “Even for a company like Getty Images, we can’t pursue all the infringements that happen in one week.” He confirmed that “we can’t pursue it because the courts are just prohibitively expensive. We are spending millions and millions of dollars in one court case.”

Fair use?

Getty sued Stability AI in 2023, after the AI company’s image generator, Stable Diffusion, started spitting out images that replicated Getty’s famous trademark. In the complaint, Getty alleged that Stability AI had trained Stable Diffusion on “more than 12 million photographs from Getty Images’ collection, along with the associated captions and metadata, without permission from or compensation to Getty Images, as part of its efforts to build a competing business.”

As Getty saw it, Stability AI had plenty of opportunity to license the images from Getty and seemingly “chose to ignore viable licensing options and long-standing legal protections in pursuit of their stand-alone commercial interests.”

Stability AI, like all AI firms, has argued that AI training based on freely scraping images from the web is a “fair use” protected under copyright law.

So far, courts have not settled this debate, and many AI companies have urged judges and governments around the world to resolve it in their favor, for the sake of safeguarding national security and securing economic prosperity by winning the AI race. According to AI companies, paying artists to train on their works threatens to slow innovation, while rivals in China—who aren’t bound by US copyright law—continue scraping the web to advance their models.

Peters called out Stability AI for adopting this stance, arguing that rightsholders shouldn’t have to spend millions fighting against a claim that paying out licensing fees would “kill innovation.” Some critics have likened AI firms’ argument to a defense of forced labor, suggesting the US would never value “innovation” over human rights, and the same logic should follow for artists’ rights.

“We’re battling a world of rhetoric,” Peters said, alleging that these firms “are taking copyrighted material to develop their powerful AI models under the guise of innovation and then ‘just turning those services right back on existing commercial markets.'”

To Peters, that’s simply “disruption under the notion of ‘move fast and break things,’” and Getty believes “that’s unfair competition.”

 “We’re not against competition,” Peters said. “There’s constant new competition coming in all the time from new technologies or just new companies. But that [AI scraping] is just unfair competition, that’s theft.”

Broader Internet backlash over AI firms’ rhetoric

Peters’ comments come after a former Meta head of global affairs, Nick Clegg, received Internet backlash this week after making the same claim that AI firms raise time and again: that asking artists for consent for AI training would “kill” the AI industry, The Verge reported.

According to Clegg, the only viable solution to the tension between artists and AI companies would be to give artists ways to opt out of training, which Stability AI notably started doing in 2022.

“Quite a lot of voices say, ‘You can only train on my content, [if you] first ask,'” Clegg reportedly said. “And I have to say that strikes me as somewhat implausible because these systems train on vast amounts of data.”

On X, the CEO of Fairly Trained—a nonprofit that supports artists’ fight against nonconsensual AI training—Ed Newton-Rex (who is also a former Stability AI vice president of audio) pushed back on Clegg’s claim in a post viewed by thousands.

“Nick Clegg is wrong to say artists’ demands on AI & copyright are unworkable,” Newton-Rex said. “Every argument he makes could equally have been made about Napster”: First, that “the tech is out there”; second, that “licensing takes time”; and third, that “we can’t control what other countries do.” If Napster’s operations weren’t legal, AI firms’ training shouldn’t be either, Newton-Rex suggested, writing, “These are not reasons not to uphold the law and treat creators fairly.”

Other social media users mocked Clegg with jokes meant to destroy AI firms’ favorite go-to argument against copyright claims.

“Blackbeard says asking sailors for permission to board and loot their ships would ‘kill’ the piracy on the high seas industry,” an X user with the handle “Seanchuckle” wrote.

On Bluesky, a trial lawyer, Max Kennerly, effectively satirized Clegg and the whole AI industry by writing, “Our product creates such little value that it is simply not viable in the marketplace, not even as a niche product. Therefore, we must be allowed to unilaterally extract value from the work of others and convert that value into our profits.”

Other ways to fight

Getty plans to continue fighting against the AI firms that are impressing this “world of rhetoric” on judges and lawmakers, but court battles will likely remain few and far between due to the price tag, Peters has suggested.

There are other ways to fight, though. In a submission last month, Getty pushed the Trump administration to reject “those seeking to weaken US copyright protections by creating a ‘right to learn’ exemption” for AI firms when building Trump’s AI Action Plan.

“US copyright laws are not obstructing the path to continued AI progress,” Getty wrote. “Instead, US copyright laws are a path to sustainable AI and a path that broadens society’s participation in AI’s economic benefits, which reduces downstream economic burdens on the Federal, State and local governments. US copyright laws provide incentives to invest and create.”

In Getty’s submission, the media company emphasized that requiring consent for AI training is not an “overly restrictive” control on AI’s development such as those sought by stauncher critics “that could harm US competitiveness, national security or societal advances such as curing cancer.” And Getty claimed it also wasn’t “requesting protection from existing and new sources of competition,” despite the lawsuit’s suggestion that Stability AI and other image generators threaten to replace Getty’s image library in the market.

What Getty said it hopes Trump’s AI plan will ensure is a world where the rights and opportunities of rightsholders are not “usurped for the commercial benefits” of AI companies.

In 2023, when Getty was first suing Stability AI, Peters suggested that, otherwise, allowing AI firms to widely avoid paying artists would create “a sad world,” perhaps disincentivizing creativity.


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

It’s too expensive to fight every AI copyright battle, Getty CEO says Read More »