Artificial Intelligence


Google reveals sky-high Gemini usage numbers in antitrust case

Despite the uptick in Gemini usage, Google is still far from catching OpenAI. Naturally, Google has been keeping a close eye on ChatGPT traffic. OpenAI has also seen traffic increase, putting ChatGPT at around 600 million monthly active users, according to Google’s analysis. Earlier this year, reports pegged ChatGPT usage at around 400 million users per month.

There are many ways to measure web traffic, and not all of them tell you what you might think. For example, OpenAI has recently claimed as many as 400 million weekly active users, but companies can choose which seven-day period in a given month they report as their weekly figure. A monthly metric is more straightforward, and we have some degree of trust that Google isn’t using fake or unreliable numbers in a case where the company’s past conduct has already harmed its legal position.

While all AI firms strive to lock in as many users as possible, this is not the total win it would be for a retail site or social media platform—each person using Gemini or ChatGPT costs the company money because generative AI is so computationally expensive. Google doesn’t talk about how much it earns (more likely loses) from Gemini subscriptions, but OpenAI has noted that it loses money even on its $200 monthly plan. So while having a broad user base is essential to make these products viable in the long term, it just means higher costs unless the cost of running massive AI models comes down.



Trump can’t keep China from getting AI chips, TSMC suggests

“Despite TSMC’s best efforts to comply with all relevant export control and sanctions laws and regulations, there is no assurance that its business activities will not be found incompliant with export control laws and regulations,” TSMC said.

Further, “if TSMC or TSMC’s business partners fail to obtain appropriate import, export or re-export licenses or permits or are found to have violated applicable export control or sanctions laws, TSMC may also be adversely affected, through reputational harm as well as other negative consequences, including government investigations and penalties resulting from relevant legal proceedings,” TSMC warned.

Trump’s tariffs may end TSMC’s “tariff-proof” era

TSMC is thriving despite years of tariffs and export controls, its report said, with at least one analyst suggesting that, so far, the company appears “somewhat tariff-proof.” However, all of that could be changing fast, as “US President Donald Trump announced in 2025 an intention to impose more expansive tariffs on imports into the United States,” TSMC said.

“Any tariffs imposed on imports of semiconductors and products incorporating chips into the United States may result in increased costs for purchasing such products, which may, in turn, lead to decreased demand for TSMC’s products and services and adversely affect its business and future growth,” TSMC said.

And if TSMC’s business is rattled by escalations in the US-China trade war, TSMC warned, that risks disrupting the entire global semiconductor supply chain.

Trump’s semiconductor tariff plans remain uncertain. About a week ago, Trump claimed the rates would be unveiled “over the next week,” Reuters reported, which means they could be announced any day now.



Regrets: Actors who sold AI avatars stuck in Black Mirror-esque dystopia

In a Black Mirror-esque turn, some cash-strapped actors who didn’t fully understand the consequences are regretting selling their likenesses to be used in AI videos that they consider embarrassing, damaging, or harmful, AFP reported.

Among them is a 29-year-old New York-based actor, Adam Coy, who licensed the rights to his face and voice to a company called MCM for one year for $1,000 without stopping to ask, “Am I crossing a line by doing this?” His partner’s mother later found videos in which he appeared as a doomsayer predicting disasters, he told the AFP.

South Korean actor Simon Lee’s AI likeness was similarly used to spook naïve Internet users, but in a potentially more harmful way. He told the AFP that he was “stunned” to find his AI avatar promoting “questionable health cures on TikTok and Instagram” and that he felt ashamed to have his face linked to obvious scams.

As AI avatar technology improves, the temptation to license likenesses will likely grow. One of the most successful companies that’s recruiting AI avatars, UK-based Synthesia, doubled its valuation to $2.1 billion in January, CNBC reported. And just last week, Synthesia struck a $2 billion deal with Shutterstock that will make its AI avatars more human-like, The Guardian reported.

To ensure that actors are incentivized to license their likenesses, Synthesia also recently launched an equity fund. According to the company, actors behind the most popular AI avatars or featured in Synthesia marketing campaigns will be granted options in “a pool of our company shares” worth $1 million.

“These actors will be part of the program for up to four years, during which their equity awards will vest monthly,” Synthesia said.

For actors, selling their AI likeness seems quick and painless—and perhaps increasingly lucrative. All they have to do is show up and make a bunch of different facial expressions in front of a green screen, then collect their checks. But Alyssa Malchiodi, a lawyer who has advocated on behalf of actors, told the AFP that “the clients I’ve worked with didn’t fully understand what they were agreeing to at the time,” blindly signing contracts with “clauses considered abusive,” sometimes even granting “worldwide, unlimited, irrevocable exploitation, with no right of withdrawal.”



Gemini 2.5 Flash comes to the Gemini app as Google seeks to improve “dynamic thinking”

Gemini 2.5 Flash will allow developers to set a token limit for thinking or simply disable thinking altogether. Google has priced the model at $0.15 per 1 million input tokens, and output comes in two flavors. Without thinking, output costs $0.60 per million tokens, but enabling thinking boosts that to $3.50. The thinking budget option lets developers tune how much reasoning the model does for the amount of money they’re willing to pay. According to Doshi, you can actually see the reasoning improvements in benchmarks as you add more token budget.
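To put those prices in perspective, here is a rough back-of-the-envelope comparison using the per-million-token rates above; the request volume and token counts are hypothetical assumptions, not Google’s figures.

```python
# Rough cost sketch using the per-million-token prices quoted above.
# The request volume and token counts are hypothetical assumptions.

INPUT_PRICE = 0.15               # dollars per 1M input tokens
OUTPUT_PRICE_NO_THINKING = 0.60  # dollars per 1M output tokens, thinking off
OUTPUT_PRICE_THINKING = 3.50     # dollars per 1M output tokens, thinking on

def estimated_cost(requests, input_tokens, output_tokens, thinking=False):
    """Estimate spend for a batch of requests at the quoted prices."""
    output_price = OUTPUT_PRICE_THINKING if thinking else OUTPUT_PRICE_NO_THINKING
    input_cost = requests * input_tokens / 1_000_000 * INPUT_PRICE
    output_cost = requests * output_tokens / 1_000_000 * output_price
    return input_cost + output_cost

# Example: 1 million requests, each with 500 input and 300 output tokens.
print(f"Thinking off: ${estimated_cost(1_000_000, 500, 300):,.2f}")
print(f"Thinking on:  ${estimated_cost(1_000_000, 500, 300, thinking=True):,.2f}")
```

In this made-up workload, turning thinking on raises the bill from roughly $255 to roughly $1,125, which is why a controllable budget matters to developers.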

2.5 Flash outputs get better as you add more reasoning tokens. Credit: Google

Like 2.5 Pro, this model supports Dynamic Thinking, which can automatically adjust the amount of work that goes into generating an output based on the complexity of the input. The new Flash model goes further by allowing developers to control thinking. According to Doshi, Google is launching the model now to guide improvements in these dynamic features.

“Part of the reason we’re putting the model out in preview is to get feedback from developers on where the model meets their expectations, where it under-thinks or over-thinks, so that we can continue to iterate on [dynamic thinking],” says Doshi.

Don’t expect that kind of precise control for consumer Gemini products right now, though. Doshi notes that the main reason you’d want to toggle thinking or set a budget is to control costs and latency, which matters to developers. However, Google is hoping that what it learns from the preview phase will help it understand what users and developers expect from the model. “Creating a simpler Gemini app experience for consumers while still offering flexibility is the goal,” Doshi says.

With the rapid cadence of releases, a final release for Gemini 2.5 doesn’t seem that far off. Google still doesn’t have any specifics to share on that front, but with the new developer options and availability in the Gemini app, Doshi tells us the team hopes to move the 2.5 family to general availability soon.



Elon Musk wants to be “AGI dictator,” OpenAI tells court


Elon Musk’s “relentless” attacks on OpenAI must cease, court filing says.

Yesterday, OpenAI countersued Elon Musk, alleging that Musk’s “sham” bid to buy OpenAI was intentionally timed to cause maximum disruption and potentially even frighten off investment from honest bidders.

Slamming Musk for attempting to become an “AGI dictator,” OpenAI said that if Musk’s allegedly “relentless” yearslong campaign of “harassment” isn’t stopped, Musk could end up taking over OpenAI and tanking its revenue the same way he did with Twitter.

In its filing, OpenAI argued that Musk and the other investors who joined his bid completely fabricated the $97.375 billion offer. It was allegedly not based on OpenAI’s projections or historical performance, as Musk claimed, but instead appeared to be “a comedic reference to Musk’s favorite sci-fi” novel, Iain M. Banks’ Look to Windward. Musk and others also provided “no evidence of financing to pay the nearly $100 billion purchase price,” OpenAI said.

And perhaps most damning, one of Musk’s backers, Ron Baron, appeared “flustered” when asked about the deal on CNBC, OpenAI alleged. On air, Baron admitted that he didn’t follow the deal closely and that “the point of the bid, as pitched to him (plainly by Musk) was not to buy OpenAI’s assets, but instead to obtain ‘discovery’ and get ‘behind the wall’ at OpenAI,” the AI company’s court filing alleged.

Likely poisoning potential deals most, OpenAI suggested, was the idea that Musk might take over OpenAI and damage its revenue like he did with Twitter. Just the specter of that could repel talent, OpenAI feared, since “the prospect of a Musk takeover means chaos and arbitrary employment action.”

And “still worse, the threat of a Musk takeover is a threat to the very mission of building beneficial AGI,” since xAI is allegedly “the worst offender” in terms of “inadequate safety measures,” according to one study, and X’s chatbot, Grok, has “become a leading spreader of misinformation and inflammatory political rhetoric,” OpenAI said. Even xAI representatives had to admit that users discovering that Grok consistently responds that “President Donald Trump and Musk deserve the death penalty” was a “really terrible and bad failure,” OpenAI’s filing said.

Despite Musk appearing to only be “pretending” to be interested in purchasing OpenAI—and OpenAI ultimately rejecting the offer—the company still had to cover the costs of reviewing the bid. And beyond bearing costs and confronting an artificially raised floor on the company’s valuation supposedly frightening off investors, “a more serious toll” of “Musk’s most recent ploy” would be OpenAI lacking resources to fulfill its mission to benefit humanity with AI “on terms uncorrupted by unlawful harassment and interference,” OpenAI said.

OpenAI has demanded a jury trial and is seeking an injunction to stop Musk’s alleged unfair business practices—which it claimed are designed to impair competition in the nascent AI field “for the sole benefit of Musk’s xAI” and “at the expense of the public interest.”

“The risk of future, irreparable harm from Musk’s unlawful conduct is acute, and the risk that that conduct continues is high,” OpenAI alleged. “With every month that has passed, Musk has intensified and expanded the fronts of his campaign against OpenAI, and has proven himself willing to take ever more dramatic steps to seek a competitive advantage for xAI and to harm [OpenAI CEO Sam] Altman, whom, in the words of the president of the United States, Musk ‘hates.'”

OpenAI also wants Musk to cover the costs it incurred from entertaining the supposedly fake bid, as well as pay punitive damages to be determined at trial for allegedly engaging “in wrongful conduct with malice, oppression, and fraud.”

OpenAI’s filing also largely denies Musk’s claims that OpenAI abandoned its mission and made a fool out of early investors like Musk by currently seeking to restructure its core business into a for-profit benefit corporation (which removes control by its nonprofit board).

“You can’t sue your way to AGI,” an OpenAI blog post said.

In response to OpenAI’s filing, Musk’s lawyer, Marc Toberoff, provided a statement to Ars.

“Had OpenAI’s Board genuinely considered the bid, as they were obligated to do, they would have seen just how serious it was,” Toberoff said. “It’s telling that having to pay fair market value for OpenAI’s assets allegedly ‘interferes’ with their business plans. It’s apparent they prefer to negotiate with themselves on both sides of the table than engage in a bona fide transaction in the best interests of the charity and the public interest.”

Musk’s attempt to become an “AGI dictator”

According to OpenAI’s filing, “Musk has tried every tool available to harm OpenAI” ever since OpenAI refused to allow Musk to become an “AGI dictator” and fully control OpenAI by absorbing it into Tesla in 2018.

Musk allegedly “demanded sole control of the new for-profit, at least in the short term: He would be CEO, own a majority equity stake, and control a majority of the board,” OpenAI said. “He would—in his own words—’unequivocally have initial control of the company.'”

At the time, OpenAI rejected Musk’s offer, viewing it as in conflict with its mission to avoid corporate control and telling Musk:

“You stated that you don’t want to control the final AGI, but during this negotiation, you’ve shown to us that absolute control is extremely important to you. … The goal of OpenAI is to make the future good and to avoid an AGI dictatorship. … So it is a bad idea to create a structure where you could become a dictator if you chose to, especially given that we can create some other structure that avoids this possibility.”

This news did not sit well with Musk, OpenAI said.

“Musk was incensed,” OpenAI told the court. “If he could not control the contemplated for-profit entity, he would not participate in it.”

Back then, Musk departed from OpenAI somewhat “amicably,” OpenAI said, although Musk insisted it was “obvious” that OpenAI would fail without him. However, after OpenAI instead became a global AI leader, Musk quietly founded xAI, OpenAI alleged, failing to publicly announce his new company while deceptively seeking a “moratorium” on AI development, apparently to slow down rivals so that xAI could catch up.

OpenAI also alleges that this is when Musk began intensifying his attacks on OpenAI while attempting to poach its top talent and demanding access to OpenAI’s confidential, sensitive information as a former donor and director—”without ever disclosing he was building a competitor in secret.”

And the attacks have only grown more intense since then, said OpenAI, claiming that Musk planted stories in the media, wielded his influence on X, requested government probes into OpenAI, and filed multiple legal claims, including seeking an injunction to halt OpenAI’s business.

“Most explosively,” OpenAI alleged that Musk pushed attorneys general of California and Delaware “to force OpenAI, Inc., without legal basis, to auction off its assets for the benefit of Musk and his associates.”

Meanwhile, OpenAI noted, Musk has folded his social media platform X into xAI, announcing its valuation was at $80 billion and gaining “a major competitive advantage” by getting “unprecedented direct access to all the user data flowing through” X. Further, Musk intends to expand his “Colossus,” which is “believed to be the world’s largest supercomputer,” “tenfold.” That could help Musk “leap ahead” of OpenAI, suggesting Musk has motive to delay OpenAI’s growth while he pursues that goal.

That’s why Musk “set in motion a campaign of harassment, interference, and misinformation designed to take down OpenAI and clear the field for himself,” OpenAI alleged.

Even while countersuing, OpenAI appears careful not to poke the bear too hard. In the court filing and on X, OpenAI praised Musk’s leadership skills and the potential for xAI to dominate the AI industry, partly due to its unique access to X data. But ultimately, OpenAI seems happy to be operating independently of Musk now, asking the court to agree that “Elon’s never been about the mission” of benefiting humanity with AI, “he’s always had his own agenda.”

“Elon is undoubtedly one of the greatest entrepreneurs of our time,” OpenAI said on X. “But these antics are just history on repeat—Elon being all about Elon.”


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.



Google announces faster, more efficient Gemini AI model

We recently spoke with Google’s Tulsee Doshi, who noted that the 2.5 Pro (Experimental) release was still prone to “overthinking” its responses to simple queries. However, the plan was to further improve dynamic thinking for the final release, and the team also hoped to give developers more control over the feature. That appears to be happening with Gemini 2.5 Flash, which includes “dynamic and controllable reasoning.”

The newest Gemini models will choose a “thinking budget” based on the complexity of the prompt. This helps reduce wait times and processing for 2.5 Flash. Developers even get granular control over the budget to lower costs and speed things along where appropriate. Gemini 2.5 models are also getting supervised tuning and context caching for Vertex AI in the coming weeks.
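For developers, that granular control is surfaced as a per-request thinking configuration. Here is a minimal sketch of what this might look like with the google-genai Python SDK; the model ID, budget value, and exact field names are assumptions based on the preview and may change.

```python
# Minimal sketch: capping (or disabling) the thinking budget per request.
# The model ID and budget value below are assumptions for illustration.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

response = client.models.generate_content(
    model="gemini-2.5-flash-preview-04-17",  # assumed preview model ID
    contents="Summarize the trade-offs of quicksort vs. mergesort.",
    config=types.GenerateContentConfig(
        # A value of 0 disables thinking; a positive value caps reasoning tokens.
        thinking_config=types.ThinkingConfig(thinking_budget=1024),
    ),
)
print(response.text)
```

Lowering the budget trades some reasoning quality for lower cost and latency, which is exactly the dial Google says it wants developers to experiment with during the preview.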

In addition to the arrival of Gemini 2.5 Flash, the larger Pro model has picked up a new gig. Google’s largest Gemini model is now powering its Deep Research tool, which was previously running Gemini 2.0 Pro. Deep Research lets you explore a topic in greater detail simply by entering a prompt. The agent then goes out into the Internet to collect data and synthesize a lengthy report.

Gemini vs. ChatGPT chart. Credit: Google

Google says that the move to Gemini 2.5 has boosted the accuracy and usefulness of Deep Research. The graphic above shows Google’s alleged advantage compared to OpenAI’s deep research tool. These stats are based on user evaluations (not synthetic benchmarks) and show a greater than 2-to-1 preference for Gemini 2.5 Pro reports.

Deep Research is available for limited use on non-paid accounts, but you won’t get the latest model. Deep Research with 2.5 Pro is currently limited to Gemini Advanced subscribers. However, we expect before long that all models in the Gemini app will move to the 2.5 branch. With dynamic reasoning and new TPUs, Google could begin lowering the sky-high costs that have thus far made generative AI unprofitable.



Google’s AI Mode search can now answer questions about images

Google started cramming AI features into search in 2024, but last month marked an escalation. With the release of AI Mode, Google previewed a future in which searching the web does not return a list of 10 blue links. Google says it’s getting positive feedback on AI Mode from users, so it’s forging ahead by adding multimodal functionality to its robotic results.

AI Mode relies on a custom version of the Gemini large language model (LLM) to produce results. Google confirms that this model now supports multimodal input, which means you can show images to AI Mode when conducting a search.

As this change rolls out, the search bar in AI Mode will gain a new button that lets you snap a photo or upload an image. The updated Gemini model can interpret the content of images, but it gets a little help from Google Lens. Google notes that Lens can identify specific objects in the images you upload, passing that context along so AI Mode can make multiple sub-queries, known as a “fan-out technique.”

Google illustrates how this could work in the example below. The user shows AI Mode a few books, asking questions about similar titles. Lens identifies each individual title, allowing AI Mode to incorporate the specifics of the books into its response. This is key to the model’s ability to suggest similar books and make suggestions based on the user’s follow-up question.
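The “fan-out technique” itself is a familiar pattern: split one request into several narrower sub-queries, run them, and merge the results. Below is a minimal sketch of the idea; the helper functions are hypothetical stand-ins for Lens and the search backend, not real Google APIs.

```python
# Minimal fan-out sketch. identify_objects() and search() are hypothetical
# stand-ins for Lens object recognition and the search backend.
from concurrent.futures import ThreadPoolExecutor

def identify_objects(image_bytes: bytes) -> list[str]:
    """Pretend Lens: return labels for objects detected in the image."""
    return ["Dune", "Hyperion", "The Left Hand of Darkness"]  # placeholder labels

def search(query: str) -> str:
    """Pretend search backend: run a single sub-query."""
    return f"results for: {query}"

def fan_out(image_bytes: bytes, question: str) -> list[str]:
    """Issue one sub-query per detected object, then collect the results."""
    labels = identify_objects(image_bytes)
    sub_queries = [f"{question}: books similar to {label}" for label in labels]
    with ThreadPoolExecutor() as pool:
        return list(pool.map(search, sub_queries))

print(fan_out(b"", "recommend similar titles"))
```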



Gemini “coming together in really awesome ways,” Google says after 2.5 Pro release


Google’s Tulsee Doshi talks vibes and efficiency in Gemini 2.5 Pro.

Google was caught flat-footed by the sudden skyrocketing interest in generative AI despite its role in developing the underlying technology. This prompted the company to refocus its considerable resources on catching up to OpenAI. Since then, we’ve seen the detail-flubbing Bard and numerous versions of the multimodal Gemini models. While Gemini has struggled to make progress in benchmarks and user experience, that could be changing with the new 2.5 Pro (Experimental) release. With big gains in benchmarks and vibes, this might be the first Google model that can make a dent in ChatGPT’s dominance.

We recently spoke to Google’s Tulsee Doshi, director of product management for Gemini, to talk about the process of releasing Gemini 2.5, as well as where Google’s AI models are going in the future.

Welcome to the vibes era

Google may have had a slow start in building generative AI products, but the Gemini team has picked up the pace in recent months. The company released Gemini 2.0 in December, showing a modest improvement over the 1.5 branch. It only took three months to reach 2.5, meaning Gemini 2.0 Pro wasn’t even out of the experimental stage yet. To hear Doshi tell it, this was the result of Google’s long-term investments in Gemini.

“A big part of it is honestly that a lot of the pieces and the fundamentals we’ve been building are now coming together in really awesome ways,” Doshi said. “And so we feel like we’re able to pick up the pace here.”

The process of releasing a new model involves testing a lot of candidates. According to Doshi, Google takes a multilayered approach to inspecting those models, starting with benchmarks. “We have a set of evals, both external academic benchmarks as well as internal evals that we created for use cases that we care about,” she said.


The team also uses these tests to work on safety, which, as Google points out at every given opportunity, is still a core part of how it develops Gemini. Doshi noted that making a model safe and ready for wide release involves adversarial testing and lots of hands-on time.

But we can’t forget the vibes, which have become an increasingly important part of AI models. There’s great focus on the vibe of outputs—how engaging and useful they are. There’s also the emerging trend of vibe coding, in which you use AI prompts to build things instead of typing the code yourself. For the Gemini team, these concepts are connected. The team uses product and user feedback to understand the “vibes” of the output, be that code or just an answer to a question.

Google has noted on a few occasions that Gemini 2.5 is at the top of the LM Arena leaderboard, which shows that people who have used the model prefer the output by a considerable margin—it has good vibes. That’s certainly a positive place for Gemini to be after a long climb, but there is some concern in the field that too much emphasis on vibes could push us toward models that make us feel good regardless of whether the output is good, a property known as sycophancy.

If the Gemini team has concerns about feel-good models, they’re not letting it show. Doshi mentioned the team’s focus on code generation, which she noted can be optimized for “delightful experiences” without stoking the user’s ego. “I think about vibe less as a certain type of personality trait that we’re trying to work towards,” Doshi said.

Hallucinations are another area of concern with generative AI models. Google has had plenty of embarrassing experiences with Gemini and Bard making things up, but the Gemini team believes they’re on the right path. Gemini 2.5 apparently has set a high-water mark in the team’s factuality metrics. But will hallucinations ever be reduced to the point we can fully trust the AI? No comment on that front.

Don’t overthink it

Perhaps the most interesting thing you’ll notice when using Gemini 2.5 is that it’s very fast compared to other models that use simulated reasoning. Google says it’s building this “thinking” capability into all of its models going forward, which should lead to improved outputs. The expansion of reasoning in large language models in 2024 resulted in a noticeable improvement in the quality of these tools. It also made them even more expensive to run, exacerbating an already serious problem with generative AI.

The larger and more complex an LLM becomes, the more expensive it is to run. Google hasn’t released technical data like parameter count on its newer models—you’ll have to go back to the 1.5 branch to get that kind of detail. However, Doshi explained that Gemini 2.5 is not a substantially larger model than Google’s last iteration, calling it “comparable” in size to 2.0.

Gemini 2.5 is more efficient in one key area: the chain of thought. It’s Google’s first public model to support a feature called Dynamic Thinking, which allows the model to modulate the amount of reasoning that goes into an output. This is just the first step, though.

“I think right now, the 2.5 Pro model we ship still does overthink for simpler prompts in a way that we’re hoping to continue to improve,” Doshi said. “So one big area we are investing in is Dynamic Thinking as a way to get towards our [general availability] version of 2.5 Pro where it thinks even less for simpler prompts.”

Gemini models on phone. Credit: Ryan Whitwam

Google doesn’t break out earnings from its new AI ventures, but we can safely assume there’s no profit to be had. No one has managed to turn these huge LLMs into a viable business yet. OpenAI, which has the largest user base with ChatGPT, loses money even on the users paying for its $200 Pro plan. Google is planning to spend $75 billion on AI infrastructure in 2025, so it will be crucial to make the most of this very expensive hardware. Building models that don’t waste cycles on overthinking “Hi, how are you?” could be a big help.

Missing technical details

Google plays it close to the chest with Gemini, but the 2.5 Pro release has offered more insight into where the company plans to go than ever before. To really understand this model, though, we’ll need to see the technical report. Google last released such a document for Gemini 1.5. We still haven’t seen the 2.0 version, and we may never see that document now that 2.5 has supplanted 2.0.

Doshi notes that 2.5 Pro is still an experimental model. So, don’t expect full evaluation reports right away. A Google spokesperson clarified that a full technical evaluation report on the 2.5 branch is planned, but there is no firm timeline. Google hasn’t even released updated model cards for Gemini 2.0, let alone 2.5. These documents are brief one-page summaries of a model’s training, intended use, evaluation data, and more. They’re essentially LLM nutrition labels: much less detailed than a technical report, but better than nothing. Google confirms model cards are on the way for Gemini 2.0 and 2.5.

Given the recent rapid pace of releases, it’s possible Gemini 2.5 Pro could be rolling out more widely around Google I/O in May. We certainly hope Google has more details when the 2.5 branch expands. As Gemini development picks up steam, transparency shouldn’t fall by the wayside.


Ryan Whitwam is a senior technology reporter at Ars Technica, covering the ways Google, AI, and mobile technology continue to change the world. Over his 20-year career, he’s written for Android Police, ExtremeTech, Wirecutter, NY Times, and more. He has reviewed more phones than most people will ever own. You can follow him on Bluesky, where you will see photos of his dozens of mechanical keyboards.



NJ teen wins fight to put nudify app users in prison, impose fines up to $30K


Here’s how one teen plans to fix schools failing kids affected by nudify apps.

When Francesca Mani was 14 years old, boys at her New Jersey high school used nudify apps to target her and other girls. At the time, adults did not seem to take the harassment seriously, telling her to move on after she demanded more severe consequences than a single boy’s one- or two-day suspension.

Mani refused to take adults’ advice, going over their heads to lawmakers who were more sensitive to her demands. And now, she’s won her fight to criminalize deepfakes. On Wednesday, New Jersey Governor Phil Murphy signed a law that he said would help victims “take a stand against deceptive and dangerous deepfakes” by making it a crime to create or share fake AI nudes of minors or non-consenting adults—as well as deepfakes seeking to meddle with elections or damage any individuals’ or corporations’ reputations.

Under the law, victims like Mani who are targeted by nudify apps can sue bad actors, collecting up to $1,000 per harmful image created either knowingly or recklessly. New Jersey hopes these “more severe consequences” will deter kids and adults from creating harmful images, as well as emphasize to schools—whose lax response to fake nudes has been heavily criticized—that AI-generated nude images depicting minors are illegal and must be taken seriously and reported to police. The law also imposes a maximum fine of $30,000 on anyone creating or sharing deepfakes for malicious purposes, as well as possible punitive damages if a victim can prove that images were created in willful defiance of the law.

Ars could not reach Mani for comment, but she celebrated the win in the governor’s press release, saying, “This victory belongs to every woman and teenager told nothing could be done, that it was impossible, and to just move on. It’s proof that with the right support, we can create change together.”

On LinkedIn, her mother, Dorota Mani—who has been working with the governor’s office on a commission to protect kids from online harms—thanked lawmakers like Murphy and former New Jersey Assemblyman Herb Conaway, who sponsored the law, for “standing with us.”

“When used maliciously, deepfake technology can dismantle lives, distort reality, and exploit the most vulnerable among us,” Conaway said. “I’m proud to have sponsored this legislation when I was still in the Assembly, as it will help us keep pace with advancing technology. This is about drawing a clear line between innovation and harm. It’s time we take a firm stand to protect individuals from digital deception, ensuring that AI serves to empower our communities.”

Doing nothing is no longer an option for schools, teen says

Around the country, as cases like Mani’s continue to pop up, experts expect that shame prevents most victims from coming forward to flag abuses, suspecting that the problem is much more widespread than media reports suggest.

Encode Justice has a tracker monitoring reported cases involving minors, including allowing victims to anonymously report harms around the US. But the true extent of the harm currently remains unknown, as cops warn of a flood of AI child sex images obscuring investigations into real-world child abuse.

Confronting this shadowy threat to kids everywhere, Mani was named as one of TIME’s most influential people in AI last year due to her advocacy fighting deepfakes. She’s not only pressured lawmakers to take strong action to protect vulnerable people, but she’s also pushed for change at tech companies and in schools nationwide.

“When that happened to me and my classmates, we had zero protection whatsoever,” Mani told TIME, and neither did other girls around the world who had been targeted and reached out to thank her for fighting for them. “There were so many girls from different states, different countries. And we all had three things in common: the lack of AI school policies, the lack of laws, and the disregard of consent.”

Yiota Souras, chief legal officer at the National Center for Missing and Exploited Children, told CBS News last year that protecting teens started with laws that criminalize sharing fake nudes and provide civil remedies, just as New Jersey’s law does. That way, “schools would have protocols,” she said, and “investigators and law enforcement would have roadmaps on how to investigate” and “what charges to bring.”

Clarity is urgently needed in schools, advocates say. At Mani’s school, the boys who shared the photos had their names shielded and were pulled out of class individually to be interrogated, but victims like Mani had no privacy whatsoever. Their names were blared over the school’s loudspeaker system as boys mocked their tears in the hallway. To this day, it’s unclear who exactly shared and possibly still has copies of the images, which experts say could haunt Mani throughout her life. And the school’s inadequate response was a major reason why Mani decided to take a stand, as she seemingly came to view the school as a vehicle furthering her harassment.

“I realized I should stop crying and be mad, because this is unacceptable,” Mani told CBS News.

Mani pushed for NJ’s new law and claimed the win, but she thinks that change must start at schools, where the harassment starts. In her school district, the “harassment, intimidation and bullying” policy was updated to incorporate AI harms, but she thinks schools should go even further. Working with Encode Justice, she is helping to push a plan to fix schools failing kids targeted by nudify apps.

“My goal is to protect women and children—and we first need to start with AI school policies, because this is where most of the targeting is happening,” Mani told TIME.

Encode Justice did not respond to Ars’ request for comment. But its plan noted a common pattern in schools throughout the US. Students learn about nudify apps through ads on social media—Instagram reportedly drives 90 percent of traffic to one such app—where they can also usually find innocuous photos of classmates to screenshot. Within seconds, the apps can nudify the screenshotted images, which Mani told CBS News then spread “rapid fire” by text message and DM, often over school networks.

To end the abuse, schools need to be prepared, Encode Justice said, especially since “their initial response can sometimes exacerbate the situation.”

At Mani’s school, for example, leadership was criticized for announcing the victims’ names over the loudspeaker, which Encode Justice said never should have happened. Another misstep was at a California middle school, which delayed action for four months until parents went to police, Encode Justice said. In Texas, a school failed to stop images from spreading for eight months while a victim pleaded for help from administrators and police who failed to intervene. The longer the delays, the more victims will likely be targeted. In Pennsylvania, a single ninth grader targeted 46 girls before anyone stepped in.

Students deserve better, Mani feels, and Encode Justice’s plan recommends that all schools create action plans to stop failing students and respond promptly to stop image sharing.

That starts with updating policies to ban deepfake sexual imagery, then clearly communicating to students “the seriousness of the issue and the severity of the consequences.” Consequences should include identifying all perpetrators and issuing suspensions or expulsions on top of any legal consequences students face, Encode Justice suggested. They also recommend establishing “written procedures to discreetly inform relevant authorities about incidents and to support victims at the start of an investigation on deepfake sexual abuse.” And, critically, all teachers must be trained on these new policies.

“Doing nothing is no longer an option,” Mani said.




DeepMind has detailed all the ways AGI could wreck the world

As AI hype permeates the Internet, tech and business leaders are already looking toward the next step. AGI, or artificial general intelligence, refers to a machine with human-like intelligence and capabilities. If today’s AI systems are on a path to AGI, we will need new approaches to ensure such a machine doesn’t work against human interests.

Unfortunately, we don’t have anything as elegant as Isaac Asimov’s Three Laws of Robotics. Researchers at DeepMind have been working on this problem and have released a new technical paper (PDF) that explains how to develop AGI safely, which you can download at your convenience.

It contains a huge amount of detail, clocking in at 108 pages before references. While some in the AI field believe AGI is a pipe dream, the authors of the DeepMind paper project that it could happen by 2030. With that in mind, they aimed to understand the risks of a human-like synthetic intelligence, which they acknowledge could lead to “severe harm.”

All the ways AGI could harm humanity

This work has identified four possible types of AGI risk, along with suggestions on how we might ameliorate said risks. The DeepMind team, led by company co-founder Shane Legg, categorized the negative AGI outcomes as misuse, misalignment, mistakes, and structural risks. Misuse and misalignment are discussed in the paper at length, but the latter two are only covered briefly.

The four categories of AGI risk, as determined by DeepMind. Credit: Google DeepMind

The first possible issue, misuse, is fundamentally similar to current AI risks. However, because AGI will be more powerful by definition, the damage it could do is much greater. A ne’er-do-well with access to AGI could misuse the system to do harm, for example, by asking the system to identify and exploit zero-day vulnerabilities or create a designer virus that could be used as a bioweapon.



Feeling curious? Google’s NotebookLM can now discover data sources for you

In addition to the chatbot functionality, NotebookLM can use the source data to build FAQs, briefing summaries, and, of course, Audio Overviews—that’s a podcast-style conversation between two fake people, a feature that manages to be simultaneously informative and deeply unsettling. It’s probably the most notable capability of NotebookLM, though. Google recently brought Audio Overviews to its Gemini Deep Research product, too.

And that’s not all—Google is lowering the barrier to entry even more. You don’t even need to have any particular goal to play around with NotebookLM. In addition to the Discover button, Google has added an “I’m Feeling Curious” button, a callback to its iconic randomized “I’m feeling lucky” search button. Register your curiosity with NotebookLM, and it will seek out sources on a random topic.

Google says the new NotebookLM features are available starting today, but you might not see them right away. It could take about a week until everyone has Discover Sources and I’m Feeling Curious. Both features are available to free users, but be aware that the app limits the number of Audio Overviews and sources unless you pay for Google’s AI Premium subscription, which costs $20 per month.



Google’s new experimental Gemini 2.5 model rolls out to free users

Google released its latest and greatest Gemini AI model last week, but it was only made available to paying subscribers. Google has moved with uncharacteristic speed to release Gemini 2.5 Pro (Experimental) for free users, too. The next time you check in with Gemini, you can access most of the new AI’s features without a Gemini Advanced subscription.

The Gemini 2.5 branch will eventually replace 2.0, which was only released in late 2024. It supports simulated reasoning, as all Google’s models will in the future. This approach to producing an output can avoid some of the common mistakes that AI models have made in the past. We’ve also been impressed with Gemini 2.5’s vibe, which has landed it at the top of the LMSYS Chatbot arena leaderboard.

Google says Gemini 2.5 Pro (Experimental) is ready and waiting for free users to try on the web. Simply select the model from the drop-down menu and enter your prompt to watch the “thinking” happen. The model will roll out to the mobile app for free users soon.

While the free tier gets access to this model, it won’t have all the advanced features. You still cannot upload files to Gemini without a paid account, which may make it hard to take advantage of the model’s large context window—although you won’t get the full 1 million-token window anyway. Google says the free version of Gemini 2.5 Pro (Experimental) will have a lower limit, which it has not specified. We’ve added a few thousand words without issue, but there’s another roadblock in the way.
