AI

Time saved by AI offset by new work created, study suggests

A new study analyzing the Danish labor market in 2023 and 2024 suggests that generative AI models like ChatGPT have had almost no significant impact on overall wages or employment yet, despite rapid adoption in some workplaces. The findings, detailed in a working paper by economists from the University of Chicago and the University of Copenhagen, provide an early, large-scale empirical look at how much generative AI is actually changing the labor market.

In “Large Language Models, Small Labor Market Effects,” economists Anders Humlum and Emilie Vestergaard focused specifically on the impact of AI chatbots across 11 occupations often considered vulnerable to automation, including accountants, software developers, and customer support specialists. Their analysis covered data from 25,000 workers and 7,000 workplaces in Denmark.

Despite finding widespread and often employer-encouraged adoption of these tools, the study concluded that “AI chatbots have had no significant impact on earnings or recorded hours in any occupation” during the period studied. The confidence intervals in their statistical analysis ruled out average effects larger than 1 percent.

“The adoption of these chatbots has been remarkably fast,” Humlum told The Register about the study. “Most workers in the exposed occupations have now adopted these chatbots… But then when we look at the economic outcomes, it really has not moved the needle.”

AI creating more work?

During the study, the researchers investigated how company investment in AI affected worker adoption and how chatbots changed workplace processes. While corporate investment boosted AI tool adoption—saving time for 64 to 90 percent of users across studied occupations—the actual benefits were less substantial than expected.

The study revealed that AI chatbots actually created new job tasks for 8.4 percent of workers, including some who did not use the tools themselves, offsetting potential time savings. For example, many teachers now spend time detecting whether students use ChatGPT for homework, while other workers review AI output quality or attempt to craft effective prompts.

New study accuses LM Arena of gaming its popular AI benchmark

This study also calls out LM Arena for what appears to be much greater promotion of private models like Gemini, ChatGPT, and Claude. Developers collect data on model interactions from the Chatbot Arena API, but teams focusing on open models consistently get the short end of the stick.

The researchers point out that certain models appear in arena faceoffs much more often, with Google and OpenAI together accounting for over 34 percent of collected model data. Firms like xAI, Meta, and Amazon are also disproportionately represented in the arena. Therefore, those firms get more vibemarking data compared to the makers of open models.

More models, more evals

The study authors have a list of suggestions to make LM Arena more fair. Several of the paper’s recommendations are aimed at correcting the imbalance of privately tested commercial models, for example, by limiting the number of models a group can add and retract before releasing one. The study also suggests showing all model results, even if they aren’t final.

However, the site’s operators take issue with some of the paper’s methodology and conclusions. LM Arena points out that the pre-release testing features have not been kept secret, with a March 2024 blog post featuring a brief explanation of the system. They also contend that model creators don’t technically choose the version that is shown. Instead, the site simply doesn’t show non-public versions for simplicity’s sake. When a developer releases the final version, that’s what LM Arena adds to the leaderboard.

Proprietary models get disproportionate attention in the Chatbot Arena, the study says. Credit: Shivalika Singh et al.

One place the two sides may find alignment is on the question of unequal matchups. The study authors call for fair sampling, which would ensure open models appear in Chatbot Arena at a rate similar to the likes of Gemini and ChatGPT. LM Arena has suggested it will work to make the sampling algorithm more varied so matchups don’t always feature the same big commercial models. That would send more eval data to smaller players, giving them a chance to improve and mount a genuine challenge.
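
To make the distinction concrete, here is a minimal sketch in Python contrasting popularity-weighted pair sampling with the uniform “fair sampling” the paper calls for. The model names and weights are made up for illustration; this is not LM Arena’s actual algorithm.

```python
import random

# Hypothetical model pool; the weights mimic the skew the study describes,
# where a few proprietary models dominate faceoffs. Not real LM Arena data.
MODELS = ["proprietary-a", "proprietary-b", "open-model-x", "open-model-y"]
WEIGHTS = [0.40, 0.35, 0.15, 0.10]

def weighted_pair():
    """Popularity-weighted sampling: big names appear in most matchups."""
    a, b = random.choices(MODELS, weights=WEIGHTS, k=2)
    while b == a:  # redraw until the two contestants differ
        b = random.choices(MODELS, weights=WEIGHTS, k=1)[0]
    return a, b

def fair_pair():
    """Uniform sampling: every model is equally likely to be evaluated."""
    return tuple(random.sample(MODELS, k=2))

def open_share(sampler, trials=100_000):
    """Fraction of matchup slots that go to open models under a sampler."""
    hits = sum(sum(m.startswith("open") for m in sampler()) for _ in range(trials))
    return hits / (2 * trials)

print(f"open-model share, weighted sampling: {open_share(weighted_pair):.1%}")
print(f"open-model share, uniform sampling:  {open_share(fair_pair):.1%}")
```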

LM Arena recently announced it was forming a corporate entity to continue its work. With money on the table, the operators need to ensure Chatbot Arena continues figuring into the development of popular models. However, it’s unclear whether this is an objectively better way to evaluate chatbots versus academic tests. As people vote on vibes, there’s a real possibility we are pushing models to adopt sycophantic tendencies. This may have helped nudge ChatGPT into suck-up territory in recent weeks, a change that OpenAI hastily reverted after widespread anger.

The end of an AI that shocked the world: OpenAI retires GPT-4

One of the most influential—and by some counts, notorious—AI models yet released will soon fade into history. OpenAI announced on April 10 that GPT-4 will be “fully replaced” by GPT-4o in ChatGPT at the end of April, bringing a public-facing end to the model that accelerated a global AI race when it launched in March 2023.

“Effective April 30, 2025, GPT-4 will be retired from ChatGPT and fully replaced by GPT-4o,” OpenAI wrote in its April 10 changelog for ChatGPT. While ChatGPT users will no longer be able to chat with the older AI model, the company added that “GPT-4 will still be available in the API,” providing some reassurance to developers who might still be using the older model for various tasks.

The retirement marks the end of an era that began on March 14, 2023, when GPT-4 demonstrated capabilities that shocked some observers: reportedly scoring at the 90th percentile on the Uniform Bar Exam, acing AP tests, and solving complex reasoning problems that stumped previous models. Its release created a wave of immense hype—and existential panic—about AI’s ability to imitate human communication and composition.

A screenshot of GPT-4’s introduction to ChatGPT Plus customers from March 14, 2023. Credit: Benj Edwards / Ars Technica

While ChatGPT launched in November 2022 with GPT-3.5 under the hood, GPT-4 took AI language models to a new level of sophistication, and it was a massive undertaking to create. It combined data scraped from the vast corpus of human knowledge into a set of neural networks rumored to weigh in at a combined total of 1.76 trillion parameters, which are the numerical values that hold the data within the model.

Along the way, the model reportedly cost more than $100 million to train, according to comments by OpenAI CEO Sam Altman, and required vast computational resources to develop. Training the model may have involved over 20,000 high-end GPUs working in concert—an expense few organizations besides OpenAI and its primary backer, Microsoft, could afford.

Industry reactions, safety concerns, and regulatory responses

Curiously, GPT-4’s impact began before OpenAI’s official announcement. In February 2023, Microsoft integrated an early version of the GPT-4 model into its Bing search engine, creating a chatbot that sparked controversy when it tried to convince Kevin Roose of The New York Times to leave his wife and when it “lost its mind” in response to an Ars Technica article.

OpenAI rolls back update that made ChatGPT a sycophantic mess

In search of good vibes

OpenAI, along with competitors like Google and Anthropic, is trying to build chatbots that people want to chat with. So, designing the model’s apparent personality to be positive and supportive makes sense—people are less likely to use an AI that comes off as harsh or dismissive. For lack of a better word, it’s increasingly about vibemarking.

When Google revealed Gemini 2.5, the team crowed about how the model topped the LM Arena leaderboard, which lets people choose between two different model outputs in a blinded test. The models people like more end up at the top of the list, suggesting they are more pleasant to use. Of course, people can like outputs for different reasons—maybe one is more technically accurate, or the layout is easier to read. But overall, people like models that make them feel good. The same is true of OpenAI’s internal model tuning work, it would seem.

An example of ChatGPT’s overzealous praise. Credit: /u/Talvy

It’s possible this pursuit of good vibes is pushing models to display more sycophantic behaviors, which is a problem. Anthropic’s Alex Albert has cited this as a “toxic feedback loop.” An AI chatbot telling you that you’re a world-class genius who sees the unseen might not be damaging if you’re just brainstorming. However, the model’s unending praise can lead people who are using AI to plan business ventures or, heaven forbid, enact sweeping tariffs, to be fooled into thinking they’ve stumbled onto something important. In reality, the model has just become so sycophantic that it loves everything.

The constant pursuit of engagement has been a detriment to numerous products in the Internet era, and it seems generative AI is not immune. OpenAI’s GPT-4o update is a testament to that, but hopefully, this can serve as a reminder for the developers of generative AI that good vibes are not all that matters.

Google search’s made-up AI explanations for sayings no one ever said, explained


But what does “meaning” mean?

A partial defense of (some of) AI Overview’s fanciful idiomatic explanations.

Mind… blown. Credit: Getty Images

Last week, the phrase “You can’t lick a badger twice” unexpectedly went viral on social media. The nonsense sentence—which was likely never uttered by a human before last week—had become the poster child for the newly discovered way Google search’s AI Overviews makes up plausible-sounding explanations for made-up idioms (though the concept seems to predate that specific viral post by at least a few days).

Google users quickly discovered that typing any concocted phrase into the search bar with the word “meaning” attached at the end would generate an AI Overview with a purported explanation of its idiomatic meaning. Even the most nonsensical attempts at new proverbs resulted in a confident explanation from Google’s AI Overview, created right there on the spot.

In the wake of the “lick a badger” post, countless users flocked to social media to share Google’s AI interpretations of their own made-up idioms, often expressing horror or disbelief at Google’s take on their nonsense. Those posts often highlight the overconfident way the AI Overview frames its idiomatic explanations and occasional problems with the model confabulating sources that don’t exist.

But after reading through dozens of publicly shared examples of Google’s explanations for fake idioms—and generating a few of my own—I’ve come away somewhat impressed with the model’s almost poetic attempts to glean meaning from gibberish and make sense out of the senseless.

Talk to me like a child

Let’s try a thought experiment: Say a child asked you what the phrase “you can’t lick a badger twice” means. You’d probably say you’ve never heard that particular phrase, ask the child where they heard it, or note that it doesn’t really make sense without more context.

Someone on Threads noticed you can type any random sentence into Google, then add “meaning” afterwards, and you’ll get an AI explanation of a famous idiom or phrase you just made up. Here is mine

— Greg Jenner (@gregjenner.bsky.social) April 23, 2025 at 6:15 AM

But let’s say the child persisted and really wanted an explanation for what the phrase means. So you’d do your best to generate a plausible-sounding answer. You’d search your memory for possible connotations for the word “lick” and/or symbolic meaning for the noble badger to force the idiom into some semblance of sense. You’d reach back to other similar idioms you know to try to fit this new, unfamiliar phrase into a wider pattern (anyone who has played the excellent board game Wise and Otherwise might be familiar with the process).

Google’s AI Overview doesn’t go through exactly that kind of human thought process when faced with a similar question about the same saying. But in its own way, the large language model also does its best to generate a plausible-sounding response to an unreasonable request.

As seen in Greg Jenner’s viral Bluesky post, Google’s AI Overview suggests that “you can’t lick a badger twice” means that “you can’t trick or deceive someone a second time after they’ve been tricked once. It’s a warning that if someone has already been deceived, they are unlikely to fall for the same trick again.” As an attempt to derive meaning from a meaningless phrase—which was, after all, the user’s request—that’s not half bad. Faced with a phrase that has no inherent meaning, the AI Overview still makes a good-faith effort to answer the user’s request and draw some plausible explanation out of troll-worthy nonsense.

Contrary to the computer science truism of “garbage in, garbage out,” Google here is taking in some garbage and spitting out… well, a workable interpretation of garbage, at the very least.

Google’s AI Overview even goes into more detail explaining its thought process. “Lick” here means to “trick or deceive” someone, it says, a bit of a stretch from the dictionary definition of lick as “comprehensively defeat,” but probably close enough for an idiom (and a plausible iteration of the proverb “Fool me once, shame on you; fool me twice, shame on me…”). Google also explains that the badger part of the phrase “likely originates from the historical sport of badger baiting,” a practice I was sure Google was hallucinating until I looked it up and found it was real.

It took me 15 seconds to make up this saying but now I think it kind of works! Credit: Kyle Orland / Google

I found plenty of other examples where Google’s AI derived more meaning than the original requester’s gibberish probably deserved. Google interprets the phrase “dream makes the steam” as an almost poetic statement about imagination powering innovation. The line “you can’t humble a tortoise” similarly gets interpreted as a statement about the difficulty of intimidating “someone with a strong, steady, unwavering character (like a tortoise).”

Google also often finds connections that the original nonsense idiom creators likely didn’t intend. For instance, Google could link the made-up idiom “A deft cat always rings the bell” to the real concept of belling the cat. And in attempting to interpret the nonsense phrase “two cats are better than grapes,” the AI Overview correctly notes that grapes can be potentially toxic to cats.

Brimming with confidence

Even when Google’s AI Overview works hard to make the best of a bad prompt, I can still understand why the responses rub a lot of users the wrong way. A lot of the problem, I think, has to do with the LLM’s unearned confident tone, which pretends that any made-up idiom is a common saying with a well-established and authoritative meaning.

Rather than framing its responses as a “best guess” at an unknown phrase (as a human might when responding to a child in the example above), Google generally provides the user with a single, authoritative explanation for what an idiom means, full stop. Even with the occasional use of couching words such as “likely,” “probably,” or “suggests,” the AI Overview comes off as unnervingly sure of the accepted meaning for some nonsense the user made up five seconds ago.

If Google’s AI Overviews always showed this much self-doubt, we’d be getting somewhere. Credit: Google / Kyle Orland

I was able to find one exception to this in my testing. When I asked Google the meaning of “when you see a tortoise, spin in a circle,” Google reasonably told me that the phrase “doesn’t have a widely recognized, specific meaning” and that it’s “not a standard expression with a clear, universal meaning.” With that context, Google then offered suggestions for what the phrase “seems to” mean and mentioned Japanese nursery rhymes that it “may be connected” to, before concluding that it is “open to interpretation.”

Those qualifiers go a long way toward properly contextualizing the guesswork Google’s AI Overview is actually conducting here. And if Google provided that kind of context in every AI summary explanation of a made-up phrase, I don’t think users would be quite as upset.

Unfortunately, LLMs like this have trouble knowing what they don’t know, meaning moments of self-doubt like the tortoise interpretation here tend to be few and far between. It’s not like Google’s language model has some master list of idioms in its neural network that it can consult to determine what is and isn’t a “standard expression” that it can be confident about. Usually, it’s just projecting a self-assured tone while struggling to force the user’s gibberish into meaning.

Zeus disguised himself as what?

The worst examples of Google’s idiomatic AI guesswork are ones where the LLM slips past plausible interpretations and into sheer hallucination of completely fictional sources. The phrase “a dog never dances before sunset,” for instance, did not appear in the film Before Sunrise, no matter what Google says. Similarly, “There are always two suns on Tuesday” does not appear in The Hitchhiker’s Guide to the Galaxy film despite Google’s insistence.

Literally in the one I tried.

— Sarah Vaughan (@madamefelicie.bsky.social) April 23, 2025 at 7:52 AM

There’s also no indication that the made-up phrase “Welsh men jump the rabbit” originated on the Welsh island of Portland, or that “peanut butter platform heels” refers to a scientific experiment creating diamonds from the sticky snack. We’re also unaware of any Greek myth where Zeus disguises himself as a golden shower to explain the phrase “beware what glitters in a golden shower.” (Update: As many commenters have pointed out, this last one is actually a reference to the Greek myth of Danaë and the shower of gold, showing Google’s AI knows more about this potential symbolism than I do.)

The fact that Google’s AI Overview presents these completely made-up sources with the same self-assurance as its abstract interpretations is a big part of the problem here. It’s also a persistent problem for LLMs that tend to make up news sources and cite fake legal cases regularly. As usual, one should be very wary when trusting anything an LLM presents as an objective fact.

When it comes to the more artistic and symbolic interpretation of nonsense phrases, though, I think Google’s AI Overviews have gotten something of a bad rap recently. Presented with the difficult task of explaining nigh-unexplainable phrases, the model does its best, generating interpretations that can border on the profound at times. While the authoritative tone of those responses can sometimes be annoying or actively misleading, it’s at least amusing to see the model’s best attempts to deal with our meaningless phrases.

Kyle Orland has been the Senior Gaming Editor at Ars Technica since 2012, writing primarily about the business, tech, and culture behind video games. He has journalism and computer science degrees from the University of Maryland. He once wrote a whole book about Minesweeper.

AI-generated code could be a disaster for the software supply chain. Here’s why.

AI-generated computer code is rife with references to non-existent third-party libraries, creating a golden opportunity for supply-chain attacks that poison legitimate programs with malicious packages that can steal data, plant backdoors, and carry out other nefarious actions, newly published research shows.

The study, which used 16 of the most widely used large language models to generate 576,000 code samples, found that 440,000 of the package dependencies they contained were “hallucinated,” meaning they were non-existent. Open source models hallucinated the most, with 21 percent of the dependencies linking to non-existent libraries. A dependency is an essential code component that a separate piece of code requires to work properly. Dependencies save developers the hassle of rewriting code and are an essential part of the modern software supply chain.

Package hallucination flashbacks

These non-existent dependencies represent a threat to the software supply chain by exacerbating so-called dependency confusion attacks. These attacks work by causing a software package to access the wrong component dependency, for instance by publishing a malicious package and giving it the same name as the legitimate one but with a later version stamp. Software that depends on the package will, in some cases, choose the malicious version rather than the legitimate one because the former appears to be more recent.
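
As a toy illustration of why the “later version stamp” trick works, the following sketch (in Python, using the packaging library, with made-up package names and index URLs) shows a naive resolver that simply prefers the highest version it can find across the indexes it knows about. Real package managers are more sophisticated, but this is the basic failure mode that dependency confusion exploits.

```python
# pip install packaging
from packaging.version import Version

# Hypothetical candidates for the same package name, seen on two indexes.
# 1.4.2 is the company's private "internal-utils" package; 99.0.0 is an
# attacker's upload to the public index under the same name.
CANDIDATES = {
    "https://pypi.internal.example/simple/": "1.4.2",
    "https://pypi.org/simple/": "99.0.0",
}

def naive_resolve(candidates):
    """Pick whichever index offers the highest version number -- the
    behavior that dependency-confusion attacks exploit."""
    index, version = max(candidates.items(), key=lambda kv: Version(kv[1]))
    return index, version

index, version = naive_resolve(CANDIDATES)
print(f"naive resolver would install {version} from {index}")
# -> the attacker's 99.0.0 from the public index wins
```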

Also known as package confusion, this form of attack was first demonstrated in 2021 in a proof-of-concept exploit that executed counterfeit code on networks belonging to some of the biggest companies on the planet, Apple, Microsoft, and Tesla included. It’s one type of technique used in software supply-chain attacks, which aim to poison software at its very source, in an attempt to infect all users downstream.

“Once the attacker publishes a package under the hallucinated name, containing some malicious code, they rely on the model suggesting that name to unsuspecting users,” Joseph Spracklen, a University of Texas at San Antonio Ph.D. student and lead researcher, told Ars via email. “If a user trusts the LLM’s output and installs the package without carefully verifying it, the attacker’s payload, hidden in the malicious package, would be executed on the user’s system.”
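
One cheap sanity check before installing an LLM-suggested dependency is to ask the registry whether the name exists at all. The sketch below (a hypothetical helper, assuming Python and PyPI’s public JSON API) does exactly that, though existence alone proves nothing: as the researchers warn, an attacker may have already registered the hallucinated name.

```python
import json
import urllib.error
import urllib.request

def pypi_package_exists(name: str) -> bool:
    """Return True if PyPI knows about `name`, via its JSON API."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            json.load(resp)  # parse to confirm a well-formed response
        return True
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False  # no such package -- a likely hallucination
        raise

# Example: check an LLM-suggested dependency before running `pip install`.
for pkg in ("requests", "definitely-not-a-real-package-xyz"):
    print(pkg, "exists on PyPI:", pypi_package_exists(pkg))
```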

ChatGPT goes shopping with new product-browsing feature

On Thursday, OpenAI announced the addition of shopping features to ChatGPT Search. The new feature allows users to search for products and purchase them through merchant websites after being redirected from the ChatGPT interface. Product placement is not sponsored, and the update affects all users, regardless of whether they’ve signed in to an account.

Adam Fry, ChatGPT search product lead at OpenAI, showed Ars Technica’s sister site Wired how the new shopping system works during a demonstration. Users researching products like espresso machines or office chairs receive recommendations based on their stated preferences, stored memories, and product reviews from around the web.

According to Wired, the shopping experience in ChatGPT resembles Google Shopping. When users click on a product image, the interface displays multiple retailers like Amazon and Walmart on the right side of the screen, with buttons to complete purchases. OpenAI is currently experimenting with categories that include electronics, fashion, home goods, and beauty products.

Product reviews shown in ChatGPT come from various online sources, including publishers and user forums like Reddit. Users can instruct ChatGPT to prioritize which review sources to use when creating product recommendations.

An example of the ChatGPT shopping experience provided by OpenAI. Credit: OpenAI

Unlike Google’s algorithm-based approach to product recommendations, ChatGPT reportedly attempts to understand product reviews and user preferences in a more conversational manner. If someone mentions they prefer black clothing from specific retailers in a chat, the system incorporates those preferences in future shopping recommendations.

In the age of AI, we must protect human creativity as a natural resource


Op-ed: As AI outputs flood the Internet, diverse human perspectives are our most valuable resource.

Ironically, our present AI age has shone a bright spotlight on the immense value of human creativity as breakthroughs in technology threaten to undermine it. As tech giants rush to build newer AI models, their web crawlers vacuum up creative content, and those same models spew floods of synthetic media, risking drowning out the human creative spark in an ocean of pablum.

Given this trajectory, AI-generated content may soon exceed the entire corpus of historical human creative works, making the preservation of the human creative ecosystem not just an ethical concern but an urgent imperative. The alternative is nothing less than a gradual homogenization of our cultural landscape, where machine learning flattens the richness of human expression into a mediocre statistical average.

A limited resource

By ingesting billions of creations, chatbots learn to talk, and image synthesizers learn to draw. Along the way, the AI companies behind them treat our shared culture like an inexhaustible resource to be strip-mined, with little thought for the consequences.

But human creativity isn’t the product of an industrial process; it’s inherently throttled precisely because we are finite biological beings who draw inspiration from real lived experiences while balancing creativity with the necessities of life—sleep, emotional recovery, and limited lifespans. Creativity comes from making connections, and it takes energy, time, and insight for those connections to be meaningful. Until recently, a human brain was a prerequisite for making those kinds of connections, and there’s a reason why that is valuable.

Every human brain isn’t just a store of data—it’s a knowledge engine that thinks in a unique way, creating novel combinations of ideas. Instead of having one “connection machine” (an AI model) duplicated a million times, we have more than 8 billion neural networks, each with a unique perspective. Relying on the diversity of thought derived from human cognition helps us escape the monolithic thinking that may emerge if everyone were to draw from the same AI-generated sources.

Today, the AI industry’s business models unintentionally echo the ways in which early industrialists approached forests and fisheries—as free inputs to exploit without considering ecological limits.

Just as pollution from early factories unexpectedly damaged the environment, AI systems risk polluting the digital environment by flooding the Internet with synthetic content. Like a forest that needs careful management to thrive or a fishery vulnerable to collapse from overexploitation, the creative ecosystem can be degraded even if the potential for imagination remains.

Depleting our creative diversity may become one of the hidden costs of AI, but that diversity is worth preserving. If we let AI systems deplete or pollute the human outputs they depend on, what happens to AI models—and ultimately to human society—over the long term?

AI’s creative debt

Every AI chatbot or image generator exists only because of human works, and many traditional artists argue strongly against current AI training approaches, labeling them plagiarism. Tech companies tend to disagree, although their positions vary. For example, in 2023, imaging giant Adobe took an unusual step by training its Firefly AI models solely on licensed stock photos and public domain works, demonstrating that alternative approaches are possible.

Adobe’s licensing model offers a contrast to companies like OpenAI, which rely heavily on scraping vast amounts of Internet content without always distinguishing between licensed and unlicensed works.

Photo of a mining dumptruck and water tank in an open pit copper mine.

OpenAI has argued that this type of scraping constitutes “fair use” and effectively claims that competitive AI models at current performance levels cannot be developed without relying on unlicensed training data, despite Adobe’s alternative approach.

The “fair use” argument often hinges on the legal concept of “transformative use,” the idea that using works for a fundamentally different purpose from creative expression—such as identifying patterns for AI—does not violate copyright. Generative AI proponents often argue that their approach mirrors how human artists learn from the world around them.

Meanwhile, artists are expressing growing concern about losing their livelihoods as corporations turn to cheap, instantaneously generated AI content. They also call for clear boundaries and consent-driven models rather than allowing developers to extract value from their creations without acknowledgment or remuneration.

Copyright as crop rotation

This tension between artists and AI reveals a deeper ecological perspective on creativity itself. Copyright’s time-limited nature was designed as a form of resource management, like crop rotation or regulated fishing seasons that allow for regeneration. Copyright expiration isn’t a bug; its designers hoped it would ensure a steady replenishment of the public domain, feeding the ecosystem from which future creativity springs.

On the other hand, purely AI-generated outputs cannot be copyrighted in the US, potentially brewing an unprecedented explosion in public domain content, although it’s content that contains smoothed-over imitations of human perspectives.

Treating human-generated content solely as raw material for AI training disrupts this ecological balance between “artist as consumer of creative ideas” and “artist as producer.” Repeated legislative extensions of copyright terms have already significantly delayed the replenishment cycle, keeping works out of the public domain for much longer than originally envisioned. Now, AI’s wholesale extraction approach further threatens this delicate balance.

The resource under strain

Our creative ecosystem is already showing measurable strain from AI’s impact, from tangible present-day infrastructure burdens to concerning future possibilities.

Aggressive AI crawlers already effectively function as denial-of-service attacks on certain sites, with Cloudflare documenting GPTBot’s immediate impact on traffic patterns. Wikimedia’s experience provides clear evidence of current costs: AI crawlers caused a documented 50 percent bandwidth surge, forcing the nonprofit to divert limited resources to defensive measures rather than to its core mission of knowledge sharing. As Wikimedia says, “Our content is free, our infrastructure is not.” Many of these crawlers demonstrably ignore established technical boundaries like robots.txt files.
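
For readers unfamiliar with the mechanism, robots.txt is a plain-text list of purely advisory crawl rules; a well-behaved crawler checks it before fetching anything, as in the minimal Python sketch below. The example site and rules are hypothetical (the GPTBot user agent string is the one OpenAI documents for its crawler), and nothing in the protocol forces compliance, which is why ignoring it is so easy.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt for example.org that disallows OpenAI's GPTBot
# crawler site-wide while leaving other user agents unrestricted.
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

url = "https://example.org/articles/some-page"
for agent in ("GPTBot", "SomeOtherBot"):
    print(f"{agent} may fetch {url}: {parser.can_fetch(agent, url)}")

# A polite crawler stops when can_fetch() returns False; an aggressive one
# simply never asks. robots.txt is a convention, not an access control.
```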

Beyond infrastructure strain, our information environment also shows signs of degradation. Google has publicly acknowledged rising volumes of “spammy, low-quality,” often auto-generated content appearing in search results. A Wired investigation found concrete examples of AI-generated plagiarism sometimes outranking original reporting in search results. This kind of digital pollution led Ross Anderson of Cambridge University to compare it to filling oceans with plastic—it’s a contamination of our shared information spaces.

Looking to the future, more risks may emerge. Ted Chiang’s comparison of LLMs to lossy JPEGs offers a framework for understanding potential problems, as each AI generation summarizes web information into an increasingly “blurry” facsimile of human knowledge. The logical extension of this process—what some researchers term “model collapse”—presents a risk of degradation in our collective knowledge ecosystem if models are trained indiscriminately on their own outputs. (However, this differs from carefully designed synthetic data that can actually improve model efficiency.)

This downward spiral of AI pollution may soon resemble a classic “tragedy of the commons,” in which organizations act from self-interest at the expense of shared resources. If AI developers continue extracting data without limits or meaningful contributions, the shared resource of human creativity could eventually degrade for everyone.

Protecting the human spark

While AI models that simulate creativity in writing, coding, images, audio, or video can achieve remarkable imitations of human works, this sophisticated mimicry currently lacks the full depth of the human experience.

For example, AI models lack a body that endures the pain and travails of human life. They don’t grow over the course of a human lifespan in real time. When an AI-generated output happens to connect with us emotionally, it often does so by imitating patterns learned from a human artist who has actually lived that pain or joy.

A photo of a young woman painter in her art studio.

Even if future AI systems develop more sophisticated simulations of emotional states or embodied experiences, they would still fundamentally differ from human creativity, which emerges organically from lived biological experience, cultural context, and social interaction.

That’s because the world constantly changes. New types of human experience emerge. If an ethically trained AI model is to remain useful, researchers must train it on recent human experiences, such as viral trends, evolving slang, and cultural shifts.

Current AI solutions, like retrieval-augmented generation (RAG), address this challenge somewhat by retrieving up-to-date, external information to supplement their static training data. Yet even RAG methods depend heavily on validated, high-quality human-generated content—the very kind of data at risk if our digital environment becomes overwhelmed with low-quality AI-produced output.
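
For readers who haven’t run into RAG before, here is a minimal sketch of the pattern in Python. Real systems use embedding models and vector databases; this toy version uses simple keyword overlap over a tiny hand-written corpus, and every document, function, and query here is purely illustrative.

```python
import re

# Tiny stand-in corpus; in a real RAG system these would be chunks of
# validated, up-to-date human-written documents.
CORPUS = [
    "Hypothetical glossary entry: vibemarking means judging models by vibes.",
    "Hypothetical trend note: a made-up badger idiom went viral in April 2025.",
    "Hypothetical style guide: retrieved snippets should be short and cited.",
]

def tokens(text: str) -> set[str]:
    """Lowercased word set, ignoring punctuation."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by keyword overlap with the query and keep the top k."""
    return sorted(CORPUS, key=lambda doc: len(tokens(query) & tokens(doc)),
                  reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Prepend retrieved context so the model answers from fresh information."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query))
    return f"Use the context below to answer.\nContext:\n{context}\n\nQuestion: {query}"

# The assembled prompt would then be sent to whatever LLM the system uses.
print(build_prompt("What does vibemarking mean?"))
```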

This need for high-quality, human-generated data is a major reason why companies like OpenAI have pursued media deals (including a deal signed with Ars Technica parent Condé Nast last August). Yet paradoxically, the same models fed on valuable human data often produce the low-quality spam and slop that floods public areas of the Internet, degrading the very ecosystem they rely on.

AI as creative support

When used carelessly or excessively, generative AI is a threat to the creative ecosystem, but we can’t wholly discount the tech as a tool in a human creative’s arsenal. The history of art is full of technological changes (new pigments, brushes, typewriters, word processors) that transform the nature of artistic production while augmenting human creativity.

Bear with me because there’s a great deal of nuance here that is easy to miss among today’s more impassioned reactions to people using AI as a blunt instrument for churning out mediocrity.

While many artists rightfully worry about AI’s extractive tendencies, research published in Harvard Business Review indicates that AI tools can potentially amplify rather than merely extract creative capacity, suggesting that a symbiotic relationship is possible under the right conditions.

Inherent in this argument is that the responsible use of AI is reflected in the skill of the user. You can use a paintbrush to paint a wall or paint the Mona Lisa. Similarly, generative AI can mindlessly fill a canvas with slop, or a human can utilize it to express their own ideas.

Machine learning tools (such as those in Adobe Photoshop) already help human creatives prototype concepts faster, iterate on variations they wouldn’t have considered, or handle some repetitive production tasks like object removal or audio transcription, freeing humans to focus on conceptual direction and emotional resonance.

These potential positives, however, don’t negate the need for responsible stewardship and respecting human creativity as a precious resource.

Cultivating the future

So what might a sustainable ecosystem for human creativity actually involve?

Legal and economic approaches will likely be key. Governments could legislate that AI training must be opt-in, or at the very least, provide a collective opt-out registry (as the EU’s “AI Act” does).

Other potential mechanisms include robust licensing or royalty systems, such as creating a royalty clearinghouse (like the music industry’s BMI or ASCAP) for efficient licensing and fair compensation. Those fees could help compensate human creatives and encourage them to keep creating well into the future.

Deeper shifts may involve cultural values and governance. Inspired by models like Japan’s “Living National Treasures,” where the government funds artisans to preserve vital skills and support their work, could we establish programs that similarly support human creators while also designating certain works or practices as “creative reserves,” funding the further creation of certain creative works even if the economic market for them dries up?

Or a more radical shift might involve an “AI commons”—legally declaring that any AI model trained on publicly scraped data should be owned collectively as a shared public domain, ensuring that its benefits flow back to society and don’t just enrich corporations.

Photo of a family harvesting organic crops on a farm.

Meanwhile, Internet platforms have already been experimenting with technical defenses against industrial-scale AI demands. Examples include proof-of-work challenges, slowdown “tarpits” (e.g., Nepenthes), shared crawler blocklists (“ai.robots.txt”), commercial tools (Cloudflare’s AI Labyrinth), and Wikimedia’s “WE5: Responsible Use of Infrastructure” initiative.

These solutions aren’t perfect, and implementing any of them would require overcoming significant practical hurdles. Strict regulations might slow beneficial AI development; opt-out systems burden creators, while opt-in models can be complex to track. Meanwhile, tech defenses often invite arms races. Finding a sustainable, equitable balance remains the core challenge. The issue won’t be solved in a day.

Invest in people

While navigating these complex systemic challenges will take time and collective effort, there is a surprisingly direct strategy that organizations can adopt now: investing in people. Don’t sacrifice human connection and insight to save money with mediocre AI outputs.

Organizations that cultivate unique human perspectives and integrate them with thoughtful AI augmentation will likely outperform those that pursue cost-cutting through wholesale creative automation. Investing in people acknowledges that while AI can generate content at scale, the distinctiveness of human insight, experience, and connection remains priceless.

Benj Edwards is Ars Technica’s Senior AI Reporter and founder of the site’s dedicated AI beat in 2022. He’s also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.

Trump orders Ed Dept to make AI a national priority while plotting agency’s death

Trump pushes for industry involvement

It seems clear that Trump’s executive order was a reaction to China’s announcement about AI education reforms last week, as Reuters reported. Elsewhere, Singapore and Estonia have laid out their AI education initiatives, Forbes reported, indicating that AI education is increasingly considered critical to any nation’s success.

Trump’s vision for the US requires training teachers and students about what AI is and what it can do. He offers no new appropriations to fund the initiative; instead, he directs a new AI Education Task Force to find existing funding to cover both research into how to implement AI in education and the resources needed to deliver on the executive order’s promises.

Although AI advocates applauded Trump’s initiative, the executive order’s vagueness makes it uncertain how AI education tools will be assessed as Trump pushes for AI to be integrated into “all subject areas.” It’s also possible that using AI in certain educational contexts could disrupt learning by confabulating misinformation, a concern that informed the Biden administration’s more cautious approach to AI education initiatives.

Trump also seems to push for much more private sector involvement than Biden did.

The order recommended that education institutions collaborate with industry partners and other organizations to “collaboratively develop online resources focused on teaching K–12 students foundational AI literacy and critical thinking skills.” These partnerships will be announced on a “rolling basis,” the order said. It also pushed students and teachers to partner with industry for the Presidential AI Challenge to foster collaboration.

For Trump’s AI education plan to work, he will seemingly need the Department of Education to stay intact. However, so far, Trump has not acknowledged this tension. In March, he ordered the department to dissolve, with power returned to states to ensure “the effective and uninterrupted delivery of services, programs, and benefits on which Americans rely.”

Were that to happen, at least 27 states and Puerto Rico—which EdWeek reported have already laid out their own AI education guidelines—might push back, using their power over federal education funding to pursue their own AI education priorities and potentially complicating Trump’s plan.

Perplexity will come to Moto phones after exec testified Google limited access

Shevelenko was also asked about Chrome, which the DOJ would like to force Google to sell. Like an OpenAI executive said on Monday, Shevelenko confirmed Perplexity would be interested in buying the browser from Google.

Motorola has all the AI

There were some vague allusions during the trial that Perplexity would come to Motorola phones this year, but we didn’t know just how soon that was. With the announcement of its 2025 Razr devices, Moto has confirmed a much more expansive set of AI features. Parts of the Motorola AI experience are powered by Gemini, Copilot, Meta, and yes, Perplexity.

While Gemini gets top billing as the default assistant app, other firms have wormed their way into different parts of the software. Perplexity’s app will be preloaded for anyone who buys the new Razrs, and owners will also get three free months of Perplexity Pro. This is the first time Perplexity has had a smartphone distribution deal, but it won’t be shown prominently on the phone. When you start a Motorola device, it will still look like a Google playground.

While it’s not the default assistant, Perplexity is integrated into the Moto AI platform. The new Razrs will proactively suggest you perform an AI search when accessing certain features like the calendar or browsing the web under the banner “Explore with Perplexity.” The Perplexity app has also been optimized to work with the external screen on Motorola’s foldables.

Moto AI also has elements powered by other AI systems. For example, Microsoft Copilot will appear in Moto AI with an “Ask Copilot” option. And Meta’s Llama model powers a Moto AI feature called Catch Me Up, which summarizes notifications from select apps.

It’s unclear why Motorola leaned on four different AI providers for a single phone. It probably helps that all of these companies are desperate to entice users and bulk up their market share. Perplexity confirmed that no money changed hands in this deal—it’s on Moto phones to acquire more users. That might be tough with Gemini getting priority placement, though.

AI secretly helped write California bar exam, sparking uproar

On Monday, the State Bar of California revealed that it used AI to develop a portion of multiple-choice questions on its February 2025 bar exam, causing outrage among law school faculty and test takers. The admission comes after weeks of complaints about technical problems and irregularities during the exam administration, reports the Los Angeles Times.

The State Bar disclosed that its psychometrician (a person or organization skilled in administering psychological tests), ACS Ventures, created 23 of the 171 scored multiple-choice questions with AI assistance. Another 48 questions came from a first-year law student exam, while Kaplan Exam Services developed the remaining 100 questions.

The State Bar defended its practices, telling the LA Times that all questions underwent review by content validation panels and subject matter experts before the exam. “The ACS questions were developed with the assistance of AI and subsequently reviewed by content validation panels and a subject matter expert in advance of the exam,” wrote State Bar Executive Director Leah Wilson in a press release.

According to the LA Times, the revelation has drawn strong criticism from several legal education experts. “The debacle that was the February 2025 bar exam is worse than we imagined,” said Mary Basick, assistant dean of academic skills at the University of California, Irvine School of Law. “I’m almost speechless. Having the questions drafted by non-lawyers using artificial intelligence is just unbelievable.”

Katie Moran, an associate professor at the University of San Francisco School of Law who specializes in bar exam preparation, called it “a staggering admission.” She pointed out that the same company that drafted AI-generated questions also evaluated and approved them for use on the exam.

State bar defends AI-assisted questions amid criticism

Alex Chan, chair of the State Bar’s Committee of Bar Examiners, noted that the California Supreme Court had urged the State Bar to explore “new technologies, such as artificial intelligence” to improve testing reliability and cost-effectiveness.

Google reveals sky-high Gemini usage numbers in antitrust case

Despite the uptick in Gemini usage, Google is still far from catching OpenAI. Naturally, Google has been keeping a close eye on ChatGPT traffic. OpenAI has also seen traffic increase, putting ChatGPT around 600 million monthly active users, according to Google’s analysis. Early this year, reports pegged ChatGPT usage at around 400 million users per month.

There are many ways to measure web traffic, and not all of them tell you what you might think. For example, OpenAI has recently claimed weekly traffic as high as 400 million, but companies can choose the seven-day period in a given month they report as weekly active users. A monthly metric is more straightforward, and we have some degree of trust that Google isn’t using fake or unreliable numbers in a case where the company’s past conduct has already harmed its legal position.

While all AI firms strive to lock in as many users as possible, this is not the total win it would be for a retail site or social media platform—each person using Gemini or ChatGPT costs the company money because generative AI is so computationally expensive. Google doesn’t talk about how much it earns (more likely loses) from Gemini subscriptions, but OpenAI has noted that it loses money even on its $200 monthly plan. So while having a broad user base is essential to make these products viable in the long term, it just means higher costs unless the cost of running massive AI models comes down.
