Artificial Intelligence

Reddit blocks Internet Archive to end sneaky AI scraping

“Until they’re able to defend their site and comply with platform policies (e.g., respecting user privacy, re: deleting removed content) we’re limiting some of their access to Reddit data to protect redditors,” Rathschmidt said.

A review of social media comments suggests that in the past, some Redditors have used the Wayback Machine to research deleted comments or threads. Those commenters noted that myriad other tools exist for surfacing deleted posts or researching a user’s activity, with some suggesting that the Wayback Machine was maybe not the easiest platform to navigate for that purpose.

Redditors have also turned to resources like IA during times when Reddit’s platform changes trigger content removals. Most recently, in 2023, when changes to Reddit’s public API threatened to kill beloved subreddits, archives stepped in to preserve content before it was lost.

IA has not signaled whether it’s looking into fixes to get Reddit’s restrictions lifted and did not respond to Ars’ request to comment on how this change might impact the archive’s utility as an open web resource, given Reddit’s popularity.

The director of the Wayback Machine, Mark Graham, told Ars that IA has “a longstanding relationship with Reddit” and continues to have “ongoing discussions about this matter.”

It seems likely that Reddit is financially motivated to restrict AI firms from taking advantage of Wayback Machine archives, perhaps hoping to spur more lucrative licensing deals like the ones Reddit struck with OpenAI and Google. The terms of the OpenAI deal were kept quiet, but the Google deal was reportedly worth $60 million. Over the next three years, Reddit expects to make more than $200 million off such licensing deals.

Disclosure: Advance Publications, which owns Ars Technica parent Condé Nast, is the largest shareholder in Reddit.

AI industry horrified to face largest copyright class action ever certified

According to the groups, allowing copyright class actions in AI training cases will result in a future where copyright questions remain unresolved and the risk of “emboldened” claimants forcing enormous settlements will chill investments in AI.

“Such potential liability in this case exerts incredibly coercive settlement pressure for Anthropic,” industry groups argued, concluding that “as generative AI begins to shape the trajectory of the global economy, the technology industry cannot withstand such devastating litigation. The United States currently may be the global leader in AI development, but that could change if litigation stymies investment by imposing excessive damages on AI companies.”

Some authors won’t benefit from class actions

Industry groups joined Anthropic in arguing that, generally, copyright suits are considered a bad fit for class actions because each individual author must prove ownership of their works. And the groups weren’t alone.

Also backing Anthropic’s appeal, advocates representing authors—including Authors Alliance, the Electronic Frontier Foundation, American Library Association, Association of Research Libraries, and Public Knowledge—pointed out that the Google Books case showed that proving ownership is anything but straightforward.

In the Anthropic case, advocates for authors criticized Alsup for basically judging all 7 million books in the lawsuit by their covers. The judge allegedly made “almost no meaningful inquiry into who the actual members are likely to be,” as well as “no analysis of what types of books are included in the class, who authored them, what kinds of licenses are likely to apply to those works, what the rightsholders’ interests might be, or whether they are likely to support the class representatives’ positions.”

Ignoring “decades of research, multiple bills in Congress, and numerous studies from the US Copyright Office attempting to address the challenges of determining rights across a vast number of books,” the district court seemed to expect that authors and publishers would easily be able to “work out the best way to recover” damages.

ChatGPT users hate GPT-5’s “overworked secretary” energy, miss their GPT-4o buddy

Others are irked by how quickly they run up against usage limits on the free tier, which pushes them toward the Plus ($20 per month) and Pro ($200 per month) subscriptions. But running generative AI is hugely expensive, and OpenAI is hemorrhaging cash. It wouldn’t be surprising if the wide rollout of GPT-5 is aimed at increasing revenue. At the same time, OpenAI can point to AI evaluations showing that GPT-5 is more intelligent than its predecessor.

RIP your AI buddy

OpenAI built ChatGPT to be a tool people want to use. It’s a fine line to walk—OpenAI has occasionally made its flagship AI too friendly and complimentary. Several months ago, the company had to roll back a change that made the bot into a sycophantic mess that would suck up to the user at every opportunity. That was a bridge too far, certainly, but many of the company’s users liked the generally friendly tone of the chatbot. They tuned the AI with custom prompts and built it into a personal companion. They’ve lost that with GPT-5.

Naturally, ChatGPT users have turned to AI to express their frustration. Credit: /u/Responsible_Cow2236

There are reasons to be wary of this kind of parasocial attachment to artificial intelligence. As companies have tuned these systems to increase engagement, they prioritize outputs that make people feel good. This results in interactions that can reinforce delusions, eventually leading to serious mental health episodes and dangerous medical beliefs. It can be hard to understand for those of us who don’t spend our days having casual conversations with ChatGPT, but the Internet is teeming with folks who build their emotional lives around AI.

Is GPT-5 safer? Early impressions from frequent chatters decry the bot’s more corporate, less effusively creative tone. In short, a significant number of people don’t like the outputs as much. GPT-5 could be a more able analyst and worker, but it isn’t the digital companion people have come to expect, and in some cases, love. That might be good in the long term, both for users’ mental health and OpenAI’s bottom line, but there’s going to be an adjustment period for fans of GPT-4o.

Chatters who are unhappy with the more straightforward tone of GPT-5 can always go elsewhere. Elon Musk’s xAI has shown it is happy to push the envelope with Grok, featuring Taylor Swift nudes and AI waifus. Of course, Ars does not recommend you do that.

OpenAI releases its first open source models since 2019

OpenAI is releasing new generative AI models today, and no, GPT-5 is not one of them. Depending on how you feel about generative AI, these new models may be even more interesting, though. The company is rolling out gpt-oss-120b and gpt-oss-20b, its first open weight models since the release of GPT-2 in 2019. You can download and run these models on your own hardware, with support for simulated reasoning, tool use, and deep customization.

When you access the company’s proprietary models in the cloud, they’re running on powerful server infrastructure that cannot be easily replicated, even in enterprise settings. The new OpenAI models come in two variants (120b and 20b) designed to run on less powerful hardware. Both are transformers with configurable chain of thought (CoT), supporting low, medium, and high settings. The lower settings are faster and use fewer compute resources, but the highest setting produces better outputs. You can set the CoT level with a single line in the system prompt.
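
Setting the reasoning level really is that simple. Here’s a minimal sketch of what it looks like, assuming you’re serving gpt-oss-20b locally behind an OpenAI-compatible endpoint (which tools like vLLM and Ollama provide); the localhost URL, port, and model name are illustrative placeholders, not details confirmed by OpenAI.

```python
# Minimal sketch: selecting gpt-oss's chain-of-thought level via the system
# prompt. Assumes a local OpenAI-compatible server (e.g., vLLM or Ollama);
# the URL, port, and model name below are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")

response = client.chat.completions.create(
    model="gpt-oss-20b",
    messages=[
        # A single system-prompt line sets the CoT level: low, medium, or high.
        {"role": "system", "content": "Reasoning: high"},
        {"role": "user", "content": "How many prime numbers are less than 100?"},
    ],
)
print(response.choices[0].message.content)
```

Dropping the level to “low” trades output quality for speed and compute, which suits quick interactive queries; “high” is better suited to harder math and coding problems.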

The smaller gpt-oss-20b has a total of 21 billion parameters, using a mixture-of-experts (MoE) architecture to cut that to 3.6 billion active parameters per token. As for gpt-oss-120b, its 117 billion parameters come down to 5.1 billion per token with MoE. The company says the smaller model can run on a consumer-level machine with 16GB or more of memory. To run gpt-oss-120b, you need 80GB of memory, which is more than you’re likely to find in the average consumer machine. It should fit on a single AI accelerator GPU like the Nvidia H100, though. Both models have a context window of 128,000 tokens.
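
Those hardware figures line up with simple back-of-envelope arithmetic. As a rough sketch, assuming the roughly 4.25-bit MXFP4 quantization the released weights ship in (an assumption on our part; real usage also needs room for the KV cache and runtime overhead):

```python
# Rough weight-memory estimate for the gpt-oss variants, assuming ~4.25 bits
# per parameter (MXFP4). Actual usage adds KV cache and runtime overhead,
# so treat these numbers as lower bounds.
def weight_gb(params_billion: float, bits_per_param: float = 4.25) -> float:
    return params_billion * 1e9 * bits_per_param / 8 / 1e9  # decimal GB

for name, params in [("gpt-oss-20b", 21), ("gpt-oss-120b", 117)]:
    print(f"{name}: ~{weight_gb(params):.0f} GB of weights")

# gpt-oss-20b:  ~11 GB -> leaves headroom on a 16GB consumer machine
# gpt-oss-120b: ~62 GB -> fits on a single 80GB accelerator like the H100
```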

The team says users of gpt-oss can expect robust performance similar to OpenAI’s leading cloud-based models. The larger model benchmarks between the proprietary o3 and o4-mini models in most tests, with the smaller version running just a little behind; the gap is narrowest in math and coding tasks. In the knowledge-based Humanity’s Last Exam, o3 is far out in front with 24.9 percent (with tools), while gpt-oss-120b only manages 19 percent. For comparison, Google’s leading Gemini Deep Think hits 34.8 percent in that test.

DeepMind reveals Genie 3 “world model” that creates real-time interactive simulations

While no one has figured out how to make money from generative artificial intelligence, that hasn’t stopped Google DeepMind from pushing the boundaries of what’s possible with a big pile of inference. The capabilities (and costs) of these models have been on an impressive upward trajectory, a trend exemplified by Genie 3, which Google revealed a mere seven months after showing off the Genie 2 “foundational world model,” itself a significant improvement over its predecessor.

With Genie 3, all it takes is a prompt or image to create an interactive world. Since the environment is continuously generated, it can be changed on the fly. You can add or change objects, alter weather conditions, or insert new characters—DeepMind calls these “promptable events.” The ability to create alterable 3D environments could make games more dynamic for players and offer developers new ways to prove out concepts and level designs. However, many in the gaming industry have expressed doubt that such tools would help.

Genie 3: building better worlds.

It’s tempting to think of Genie 3 simply as a way to create games, but DeepMind sees this as a research tool, too. Games play a significant role in the development of artificial intelligence because they provide challenging, interactive environments with measurable progress. That’s why DeepMind previously turned to games like Go and StarCraft to expand the bounds of AI.

World models take that to the next level, generating an interactive world frame by frame. This provides an opportunity to refine how AI models—including so-called “embodied agents”—behave when they encounter real-world situations. One of the primary limitations as companies work toward the goal of artificial general intelligence (AGI) is the scarcity of reliable training data. After piping basically every webpage and video on the planet into AI models, researchers are turning toward synthetic data for many applications. DeepMind believes world models could be a key part of this effort, as they can be used to train AI agents with essentially unlimited interactive worlds.

DeepMind says Genie 3 is an important advancement because it offers much higher visual fidelity than Genie 2, and it’s truly real-time. Using keyboard input, it’s possible to navigate the simulated world in 720p resolution at 24 frames per second. Perhaps even more importantly, Genie 3 can remember the world it creates.

Delta denies using AI to come up with inflated, personalized prices

Delta scandal highlights value of transparency

According to Delta, the company has “zero tolerance for discriminatory or predatory pricing” and only feeds its AI system aggregated data “to enhance our existing fare pricing processes.”

Carter clarified that rather than basing fare prices on customers’ personal information, “all customers have access to the same fares and offers based on objective criteria provided by the customer such as origin and destination, advance purchase, length of stay, refundability, and travel experience selected.”

The AI use can result in higher or lower prices, but not personalized fares for different customers, Carter said. Instead, Delta plans to use AI pricing to “enhance market competitiveness and drive sales, benefiting both our customers and our business.”

Factors weighed by the AI system, Carter explained, include “customer demand for seats and purchasing data at an aggregated level, competitive offers and schedules, route performance, and cost of providing the service inclusive of jet fuel.” That could potentially mean a rival’s promotion or schedule change could trigger the AI system to lower prices to stay competitive, or it might increase prices based on rising fuel costs to help increase revenue or meet business goals.

“Given the tens of millions of fares and hundreds of thousands of routes for sale at any given time, the use of new technology like AI promises to streamline the process by which we analyze existing data and the speed and scale at which we can respond to changing market dynamics,” Carter wrote.

He explained the AI system helps Delta aggregate purchasing data for specific routes and flights, adapt to new market conditions, and factor in “thousands of variables simultaneously.” AI could also eventually be used to assist with crew scheduling, improve flight availability, or help reservation specialists answer complex questions or resolve disputes.

But “to reiterate, prices are not targeted to individual consumers,” Carter emphasized.

Delta further pointed out that the company does not require customers to log in to search for tickets, which means customers can search for flights without sharing any personal information.

For AI companies paying attention to the backlash, there may be a lesson in Delta’s scandal about the value of transparency. Critics noted that Delta was among the first to admit it was using AI to influence pricing, but the company’s vague explanation on the earnings call stoked confusion over how the technology was actually being used, and Delta seemed to drag its feet amid calls from groups like Consumer Watchdog for more transparency.

Google releases Gemini 2.5 Deep Think for AI Ultra subscribers

Google is unleashing its most powerful Gemini model today, but you probably won’t be able to try it. After revealing Gemini 2.5 Deep Think at the I/O conference back in May, Google is making this AI available in the Gemini app. Deep Think is designed for the most complex queries, which means it uses more compute resources than other models. So it should come as no surprise that only those subscribing to Google’s $250 AI Ultra plan will be able to access it.

Deep Think is based on the same foundation as Gemini 2.5 Pro, but it increases the “thinking time” with greater parallel analysis. According to Google, Deep Think explores multiple approaches to a problem, even revisiting and remixing the various hypotheses it generates. This process helps it create a higher-quality output.
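
Google hasn’t published the mechanism behind that parallel analysis, but a common way to approximate the idea is self-consistency sampling: generate several candidate answers concurrently, then keep the one the most attempts agree on. The sketch below is illustrative only, with a toy generate() helper standing in for a real model call; it is not Google’s actual implementation.

```python
# Illustrative "parallel thinking" sketch (NOT Google's implementation):
# sample n candidate answers concurrently, then keep the most common one.
import random
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def generate(prompt: str) -> str:
    # Toy stand-in for a temperature > 0 model call; a real version would
    # query a chat model and extract its final answer.
    return random.choice(["42", "42", "42", "41"])

def parallel_think(prompt: str, n: int = 8) -> str:
    with ThreadPoolExecutor(max_workers=n) as pool:
        answers = list(pool.map(lambda _: generate(prompt), range(n)))
    # Self-consistency: the answer reached by the most attempts wins.
    return Counter(answers).most_common(1)[0][0]

print(parallel_think("What is 6 x 7?"))
```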

Deep Think benchmarks. Credit: Google

Like some other heavyweight Gemini tools, Deep Think takes several minutes to come up with an answer, and the extra thinking time apparently makes the AI more adept at design aesthetics, scientific reasoning, and coding. Google has exposed Deep Think to the usual battery of benchmarks, showing that it surpasses the standard Gemini 2.5 Pro and competing models like OpenAI o3 and Grok 4. Deep Think shows a particularly large gain in Humanity’s Last Exam, a collection of 2,500 complex, multi-modal questions that cover more than 100 subjects. Other models top out at 20 or 25 percent, but Gemini 2.5 Deep Think managed a score of 34.8 percent.

Google confirms it will sign the EU AI Code of Practice

The regulation of AI systems could be the next hurdle as Big Tech aims to deploy technologies framed as transformative and vital to the future. Google products like search and Android have been in the sights of EU regulators for years, so getting in on the ground floor with the AI code would help it navigate what will surely be a tumultuous legal environment.

A comprehensive AI framework

The US has shied away from AI regulation, and the current administration is actively working to remove what few limits are in place. The White House even attempted to ban all state-level AI regulation for a period of ten years in the recent tax bill. Europe, meanwhile, is taking the possible negative impacts of AI tools seriously with a rapidly evolving regulatory framework.

The AI Code of Practice aims to provide AI firms with a bit more certainty in the face of a shifting landscape. It was developed with the input of more than 1,000 citizen groups, academics, and industry experts. The EU Commission says companies that adopt the voluntary code will enjoy a lower bureaucratic burden, easing compliance with the bloc’s AI Act, which came into force last year.

Under the terms of the code, Google will have to publish summaries of its model training data and disclose additional model features to regulators. The code also includes guidance on how firms should manage safety and security in compliance with the AI Act. Likewise, it includes paths to align a company’s model development with EU copyright law as it pertains to AI, a sore spot for Google and others.

Companies like Meta that don’t sign the code will not escape regulation. All AI companies operating in Europe will have to abide by the AI Act, which includes the most detailed regulatory framework for generative AI systems in the world. The law bans high-risk uses of AI like intentional deception or manipulation of users, social scoring systems, and real-time biometric scanning in public spaces. Companies that violate the rules in the AI Act could be hit with fines as high as 35 million euros ($40.1 million) or up to 7 percent of the offender’s global revenue.

Delta’s AI spying to “jack up” prices must be banned, lawmakers say

“There is no fare product Delta has ever used, is testing or plans to use that targets customers with individualized offers based on personal information or otherwise,” Delta said. “A variety of market forces drive the dynamic pricing model that’s been used in the global industry for decades, with new tech simply streamlining this process. Delta always complies with regulations around pricing and disclosures.”

Other companies “engaging in surveillance-based price setting” include giants like Amazon and Kroger, as well as a ride-sharing app that has been “charging a customer more when their phone battery is low.”

Public Citizen, a progressive consumer rights group that endorsed the bill, condemned the practice in the press release, urging Congress to pass the law and draw “a clear line in the sand: companies can offer discounts and fair wages—but not by spying on people.”

“Surveillance-based price gouging and wage setting are exploitative practices that deepen inequality and strip consumers and workers of dignity,” Public Citizen said.

AI pricing will cause “full-blown crisis”

In January, the Federal Trade Commission requested information from eight companies—MasterCard, Revionics, Bloomreach, JPMorgan Chase, Task Software, PROS, Accenture, and McKinsey & Co—said to be part of a “shadowy market” that provides AI pricing services. Those companies confirmed they’ve provided services to at least 250 companies “that sell goods or services ranging from grocery stores to apparel retailers,” lawmakers noted.

That inquiry led the FTC to conclude that “widespread adoption of this practice may fundamentally upend how consumers buy products and how companies compete.”

In the press release, the American Economic Liberties Project, an anti-monopoly watchdog, was counted among the advocacy groups endorsing the Democrats’ bill. Its senior legal counsel, Lee Hepner, pointed out that “grocery prices have risen 26 percent since the pandemic-era explosion of online shopping,” and that’s “dovetailing with new technology designed to squeeze every last penny from consumers.”

Google’s new “Web Guide” will use AI to organize your search results

Web Guide is halfway between normal search and AI Mode. Credit: Google

Google suggests trying Web Guide with longer or open-ended queries, like “how to solo travel in Japan.” The video below uses that search as an example. It has many of the links you might expect, but there are also AI-generated headings with summaries and suggestions. It really looks halfway between standard search and AI Mode. Because it has to run additional searches and generate content, Web Guide takes a beat longer to produce results compared to a standard search. There’s no AI Overview at the top, though.

Web Guide is a Search Labs experiment, meaning you have to opt in before you’ll see any AI organization in your search results. When enabled, this feature takes over the “Web” tab of Google search. Even if you turn it on, Google notes there will be a toggle that allows you to revert to the normal, non-AI-optimized page.

An example of the Web Guide test.

Eventually, the test will expand to encompass more parts of the search experience, like the “All” tab—that’s the default search experience when you input a query from a browser or phone search bar. Google says it’s approaching this as an opt-in feature to start. So that sounds like Web Guide might be another AI Mode situation in which the feature rolls out widely after a short testing period. It’s technically possible the test will not result in a new universal search feature, but Google hasn’t yet met a generative AI implementation that it hasn’t liked.

Trump’s order to make chatbots anti-woke is unconstitutional, senator says


Trump plans to use chatbots to eliminate dissent, senator alleged.

The CEOs of every major artificial intelligence company received letters Wednesday urging them to fight Donald Trump’s anti-woke AI order.

Trump’s executive order requires any AI company hoping to contract with the federal government to jump through two hoops to win funding. First, they must prove their AI systems are “truth-seeking”—with outputs based on “historical accuracy, scientific inquiry, and objectivity” or else acknowledge when facts are uncertain. Second, they must train AI models to be “neutral,” which is vaguely defined as not favoring DEI (diversity, equity, and inclusion), “dogmas,” or otherwise being “intentionally encoded” to produce “partisan or ideological judgments” in outputs “unless those judgments are prompted by or otherwise readily accessible to the end user.”

Announcing the order in a speech, Trump said that the US winning the AI race depended on removing allegedly liberal biases, proclaiming that “once and for all, we are getting rid of woke.”

“The American people do not want woke Marxist lunacy in the AI models, and neither do other countries,” Trump said.

Senator Ed Markey (D-Mass.) accused Republicans of basing their policies on feelings, not facts, joining critics who suggest that AI isn’t “woke” just because of a few “anecdotal” outputs that reflect a liberal bias. And he suggested it was hypocritical that Trump’s order “ignores even more egregious evidence” that contradicts claims that AI is trained to be woke, such as xAI’s Elon Musk explicitly confirming that Grok was trained to be more right-wing.

“On May 1, 2025, Grok—the AI chatbot developed by xAI, Elon Musk’s AI company—acknowledged that ‘xAI tried to train me to appeal to the right,’” Markey wrote in his letters to tech giants. “If OpenAI’s ChatGPT or Google’s Gemini had responded that it was trained to appeal to the left, congressional Republicans would have been outraged and opened an investigation. Instead, they were silent.”

He warned the heads of Alphabet, Anthropic, Meta, Microsoft, OpenAI, and xAI that Trump’s AI agenda was allegedly “an authoritarian power grab” intended to “eliminate dissent” and was both “dangerous” and “patently unconstitutional.”

Even if companies’ AI models are clearly biased, Markey argued that “Republicans are using state power to pressure private companies to adopt certain political viewpoints,” which he claimed is a clear violation of the First Amendment. If AI makers cave, Markey warned, they’d be allowing Trump to create “significant financial incentives” to ensure that “their AI chatbots do not produce speech that would upset the Trump administration.”

“This type of interference with private speech is precisely why the US Constitution has a First Amendment,” Markey wrote, while claiming that Trump’s order is factually baseless.

It’s “based on the erroneous belief that today’s AI chatbots are ‘woke’ and biased against Trump,” Markey said, urging companies “to fight this unconstitutional executive order and not become a pawn in Trump’s effort to eliminate dissent in this country.”

One big reason AI companies may fight order

Some experts agreed with Markey that Trump’s order was likely unconstitutional or otherwise unlawful, The New York Times reported.

For example, Trump may struggle to convince courts that the government isn’t impermissibly interfering with AI companies’ protected speech or that such interference may be necessary to ensure federal procurement of unbiased AI systems.

Genevieve Lakier, a law professor at the University of Chicago, told the NYT that the lack of clarity around what makes a model biased could be a problem. Courts could deem the order an act of “unconstitutional jawboning,” with the Trump administration and Republicans generally perceived as using legal threats to pressure private companies into producing outputs that they like.

Lakier suggested that AI companies may be so motivated to win government contracts or intimidated by possible retaliation from Trump that they may not even challenge the order, though.

Markey is hoping that AI companies will refuse to comply with the order, despite recognizing that it places companies “in a difficult position: Either stand on your principles and face the wrath of the Trump administration or cave to Trump and modify your company’s political speech.”

There is one big reason that AI companies may resist the order, though.

Oren Etzioni, the former CEO of the AI research nonprofit Allen Institute for Artificial Intelligence, told CNN that Trump’s anti-woke AI order may contradict the top priority of his AI Action Plan—speeding up AI innovation in the US—and actually threaten to hamper innovation.

If AI developers struggle to produce what the Trump administration considers “neutral” outputs—a technical challenge that experts agree is not straightforward—that could delay model advancements.

“This type of thing… creates all kinds of concerns and liability and complexity for the people developing these models—all of a sudden, they have to slow down,” Etzioni told CNN.

Senator: Grok scandal spotlights GOP hypocrisy

Some experts have suggested that rather than chatbots adopting liberal viewpoints, chatbots are instead possibly filtering out conservative misinformation and unintentionally appearing to favor liberal views.

Andrew Hall, a professor of political economy at Stanford Graduate School of Business—who published a May paper finding that “Americans view responses from certain popular AI models as being slanted to the left”—told CNN that “tech companies may have put extra guardrails in place to prevent their chatbots from producing content that could be deemed offensive.”

Markey seemed to agree, writing that Republicans’ “selective outrage matches conservatives’ similar refusal to acknowledge that the Big Tech platforms suspend or impose other penalties disproportionately on conservative users because those users are disproportionately likely to share misinformation, rather than due to any political bias by the platforms.”

It remains unclear what amount of supposed bias detected in outputs could cause a contract bid to be rejected or an ongoing contract to be canceled, but AI companies will likely be on the hook to pay any fees in terminating contracts.

Complying with Trump’s order could pose a challenge for AI makers for several reasons. First, they’ll have to determine what’s fact and what’s ideology, contending with conflicting government standards in how Trump defines DEI. For example, the president’s order counts among “pervasive and destructive” DEI ideologies any outputs that align with long-standing federal protections against discrimination on the basis of race or sex. In addition, they must figure out what counts as “suppression or distortion of factual information about” historical topics like critical race theory, systemic racism, or transgenderism.

The examples in Trump’s order highlighting outputs offensive to conservatives seem inconsequential. He calls out as problematic image generators that depict the Pope, the Founding Fathers, and Vikings as non-white, as well as models that refuse to misgender a person “even if necessary to stop a nuclear apocalypse” or to show white people celebrating their achievements.

It’s hard to imagine how these kinds of flawed outputs could impact government processes, as compared to, say, government contracts granted to models that could be hiding covert racism or sexism.

So far, there has been one example of an AI model that displayed right-wing bias winning a government contract with no red flags raised about its outputs.

Earlier this summer, Grok shocked the world after Musk announced he would be updating the bot to eliminate a supposed liberal bias. The unhinged chatbot began spouting offensive outputs, including antisemitic posts that praised Hitler as well as proclaiming itself “MechaHitler.”

But those obvious biases did not conflict with the Pentagon’s decision to grant xAI a $200 million federal contract. In a statement, a Pentagon spokesperson insisted that “the antisemitism episode wasn’t enough to disqualify” xAI, NBC News reported, partly since “several frontier AI models have produced questionable outputs.”

The Pentagon’s statement suggested that the government expected to deal with such risks while seizing the opportunity of rapidly deploying emerging AI technology into government prototype processes. And perhaps notably, Trump provides a carveout for any agencies using AI models to safeguard national security, which could exclude the Pentagon from experiencing any “anti-woke” delays in accessing frontier models.

But that won’t help other agencies that must figure out how to assess models to meet anti-woke AI requirements over the next few months. And those assessments could cause delays that Trump may wish to avoid in pushing for widespread AI adoption across government.

Trump’s anti-woke AI agenda may be impossible

On the same day that Trump issued his anti-woke AI order, his AI Action Plan promised an AI “renaissance” fueling “intellectual achievements” by “unraveling ancient scrolls once thought unreadable, making breakthroughs in scientific and mathematical theory, and creating new kinds of digital and physical art.”

To achieve that, the US must “innovate faster and more comprehensively than our competitors” and eliminate regulatory barriers impeding innovation in order to “set the gold standard for AI worldwide.”

However, achieving the anti-woke ambitions of both orders raises a technical problem that even the president must accept currently has no solution. In his AI Action Plan, Trump acknowledged that “the inner workings of frontier AI systems are poorly understood,” with even “advanced technologists” unable to explain “why a model produced a specific output.”

Whether requiring AI companies to explain their AI outputs to win government contracts will mess with other parts of Trump’s action plan remains to be seen. But Samir Jain, vice president of policy at a civil liberties group called the Center for Democracy and Technology, told the NYT that he predicts the anti-woke AI agenda will set “a really vague standard that’s going to be impossible for providers to meet.”

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

AI video is invading YouTube Shorts and Google Photos starting today

Google is following through on recent promises to add more generative AI features to its photo and video products. Over on YouTube, Google is rolling out the first wave of generative AI video for YouTube Shorts, but even if you’re not a YouTuber, you’ll be exposed to more AI videos soon. Google Photos, which is integrated with virtually every Android phone on the market, is also getting AI video-generation capabilities. In both cases, the features are currently based on the older Veo 2 model, not the more capable Veo 3 that has been meming across the Internet since it was announced at I/O in May.

YouTube CEO Neal Mohan confirmed earlier this summer that the company planned to add generative AI to the creator tools for YouTube Shorts. There were already tools to generate backgrounds for videos, but the next phase will involve creating new video elements from a text prompt.

Starting today, creators will be able to use a photo as the basis for a new generative AI video. YouTube also promises a collection of easily applied generative effects, which will be accessible from the Shorts camera. There’s also a new AI playground hub that the company says will be home to all its AI tools, along with examples and suggested prompts to help people pump out AI content.

The Veo 2-based videos aren’t as realistic as Veo 3 clips, but an upgrade is planned.

So far, all the YouTube AI video features are running on the Veo 2 model. The plan is still to move to Veo 3 later this summer. The AI features in YouTube Shorts are currently limited to the United States, Canada, Australia, and New Zealand, but they will expand to more countries later.
