Policy

FCC chairman celebrates court loss in case over Biden-era diversity rule

Federal Communications Commission Chairman Brendan Carr celebrated an FCC court loss yesterday after a ruling that struck down Biden-era diversity reporting requirements that Carr voted against while Democrats were in charge.

“An appellate court just struck down the Biden FCC’s 2024 decision to force broadcasters to post race and gender scorecards,” Carr wrote. “As I said in my dissent back then, the FCC’s 2024 decision was an unlawful effort to pressure businesses into discriminating based on race & gender.”

The FCC mandate was challenged in court by National Religious Broadcasters, a group for Christian TV and radio broadcasters, and the American Family Association. They sued in the conservative-leaning US Court of Appeals for the 5th Circuit, where a three-judge panel yesterday ruled unanimously against the FCC.

The FCC order struck down by the court required broadcasters to file an annual form with race, ethnicity, and gender data for employees within specified job categories. “The Federal Communications Commission issued an order requiring most television and radio broadcasters to compile employment-demographics data and to disclose the data to the FCC, which the agency will then post on its website on a broadcaster-identifiable basis,” the 5th Circuit court said.

The FCC’s February 2024 order revived a data-collection requirement that was previously enforced from 1970 to 2001. The FCC suspended the data collection in 2001 after court rulings limiting how the commission could use the data, though the data collection itself had not been found to be unconstitutional.

FCC’s public interest authority not enough, court says

Led by then-Chairwoman Jessica Rosenworcel, the FCC last year said that reviving the data collection would serve the public interest by helping the agency “report on and analyze employment trends in the broadcast sector and also to compare trends across other sectors regulated by the Commission.”

But the FCC’s public-interest authority isn’t enough to justify the rule, the 5th Circuit judges found.

“The FCC undoubtedly has broad authority to act in the public interest,” the ruling said. “That authority, however, must be linked ‘to a distinct grant of authority’ contained in its statutes. The FCC has not shown that it is authorized to require broadcasters to file employment-demographics data or to analyze industry employment trends, so it cannot fall back on ‘public interest’ to fill the gap.”

Biotech company Regeneron to buy bankrupt 23andMe for $256M

Biotechnology company Regeneron will acquire 23andMe out of bankruptcy for $256 million, with a plan to keep the DNA-testing company running without interruption and uphold its privacy-protection promises.

In its announcement of the acquisition, Regeneron assured 23andMe’s 15 million customers that their data—including genetic and health information, genealogy, and other sensitive personal information—would be safe and in good hands. Regeneron aims to use the large trove of genetic data to further its own work using genetics to develop medical advances—something 23andMe tried and failed to do.

“As a world leader in human genetics, Regeneron Genetics Center is committed to and has a proven track record of safeguarding the genetic data of people across the globe, and, with their consent, using this data to pursue discoveries that benefit science and society,” Aris Baras, senior vice president and head of the Regeneron Genetics Center, said in a statement. “We assure 23andMe customers that we are committed to protecting the 23andMe dataset with our high standards of data privacy, security, and ethical oversight and will advance its full potential to improve human health.”

Baras said that the Regeneron Genetics Center already has its own genetic dataset from nearly 3 million people.

The safety of 23andMe’s dataset has drawn considerable concern among consumers, lawmakers, and regulators amid the company’s downfall. For instance, in March, California Attorney General Rob Bonta made the unusual move to urge Californians to delete their genetic data amid 23andMe’s financial distress. Federal Trade Commission Chairman Andrew Ferguson also weighed in, making clear in a March letter that “any purchaser should expressly agree to be bound by and adhere to the terms of 23andMe’s privacy policies and applicable law.”

Labor dispute erupts over AI-voiced Darth Vader in Fortnite

For voice actors who previously portrayed Darth Vader in video games, the Fortnite feature starkly illustrates how AI voice synthesis could reshape their profession. While James Earl Jones created the iconic voice for films, at least 54 voice actors have performed as Vader in various games and other media over the years when Jones wasn’t available—work that could vanish if AI replicas become the industry standard.

The union strikes back

SAG-AFTRA’s labor complaint (which can be read online here) doesn’t focus on the AI feature’s technical problems or on permission from the Jones estate, which explicitly authorized the use of a synthesized version of his voice for the character in Fortnite. Jones, who died in 2024, had signed over his Darth Vader voice rights before his death.

Instead, the union’s grievance centers on labor rights and collective bargaining. In the NLRB filing, SAG-AFTRA alleges that Llama Productions “failed and refused to bargain in good faith with the union by making unilateral changes to terms and conditions of employment, without providing notice to the union or the opportunity to bargain, by utilizing AI-generated voices to replace bargaining unit work on the Interactive Program Fortnite.”

The action comes amid SAG-AFTRA’s ongoing interactive media strike, which began in July 2024 after negotiations with video game producers stalled primarily over AI protections. The strike continues, with more than 100 games signing interim agreements, while others, including those from major publishers like Epic, remain in dispute.

Trump to sign law forcing platforms to remove revenge porn in 48 hours

Likely wearisome for victims, the law won’t be widely enforced for about a year, while any revenge porn already online continues spreading. Perhaps most frustrating, once the law kicks in, victims will still need to police their own revenge porn online. And the 48-hour window leaves time for content to be downloaded and reposted, leaving them vulnerable on any unmonitored platforms.

Some victims are already tired of fighting this fight. Last July, when Google started downranking deepfake porn apps to make AI-generated NCII less discoverable, one deepfake victim, Sabrina Javellana, told The New York Times that she spent months reporting harmful content on various platforms online. And that didn’t stop the fake images from spreading. Joe Morelle, a Democratic US representative who has talked to victims of deepfake porn and sponsored laws to help them, agreed that “these images live forever.”

“It just never ends,” Javellana said. “I just have to accept it.”

Andrea Powell—director of Alecto AI, an app founded by a revenge porn survivor that helps victims remove NCII online—warned on a 2024 panel that Ars attended that requiring victims to track down “their own imagery [and submit] multiple claims across different platforms [increases] their sense of isolation, shame, and fear.”

While the Take It Down Act seems flawed, passing a federal law imposing penalties for allowing deepfake porn posts could serve as a deterrent for bad actors or possibly spark a culture shift by making it clear that posting AI-generated NCII is harmful.

Victims have long suggested that consistency is key to keeping revenge porn offline, and the Take It Down Act certainly offers that, creating a moderately delayed delete button on every major platform.

Although the Take It Down Act will likely make it easier than ever to report NCII, whether the law will effectively reduce the spread of NCII online remains to be seen and will hinge largely on whether the 48-hour takedown window overcomes critics’ concerns.

FCC Chair Brendan Carr is letting ISPs merge—as long as they end DEI programs

Verizon’s letter said that because of the “changing landscape,” the firm “has been evaluating its DEI-related programs, HR processes, supplier programs, training programs and materials, and other initiatives.” Among other changes, Verizon said it “will no longer have a team or any individual roles focused on DEI” and will reassign DEI-focused employees to “HR talent objectives.”

“Verizon recognizes that some DEI policies and practices could be associated with discrimination,” the letter said.

T-Mobile sent a similar letter to Carr on March 27, saying it “is fully committed to identifying and rooting out any policies and practices that enable such discrimination, whether in fulfillment of DEI or any other purpose,” and is thus “conducting a comprehensive review of its DEI policies, programs, and activities.” One day later, the FCC approved a T-Mobile joint venture to acquire fiber provider Lumos.

With the Verizon and T-Mobile deals approved, Carr has another opportunity to make demands on a major telecom company. On Friday, Charter announced a $34.5 billion merger with Cox that would make it the largest home Internet provider in the US, passing Comcast. Several Charter and Cox programs could be on the chopping block because of Carr’s animosity toward diversity initiatives.

Verizon criticized as “cowardly”

Media advocacy group Free Press criticized Verizon for agreeing to Carr’s demands.

“Verizon’s cowardly decision to modify or kill its diversity, equity and inclusion practices is the latest shameful episode in a litany of surrenders to appease our authoritarian president,” Free Press Vice President of Policy Matt Wood said. “The government alleges no specific instances of unlawful employment discrimination, and Verizon admits none. Yet to win a merger approval and the prospect of a few extra dollars, the company meekly suggests that some of its ‘DEI policies and practices could be associated with discrimination’—lawyer-speak for we’ve done nothing wrong, but we can see which way the political winds are blowing.”

Wood said that Carr “once defended his agency’s independence from the White House when a Democrat was in charge” but is “now gleefully carrying out the president’s orders to roll back civil-rights protections and equal-opportunity gains at all costs.”

Spotify caught hosting hundreds of fake podcasts that advertise selling drugs

This week, Spotify rushed to remove hundreds of obviously fake podcasts found to be marketing prescription drugs in violation of Spotify’s policies and, likely, federal law.

On Thursday, Business Insider (BI) reported that Spotify removed 200 podcasts advertising the sale of opioids and other drugs, but that wasn’t the end of the scandal. Today, CNN revealed that it easily uncovered dozens more fake podcasts peddling drugs.

Some of the podcasts may have raised a red flag for a human moderator—with titles like “My Adderall Store” or “Xtrapharma.com” and episodes titled “Order Codeine Online Safe Pharmacy Louisiana” or “Order Xanax 2 mg Online Big Deal On Christmas Season,” CNN reported.

But Spotify’s auto-detection did not flag the fake podcasts for removal. Some of them remained up for months, CNN reported, which could create trouble for the music streamer at a time when the US government is cracking down on illegal drug sales online.

“Multiple teens have died of overdoses from pills bought online,” CNN noted, sparking backlash against tech companies. And Donald Trump cited the flow of deadly drugs into the US, which he declared a national emergency, as a specific justification for his aggressive tariffs.

BI found that many podcast episodes featured a computerized voice and were under a minute long, while CNN noted some episodes were as short as 10 seconds. Some of them didn’t contain any audio at all, BI reported.

Meta argues enshittification isn’t real in bid to toss FTC monopoly case

Further, Meta argued that the FTC did not show evidence that users sharing friends-and-family content were shown more ads. Meta noted that it “does not profit by showing more ads to users who do not click on them,” so it only shows more ads to users who click ads.

Meta also insisted that there’s “nothing but speculation” showing that Instagram or WhatsApp would have been better off or grown into rivals had Meta not acquired them.

The company claimed that without Meta’s resources, Instagram may have died off. Meta noted that Instagram co-founder Kevin Systrom testified that his app was “pretty broken and duct-taped” together, making it “vulnerable to spam” before Meta bought it.

Rather than enshittification, what Meta did to Instagram could be considered “a consumer-welfare bonanza,” Meta argued, while dismissing “smoking gun” emails from Mark Zuckerberg discussing buying Instagram to bury it as “legally irrelevant.”

Dismissing these as “a few dated emails,” Meta argued that “efforts to litigate Mr. Zuckerberg’s state of mind before the acquisition in 2012 are pointless.”

“What matters is what Meta did,” Meta argued, which was pump Instagram with resources that allowed it “to ‘thrive’—adding many new features, attracting hundreds of millions and then billions of users, and monetizing with great success.”

In the case of WhatsApp, Meta argued that nobody thinks WhatsApp had any intention to pivot to social media, given that its founders testified that their goal was never to add social features, preferring to offer a simple, clean messaging app. And Meta disputed the claim that it bought WhatsApp out of fear that Google would acquire the app and turn it into a Facebook rival, arguing that “the sole Meta witness to (supposedly) learn of Google’s acquisition efforts testified that he did not have that worry.”

Telegram bans $35B black markets used to sell stolen data, launder crypto

On Thursday, Telegram announced it had removed two huge black markets estimated to have generated more than $35 billion since 2021 by serving cybercriminals and scammers.

Blockchain research firm Elliptic told Reuters that the Chinese-language markets Xinbi Guarantee and Huione Guarantee together were far more lucrative than Silk Road, an illegal drug marketplace that the FBI notoriously seized in 2013, which was valued at about $3.4 billion.

Both markets were forced offline on Tuesday, Elliptic reported, and already, Huione Guarantee has confirmed that its market will cease to operate entirely due to the Telegram removal.

The disruption of both markets will be “a big blow for online fraudsters,” Elliptic confirmed, cutting them off from a dependable source for “stolen data, money laundering services, and telecoms infrastructure.”

Huione Guarantee is a subsidiary of Huione Group, which the US has alleged also owns Huione Pay and Huione Crypto. Telegram’s move comes after the US Treasury launched a plan to ban the Huione Group from the US financial system earlier this month, Reuters reported. Citing money laundering concerns, Treasury Secretary Scott Bessent accused the group of supporting “criminal syndicates who have stolen billions of dollars from Americans.”

Report: Terrorists seem to be paying X to generate propaganda with Grok

Back in February, Elon Musk skewered the Treasury Department for lacking “basic controls” to stop payments to terrorist organizations, boasting at the Oval Office that “any company” has those controls.

Fast-forward three months, and now Musk’s social media platform X is suspected of taking payments from sanctioned terrorists and providing premium features that make it easier to raise funds and spread propaganda—including through X’s chatbot, Grok. Groups seemingly benefiting from X include Houthi rebels, Hezbollah, and Hamas, as well as groups from Syria, Kuwait, and Iran. Some accounts have amassed hundreds of thousands of followers, paying to boost their reach while X apparently looks the other way.

In a report released Thursday, the Tech Transparency Project (TTP) flagged popular accounts likely linked to US-sanctioned terrorists. Some of the accounts bear “ID verified” badges, suggesting that X may be going against its own policies that ban sanctioned terrorists from benefiting from its platform.

Even more troubling, “several made use of revenue-generating features offered by X, including a button for tips,” the TTP reported.

On X, Premium subscribers pay $8 monthly or $84 annually, and Premium+ subscribers pay $40 monthly or $395 annually. Verified organizations pay X between $200 and $1,000 monthly, or up to $10,000 annually for access to Premium+. These subscriptions come with perks, allowing suspected terrorist accounts to share longer text and video posts, offer subscribers paid content, create communities, accept gifts, and amplify their propaganda.

Disturbingly, the TTP found that X’s chatbot, Grok, also appears to be helping to whitewash accounts linked to sanctioned terrorists.

In its report, the TTP noted that an account with the handle “hasmokaled”—which apparently belongs to “a key Hezbollah money exchanger,” Hassan Moukalled—at one point had a blue checkmark with 60,000 followers. While the Treasury Department has sanctioned Moukalled for propping up efforts “to continue to exploit and exacerbate Lebanon’s economic crisis,” the profile summary produced by clicking the Grok AI button appears to rely on Moukalled’s own posts and his followers’ impressions of those posts, and therefore generated praise.

FCC threatens EchoStar licenses for spectrum that SpaceX wants to use

“If SpaceX had done a basic search of public filings, it would know that EchoStar extensively utilizes the 2 GHz band and that the Commission itself has confirmed the coverage, utilization, and methodology for assessing the quality of EchoStar’s 5G network based on independent drive-tests,” EchoStar told the FCC. “EchoStar’s deployment already reaches over 80 percent of the United States population with over 23,000 5G sites deployed.”

There is also a pending petition filed by Vermont-based VTel Wireless, which asked the FCC to reconsider a 2024 decision to extend EchoStar construction deadlines for several spectrum bands. VTel was outbid by Dish in auctions for licenses to use AWS H Block and AWS-3 bands.

“In this case, teetering on the verge of bankruptcy, EchoStar found itself unable to meet the commitments previously made to the Commission in connection with its approval of T-Mobile’s merger with Sprint—an approval predicated on EchoStar constructing a fourth nationwide 5G broadband network by June 14, 2025,” VTel wrote in its October 2024 petition. “But with no notice to or input from the public, WTB [the FCC’s Wireless Telecommunications Bureau] apparently cut a deal with EchoStar to give it yet more time to complete that network and finally put its wireless licenses to use.”

FCC seeks public input

Carr’s letter said he asked FCC staff to investigate EchoStar’s compliance with construction deadlines and “to issue a public notice seeking comment on the scope and scale of MSS [mobile satellite service] utilization in the 2 GHz band that is currently licensed to EchoStar or its affiliates.” The AWS-4 band (2000-2020 MHz and 2180-2200 MHz) was originally designated for satellite service. The FCC decided to also allow terrestrial use of the frequencies in 2012 to expand mobile broadband access.

The FCC Space Bureau announced yesterday that it is seeking comment on EchoStar’s use of the 2 GHz spectrum, and the Wireless Telecommunications Bureau is seeking comment on VTel’s petition for reconsideration.

“In 2019, EchoStar’s predecessor, Dish, agreed to meet specific buildout obligations in connection with a number of spectrum licenses across several different bands,” Carr wrote. “In particular, the FCC agreed to relax some of EchoStar’s then-existing buildout obligations in exchange for EchoStar’s commitment to put its licensed spectrum to work deploying a nationwide 5G broadband network. EchoStar promised—among other things—that its network would cover, by June 14, 2025, at least 70 percent of the population within each of its licensed geographic areas for its AWS-4 and 700 MHz licenses, and at least 75 percent of the population within each of its licensed geographic areas for its H Block and 600 MHz licenses.”

FCC commissioner writes op-ed titled, “It’s time for Trump to DOGE the FCC”

In addition to cutting Universal Service, Simington proposed a broad streamlining of the FCC licensing process. Manual processing of license applications “consumes vast staff hours and introduces unnecessary delay into markets that thrive on speed and innovation,” he wrote.

“For non-contentious licenses, automated workflows should be the default,” Simington argued. “By implementing intelligent review systems and processing software, the FCC could drastically reduce the time and labor involved in issuing standard licenses.”

Moving staff, deleting rules

Simington also proposed taking employees out of the FCC Media Bureau and moving them “to other offices within the FCC—such as the Space Bureau—that are grappling with staffing shortages in high-growth, high-need sectors.” Much of the Media Bureau’s “work is concentrated on regulating traditional broadcast media—specifically, over-the-air television and radio—a sector that continues to contract in relevance,” he wrote.

Simington acknowledged that cutting the Media Bureau would seem to conflict with his own proposal to regulate fees paid by local stations to broadcast networks. It might also conflict with FCC Chairman Brendan Carr’s attempts to regulate news content that he perceives as biased against Republicans. But Simington argued that the Media Bureau is “significantly overstaffed relative to its current responsibilities.”

Simington became an FCC commissioner at the end of Trump’s first term in 2020. Trump picked Simington as a replacement for Republican Michael O’Rielly, who earned Trump’s ire by opposing a crackdown on social media websites.

The FCC is currently operating with two Republicans and two Democrats, preventing any major votes that require a Republican majority. But Democratic Commissioner Geoffrey Starks said he is leaving sometime this spring, and Republican nominee Olivia Trusty is on track to be confirmed by the Senate.

The agency is likely to cut numerous regulations once there’s a Republican majority. Carr started a “Delete, Delete, Delete” proceeding that aims to eliminate as many rules as possible. Congress is also pushing FCC cost cuts, as the Senate voted to kill a Biden-era attempt to use E-Rate to subsidize Wi-Fi hotspots for schoolchildren who lack reliable Internet access to complete their homework.

Copyright Office head fired after reporting AI training isn’t always fair use


Cops scuffle with Trump picks at Copyright Office after AI report stuns tech industry.

A man holds a flag that reads “Shame” outside the Library of Congress on May 12, 2025, in Washington, DC. President Donald Trump fired Carla Hayden, the head of the Library of Congress, on May 8, and fired Shira Perlmutter, the head of the US Copyright Office, just days later. Credit: Kayla Bartkowski / Staff | Getty Images News

A day after the US Copyright Office dropped a bombshell pre-publication report challenging artificial intelligence firms’ argument that all AI training should be considered fair use, the Trump administration fired the head of the Copyright Office, Shira Perlmutter—sparking speculation that the controversial report hastened her removal.

Tensions have apparently only escalated since. Now, as industry advocates decry the report as overstepping the office’s authority, social media posts on Monday described an apparent standoff at the Copyright Office between Capitol Police and men rumored to be with Elon Musk’s Department of Government Efficiency (DOGE).

A source familiar with the matter told Wired that the men were actually “Brian Nieves, who claimed he was the new deputy librarian, and Paul Perkins, who said he was the new acting director of the Copyright Office, as well as acting Registrar,” but it remains “unclear whether the men accurately identified themselves.” A spokesperson for the Capitol Police told Wired that no one was escorted off the premises or denied entry to the office.

Perlmutter’s firing followed Donald Trump’s removal of Librarian of Congress Carla Hayden, who, NPR noted, was the first African American to hold the post. Responding to public backlash, White House Press Secretary Karoline Leavitt claimed that the firing was due to “quite concerning things that she had done at the Library of Congress in the pursuit of DEI and putting inappropriate books in the library for children.”

The Library of Congress houses the Copyright Office, and critics suggested Trump’s firings were unacceptable intrusions into cultural institutions that are supposed to operate independently of the executive branch. In a statement, Rep. Joe Morelle (D-N.Y.) condemned Perlmutter’s removal as “a brazen, unprecedented power grab with no legal basis.”

Accusing Trump of trampling Congress’ authority, he suggested that Musk and other tech leaders racing to dominate the AI industry stood to directly benefit from Trump’s meddling at the Copyright Office. Likely most threatening to tech firms, the guidance from Perlmutter’s Office not only suggested that AI training on copyrighted works may not be fair use when outputs threaten to disrupt creative markets—as publishers and authors have argued in several lawsuits aimed at the biggest AI firms—but also encouraged more licensing to compensate creators.

“It is surely no coincidence [Trump] acted less than a day after she refused to rubber-stamp Elon Musk’s efforts to mine troves of copyrighted works to train AI models,” Morelle said, seemingly referencing Musk’s xAI chatbot, Grok.

Agreeing with Morelle, Courtney Radsch—the director of the Center for Journalism & Liberty at the left-leaning think tank the Open Markets Institute—said in a statement provided to Ars that Perlmutter’s firing “appears directly linked to her office’s new AI report questioning unlimited harvesting of copyrighted materials.”

“This unprecedented executive intrusion into the Library of Congress comes directly after Perlmutter released a copyright report challenging the tech elite’s fundamental claim: unlimited access to creators’ work without permission or compensation,” Radsch said. And it comes “after months of lobbying by the corporate billionaires” who “donated” millions to Trump’s inauguration and “have lapped up the largess of government subsidies as they pursue AI dominance.”

What the Copyright Office says about fair use

The report that the Copyright Office released on Friday is not finalized, but it is not expected to change radically unless Trump’s newly installed acting head intervenes to overhaul the guidance.

It comes after the Copyright Office parsed more than 10,000 comments debating whether creators should and could feasibly be compensated for the use of their works in AI training.

“The stakes are high,” the office acknowledged, but ultimately, there must be an effective balance struck between the public interests in “maintaining a thriving creative community” and “allowing technological innovation to flourish.” Notably, the office concluded that the first and fourth factors of fair use—which assess the character of the use (and whether it is transformative) and how that use affects the market—are likely to hold the most weight in court.

According to Radsch, the report “raised crucial points that the tech elite don’t want acknowledged.” First, the Copyright Office acknowledged that it’s an open question how much data an AI developer needs to build an effective model. It then noted the need for a consent framework that goes beyond putting the onus on creators to opt their works out of AI training. And perhaps most alarmingly, it concluded that “AI trained on copyrighted works could replace original creators in the marketplace.”

“Commenters painted a dire picture of what unlicensed training would mean for artists’ livelihoods,” the Copyright Office said, while industry advocates argued that giving artists the power to hamper or “kill” AI development could result in “far less competition, far less innovation, and very likely the loss of the United States’ position as the leader in global AI development.”

To prevent both harms, the Copyright Office expects that some AI training will be deemed fair use, such as training viewed as transformative because the resulting models don’t compete with creative works. Those uses threaten no market harm but instead solve a societal need, such as language models translating texts, moderating content, or correcting grammar. Or in the case of audio models, technology that helps producers clean up unwanted distortion might be fair use, while models that generate songs in the style of popular artists might not, the office opined.

But while “training a generative AI foundation model on a large and diverse dataset will often be transformative,” the office said that “not every transformative use is a fair one,” especially if the AI model serves the same purpose as the copyrighted works it was trained on. Consider an example like chatbots regurgitating news articles, as is alleged in The New York Times’ dispute with OpenAI over ChatGPT.

“In such cases, unless the original work itself is being targeted for comment or parody, it is hard to see the use as transformative,” the Copyright Office said. One possible solution for AI firms hoping to preserve utility of their chatbots could be effective filters that “prevent the generation of infringing content,” though.

Tech industry accuses Copyright Office of overreach

Only courts can effectively weigh the balance of fair use, the Copyright Office said. Perhaps importantly, however, the thinking of one of the first judges to weigh the question—in a case challenging Meta’s torrenting of a pirated books dataset to train its AI models—seemed to align with the Copyright Office guidance at a recent hearing. Mulling whether Meta infringed on book authors’ rights, US District Judge Vince Chhabria explained why he doesn’t immediately “understand how that can be fair use.”

“You have companies using copyright-protected material to create a product that is capable of producing an infinite number of competing products,” Chhabria said. “You are dramatically changing, you might even say obliterating, the market for that person’s work, and you’re saying that you don’t even have to pay a license to that person.”

Some AI critics think the courts have already indicated which way they are leaning. In a statement to Ars, a New York Times spokesperson suggested that “both the Copyright Office and courts have recognized what should be obvious: when generative AI products give users outputs that compete with the original works on which they were trained, that unprecedented theft of millions of copyrighted works by developers for their own commercial benefit is not fair use.”

The NYT spokesperson further praised the Copyright Office for agreeing that using Retrieval-Augmented Generation (RAG) AI to surface copyrighted content “is less likely to be transformative where the purpose is to generate outputs that summarize or provide abridged versions of retrieved copyrighted works, such as news articles, as opposed to hyperlinks.” If courts agree with the RAG finding, that could potentially disrupt the AI search models offered by every major tech company.

The backlash from industry stakeholders was immediate.

The president and CEO of a trade association called the Computer & Communications Industry Association, Matt Schruers, said the report raised several concerns, particularly by endorsing “an expansive theory of market harm for fair use purposes that would allow rightsholders to block any use that might have a general effect on the market for copyrighted works, even if it doesn’t impact the rightsholder themself.”

Similarly, the tech industry policy coalition Chamber of Progress warned that “the report does not go far enough to support innovation and unnecessarily muddies the waters on what should be clear cases of transformative use with copyrighted works.” Both groups celebrated the fact that the final decision on fair use would rest with courts.

The Copyright Office agreed that “it is not possible to prejudge the result in any particular case” but said that precedent supports some “general observations.” Those included suggesting that licensing deals may be appropriate where uses are not considered fair, and that such licensing need not disrupt “American leadership” in AI, as some AI firms have claimed it would.

“These groundbreaking technologies should benefit both the innovators who design them and the creators whose content fuels them, as well as the general public,” the report said, ending with the office promising to continue working with Congress to inform AI laws.

Copyright Office seemingly opposes Meta’s torrenting

Also among those “general observations,” the Copyright Office wrote that “making commercial use of vast troves of copyrighted works to produce expressive content that competes with them in existing markets, especially where this is accomplished through illegal access, goes beyond established fair use boundaries.”

The report seemed to suggest that courts and the Copyright Office may also be aligned on AI firms’ use of pirated or illegally accessed paywalled content for AI training.

Judge Chhabria, prioritizing the fair use question, went only so far as to call Meta’s torrenting in the book authors’ case “kind of messed up,” and the Copyright Office similarly recommended only that “the knowing use of a dataset that consists of pirated or illegally accessed works should weigh against fair use without being determinative.”

Torrenting should be a black mark nonetheless, the Copyright Office suggested. “Gaining unlawful access” does bear “on the character of the use,” the office noted, arguing that “training on pirated or illegally accessed material goes a step further” than simply using copyrighted works “despite the owners’ denial of permission.” And if authors can prove that AI models trained on pirated works led to lost sales, the office suggested, a fair use defense might not fly.

“The use of pirated collections of copyrighted works to build a training library, or the distribution of such a library to the public, would harm the market for access to those works,” the office wrote. “And where training enables a model to output verbatim or substantially similar copies of the works trained on, and those copies are readily accessible by end users, they can substitute for sales of those works.”

Likely frustrating Meta—which is currently fighting to keep leeching evidence out of the book authors’ case—the Copyright Office suggested that “the copying of expressive works from pirate sources in order to generate unrestricted content that competes in the marketplace, when licensing is reasonably available, is unlikely to qualify as fair use.”

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.
