Policy


US gives local police a face-scanning app similar to one used by ICE agents

“Biometric data used to identify individuals through TVS are collected by government authorities consistent with the law, including [when] issuing documents or processing illegal aliens,” the CBP statement said. “The Mobile Fortify Application provides a mobile capability that uses facial comparison as well as fingerprint matching to verify the identity of individuals against specific immigration related holdings.”

404 Media quoted Cooper Quintin, senior staff technologist at the Electronic Frontier Foundation, as saying that “face surveillance in general, and this tool specifically, was already a dangerous infringement of civil liberties when in the hands of ICE agents. Putting a powerful surveillance tool like this in the hands of state and local law enforcement officials around the country will only further erode peoples’ Fourth Amendment rights, for citizens and non-citizens alike. This will further erode due process, and subject even more Americans to omnipresent surveillance and unjust detainment.”

DHS proposes new biometrics rules, definition

In related news this week, the Department of Homeland Security is proposing rule changes to expand the collection and use of biometric information. The proposed changes are open for public comment until January 2, 2026.

“The purpose of this rule is to establish a standard and provide notice that every individual filing or associated with a benefit request, other request, or collection of information is subject to the biometrics requirement, unless DHS exempts a category of requests or individuals, or a specific individual,” the proposal said. “This includes any alien apprehended, arrested or encountered by DHS in the course of performing its functions related to administering and enforcing the immigration and naturalization laws of the United States. As it relates to benefit requests, other requests and collections of information, notice of this requirement will be added in the form instructions for the relevant forms, as needed.”

The proposed rule change would expand the agency’s definition of biometrics “to include a wider range of modalities than just fingerprints, photographs and signatures.” The proposed definition of biometrics is “measurable biological (anatomical, physiological or molecular structure) or behavioral characteristics of an individual.” This includes face and eye scans, vocal signatures, and DNA.



Real humans don’t stream Drake songs 23 hours a day, rapper suing Spotify says


“Irregular” Drake streams

Proposed class action may force Spotify to pay back artists harmed by streaming fraud.

The lawsuit questions whether Drake really is the most-streamed artist on Spotify after the musician became “the first artist to nominally achieve 120 billion total streams on Spotify.” Credit: Mark Blinch / Stringer | Getty Images Sport

Spotify profits off fake Drake streams that rob other artists of perhaps hundreds of millions in revenue shares, a lawsuit filed Sunday alleged—hoping to force Spotify to reimburse every artist impacted.

The lawsuit was filed by an American rapper known as RBX, who may be best known for cameos on two of the 1990s’ biggest hip-hop records, Dr. Dre’s The Chronic and Snoop Dogg’s Doggystyle.

The problem goes beyond Drake, RBX’s lawsuit alleged. It claims Spotify ignores “billions of fraudulent streams” each month, selfishly benefiting from bot networks that artificially inflate user numbers to help Spotify attract significantly higher ad revenue.

Drake’s account is a prime example of the kinds of fake streams Spotify is inclined to overlook, RBX alleged, since Drake is “the most streamed artist of all time on the platform,” in September becoming “the first artist to nominally achieve 120 billion total streams.” Watching Drake hit this milestone, the platform chose to ignore a “substantial” amount of inauthentic activity that contributed to about 37 billion streams between January 2022 and September 2025, the lawsuit alleged.

This activity, RBX alleged, “appeared to be the work of a sprawling network of Bot Accounts” that Spotify reasonably should have detected.

Apparently, RBX noticed that while most artists see an “initial spike” in streams when a song or album is released, followed by a predictable drop-off as more time passes, the listening patterns of Drake’s fans weren’t as predictable. After releases, some of Drake’s music would see “significant and irregular uptick months” over not just ensuing months, but years, allegedly “with no reasonable explanations for those upticks other than streaming fraud.”

Most suspiciously, individual accounts would sometimes listen to Drake “exclusively” for “23 hours a day”—which seems like the sort of “staggering and irregular” streaming that Spotify should flag, the lawsuit alleged.
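For a sense of how a pattern like that could be surfaced, here is a minimal Python sketch of a per-account daily-listening check; the field names, the 20-hour threshold, and the sample data are illustrative assumptions, not details from the complaint or from Spotify's actual systems.

```python
from collections import defaultdict

# Hypothetical stream-log entries: (account_id, day, seconds_played).
# A real pipeline would aggregate these from playback events; values below are made up.
MAX_PLAUSIBLE_SECONDS_PER_DAY = 20 * 3600  # flag anything near round-the-clock listening

def flag_implausible_listeners(daily_plays):
    """Return account IDs whose total play time in a single day exceeds the threshold."""
    totals = defaultdict(int)
    for account_id, day, seconds_played in daily_plays:
        totals[(account_id, day)] += seconds_played
    return sorted({acct for (acct, _day), secs in totals.items()
                   if secs > MAX_PLAUSIBLE_SECONDS_PER_DAY})

# Example: an account logging 23 hours of playback in one day gets flagged.
sample = [
    ("acct_bot_1", "2024-06-01", 23 * 3600),
    ("acct_human", "2024-06-01", 2 * 3600),
]
print(flag_implausible_listeners(sample))  # ['acct_bot_1']
```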

It’s unclear how RBX’s legal team conducted this analysis. At this stage, they’ve told the court that claims are based on “information and belief” that discovery will reveal “there is voluminous information” to back up the rapper’s arguments.

Fake Drake streams may have robbed artists of millions

Spotify artists are supposed to get paid based on valid streams that represent their rightful portion of revenue pools. If RBX’s claims are true, based on the allegedly fake boosting of Drake’s streams alone, losses to all other artists in the revenue pool are “estimated to be in the hundreds of millions of dollars,” the complaint said. Actual damages, including punitive damages, are to be determined at trial, the lawsuit noted, and are likely much higher.

“Drake’s music streams are but one notable example of the rampant streaming fraud that Spotify has allowed to occur, across myriad artists, through negligence and/or willful blindness,” the lawsuit alleged.

If granted, the class would cover more than 100,000 rights holders who collected royalties from music hosted on the platform from “January 1, 2018, through the present.” That class could be expanded, the lawsuit noted, depending on how discovery goes. Since Spotify allegedly “concealed” the fake streams, there can be no time limitations for how far the claims could go back, the lawsuit argued. Attorney Mark Pifko of Baron & Budd, who is representing RBX, suggested in a statement provided to Ars that even one bad actor on Spotify cheats countless artists out of rightful earnings.

“Given the way Spotify pays royalty holders, allocating a limited pool of money based on each song’s proportional share of streams for a particular period, if someone cheats the system, fraudulently inflating their streams, it takes from everyone else,” Pifko said. “Not everyone who makes a living in the music business is a household name like Taylor Swift—there are thousands of songwriters, performers, and producers who earn revenue from music streaming who you’ve never heard of. These people are the backbone of the music business and this case is about them.”
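To see the dilution Pifko describes, here is a minimal worked sketch of pro-rata pool allocation; the pool size and stream counts are made-up numbers chosen only to show how inflated streams shrink everyone else's share.

```python
def allocate_pool(pool_dollars, stream_counts):
    """Split a fixed royalty pool across artists by each one's share of total streams."""
    total = sum(stream_counts.values())
    return {artist: pool_dollars * count / total for artist, count in stream_counts.items()}

pool = 1_000_000  # hypothetical revenue pool for one payout period

honest = {"artist_a": 600_000, "artist_b": 400_000}
inflated = {"artist_a": 600_000, "artist_b": 400_000, "botted_artist": 1_000_000}

print(allocate_pool(pool, honest))    # artist_a: $600k, artist_b: $400k
print(allocate_pool(pool, inflated))  # artist_a: $300k, artist_b: $200k, botted_artist: $500k
```

Because the pool is fixed, every fraudulent stream added to one artist's count reduces the payout of everyone else, which is the mechanism behind the class's damages theory.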

Spotify did not immediately respond to Ars’ request for comment. However, a spokesperson told Rolling Stone that while the platform cannot comment on pending litigation, Spotify denies allegations that it profits from fake streams.

“Spotify in no way benefits from the industry-wide challenge of artificial streaming,” Spotify’s spokesperson said. “We heavily invest in always-improving, best-in-class systems to combat it and safeguard artist payouts with strong protections like removing fake streams, withholding royalties, and charging penalties.”

Fake fans appear to move hundreds of miles between plays

Spotify has publicly discussed ramping up efforts to detect and penalize streaming fraud. But RBX alleged that instead, Spotify “deliberately” “deploys insufficient measures to address fraudulent streaming,” allowing fraud to run “rampant.”

The platform appears least capable at handling so-called “Bot Vendors” that “typically design Bots to mimic human behavior and resemble real social media or streaming accounts in order to avoid detection,” the lawsuit alleged.

These vendors rely on virtual private networks (VPNs) to obscure locations of streams, but “with reasonable diligence,” Spotify could better detect them, RBX alleged—especially when streams are coming “from areas that lack the population to support a high volume of streams.”

For example, RBX again points to Drake’s streams. During a four-day period in 2024, “at least 250,000 streams of Drake’s song ‘No Face’ originated in Turkey but were falsely geomapped through the coordinated use of VPNs to the United Kingdom,” the lawsuit alleged, based on “information and belief.”

Additionally, “a large percentage of the accounts streaming Drake’s music were geographically concentrated around areas whose populations could not support the volume of streams emanating therefrom. In some cases, massive amounts of music streams, more than a hundred million streams, originated in areas with zero residential addresses,” the lawsuit alleged.

Just looking at how Drake’s fans move should raise a red flag, RBX alleged:

“Geohash data shows that nearly 10 percent of Drake’s streams come from users whose location data showed that they traveled a minimum of 15,000 kilometers in a month, moved unreasonable locations between songs (consecutive plays separated by mere seconds but spanning thousands of kilometers), including more than 500 kilometers between songs (roughly the distance from New York City to Pittsburgh).”
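A rough sketch of the kind of plausibility check the complaint gestures at: compute the great-circle distance between consecutive plays from one account and flag jumps that imply impossible travel speeds. The coordinates, timestamps, and 900 km/h threshold below are assumptions for illustration, not figures from the filing.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two lat/lon points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def flag_impossible_jumps(plays, max_speed_kmh=900):
    """Flag consecutive plays from one account implying faster-than-airliner travel.

    plays: list of (timestamp_seconds, lat, lon) for a single account, sorted by time.
    """
    flagged = []
    for (t1, la1, lo1), (t2, la2, lo2) in zip(plays, plays[1:]):
        hours = max((t2 - t1) / 3600, 1e-9)  # avoid division by zero for same-second plays
        dist = haversine_km(la1, lo1, la2, lo2)
        if dist / hours > max_speed_kmh:
            flagged.append((t1, t2, round(dist)))
    return flagged

# Example: two plays 30 seconds apart, roughly New York City to Pittsburgh (~500 km).
account_plays = [
    (1_700_000_000, 40.71, -74.01),  # New York City
    (1_700_000_030, 40.44, -79.99),  # Pittsburgh
]
print(flag_impossible_jumps(account_plays))  # one flagged jump of roughly 500 km
```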

Spotify could cut off a lot of this activity, RBX alleged, by ending its practice of allowing free ad-supported accounts to sign up without a credit card. But supposedly it doesn’t, because “Spotify has an incentive for turning a blind eye to the blatant streaming fraud occurring on its service,” the lawsuit said.

Spotify has admitted fake streams impact revenue

RBX’s lawsuit pointed out that Spotify has told investors that, despite its best efforts, artificial streams “may contribute, from time to time, to an overstatement” in the number of reported monthly average users—a stat that helps drive ad revenue.

Spotify also somewhat tacitly acknowledges fears that the platform may be financially motivated to overlook when big artists pay for fake streams. In an FAQ, Spotify confirmed that “artificial streaming is something we take seriously at every level,” promising to withhold royalties, correct public streaming numbers, and take other steps, like possibly even removing tracks, no matter how big the artist is. Artists’ labels and distributors can also get hit with penalties if fake streams are detected, Spotify said. Spotify has defended its prevention methods as better than its rivals’ efforts.

“Our systems are working: In a case from last year, one bad actor was indicted for stealing $10 million from streaming services, only $60,000 of which came from Spotify, proving how effective we are at limiting the impact of artificial streaming on our platform,” Spotify’s spokesperson told Rolling Stone.

However, RBX alleged that Spotify is actually “one of the easiest platforms to defraud using Bots due to its negligent, lax, and/or non-existent—Bot-related security measures.” And supposedly that’s by design, since “the higher the volume of individual streams, the more Spotify could charge for ads,” RBX alleged.

“By properly detecting and/or removing fraudulent streams from its service, Spotify would lose significant advertising revenue,” the theory goes, with RBX directly accusing Spotify of concealing “both the enormity of this problem, and its detrimental financial impact to legitimate Rights Holders.”

For RBX to succeed, it will likely matter what evidence was used to analyze Drake’s streaming numbers. Last month, a lawsuit that Drake filed was dismissed, ultimately failing to convince a judge that Kendrick Lamar’s record label artificially inflated Spotify streams of “Not Like Us.” Drake’s failure to show any evidence beyond some online comments and reports (which suggested that the label was at least aware that Lamar’s manager supposedly paid a bot network to “jumpstart” the song’s streams) was deemed insufficient to keep the case alive.

Industry group slowly preparing to fight streaming fraud

A loss could smear Spotify’s public image after the platform joined an industry coalition formed in 2023 to fight streaming fraud, the Music Fights Fraud Alliance (MFFA). This coalition is often cited as a major step that Spotify and the rest of the industry are taking; however, the group’s website does not indicate the progress made in the years since.

As of this writing, the website showed that task forces were formed, as well as a partnership with a nonprofit called the National Cyber-Forensics and Training Alliance, with a goal to “work closely together to identify and disrupt streaming fraud.” The partnership was also supposed to produce “intelligence reports and other actionable information in support of fraud prevention and mitigation.”

Ars reached out to MFFA to see if there are any updates to share on the group’s work over the past two years. MFFA’s executive director, Michael Lewan, told Ars that “admittedly MFFA is still relatively nascent and growing,” “not even formally incorporated until” he joined in February of this year.

“We have accomplished a lot, and are going to continue to grow as the industry is taking fraud seriously,” Lewan said.

Lewan can’t “shed too many details on our initiatives,” he said, suggesting that MFFA is “a bit different from other trade orgs that are much more public facing.” However, several initiatives have been launched, he confirmed, which will help “improve coordination and communication amongst member companies”—which include streamers like Spotify and Amazon, as well as distributors like CD Baby and social platforms like SoundCloud and Meta apps—“to identify and disrupt suspicious activity, including sharing of data.”

“We also have efforts to raise awareness on what fraud looks like and how to mitigate against fraudulent activity,” Lewan said. “And we’re in continuous communication with other partners (in and outside the industry) on data standards, artist education, enforcement and deterrence.”


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.



Trump on why he pardoned Binance CEO: “Are you ready? I don’t know who he is.”

“My sons are involved in crypto much more than I—me,” Trump said on 60 Minutes. “I—I know very little about it, other than one thing. It’s a huge industry. And if we’re not gonna be the head of it, China, Japan, or someplace else is. So I am behind it 100 percent.”

Did Trump ever meet Zhao? Did he form his own opinion about Zhao’s conviction, or was he merely “told about it”? Trump doesn’t seem to know:

This man was treated really badly by the Biden administration. And he was given a jail term. He’s highly respected. He’s a very successful guy. They sent him to jail and they really set him up. That’s my opinion. I was told about it.

I said, “Eh, it may look bad if I do it. I have to do the right thing.” I don’t know the man at all. I don’t think I ever met him. Maybe I did. Or, you know, somebody shook my hand or something. But I don’t think I ever met him. I have no idea who he is. I was told that he was a victim, just like I was and just like many other people, of a vicious, horrible group of people in the Biden administration.

Trump: “A lot people say that he wasn’t guilty”

Pointing out that Trump’s pardon of Zhao came after Binance helped facilitate a $2 billion purchase of World Liberty’s stablecoin, O’Donnell asked Trump to address the appearance of a pay-to-play deal.

“Well, here’s the thing, I know nothing about it because I’m too busy doing the other… I can only tell you this. My sons are into it. I’m glad they are, because it’s probably a great industry, crypto. I think it’s good… I know nothing about the guy, other than I hear he was a victim of weaponization by government. When you say the government, you’re talking about the Biden government. It’s a corrupt government. Biden was the most corrupt president and he was the worst president we’ve ever had.”



Internet Archive’s legal fights are over, but its founder mourns what was lost


“We survived, but it wiped out the library,” Internet Archive’s founder says.

Internet Archive founder Brewster Kahle celebrates 1 trillion web pages on stage with staff. Credit: via the Internet Archive

Last month, the Internet Archive’s Wayback Machine archived its trillionth webpage, and the nonprofit invited its more than 1,200 library partners and 800,000 daily users to join a celebration of the moment. To honor “three decades of safeguarding the world’s online heritage,” the city of San Francisco declared October 22 to be “Internet Archive Day.” The Archive was also recently designated a federal depository library by Sen. Alex Padilla (D-Calif.), who proclaimed the organization a “perfect fit” to expand “access to federal government publications amid an increasingly digital landscape.”

The Internet Archive might sound like a thriving organization, but it only recently emerged from years of bruising copyright battles that threatened to bankrupt the beloved library project. In the end, the fight led to more than 500,000 books being removed from the Archive’s “Open Library.”

“We survived,” Internet Archive founder Brewster Kahle told Ars. “But it wiped out the Library.”

An Internet Archive spokesperson confirmed to Ars that the archive currently faces no major lawsuits and no active threats to its collections. Kahle thinks “the world became stupider” when the Open Library was gutted—but he’s moving forward with new ideas.

History of the Internet Archive

Kahle has been striving since 1996 to transform the Internet Archive into a digital Library of Alexandria—but “with a better fire protection plan,” joked Kyle Courtney, a copyright lawyer and librarian who leads the nonprofit eBook Study Group, which helps states update laws to protect libraries.

When the Wayback Machine was born in 2001 as a way to take snapshots of the web, Kahle told The New York Times that building free archives was “worth it.” He was also excited that the Wayback Machine had drawn renewed media attention to libraries.

At the time, law professor Lawrence Lessig predicted that the Internet Archive would face copyright battles, but he also believed that the Wayback Machine would change the way the public understood copyright fights.

”We finally have a clear and tangible example of what’s at stake,” Lessig told the Times. He insisted that Kahle was “defining the public domain” online, which would allow Internet users to see ”how easy and important” the Wayback Machine “would be in keeping us sane and honest about where we’ve been and where we’re going.”

Kahle suggested that IA’s legal battles weren’t with creators or publishers so much as with large media companies that he thinks aren’t “satisfied with the restriction you get from copyright.”

“They want that and more,” Kahle said, pointing to e-book licenses that expire as proof that libraries increasingly aren’t allowed to own their collections. He also suspects that such companies wanted the Wayback Machine dead—but the Wayback Machine has survived and proved itself to be a unique and useful resource.

The Internet Archive also began archiving—and then lending—e-books. For a decade, the Archive had loaned out individual e-books to one user at a time without triggering any lawsuits. That changed when IA decided to temporarily lift the cap on loans from its Open Library project to create a “National Emergency Library” as libraries across the world shut down during the early days of the COVID-19 pandemic. The project eventually grew to 1.4 million titles.

But lifting the lending restrictions also brought more scrutiny from copyright holders, who eventually sued the Archive. Litigation went on for years. In 2024, IA lost its final appeal in a lawsuit brought by book publishers over the Archive’s Open Library project, which used a novel e-book lending model to bypass publishers’ licensing fees and checkout limitations. Damages could have topped $400 million, but publishers ultimately announced a “confidential agreement on a monetary payment” that did not bankrupt the Archive.

That wasn’t the end of the litigation, though. More recently, the Archive settled another suit over its Great 78 Project after music publishers sought damages of up to $700 million. A settlement in that case, reached last month, was similarly confidential. In both cases, IA’s experts challenged publishers’ estimates of their losses as massively inflated.

For Internet Archive fans, a group that includes longtime Internet users, researchers, students, historians, lawyers, and the US government, the end of the lawsuits brought a sigh of relief. The Archive can continue—but it can’t run one of its major programs in the same way.

What the Internet Archive lost

To Kahle, the suits have been an immense setback to IA’s mission.

Publishers had argued that the Open Library’s lending harmed the e-book market, but IA says its vision for the project was never to undercut e-book sales (harm it denies the library caused) but to make it easier for researchers to reference e-books by allowing Wikipedia to link to book scans. Wikipedia has long been one of the most visited websites in the world, and the Archive wanted to deepen its authority as a research tool.

“One of the real purposes of libraries is not just access to information by borrowing a book that you might buy in a bookstore,” Kahle said. “In fact, that’s actually the minority. Usually, you’re comparing and contrasting things. You’re quoting. You’re checking. You’re standing on the shoulders of giants.”

Meredith Rose, senior policy counsel for Public Knowledge, told Ars that the Internet Archive’s Wikipedia enhancements could have served to surface information that’s often buried in books, giving researchers a streamlined path to source accurate information online.

But Kahle said the lawsuits against IA showed that “massive multibillion-dollar media conglomerates” have their own interests in controlling the flow of information. “That’s what they really succeeded at—to make sure that Wikipedia readers don’t get access to books,” Kahle said.

At the heart of the Open Library lawsuit was publishers’ market for e-book licenses, which libraries complain provide only temporary access for a limited number of patrons and cost substantially more than the acquisition of physical books. Some states are crafting laws to restrict e-book licensing, with the aim of preserving library functions.

“We don’t want libraries to become Hulu or Netflix,” said Courtney of the eBook Study Group, describing services that warn patrons, “last day to check out this book, August 31st, then it goes away forever.”

He, like Kahle, is concerned that libraries will become unable to fulfill their longtime role—preserving culture and providing equal access to knowledge. Remote access, Courtney noted, benefits people who can’t easily get to libraries, like the elderly, people with disabilities, rural communities, and foreign-deployed troops.

Before the Internet Archive cases, libraries had won some important legal fights, according to Brandon Butler, a copyright lawyer and executive director of Re:Create, a coalition of “libraries, civil libertarians, online rights advocates, start-ups, consumers, and technology companies” that is “dedicated to balanced copyright and a free and open Internet.”

But the Internet Archive’s e-book fight didn’t set back libraries, Butler said, because the loss didn’t reverse any prior court wins. Instead, IA had been “exploring another frontier” beyond the Google Books ruling, which deemed Google’s searchable book excerpts a transformative fair use, hoping that linking to books from Wikipedia would also be deemed fair use. But IA “hit the edge” of what courts would allow, Butler said.

IA basically asked, “Could fair use go this much farther?” Butler said. “And the courts said, ‘No, this is as far as you go.’”

To Kahle, the cards feel stacked against the Internet Archive, with courts, lawmakers, and lobbyists backing corporations seeking “hyper levels of control.” He said IA has always served as a research library—an online destination where people can cross-reference texts and verify facts, just like perusing books at a local library.

“We’re just trying to be a library,” Kahle said. “A library in a traditional sense. And it’s getting hard.”

Fears of big fines may delay digitization projects

President Donald Trump’s cuts to the federal Institute of Museum and Library Services have put America’s public libraries at risk, and reduced funding will continue to challenge libraries in the coming years, the American Library Association has warned. Butler has also suggested that under-resourced libraries may delay digitization efforts for preservation purposes if they worry that publishers may threaten costly litigation.

He told Ars he thinks courts are getting it right on recent fair use rulings. But he noted that libraries have fewer resources for legal fights because copyright law “has this provision that says, well, if you’re a copyright holder, you really don’t have to prove that you suffered any harm at all.”

“You can just elect [to receive] a massive payout based purely on the fact that you hold a copyright and somebody infringed,” Butler said. “And that’s really unique. Almost no other country in the world has that sort of a system.”

So while companies like AI firms may be able to afford legal fights with rights holders, libraries must be careful, even when they launch projects that seem “completely harmless and innocuous,” Butler said. Consider the Internet Archive’s Great 78 Project, which digitized 400,000 old shellac records, known as 78s, that were originally pressed from 1898 to the 1950s.

“The idea that somebody’s going to stream a 78 of an Elvis song instead of firing it up on their $10-a-month Spotify subscription is silly, right?” Butler said. “It doesn’t pass the laugh test, but given the scale of the project—and multiply that by the statutory damages—and that makes this an extremely dangerous project all of a sudden.”
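As a back-of-the-envelope illustration of that multiplication: statutory damages under 17 U.S.C. § 504(c) run from $750 to $30,000 per infringed work, and up to $150,000 per work for willful infringement, so even modest work counts compound quickly. The counts in the sketch below are hypothetical, not the number of works actually at issue in any complaint.

```python
# How per-work statutory damages compound. Awards under 17 U.S.C. § 504(c) range from
# $750 to $30,000 per infringed work, up to $150,000 per work for willful infringement;
# what a court would actually award depends on registration status and the facts, so
# these are order-of-magnitude bounds only. Work counts are purely illustrative.
def exposure(works, per_work_low=750, per_work_willful=150_000):
    return works * per_work_low, works * per_work_willful

for works in (1_000, 10_000, 100_000):
    low, high = exposure(works)
    print(f"{works:>7,} works: ${low:>13,} to ${high:>17,}")
# Even 1,000 works already spans $750,000 to $150,000,000 in potential statutory damages.
```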

Butler suggested that statutory damages could disrupt the balance that ensures the public has access to knowledge, creators get paid, and human creativity thrives, as AI advances and libraries’ growth potentially stalls.

“It sets the risk so high that it may force deals in situations where it would be better if people relied on fair use. Or it may scare people from trying new things because of the stakes of a copyright lawsuit,” Butler said.

Courtney, who co-wrote a whitepaper detailing the legal basis for different forms of “controlled digital lending” like the Open Library project uses, suggested that Kahle may be the person who’s best prepared to push the envelope on copyright.

When asked how the Internet Archive managed to avoid financial ruin, Courtney said it survived “only because their leader” is “very smart and capable.” Of all the “flavors” of controlled digital lending (CDL) that his paper outlined, Kahle’s methodology for the Open Library Project was the most “revolutionary,” Courtney said.

Importantly, IA’s loss did not doom other kinds of CDL that other archives use, he noted, nor did it prevent libraries from trying new things.

“Fair use is a case-by-case determination” that will be made as urgent preservation needs arise, Courtney told Ars, and “libraries have a ton of stuff that aren’t going to make the jump to digital unless we digitize them. No one will have access to them.”

What’s next for the Internet Archive?

The lawsuits haven’t dampened Kahle’s resolve to expand IA’s digitization efforts, though. Moving forward, the group will be growing a project called Democracy’s Library, which is “a free, open, online compendium of government research and publications from around the world” that will be conveniently linked in Wikipedia articles to help researchers discover them.

The Archive is also collecting as many physical materials as possible to help preserve knowledge, even as “the library system is largely contracting,” Kahle said. He noted that libraries historically tend to grow in societies that prioritize education and decline in societies where power is being concentrated, and he’s worried about where the US is headed. That makes it hard to predict if IA—or any library project—will be supported in the long term.

With governments globally partnering with the biggest tech companies to try to win the artificial intelligence race, critics have warned of threats to US democracy, while the White House has escalated its attack on libraries, universities, and science over the past year.

Meanwhile, AI firms face dozens of lawsuits from creators and publishers, which Kahle thinks only the biggest tech companies can likely afford to outlast. The momentum behind AI risks giving corporations even more control over information, Kahle said, and it’s uncertain if archives dedicated to preserving the public memory will survive attacks from multiple fronts.

“Societies that are [growing] are the ones that need to educate people” and therefore promote libraries, Kahle said. But when societies are “going down,” such as in times of war, conflict, and social upheaval, libraries “tend to get destroyed by the powerful. It used to be king and church, and it’s now corporations and governments.” (He recommended The Library: A Fragile History as a must-read to understand the challenges libraries have always faced.)

Kahle told Ars he’s not “black and white” on AI, and he even sees some potential for AI to enhance library services.

He’s more concerned that libraries in the US are losing support and may soon cease to perform classic functions that have always benefited civilizations—like buying books from small publishers and local authors, supporting intellectual endeavors, and partnering with other libraries to expand access to diverse collections.

To prevent these cultural and intellectual losses, he plans to position IA as a refuge for displaced collections, with hopes to digitize as much as possible while defending the early dream that the Internet could equalize access to information and supercharge progress.

“We want everyone [to be] a reader,” Kahle said, and that means “we want lots of publishers, we want lots of vendors, booksellers, lots of libraries.”

But, he asked, “Are we going that way? No.”

To turn things around, Kahle suggested that copyright laws be “re-architected” to ensure “we have a game with many winners”—where authors, publishers, and booksellers get paid, library missions are respected, and progress thrives. Then society can figure out “what do we do with this new set of AI tools” to keep the engine of human creativity humming.





YouTube denies AI was involved with odd removals of tech tutorials


YouTubers suspect AI is bizarrely removing popular video explainers.

This week, tech content creators began to suspect that AI was making it harder to share some of the most highly sought-after tech tutorials on YouTube, but YouTube now denies that the odd removals were due to automation.

Creators grew alarmed when educational videos that YouTube had allowed for years were suddenly flagged as “dangerous” or “harmful,” with seemingly no way to trigger human review to overturn the removals. AI appeared to be running the show, with creators’ appeals denied faster than a human could possibly review them.

Late Friday, a YouTube spokesperson confirmed that videos flagged by Ars have been reinstated, promising that YouTube will take steps to ensure that similar content isn’t removed in the future. But, to creators, it remains unclear why the videos got taken down, as YouTube claimed that both initial enforcement decisions and decisions on appeals were not the result of an automation issue.

Shocked creators were stuck speculating

Rich White, a computer technician who runs an account called CyberCPU Tech, had two videos removed that demonstrated workarounds to install Windows 11 on unsupported hardware.

These videos are popular, White told Ars, with people looking to bypass Microsoft account requirements each time a new build is released. For tech content creators like White, “these are bread and butter videos,” dependably yielding “extremely high views,” he said.

Because there’s such high demand, many tech content creators’ channels are filled with these kinds of videos. White’s account has “countless” examples, he said, and in the past, YouTube even featured his most popular video in the genre on a trending list.

To White and others, it’s unclear exactly what has changed on YouTube that triggered removals of this type of content.

YouTube only seemed to be removing recently posted content, White told Ars. However, if the takedowns ever impacted older content, entire channels documenting years of tech tutorials risked disappearing in “the blink of an eye,” another YouTuber behind a tech tips account called Britec09 warned after one of his videos was removed.

The stakes appeared high for everyone, White warned, in a video titled “YouTube Tech Channels in Danger!”

White had already censored content that he planned to post on his channel, fearing it wouldn’t be worth the risk of potentially losing his account, which began in 2020 as a side hustle but has since become his primary source of income. If he continues to change the content he posts to avoid YouTube penalties, it could hurt his account’s reach and monetization. Britec told Ars that he paused a sponsorship due to the uncertainty that he said has already hurt his channel and caused a “great loss of income.”

YouTube’s policies are strict, with the platform known to swiftly remove accounts that receive three strikes for violating community guidelines within 90 days. But, curiously, White had not received any strikes following his content removals. Although Britec reported that his account had received a strike following his video’s removal, White told Ars that YouTube so far had only given him two warnings, so his account is not yet at risk of a ban.

Creators weren’t sure why YouTube might deem this content harmful, so they tossed around some theories. It seemed possible, White suggested in his video, that AI was detecting this content as “piracy,” but that shouldn’t be the case, he claimed, since his guides require users to have a valid license to install Windows 11. He also thinks it’s unlikely that Microsoft prompted the takedowns, suggesting tech content creators have a “love-hate relationship” with the tech company.

“They don’t like what we’re doing, but I don’t think they’re going to get rid of it,” White told Ars, suggesting that Microsoft “could stop us in our tracks” if it were motivated to end workarounds. But Microsoft doesn’t do that, White said, perhaps because it benefits from popular tutorials that attract swarms of Windows 11 users who otherwise may not use “their flagship operating system” if they can’t bypass Microsoft account requirements.

Those users could become loyal to Microsoft, White said. And eventually, some users may even “get tired of bypassing the Microsoft account requirements, or Microsoft will add a new feature that they’ll happily get the account for, and they’ll relent and start using a Microsoft account,” White suggested in his video. “At least some people will, not me.”

Microsoft declined Ars’ request to comment.

To White, it seemed possible that YouTube was leaning on AI to catch more violations but perhaps recognized the risk of over-moderation and, therefore, wasn’t allowing AI to issue strikes on his account.

That was just a “theory” that he and other creators came up with but couldn’t confirm, since YouTube’s creator-support chatbot itself seemed “suspiciously AI-driven,” apparently auto-responding even when a “supervisor” was connected, White said in his video.

Absent more clarity from YouTube, creators who post tutorials, tech tips, and computer repair videos were spooked. Their biggest fear was that unexpected changes to automated content moderation could knock them off YouTube for posting videos that seem ordinary and commonplace in tech circles, White and Britec said.

“We are not even sure what we can make videos on,” White said. “Everything’s a theory right now because we don’t have anything solid from YouTube.”

YouTube recommends making the content it’s removing

White’s channel gained popularity after YouTube highlighted an early trending video that he made, showing a workaround to install Windows 11 on unsupported hardware. Following that video, his channel’s views spiked, and then he gradually built up his subscriber base to around 330,000.

In the past, White’s videos in that category had been flagged as violative, but human review got them quickly reinstated.

“They were striked for the same reason, but at that time, I guess the AI revolution hadn’t taken over,” White said. “So it was relatively easy to talk to a real person. And by talking to a real person, they were like, ‘Yeah, this is stupid.’ And they brought the videos back.”

Now, YouTube suggests that human review is causing the removals, which likely doesn’t completely ease creators’ fears about arbitrary takedowns.

Britec’s video was also flagged as dangerous or harmful. He has managed the account since 2009, growing it to nearly 900,000 subscribers, and he’s worried he risks losing “years of hard work,” he said in his video.

Britec told Ars that “it’s very confusing” for panicked tech content creators trying to understand what content is permissible. It’s particularly frustrating, he noted in his video, that YouTube’s creator tool inspiring “ideas” for posts seemed to contradict the mods’ content warnings and continued to recommend that creators make content on specific topics like workarounds to install Windows 11 on unsupported hardware.

Screenshot from Britec09’s YouTube video, showing YouTube prompting creators to make content that could get their channels removed. Credit: via Britec09

“This tool was to give you ideas for your next video,” Britec said. “And you can see right here, it’s telling you to create content on these topics. And if you did this, I can guarantee you your channel will get a strike.”

From there, creators hit what White described as a “brick wall,” with one of his appeals denied within one minute, which felt like it must be an automated decision. As Britec explained, “You will appeal, and your appeal will be rejected instantly. You will not be speaking to a human being. You’ll be speaking to a bot or AI. The bot will be giving you automated responses.”

YouTube insisted that the decisions weren’t automated, even when an appeal was denied within one minute.

White told Ars that it’s easy for creators to be discouraged and censor their channels rather than fight with the AI. After wasting “an hour and a half trying to reason with an AI about why I didn’t violate the community guidelines” once his first appeal was quickly denied, he “didn’t even bother using the chat function” after the second appeal was denied even faster, White confirmed in his video.

“I simply wasn’t going to do that again,” White said.

All week, the panic spread, reaching fans who follow tech content creators. On Reddit, people recommended saving tutorials lest they risk YouTube taking them down.

“I’ve had people come out and say, ‘This can’t be true. I rely on this every time,’” White told Ars.





FCC to rescind ruling that said ISPs are required to secure their networks

The Federal Communications Commission will vote in November to repeal a ruling that requires telecom providers to secure their networks, acting on a request from the biggest lobby groups representing Internet providers.

FCC Chairman Brendan Carr said the ruling, adopted in January just before Republicans gained majority control of the commission, “exceeded the agency’s authority and did not present an effective or agile response to the relevant cybersecurity threats.” Carr said the vote scheduled for November 20 comes after “extensive FCC engagement with carriers” who have taken “substantial steps… to strengthen their cybersecurity defenses.”

The FCC’s January 2025 declaratory ruling came in response to attacks by China, including the Salt Typhoon infiltration of major telecom providers such as Verizon and AT&T. The Biden-era FCC found that the Communications Assistance for Law Enforcement Act (CALEA), a 1994 law, “affirmatively requires telecommunications carriers to secure their networks from unlawful access or interception of communications.”

“The Commission has previously found that section 105 of CALEA creates an affirmative obligation for a telecommunications carrier to avoid the risk that suppliers of untrusted equipment will ‘illegally activate interceptions or other forms of surveillance within the carrier’s switching premises without its knowledge,’” the January order said. “With this Declaratory Ruling, we clarify that telecommunications carriers’ duties under section 105 of CALEA extend not only to the equipment they choose to use in their networks, but also to how they manage their networks.”

ISPs get what they want

The declaratory ruling was paired with a Notice of Proposed Rulemaking that would have led to stricter rules requiring specific steps to secure networks against unauthorized interception. Carr voted against the decision at the time.

Although the declaratory ruling didn’t yet have specific rules to go along with it, the FCC at the time said it had some teeth. “Even absent rules adopted by the Commission, such as those proposed below, we believe that telecommunications carriers would be unlikely to satisfy their statutory obligations under section 105 without adopting certain basic cybersecurity practices for their communications systems and services,” the January order said. “For example, basic cybersecurity hygiene practices such as implementing role-based access controls, changing default passwords, requiring minimum password strength, and adopting multifactor authentication are necessary for any sensitive computer system. Furthermore, a failure to patch known vulnerabilities or to employ best practices that are known to be necessary in response to identified exploits would appear to fall short of fulfilling this statutory obligation.”



AT&T sues ad industry watchdog instead of pulling ads that slam T-Mobile


Self-regulation breakdown

National Advertising Division said AT&T ad and press release broke program rule.


AT&T yesterday sued the advertising industry’s official watchdog over the group’s demand that AT&T stop using its rulings for advertising and promotional purposes.

As previously reported, BBB National Programs’ National Advertising Division (NAD) found that AT&T violated a rule “by issuing a video advertisement and press release that use the NAD process and its findings for promotional purposes,” and sent a cease-and-desist letter to the carrier. The NAD operates the US advertising industry’s system of self-regulation, which is designed to handle complaints that advertisers file against each other and minimize government regulation of false and misleading claims.

While it’s clear that both AT&T and T-Mobile have a history of misleading ad campaigns, AT&T portrays itself as a paragon of honesty in new ads calling T-Mobile “the master of breaking promises.” An AT&T press release about the ad campaign said the NAD “asked T-Mobile to correct their marketing claims 16 times over the last four years,” and an AT&T commercial said T-Mobile has faced more challenges for deceptive ads from competitors than all other telecom providers in that time.

While the NAD describes AT&T’s actions as a clear-cut violation of rules that advertisers agree to in the self-regulatory process, AT&T disputed the accusation in a lawsuit filed in US District Court for the Northern District of Texas. “We stand by our campaign to shine a light on deceptive advertising from our competitors and oppose demands to silence the truth,” AT&T said in a press release.

AT&T’s lawsuit asked the court for a declaration, stating “that it has not violated NAD’s procedures” and that “NAD has no legal basis to enforce its demand for censorship.” The lawsuit complained that AT&T hasn’t been able to run its advertisements widely because “NAD’s inflammatory and baseless accusations have now intimidated multiple TV networks into pulling AT&T’s advertisement.”

AT&T claims rule no longer applies

AT&T’s claim that it didn’t violate an NAD rule hinges partly on when its press release was issued. The carrier claims the rule against referencing NAD decisions only applies for a short period of time after each NAD ruling.

“NAD now takes the remarkable position that any former participant in an NAD proceeding is forever barred from truthfully referencing NAD’s own public findings about a competitor’s deceptive advertising,” AT&T said. The lawsuit argued that “if NAD’s procedures were ever binding on AT&T, their binding effect ceased at the conclusion of the proceeding or a reasonable time thereafter.”

AT&T also slammed the NAD for failing to rein in T-Mobile’s deceptive ads. The group’s slow process let T-Mobile air deceptive advertisements without meaningful consequences, and the “NAD has repeatedly failed to refer continued violations to the FTC,” AT&T said.

“Over the past several years, NAD has repeatedly deemed T-Mobile’s ads to be misleading, false, or unsubstantiated,” AT&T said. “But over and over, T-Mobile has gamed the system to avoid timely redressing its behavior. NAD’s process is often slow, and T-Mobile knows it can make that process even slower by asking for extensions and delaying fixes.”

We’ve reported extensively on both carriers’ history of misleading advertisements over the years. That includes T-Mobile promising never to raise prices on certain plans and then raising them anyway. AT&T used to advertise 4G LTE service as “5GE,” and was rebuked for an ad that falsely claimed the carrier was already offering cellular coverage from space. AT&T and T-Mobile have both gotten in trouble for misleading promises of unlimited data.

AT&T says vague ad didn’t violate rule

AT&T’s lawsuit alleged that the NAD press release “intentionally impl[ied] that AT&T mischaracterized NAD’s prior decisions about T-Mobile’s deceptive advertising.” However, the NAD’s public stance is that AT&T violated the rule by using NAD decisions for promotional purposes, not by mischaracterizing the decisions.

NAD procedures state that companies participating in the system agree “not to mischaracterize any decision, abstract, or press release issued or use and/or disseminate such decision, abstract or press release for advertising and/or promotional purposes.” The NAD announcement didn’t make any specific allegations of AT&T mischaracterizing its decisions but said that AT&T violated the rules “by issuing a video advertisement and press release that use the NAD process and its findings for promotional purposes.”

The NAD said AT&T committed a “direct violation” of the rules by running an ad and issuing a press release “making representations regarding the alleged results of a competitor’s participation in BBB National Program’s advertising industry self-regulatory process.” The “alleged results” phrase may be why AT&T is claiming the NAD accused it of mischaracterizing decisions. There could also be more specific allegations in the cease-and-desist letter, which wasn’t made public.

AT&T claims its TV ads about T-Mobile don’t violate the rule because they only refer to “challenges” to T-Mobile advertising and “do not reference any decision, abstract, or press release.”

AT&T quibbles over rule meaning

AT&T further argues that a press release can’t violate the prohibition against using NAD decisions “for advertising and/or promotional purposes.” While press releases are clearly promotional in nature, AT&T says that part of the NAD rules doesn’t apply to press releases issued by advertisers like itself. Specifically, AT&T said that “the permissibility of press releases is not governed by Section 2.1(I)(2)(b), which applies to uses ‘for advertising and/or promotional purposes.’”

But the NAD procedures also bar participants in the process from issuing certain kinds of press releases. AT&T describes the rule about press releases as being in a different section than the rule about advertising and promotional purposes, but it’s actually all part of the same sentence. The rule says, “By participating in an NAD or NARB proceeding, the parties agree: (a) not to issue a press release regarding any decisions issued; and/or (b) not to mischaracterize any decision, abstract or press release issued or use and/or disseminate such decision, abstract or press release for advertising and/or promotional purposes.”

AT&T argues that the rule only bars press releases at the time of each NAD decision. The rule’s “meaning is clear in context: When NAD or NARB [National Advertising Review Board] issues a decision, no party is allowed to issue a press release to announce that decision,” AT&T said. “Instead, NAD issues its own press release to announce the decision. AT&T did not issue a press release to announce any decision, and indeed its advertisements (and press release announcing its advertising campaign) do not mention any particular NAD decision. In fact, AT&T’s press release does not use the word ‘decision’ at all.”

AT&T said that because it only made a short reference to NAD decisions, “AT&T’s press release about its new advertising campaign is therefore not a press release about an NAD decision as contemplated by Section 2.1(I)(2)(a).” AT&T also said it’s not a violation because the press release simply stated the number of rulings against T-Mobile and did not specifically cite any of those 16 decisions.

“AT&T’s press release does not include, attach, copy, or even cite any specific decision, abstract, or press release either in part or in whole,” AT&T’s lawsuit said. AT&T further said the NAD rule doesn’t apply to any proceeding AT&T wasn’t involved in, and that “AT&T did not initiate several of the proceedings against T-Mobile included in the one-sentence reference.”

We contacted the NAD about AT&T’s lawsuit, but the group declined to comment.


Jon is a Senior IT Reporter for Ars Technica. He covers the telecom industry, Federal Communications Commission rulemakings, broadband consumer affairs, court cases, and government regulation of the tech industry.



Man finally released a month after absurd arrest for reposting Trump meme


Bodycam footage undermined sheriff’s “true threat” justification for the arrest.

The saga of a 61-year-old man jailed for more than a month after reposting a Facebook meme has ended, but free speech advocates are still reeling in its wake.

On Wednesday, Larry Bushart was released from Perry County Jail, where he had spent weeks unable to make bail, which a judge set at $2 million. Prosecutors have not explained why the charges against him were dropped, according to The Intercept, which has been tracking the case closely. However, officials faced mounting pressure following media coverage and a social media campaign called “Free Larry Bushart,” which stoked widespread concern over suspected police censorship of a US citizen over his political views.

How a meme landed a man in jail

Bushart’s arrest came after he decided to troll a message thread about a Charlie Kirk vigil in a Facebook group called “What’s Happening in Perry County, TN.” He posted a meme showing a picture of Donald Trump saying, “We should get over it.” The meme included a caption that said “Donald Trump, on the Perry High School mass shooting, one day after,” and Bushart included a comment with his post that said, “This seems relevant today ….”

His meme caught the eye of the Perry County sheriff, Nick Weems, who had mourned Kirk’s passing on his own Facebook page, The Intercept noted.

Supposedly, Weems’ decision to go after Bushart wasn’t due to his political views but to receiving messages from parents who misread Bushart’s post as possibly threatening an attack on the local Perry County High School. To pressure Bushart to remove the post, Weems contacted the Lexington Police Department to track him down. That led to the meme poster’s arrest and transfer to Perry County Jail.

Weems justified the arrest by claiming that Bushart’s meme represented a true threat, since “investigators believe Bushart was fully aware of the fear his post would cause and intentionally sought to create hysteria within the community,” The Tennessean reported. But “there was no evidence of any hysteria,” The Intercept reported, leading media outlets to pick apart Weems’ story.

Perhaps most suspicious were Weems’ claims that Bushart had callously refused to take down his post after cops told him that people were scared that he was threatening a school shooting.

The Intercept and Nashville’s CBS affiliate, NewsChannel 5, secured bodycam footage from the Lexington cop that undermined Weems’ narrative. The footage clearly showed the cop did not understand why the Perry County sheriff had taken issue with Bushart’s Facebook post.

“So, I’m just going to be completely honest with you,” the cop told Bushart. “I have really no idea what they are talking about. He had just called me and said there was some concerning posts that were made….”

Bushart clarified that it was likely his Facebook posts, laughing at the notion that someone had called the cops to report his meme. The Lexington officer told Bushart that he wasn’t sure “exactly what” Facebook post “they are referring to you,” but “they said that something was insinuating violence.”

“No, it wasn’t,” Bushart responded, confirming that “I’m not going to take it down.”

The cop, declining to even glance at the Facebook post, told Bushart, “I don’t care. This ain’t got nothing to do with me.” But the officer’s indifference didn’t stop Lexington police from taking Bushart into custody, booking him, and sending him to Weems’ county, where Bushart was charged “under a state law passed in July 2024 that makes it a Class E felony to make threats against schools,” The Tennessean reported.

“Just to clarify, this is what they charged you with,” a Perry County jail officer told Bushart—which was recorded on footage reviewed by The Intercept—“Threatening Mass Violence at a School.”

“At a school?” Bushart asked.

“I ain’t got a clue,” the officer responded, laughing. “I just gotta do what I have to do.”

“I’ve been in Facebook jail, but now I’m really in it,” Bushart said, joining him in laughing.

Cops knew the meme wasn’t a threat

Lexington police told The Intercept that Weems had lied when he told local news outlets that the two departments had “coordinated” to offer Bushart a chance to delete the post prior to his arrest. Confronted with the bodycam footage, Weems denied lying, claiming that his investigator’s report must have been inaccurate, NewsChannel 5 reported.

Weems later admitted to NewsChannel 5 that “investigators knew that the meme was not about Perry County High School” and sought Bushart’s arrest anyway, supposedly hoping to quell “the fears of people in the community who misinterpreted it.” That’s as close as Weems has come to admitting that his intention was to censor the post.

The Perry County Sheriff’s Office did not respond to Ars’ request to comment.

According to The Tennessean, the law that landed Bushart behind bars has been widely criticized by First Amendment advocates. Beth Cruz, a lecturer in public interest law at Vanderbilt University Law School, told The Tennessean that “518 children in Tennessee were arrested under the current threats of mass violence law, including 71 children between the ages of 7 and 11” last year alone.

The law seems to contradict Supreme Court precedent set in Counterman v. Colorado (2023), which established a high bar for what counts as a “true threat,” recognizing that “it is easy for speech made in one context to inadvertently reach a larger audience” that misinterprets the message.

“The risk of overcriminalizing upsetting or frightening speech has only been increased by the Internet,” SCOTUS ruled. Justices warned then that “without sufficient protection for unintentionally threatening speech, a high school student who is still learning norms around appropriate language could easily go to prison.” They also feared that “someone may post an enraged comment under a news story about a controversial topic” that potentially gets them in trouble for speaking out “in the heat of the moment.”

“In a Nation that has never been timid about its opinions, political or otherwise, this is commonplace,” SCOTUS noted.

Dissenting justices Amy Coney Barrett and Clarence Thomas, however, thought the ruling went too far to protect speech. They felt that so long as a “reasonable person would regard the statement as a threat of violence,” that supposedly objective standard could be enough to criminalize speech like Bushart’s.

Adam Steinbaugh, an attorney with the Foundation for Individual Rights and Expression, told The Intercept that “people’s performative overreaction is not a sufficient basis to limit someone else’s free speech rights.”

“A free country does not dispatch police in the dead of night to pull people from their homes because a sheriff objects to their social media posts,” Steinbaugh said.

Man resumes Facebook posting upon release

Chris Eargle, who started the “Free Larry Bushart” Facebook group, told The Intercept that Weems’ story justifying the arrest made no sense. Instead, it seemed like the sheriff’s actions were politically motivated, Eargle suggested, intended to silence people like Bushart with a show of force demonstrating that “if you say something I don’t like, and you don’t take it down, now you’re going to be in trouble.”

“I mean, it’s just control over people’s speech,” Eargle said.

The Perry County Sheriff’s Office took down its Facebook page after the controversy, and it remains down as of this writing.

But Weems logged onto his Facebook page on Wednesday before Bushart’s charges were dropped, The Intercept reported. The sheriff stuck by his claim that people had interpreted the meme as a threat to a local school, insisting that he’s “100 percent for protecting the First Amendment. However, freedom of speech does not allow anyone to put someone else in fear of their well being.”

The arrest turned Bushart, who The Intercept noted retired last year after decades in law enforcement, into a free speech icon, but it also upended his life. He lost his job as a medical driver, and he missed the birth of his granddaughter.

Leaving jail, Bushart said he was “very happy to be going home.” He thanked all his supporters who ensured that he would not have to wait until December 4 to petition for his bail to be reduced—a delay which the prosecution had sought shortly before abruptly dismissing the charges, The Intercept reported.

Back at his computer, Bushart logged onto Facebook, posting first about his grandkid, then resuming his political trolling.

Still, Eargle claimed that many others now fear posting their political opinions after Bushart’s arrest. Bushart’s son, Taylor, told Nashville news outlet WKRN that it has been a “trying time” for his family, while noting that his father’s release “doesn’t change what has happened to him” or threats to speech that could persist under Tennessee’s law.

“I can’t even begin to express how thankful we are for the outpour of support he has received,” Taylor said. “If we don’t fight to protect and preserve our rights today, just as we’ve now seen, they may be gone tomorrow.”


Man finally released a month after absurd arrest for reposting Trump meme Read More »

trump-admin-demands-states-exempt-isps-from-net-neutrality-and-price-laws

Trump admin demands states exempt ISPs from net neutrality and price laws


US says net neutrality is price regulation and is banned in $42B grant program.


The Trump administration is refusing to give broadband-deployment grants to states that enforce net neutrality rules or price regulations, a Commerce Department official said.

The administration claims that net neutrality rules are a form of rate regulation and thus not allowed under the US law that created the $42 billion Broadband Equity, Access, and Deployment (BEAD) program. Commerce Department official Arielle Roth said that any state accepting BEAD funds must exempt Internet service providers from net neutrality and price regulations in all parts of the state, not only in areas where the ISP is given funds to deploy broadband service.

States could object to the NTIA decisions and sue the US government. But even a successful lawsuit could take years and leave unserved homes without broadband for the foreseeable future.

Roth, an assistant secretary who leads the National Telecommunications and Information Administration (NTIA), said in a speech at the conservative Hudson Institute on Tuesday:

Consistent with the law, which explicitly prohibits regulating the rates charged for broadband service, NTIA is making clear that states cannot impose rate regulation on the BEAD program. To protect the BEAD investment, we are clarifying that BEAD providers must be protected throughout their service area in a state, while the provider is still within its BEAD period of performance. Specifically, any state receiving BEAD funds must exempt BEAD providers throughout their state footprint from broadband-specific economic regulations, such as price regulation and net neutrality.

Trouble for California and New York

The US law that created BEAD requires Internet providers that receive federal funds to offer at least one “low-cost broadband service option for eligible subscribers,” but also says the NTIA may not regulate broadband prices. “Nothing in this title may be construed to authorize the Assistant Secretary or the National Telecommunications and Information Administration to regulate the rates charged for broadband service,” the law says.

The NTIA is interpreting this law in an expansive way by categorizing net neutrality rules as impermissible rate regulation and by demanding statewide exemptions from state laws for ISPs that obtain grant money.

This would be trouble for California, which has a net neutrality law that’s nearly identical to FCC net neutrality rules repealed during President Trump’s first term. California beat court challenges from Internet providers in cases that upheld its authority to regulate broadband service.

The NTIA stance is also trouble for New York, which has a law requiring ISPs to offer $15 or $20 broadband plans to people with low incomes. New York defeated industry challenges to its law, with the US Supreme Court declining opportunities to overturn a federal appeals court ruling in favor of the state.

But while broadband lobby groups weren’t able to block these state regulations with lawsuits, their allies in the Trump administration want to accomplish the goal by blocking grants that could be used to deploy broadband networks to homes and businesses that are unserved or underserved.

This stance has already had an impact: a California lawmaker dropped a proposal, modeled on New York’s law, to require $15 monthly plans. As we wrote in July, Assemblymember Tasha Boerner said she pulled the bill because the Trump administration said that regulating prices would prevent California from getting its $1.86 billion share of BEAD. But now, California could lose access to the funds anyway due to the NTIA’s stance on net neutrality rules.

We contacted the California and New York governors’ offices about Roth’s comments and will update this article if we get any response.

Roth: State laws “threaten financial viability” of projects

Republicans have long argued that net neutrality is rate regulation, even though the rules don’t directly regulate prices that ISPs charge consumers. California’s law prohibits ISPs from blocking or throttling lawful traffic, prohibits fees charged to websites or online services to deliver or prioritize their traffic, bans paid data cap exemptions (also known as “zero-rating”), and says that ISPs may not attempt to evade net neutrality protections by slowing down traffic at network interconnection points.

Roth claimed that state broadband laws, even if applied only in non-grant areas, would degrade the service offered by ISPs in locations funded by grants. She said:

Unfortunately, some states have adopted or are considering adopting laws that specifically target broadband providers with rate regulation or state-level net neutrality mandates that threaten the financial viability of BEAD-funded projects and undermine Congress’s goal of connecting unserved communities.

Rate regulation drives up operating costs and scares off investment, especially in high-cost areas where every dollar counts. State-level net neutrality rules—itself a form of rate regulation—create a patchwork of conflicting regulations that raise compliance costs and deter investment.

These burdens don’t just hurt BEAD providers; they hurt the very households BEAD is meant to connect by reducing capital available for the hardest-to-reach communities. In some cases, they can divert investment away from BEAD areas altogether, as providers redirect resources to their lower-cost, lower-risk, non-BEAD markets.

State broadband laws “could create perverse incentives” by “pressuring providers to shift resources away from BEAD commitments to subsidize operations in non-BEAD areas subject to burdensome state rules,” Roth said. “That would increase the likelihood of defaults and defeat the purpose of BEAD’s once-in-a-generation investment.”

The NTIA decision not to give funds to states that enforce such rules “is essential to ensure that BEAD funds go where Congress intended—to build and operate networks in hard-to-serve areas—not to prop up regulatory experiments that drive investment away,” she said.

States are complying, Roth says

Roth indicated that at least some states are complying with the NTIA’s demands. These demands also include cutting red tape related to permits and access to utility poles and increasing the amount of matching dollars that ISPs themselves put into the projects. “In the coming weeks we will announce the approval of several state plans that incorporate these commitments,” she said. “We remain on track to approve the majority of state plans and get money out the door this year.”

Before Trump won the election, the Biden administration developed rules for BEAD and approved initial funding plans submitted by every state and territory. The Trump administration’s overhaul of the program rules has delayed the funding.

While the Biden NTIA pushed states to require specific prices for low-income plans, the Trump administration prohibited states “from explicitly or implicitly setting the LCSO [low-cost service option] rate” that ISPs must offer. Instead, ISPs get to choose what counts as “low-cost.”

The Trump administration also removed a preference for fiber projects, resulting in more money going to satellite providers—though not as much as SpaceX CEO Elon Musk has demanded. The changes imposed by the Trump NTIA have caused states to allocate less funding overall, leading to an ongoing dispute over what will happen to the $42 billion program’s leftover money.

Roth said the NTIA is “considering how states can use some of the BEAD savings—what has commonly been referred to as nondeployment money—on key outcomes like permitting reform,” but added that “no final decisions have been made.”


Trump admin demands states exempt ISPs from net neutrality and price laws Read More »

meta-denies-torrenting-porn-to-train-ai,-says-downloads-were-for-“personal-use”

Meta denies torrenting porn to train AI, says downloads were for “personal use”

Instead, Meta argued, available evidence “is plainly indicative” that the flagged adult content was torrented for “private personal use”—since the small amount linked to Meta IP addresses and employees represented only “a few dozen titles per year intermittently obtained one file at a time.”

“The far more plausible inference to be drawn from such meager, uncoordinated activity is that disparate individuals downloaded adult videos for personal use,” Meta’s filing said.

For example, unlike lawsuits raised by book authors whose works are part of an enormous dataset used to train AI, the activity on Meta’s corporate IP addresses only amounted to about 22 downloads per year. That is nowhere near the “concerted effort to collect the massive datasets Plaintiffs allege are necessary for effective AI training,” Meta argued.

Further, that alleged activity can’t even reliably be linked to any Meta employee, Meta argued.

Strike 3 “does not identify any of the individuals who supposedly used these Meta IP addresses, allege that any were employed by Meta or had any role in AI training at Meta, or specify whether (and which) content allegedly downloaded was used to train any particular Meta model,” Meta wrote.

Meanwhile, “tens of thousands of employees,” as well as “innumerable contractors, visitors, and third parties access the Internet at Meta every day,” Meta argued. So while it’s “possible one or more Meta employees” downloaded Strike 3’s content over the last seven years, “it is just as possible” that a “guest, or freeloader,” or “contractor, or vendor, or repair person—or any combination of such persons—was responsible for that activity,” Meta suggested.

Other alleged activity included a claim that a Meta contractor was directed to download adult content at his father’s house, but those downloads, too, “are plainly indicative of personal consumption,” Meta argued. That contractor worked as an “automation engineer,” Meta noted, with no apparent basis provided for why he would be expected to source AI training data in that role. “No facts plausibly” tie “Meta to those downloads,” Meta claimed.

Meta denies torrenting porn to train AI, says downloads were for “personal use” Read More »

fcc-republicans-force-prisoners-and-families-to-pay-more-for-phone-calls

FCC Republicans force prisoners and families to pay more for phone calls

At yesterday’s meeting, the FCC separately proposed to eliminate a rule that requires Internet providers to itemize various fees in broadband price labels that must be made available to consumers. Public comment will be taken before a final decision. We described that proposal in an October 8 article.

“Under the cover of a shutdown with limited staff, a confused public, and an overloaded agenda, the FCC pushed to pass the most anti-consumer items it has approved yet,” Gomez said yesterday.

New inflation factor to raise rates further

The phone provider NCIC Correctional Services filed a petition asking the FCC to change its 2024 rate-cap order, claiming that the limits were “below the cost of providing service for most IPCS providers” and “unsustainable.” The order was also protested by Global Tel*Link (aka ViaPath) and Securus Technologies.

Gomez said that “providers making these claims did not even bother to meet with my office to explain their position,” and did not provide data requested by the FCC. By accepting the industry claims, “the FCC today decides to reward bad behavior,” Gomez said.

FCC price caps vary based on the size of the facility. The 2024 order set a range of $0.06 to $0.12 per minute for audio calls, down from the previous range of $0.14 to $0.21 per minute. The 2024 order adopted video call rate caps for the first time, setting rates from $0.11 to $0.25 per minute.

A few weeks before yesterday’s vote, the FCC released a public draft of its proposal with new voice-call caps ranging from $0.10 to $0.18 per minute, and new video call caps ranging from $0.18 to $0.41 per minute. These new limits account for changes to the method of rate-cap calculation, the $0.02 additional fee, and a new size category of “extremely small jails” that can charge the highest rates.

Gomez criticized an inflation factor of 6.7 percent that she said was added in the “11th hour.” The final version of the order approved at yesterday’s meeting hasn’t been released publicly yet. The inflation “factor will be adopted without being given notice to the public that it was being considered… or evidence that it’s necessary,” Gomez said.

FCC Republicans force prisoners and families to pay more for phone calls Read More »

ice’s-forced-face-scans-to-verify-citizens-is-unconstitutional,-lawmakers-say

ICE’s forced face scans to verify citizens is unconstitutional, lawmakers say

“A 2024 test by the National Institute of Standards and Technology found that facial recognition tools are less accurate when images are low quality, blurry, obscured, or taken from the side or in poor light—exactly the kind of images an ICE agent would likely capture when using a smartphone in the field,” their letter said.

If ICE’s use continues to expand, mistakes “will almost certainly proliferate,” senators said, and “even if ICE’s facial recognition tools were perfectly accurate, these technologies would still pose serious threats to individual privacy and free speech.”

Matthew Guariglia, senior policy analyst at the Electronic Frontier Foundation, told 404 Media that ICE’s growing use of facial recognition confirms that “we should have banned government use of face recognition when we had the chance because it is dangerous, invasive, and an inherent threat to civil liberties.” It also suggests that “any remaining pretense that ICE is harassing and surveilling people in any kind of ‘precise’ way should be left in the dust,” Guariglia said.

ICE scans faces, even if shown an ID

In their letter to ICE acting director Todd Lyons, senators sent a long list of questions to learn more about “ICE’s expanded use of biometric technology systems,” which senators suggested risked having “a sweeping and lasting impact on the public’s civil rights and liberties.” They demanded to know when ICE started using face scans in domestic deployments, as previously the technology was only known to be used at the border, and what testing was done to ensure apps like Mobile Fortify are accurate and unbiased.

Perhaps most relevant to 404 Media’s recent report, senators asked, “Does ICE have any policies, practices, or procedures around the use of the Mobile Fortify app to identify US citizens?” Lyons was supposed to respond by October 2, but Ars was not able to immediately confirm whether that deadline was met.

DHS declined “to confirm or deny law enforcement capabilities or methods” in response to 404 Media’s report, while CBP confirmed that Mobile Fortify is still being used by ICE, along with “a variety of technological capabilities” that supposedly “enhance the effectiveness of agents on the ground.”

ICE’s forced face scans to verify citizens is unconstitutional, lawmakers say Read More »