Policy


YouTube denies AI was involved with odd removals of tech tutorials


YouTubers suspect AI is bizarrely removing popular video explainers.

This week, tech content creators began to suspect that AI was making it harder to share some of the most highly sought-after tech tutorials on YouTube, but now YouTube is denying that the odd removals were due to automation.

Creators grew alarmed when educational videos that YouTube had allowed for years were suddenly flagged as “dangerous” or “harmful,” with no apparent way to trigger human review to overturn the removals. AI seemed to be running the show, with creators’ appeals denied faster than a human could possibly review them.

Late Friday, a YouTube spokesperson confirmed that videos flagged by Ars have been reinstated, promising that YouTube will take steps to ensure that similar content isn’t removed in the future. But, to creators, it remains unclear why the videos got taken down, as YouTube claimed that both initial enforcement decisions and decisions on appeals were not the result of an automation issue.

Shocked creators were stuck speculating

Rich White, a computer technician who runs an account called CyberCPU Tech, had two videos removed that demonstrated workarounds to install Windows 11 on unsupported hardware.

These videos are popular, White told Ars, with people looking to bypass Microsoft account requirements each time a new build is released. For tech content creators like White, “these are bread and butter videos,” dependably yielding “extremely high views,” he said.

Because there’s such high demand, many tech content creators’ channels are filled with these kinds of videos. White’s account has “countless” examples, he said, and in the past, YouTube even featured his most popular video in the genre on a trending list.

To White and others, it’s unclear exactly what has changed on YouTube that triggered removals of this type of content.

YouTube only seemed to be removing recently posted content, White told Ars. However, if the takedowns ever impacted older content, entire channels documenting years of tech tutorials risked disappearing in “the blink of an eye,” another YouTuber behind a tech tips account called Britec09 warned after one of his videos was removed.

The stakes appeared high for everyone, White warned, in a video titled “YouTube Tech Channels in Danger!”

White had already censored content that he planned to post on his channel, fearing it wouldn’t be worth the risk of potentially losing his account, which began in 2020 as a side hustle but has since become his primary source of income. If he continues to change the content he posts to avoid YouTube penalties, it could hurt his account’s reach and monetization. Britec told Ars that he paused a sponsorship due to the uncertainty that he said has already hurt his channel and caused a “great loss of income.”

YouTube’s policies are strict, with the platform known to swiftly remove accounts that receive three strikes for violating community guidelines within 90 days. But, curiously, White had not received any strikes following his content removals. Although Britec reported that his account had received a strike following his video’s removal, White told Ars that YouTube so far had only given him two warnings, so his account is not yet at risk of a ban.

Creators weren’t sure why YouTube might deem this content harmful, so they tossed around some theories. It seemed possible, White suggested in his video, that AI was detecting this content as “piracy,” but that shouldn’t be the case, he claimed, since his guides require users to have a valid license to install Windows 11. He also thinks it’s unlikely that Microsoft prompted the takedowns, suggesting tech content creators have a “love-hate relationship” with the tech company.

“They don’t like what we’re doing, but I don’t think they’re going to get rid of it,” White told Ars, suggesting that Microsoft “could stop us in our tracks” if it were motivated to end workarounds. But Microsoft doesn’t do that, White said, perhaps because it benefits from popular tutorials that attract swarms of Windows 11 users who otherwise may not use “their flagship operating system” if they can’t bypass Microsoft account requirements.

Those users could become loyal to Microsoft, White said. And eventually, some users may even “get tired of bypassing the Microsoft account requirements, or Microsoft will add a new feature that they’ll happily get the account for, and they’ll relent and start using a Microsoft account,” White suggested in his video. “At least some people will, not me.”

Microsoft declined Ars’ request to comment.

To White, it seemed possible that YouTube was leaning on AI to catch more violations but perhaps recognized the risk of over-moderation and, therefore, wasn’t allowing AI to issue strikes on his account.

But that was just a “theory” that he and other creators came up with but couldn’t confirm, since YouTube’s chatbot for creators also seemed “suspiciously AI-driven,” seemingly auto-responding even when a “supervisor” is connected, White said in his video.

Absent more clarity from YouTube, creators who post tutorials, tech tips, and computer repair videos were spooked. Their biggest fear was that changes to automated content moderation could unexpectedly knock them off YouTube for posting videos that seem ordinary and commonplace in tech circles, White and Britec said.

“We are not even sure what we can make videos on,” White said. “Everything’s a theory right now because we don’t have anything solid from YouTube.”

YouTube recommends making the content it’s removing

White’s channel gained popularity after YouTube highlighted an early trending video that he made, showing a workaround to install Windows 11 on unsupported hardware. Following that video, his channel’s views spiked, and then he gradually built up his subscriber base to around 330,000.

In the past, White’s videos in that category had been flagged as violative, but human review got them quickly reinstated.

“They were striked for the same reason, but at that time, I guess the AI revolution hadn’t taken over,” White said. “So it was relatively easy to talk to a real person. And by talking to a real person, they were like, ‘Yeah, this is stupid.’ And they brought the videos back.”

Now, YouTube suggests that human review is causing the removals, which likely doesn’t completely ease creators’ fears about arbitrary takedowns.

Britec’s video was also flagged as dangerous or harmful. He has managed his account, which currently has nearly 900,000 subscribers, since 2009, and he’s worried he risked losing “years of hard work,” he said in his video.

Britec told Ars that “it’s very confusing” for panicked tech content creators trying to understand what content is permissible. It’s particularly frustrating, he noted in his video, that YouTube’s creator tool for generating video “ideas” seemed to contradict the mods’ content warnings, continuing to recommend that creators make content on specific topics like workarounds to install Windows 11 on unsupported hardware.

Screenshot from Britec09’s YouTube video, showing YouTube prompting creators to make content that could get their channels removed. Credit: via Britec09

“This tool was to give you ideas for your next video,” Britec said. “And you can see right here, it’s telling you to create content on these topics. And if you did this, I can guarantee you your channel will get a strike.”

From there, creators hit what White described as a “brick wall,” with one of his appeals denied within one minute, which felt like it must be an automated decision. As Britec explained, “You will appeal, and your appeal will be rejected instantly. You will not be speaking to a human being. You’ll be speaking to a bot or AI. The bot will be giving you automated responses.”

YouTube insisted that the decisions weren’t automated, even when an appeal was denied within one minute.

White told Ars that it’s easy for creators to be discouraged and censor their channels rather than fight with the AI. After wasting “an hour and a half trying to reason with an AI about why I didn’t violate the community guidelines” once his first appeal was quickly denied, he “didn’t even bother using the chat function” after the second appeal was denied even faster, White confirmed in his video.

“I simply wasn’t going to do that again,” White said.

All week, the panic spread, reaching fans who follow tech content creators. On Reddit, people recommended saving tutorials lest they risk YouTube taking them down.

“I’ve had people come out and say, ‘This can’t be true. I rely on this every time,’” White told Ars.


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.



FCC to rescind ruling that said ISPs are required to secure their networks

The Federal Communications Commission will vote in November to repeal a ruling that requires telecom providers to secure their networks, acting on a request from the biggest lobby groups representing Internet providers.

FCC Chairman Brendan Carr said the ruling, adopted in January just before Republicans gained majority control of the commission, “exceeded the agency’s authority and did not present an effective or agile response to the relevant cybersecurity threats.” Carr said the vote scheduled for November 20 comes after “extensive FCC engagement with carriers” who have taken “substantial steps… to strengthen their cybersecurity defenses.”

The FCC’s January 2025 declaratory ruling came in response to attacks by China, including the Salt Typhoon infiltration of major telecom providers such as Verizon and AT&T. The Biden-era FCC found that the Communications Assistance for Law Enforcement Act (CALEA), a 1994 law, “affirmatively requires telecommunications carriers to secure their networks from unlawful access or interception of communications.”

“The Commission has previously found that section 105 of CALEA creates an affirmative obligation for a telecommunications carrier to avoid the risk that suppliers of untrusted equipment will ‘illegally activate interceptions or other forms of surveillance within the carrier’s switching premises without its knowledge,’” the January order said. “With this Declaratory Ruling, we clarify that telecommunications carriers’ duties under section 105 of CALEA extend not only to the equipment they choose to use in their networks, but also to how they manage their networks.”

ISPs get what they want

The declaratory ruling was paired with a Notice of Proposed Rulemaking that would have led to stricter rules requiring specific steps to secure networks against unauthorized interception. Carr voted against the decision at the time.

Although the declaratory ruling didn’t yet have specific rules to go along with it, the FCC at the time said it had some teeth. “Even absent rules adopted by the Commission, such as those proposed below, we believe that telecommunications carriers would be unlikely to satisfy their statutory obligations under section 105 without adopting certain basic cybersecurity practices for their communications systems and services,” the January order said. “For example, basic cybersecurity hygiene practices such as implementing role-based access controls, changing default passwords, requiring minimum password strength, and adopting multifactor authentication are necessary for any sensitive computer system. Furthermore, a failure to patch known vulnerabilities or to employ best practices that are known to be necessary in response to identified exploits would appear to fall short of fulfilling this statutory obligation.”



AT&T sues ad industry watchdog instead of pulling ads that slam T-Mobile


Self-regulation breakdown

National Advertising Division said AT&T ad and press release broke program rule.


AT&T yesterday sued the advertising industry’s official watchdog over the group’s demand that AT&T stop using its rulings for advertising and promotional purposes.

As previously reported, BBB National Programs’ National Advertising Division (NAD) found that AT&T violated a rule “by issuing a video advertisement and press release that use the NAD process and its findings for promotional purposes,” and sent a cease-and-desist letter to the carrier. The NAD operates the US advertising industry’s system of self-regulation, which is designed to handle complaints that advertisers file against each other and minimize government regulation of false and misleading claims.

While it’s clear that both AT&T and T-Mobile have a history of misleading ad campaigns, AT&T portrays itself as a paragon of honesty in new ads calling T-Mobile “the master of breaking promises.” An AT&T press release about the ad campaign said the NAD “asked T-Mobile to correct their marketing claims 16 times over the last four years,” and an AT&T commercial said T-Mobile has faced more challenges for deceptive ads from competitors than all other telecom providers in that time.

While the NAD describes AT&T’s actions as a clear-cut violation of rules that advertisers agree to in the self-regulatory process, AT&T disputed the accusation in a lawsuit filed in US District Court for the Northern District of Texas. “We stand by our campaign to shine a light on deceptive advertising from our competitors and oppose demands to silence the truth,” AT&T said in a press release.

AT&T’s lawsuit asked the court to declare “that it has not violated NAD’s procedures” and that “NAD has no legal basis to enforce its demand for censorship.” The lawsuit complained that AT&T hasn’t been able to run its advertisements widely because “NAD’s inflammatory and baseless accusations have now intimidated multiple TV networks into pulling AT&T’s advertisement.”

AT&T claims rule no longer applies

AT&T’s claim that it didn’t violate an NAD rule hinges partly on when its press release was issued. The carrier claims the rule against referencing NAD decisions only applies for a short period of time after each NAD ruling.

“NAD now takes the remarkable position that any former participant in an NAD proceeding is forever barred from truthfully referencing NAD’s own public findings about a competitor’s deceptive advertising,” AT&T said. The lawsuit argued that “if NAD’s procedures were ever binding on AT&T, their binding effect ceased at the conclusion of the proceeding or a reasonable time thereafter.”

AT&T also slammed the NAD for failing to rein in T-Mobile’s deceptive ads. The group’s slow process let T-Mobile air deceptive advertisements without meaningful consequences, and “NAD has repeatedly failed to refer continued violations to the FTC,” AT&T said.

“Over the past several years, NAD has repeatedly deemed T-Mobile’s ads to be misleading, false, or unsubstantiated,” AT&T said. “But over and over, T-Mobile has gamed the system to avoid timely redressing its behavior. NAD’s process is often slow, and T-Mobile knows it can make that process even slower by asking for extensions and delaying fixes.”

We’ve reported extensively on both carriers’ history of misleading advertisements over the years. That includes T-Mobile promising never to raise prices on certain plans and then raising them anyway. AT&T used to advertise 4G LTE service as “5GE,” and was rebuked for an ad that falsely claimed the carrier was already offering cellular coverage from space. AT&T and T-Mobile have both gotten in trouble for misleading promises of unlimited data.

AT&T says vague ad didn’t violate rule

AT&T’s lawsuit alleged that the NAD press release “intentionally impl[ied] that AT&T mischaracterized NAD’s prior decisions about T-Mobile’s deceptive advertising.” However, the NAD’s public stance is that AT&T violated the rule by using NAD decisions for promotional purposes, not by mischaracterizing the decisions.

NAD procedures state that companies participating in the system agree “not to mischaracterize any decision, abstract, or press release issued or use and/or disseminate such decision, abstract or press release for advertising and/or promotional purposes.” The NAD announcement didn’t make any specific allegations of AT&T mischaracterizing its decisions but said that AT&T violated the rules “by issuing a video advertisement and press release that use the NAD process and its findings for promotional purposes.”

The NAD said AT&T committed a “direct violation” of the rules by running an ad and issuing a press release “making representations regarding the alleged results of a competitor’s participation in BBB National Program’s advertising industry self-regulatory process.” The “alleged results” phrase may be why AT&T is claiming the NAD accused it of mischaracterizing decisions. There could also be more specific allegations in the cease-and-desist letter, which wasn’t made public.

AT&T claims its TV ads about T-Mobile don’t violate the rule because they only refer to “challenges” to T-Mobile advertising and “do not reference any decision, abstract, or press release.”

AT&T quibbles over rule meaning

AT&T further argues that a press release can’t violate the prohibition against using NAD decisions “for advertising and/or promotional purposes.” While press releases are clearly promotional in nature, AT&T says that part of the NAD rules doesn’t apply to press releases issued by advertisers like itself. Specifically, AT&T said that “the permissibility of press releases is not governed by Section 2.1(I)(2)(b), which applies to uses ‘for advertising and/or promotional purposes.’”

But the NAD procedures also bar participants in the process from issuing certain kinds of press releases. AT&T describes the rule about press releases as being in a different section than the rule about advertising and promotional purposes, but it’s actually all part of the same sentence. The rule says, “By participating in an NAD or NARB proceeding, the parties agree: (a) not to issue a press release regarding any decisions issued; and/or (b) not to mischaracterize any decision, abstract or press release issued or use and/or disseminate such decision, abstract or press release for advertising and/or promotional purposes.”

AT&T argues that the rule only bars press releases at the time of each NAD decision. The rule’s “meaning is clear in context: When NAD or NARB [National Advertising Review Board] issues a decision, no party is allowed to issue a press release to announce that decision,” AT&T said. “Instead, NAD issues its own press release to announce the decision. AT&T did not issue a press release to announce any decision, and indeed its advertisements (and press release announcing its advertising campaign) do not mention any particular NAD decision. In fact, AT&T’s press release does not use the word ‘decision’ at all.”

AT&T said that because it only made a short reference to NAD decisions, “AT&T’s press release about its new advertising campaign is therefore not a press release about an NAD decision as contemplated by Section 2.1(I)(2)(a).” AT&T also said it’s not a violation because the press release simply stated the number of rulings against T-Mobile and did not specifically cite any of those 16 decisions.

“AT&T’s press release does not include, attach, copy, or even cite any specific decision, abstract, or press release either in part or in whole,” AT&T’s lawsuit said. AT&T further said the NAD rule doesn’t apply to any proceeding AT&T wasn’t involved in, and that “AT&T did not initiate several of the proceedings against T-Mobile included in the one-sentence reference.”

We contacted the NAD about AT&T’s lawsuit, but the group declined to comment.


Jon is a Senior IT Reporter for Ars Technica. He covers the telecom industry, Federal Communications Commission rulemakings, broadband consumer affairs, court cases, and government regulation of the tech industry.



Man finally released a month after absurd arrest for reposting Trump meme


Bodycam footage undermined sheriff’s “true threat” justification for the arrest.

The saga of a 61-year-old man jailed for more than a month after reposting a Facebook meme has ended, but free speech advocates are still reeling in the wake.

On Wednesday, Larry Bushart was released from Perry County Jail, where he had spent weeks unable to make bail, which a judge set at $2 million. Prosecutors have not explained why the charges against him were dropped, according to The Intercept, which has been tracking the case closely. However, officials faced mounting pressure following media coverage and a social media campaign called “Free Larry Bushart,” which stoked widespread concern over suspected police censorship of a US citizen over his political views.

How a meme landed a man in jail

Bushart’s arrest came after he decided to troll a message thread about a Charlie Kirk vigil in a Facebook group called “What’s Happening in Perry County, TN.” He posted a meme showing a picture of Donald Trump saying, “We should get over it.” The meme included a caption that said “Donald Trump, on the Perry High School mass shooting, one day after,” and Bushart included a comment with his post that said, “This seems relevant today ….”

His meme caught the eye of the Perry County sheriff, Nick Weems, who had mourned Kirk’s passing on his own Facebook page, The Intercept noted.

Supposedly, Weems’ decision to go after Bushart wasn’t due to his political views but to messages from parents who misread Bushart’s post as possibly threatening an attack on the local Perry County High School. To pressure Bushart to remove the post, Weems contacted the Lexington Police Department to track him down. That led to Bushart’s arrest and transfer to Perry County Jail.

Weems justified the arrest by claiming that Bushart’s meme represented a true threat, since “investigators believe Bushart was fully aware of the fear his post would cause and intentionally sought to create hysteria within the community,” The Tennessean reported. But “there was no evidence of any hysteria,” The Intercept reported, leading media outlets to pick apart Weems’ story.

Perhaps most suspicious were Weems’ claims that Bushart had callously refused to take down his post after cops told him that people were scared that he was threatening a school shooting.

The Intercept and Nashville’s CBS affiliate, NewsChannel 5, secured bodycam footage from the Lexington cop that undermined Weems’ narrative. The footage clearly showed the cop did not understand why the Perry County sheriff had taken issue with Bushart’s Facebook post.

“So, I’m just going to be completely honest with you,” the cop told Bushart. “I have really no idea what they are talking about. He had just called me and said there was some concerning posts that were made….”

Bushart clarified that it was likely his Facebook posts, laughing at the notion that someone had called the cops to report his meme. The Lexington officer told Bushart that he wasn’t sure “exactly what” Facebook post “they are referring to you,” but “they said that something was insinuating violence.”

“No, it wasn’t,” Bushart responded, confirming that “I’m not going to take it down.”

The cop, declining to even glance at the Facebook post, told Bushart, “I don’t care. This ain’t got nothing to do with me.” But the officer’s indifference didn’t stop Lexington police from taking Bushart into custody, booking him, and sending him to Weems’ county, where Bushart was charged “under a state law passed in July 2024 that makes it a Class E felony to make threats against schools,” The Tennessean reported.

“Just to clarify, this is what they charged you with,” a Perry County jail officer told Bushart—which was recorded on footage reviewed by The Intercept—“Threatening Mass Violence at a School.”

“At a school?” Bushart asked.

“I ain’t got a clue,” the officer responded, laughing. “I just gotta do what I have to do.”

“I’ve been in Facebook jail, but now I’m really in it,” Bushart said, joining him in laughing.

Cops knew the meme wasn’t a threat

Lexington police told The Intercept that Weems had lied when he told local news outlets that the two departments had “coordinated” to offer Bushart a chance to delete the post prior to his arrest. Confronted with the bodycam footage, Weems denied lying, claiming that his investigator’s report must have been inaccurate, NewsChannel 5 reported.

Weems later admitted to NewsChannel 5 that “investigators knew that the meme was not about Perry County High School” and sought Bushart’s arrest anyway, supposedly hoping to quell “the fears of people in the community who misinterpreted it.” That’s as close as Weems has come to admitting that his intention was to censor the post.

The Perry County Sheriff’s Office did not respond to Ars’ request to comment.

According to The Tennessean, the law that landed Bushart behind bars has been widely criticized by First Amendment advocates. Beth Cruz, a lecturer in public interest law at Vanderbilt University Law School, told The Tennessean that “518 children in Tennessee were arrested under the current threats of mass violence law, including 71 children between the ages of 7 and 11” last year alone.

The law seems to contradict Supreme Court precedent, which set a high bar for what’s considered a “true threat,” recognizing that “it is easy for speech made in one context to inadvertently reach a larger audience” that misinterprets the message.

“The risk of overcriminalizing upsetting or frightening speech has only been increased by the Internet,” SCOTUS ruled. Justices warned then that “without sufficient protection for unintentionally threatening speech, a high school student who is still learning norms around appropriate language could easily go to prison.” They also feared that “someone may post an enraged comment under a news story about a controversial topic” that potentially gets them in trouble for speaking out “in the heat of the moment.”

“In a Nation that has never been timid about its opinions, political or otherwise, this is commonplace,” SCOTUS noted.

Dissenting justices, including Amy Coney Barrett and Clarence Thomas, thought the ruling went too far to protect speech, however. They felt that so long as a “reasonable person would regard the statement as a threat of violence,” that supposedly objective standard could be enough to criminalize speech like Bushart’s.

Adam Steinbaugh, an attorney with the Foundation for Individual Rights and Expression, told The Intercept that “people’s performative overreaction is not a sufficient basis to limit someone else’s free speech rights.”

“A free country does not dispatch police in the dead of night to pull people from their homes because a sheriff objects to their social media posts,” Steinbaugh said.

Man resumes Facebook posting upon release

Chris Eargle, who started the “Free Larry Bushart” Facebook group, told The Intercept that Weems’ story justifying the arrest made no sense. Instead, it seemed like the sheriff’s actions were politically motivated, Eargle suggested, intended to silence people like Bushart with a show of force demonstrating that “if you say something I don’t like, and you don’t take it down, now you’re going to be in trouble.”

“I mean, it’s just control over people’s speech,” Eargle said.

The Perry County Sheriff’s Office removed its Facebook page after the controversy, and it remains down as of this writing.

But Weems logged onto his Facebook page on Wednesday before Bushart’s charges were dropped, The Intercept reported. The sheriff stood by his claim that people had interpreted the meme as a threat to a local school, writing that he’s “100 percent for protecting the First Amendment. However, freedom of speech does not allow anyone to put someone else in fear of their well being.”

For Bushart, who The Intercept noted retired from decades in law enforcement last year, the arrest turned him into an icon of free speech, but it also shook up his life. He lost his job as a medical driver, and he missed the birth of his granddaughter.

Leaving jail, Bushart said he was “very happy to be going home.” He thanked all his supporters who ensured that he would not have to wait until December 4 to petition for his bail to be reduced—a delay which the prosecution had sought shortly before abruptly dismissing the charges, The Intercept reported.

Back at his computer, Bushart logged onto Facebook, posting first about his grandkid, then resuming his political trolling.

Eargle claimed, though, that many others now fear posting their political opinions after Bushart’s arrest. Bushart’s son, Taylor, told Nashville news outlet WKRN that it has been a “trying time” for his family, while noting that his father’s release “doesn’t change what has happened to him” or the threats to speech that could persist under Tennessee’s law.

“I can’t even begin to express how thankful we are for the outpour of support he has received,” Taylor said. “If we don’t fight to protect and preserve our rights today, just as we’ve now seen, they may be gone tomorrow.”




Trump admin demands states exempt ISPs from net neutrality and price laws


US says net neutrality is price regulation and is banned in $42B grant program.


The Trump administration is refusing to give broadband-deployment grants to states that enforce net neutrality rules or price regulations, a Commerce Department official said.

The administration claims that net neutrality rules are a form of rate regulation and thus not allowed under the US law that created the $42 billion Broadband Equity, Access, and Deployment (BEAD) program. Commerce Department official Arielle Roth said that any state accepting BEAD funds must exempt Internet service providers from net neutrality and price regulations in all parts of the state, not only in areas where the ISP is given funds to deploy broadband service.

States could object to the NTIA decisions and sue the US government. But even a successful lawsuit could take years and leave unserved homes without broadband for the foreseeable future.

Roth, an assistant secretary who leads the National Telecommunications and Information Administration (NTIA), said in a speech at the conservative Hudson Institute on Tuesday:

Consistent with the law, which explicitly prohibits regulating the rates charged for broadband service, NTIA is making clear that states cannot impose rate regulation on the BEAD program. To protect the BEAD investment, we are clarifying that BEAD providers must be protected throughout their service area in a state, while the provider is still within its BEAD period of performance. Specifically, any state receiving BEAD funds must exempt BEAD providers throughout their state footprint from broadband-specific economic regulations, such as price regulation and net neutrality.

Trouble for California and New York

The US law that created BEAD requires Internet providers that receive federal funds to offer at least one “low-cost broadband service option for eligible subscribers,” but also says the NTIA may not regulate broadband prices. “Nothing in this title may be construed to authorize the Assistant Secretary or the National Telecommunications and Information Administration to regulate the rates charged for broadband service,” the law says.

The NTIA is interpreting this law in an expansive way by categorizing net neutrality rules as impermissible rate regulation and by demanding statewide exemptions from state laws for ISPs that obtain grant money.

This would be trouble for California, which has a net neutrality law that’s nearly identical to FCC net neutrality rules repealed during President Trump’s first term. California beat court challenges from Internet providers in cases that upheld its authority to regulate broadband service.

The NTIA stance is also trouble for New York, which has a law requiring ISPs to offer $15 or $20 broadband plans to people with low incomes. New York defeated industry challenges to its law, with the US Supreme Court declining opportunities to overturn a federal appeals court ruling in favor of the state.

But while broadband lobby groups weren’t able to block these state regulations with lawsuits, their allies in the Trump administration want to accomplish the goal by blocking grants that could be used to deploy broadband networks to homes and businesses that are unserved or underserved.

This already had an impact when a California lawmaker dropped a proposal, modeled on New York’s law, to require $15 monthly plans. As we wrote in July, Assemblymember Tasha Boerner said she pulled the bill because the Trump administration said that regulating prices would prevent California from getting its $1.86 billion share of BEAD. But now, California could lose access to the fund anyway due to the NTIA’s stance on net neutrality rules.

We contacted the California and New York governors’ offices about Roth’s comments and will update this article if we get any response.

Roth: State laws “threaten financial viability” of projects

Republicans have long argued that net neutrality is rate regulation, even though the rules don’t directly regulate prices that ISPs charge consumers. California’s law prohibits ISPs from blocking or throttling lawful traffic, prohibits fees charged to websites or online services to deliver or prioritize their traffic, bans paid data cap exemptions (also known as “zero-rating”), and says that ISPs may not attempt to evade net neutrality protections by slowing down traffic at network interconnection points.

Roth claimed that state broadband laws, even if applied only in non-grant areas, would degrade the service offered by ISPs in locations funded by grants. She said:

Unfortunately, some states have adopted or are considering adopting laws that specifically target broadband providers with rate regulation or state-level net neutrality mandates that threaten the financial viability of BEAD-funded projects and undermine Congress’s goal of connecting unserved communities.

Rate regulation drives up operating costs and scares off investment, especially in high-cost areas where every dollar counts. State-level net neutrality rules—itself a form of rate regulation—create a patchwork of conflicting regulations that raise compliance costs and deter investment.

These burdens don’t just hurt BEAD providers; they hurt the very households BEAD is meant to connect by reducing capital available for the hardest-to-reach communities. In some cases, they can divert investment away from BEAD areas altogether, as providers redirect resources to their lower-cost, lower-risk, non-BEAD markets.

State broadband laws “could create perverse incentives” by “pressuring providers to shift resources away from BEAD commitments to subsidize operations in non-BEAD areas subject to burdensome state rules,” Roth said. “That would increase the likelihood of defaults and defeat the purpose of BEAD’s once-in-a-generation investment.”

The NTIA decision not to give funds to states that enforce such rules “is essential to ensure that BEAD funds go where Congress intended—to build and operate networks in hard-to-serve areas—not to prop up regulatory experiments that drive investment away,” she said.

States are complying, Roth says

Roth indicated that at least some states are complying with the NTIA’s demands. These demands also include cutting red tape related to permits and access to utility poles and increasing the amount of matching dollars that ISPs themselves put into the projects. “In the coming weeks we will announce the approval of several state plans that incorporate these commitments,” she said. “We remain on track to approve the majority of state plans and get money out the door this year.”

Before Trump won the election, the Biden administration developed rules for BEAD and approved initial funding plans submitted by every state and territory. The Trump administration’s overhaul of the program rules has delayed the funding.

While the Biden NTIA pushed states to require specific prices for low-income plans, the Biden administration prohibited states “from explicitly or implicitly setting the LCSO [low-cost service option] rate” that ISPs must offer. Instead, ISPs get to choose what counts as “low-cost.”

The Trump administration also removed a preference for fiber projects, resulting in more money going to satellite providers—though not as much as SpaceX CEO Elon Musk has demanded. The changes imposed by the Trump NTIA have caused states to allocate less funding overall, leading to an ongoing dispute over what will happen to the $42 billion program’s leftover money.

Roth said the NTIA is “considering how states can use some of the BEAD savings—what has commonly been referred to as nondeployment money—on key outcomes like permitting reform,” but added that “no final decisions have been made.”

Jon is a Senior IT Reporter for Ars Technica. He covers the telecom industry, Federal Communications Commission rulemakings, broadband consumer affairs, court cases, and government regulation of the tech industry.


Meta denies torrenting porn to train AI, says downloads were for “personal use”

Instead, Meta argued, available evidence “is plainly indicative” that the flagged adult content was torrented for “private personal use”—since the small amount linked to Meta IP addresses and employees represented only “a few dozen titles per year intermittently obtained one file at a time.”

“The far more plausible inference to be drawn from such meager, uncoordinated activity is that disparate individuals downloaded adult videos for personal use,” Meta’s filing said.

For example, unlike lawsuits raised by book authors whose works are part of an enormous dataset used to train AI, the activity on Meta’s corporate IP addresses only amounted to about 22 downloads per year. That is nowhere near the “concerted effort to collect the massive datasets Plaintiffs allege are necessary for effective AI training,” Meta argued.

Further, that alleged activity can’t even reliably be linked to any Meta employee, Meta argued.

Strike 3 “does not identify any of the individuals who supposedly used these Meta IP addresses, allege that any were employed by Meta or had any role in AI training at Meta, or specify whether (and which) content allegedly downloaded was used to train any particular Meta model,” Meta wrote.

Meanwhile, “tens of thousands of employees,” as well as “innumerable contractors, visitors, and third parties access the Internet at Meta every day,” Meta argued. So while it’s “possible one or more Meta employees” downloaded Strike 3’s content over the last seven years, “it is just as possible” that a “guest, or freeloader,” or “contractor, or vendor, or repair person—or any combination of such persons—was responsible for that activity,” Meta suggested.

Other alleged activity included a claim that a Meta contractor was directed to download adult content at his father’s house, but those downloads, too, “are plainly indicative of personal consumption,” Meta argued. That contractor worked as an “automation engineer,” Meta noted, with no apparent basis provided for why he would be expected to source AI training data in that role. “No facts plausibly” tie “Meta to those downloads,” Meta claimed.


FCC Republicans force prisoners and families to pay more for phone calls

At yesterday’s meeting, the FCC separately proposed to eliminate a rule that requires Internet providers to itemize various fees in broadband price labels that must be made available to consumers. Public comment will be taken before a final decision. We described that proposal in an October 8 article.

“Under the cover of a shutdown with limited staff, a confused public, and an overloaded agenda, the FCC pushed to pass the most anti-consumer items it has approved yet,” Gomez said yesterday.

New inflation factor to raise rates further

The phone provider NCIC Correctional Services filed a petition asking the FCC to change its 2024 rate-cap order, claiming that the limits were “below the cost of providing service for most IPCS providers” and “unsustainable.” The order was also protested by Global Tel*Link (aka ViaPath) and Securus Technologies.

Gomez said that “providers making these claims did not even bother to meet with my office to explain their position,” and did not provide data requested by the FCC. By accepting the industry claims, “the FCC today decides to reward bad behavior,” Gomez said.

FCC price caps vary based on the size of the facility. The 2024 order set a range of $0.06 to $0.12 per minute for audio calls, down from the previous range of $0.14 to $0.21 per minute. The 2024 order adopted video call rate caps for the first time, setting rates from $0.11 to $0.25 per minute.

A few weeks before yesterday’s vote, the FCC released a public draft of its proposal with new voice-call caps ranging from $0.10 to $0.18 per minute, and new video call caps ranging from $0.18 to $0.41 per minute. These new limits account for changes to the method of rate-cap calculation, the $0.02 additional fee, and a new size category of “extremely small jails” that can charge the highest rates.
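To make the rate changes concrete, here is a small illustrative calculation using the per-minute caps cited above; the 20-minute call length is a hypothetical chosen for the example, not a figure from the order.

```python
# Illustrative arithmetic only: compares the cost of a 20-minute audio call
# under the 2024 caps and the newly proposed caps cited in the article.
# Per-minute rates come from the article; the call length is hypothetical.

CAPS_2024 = (0.06, 0.12)      # audio, $/minute, across facility-size tiers
CAPS_PROPOSED = (0.10, 0.18)  # audio, $/minute, under the new draft

def call_cost(rate_per_minute: float, minutes: int) -> float:
    """Cost of a single call at a flat per-minute rate, rounded to cents."""
    return round(rate_per_minute * minutes, 2)

minutes = 20
for label, (low, high) in [("2024 order", CAPS_2024), ("proposed", CAPS_PROPOSED)]:
    print(f"{label}: ${call_cost(low, minutes):.2f}-${call_cost(high, minutes):.2f} "
          f"per {minutes}-minute call")
```

At the top of each range, the same 20-minute call goes from $2.40 under the 2024 order to $3.60 under the proposed caps, before the additional $0.02 fee.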

Gomez criticized an inflation factor of 6.7 percent that she said was added in the “11th hour.” The final version of the order approved at yesterday’s meeting hasn’t been released publicly yet. The inflation “factor will be adopted without being given notice to the public that it was being considered… or evidence that it’s necessary,” Gomez said.


ICE’s forced face scans to verify citizens are unconstitutional, lawmakers say

“A 2024 test by the National Institute of Standards and Technology found that facial recognition tools are less accurate when images are low quality, blurry, obscured, or taken from the side or in poor light—exactly the kind of images an ICE agent would likely capture when using a smartphone in the field,” their letter said.

If ICE’s use continues to expand, mistakes “will almost certainly proliferate,” senators said, and “even if ICE’s facial recognition tools were perfectly accurate, these technologies would still pose serious threats to individual privacy and free speech.”

Matthew Guariglia, senior policy analyst at the Electronic Frontier Foundation, told 404 Media that ICE’s growing use of facial recognition confirms that “we should have banned government use of face recognition when we had the chance because it is dangerous, invasive, and an inherent threat to civil liberties.” It also suggests that “any remaining pretense that ICE is harassing and surveilling people in any kind of ‘precise’ way should be left in the dust,” Guariglia said.

ICE scans faces, even if shown an ID

In their letter to ICE acting director Todd Lyons, senators sent a long list of questions to learn more about “ICE’s expanded use of biometric technology systems,” which senators suggested risked having “a sweeping and lasting impact on the public’s civil rights and liberties.” They demanded to know when ICE started using face scans in domestic deployments, as previously the technology was only known to be used at the border, and what testing was done to ensure apps like Mobile Fortify are accurate and unbiased.

Perhaps most relevant to 404 Media’s recent report, senators asked, “Does ICE have any policies, practices, or procedures around the use of the Mobile Fortify app to identify US citizens?” Lyons was supposed to respond by October 2, but Ars was not able to immediately confirm whether that deadline was met.

DHS declined “to confirm or deny law enforcement capabilities or methods” in response to 404 Media’s report, while CBP confirmed that Mobile Fortify is still being used by ICE, along with “a variety of technological capabilities” that supposedly “enhance the effectiveness of agents on the ground.”


If things in America weren’t stupid enough, Texas is suing Tylenol maker

While the underlying cause or causes of autism spectrum disorder remain elusive and appear likely to be a complex interplay of genetic and environmental factors, President Trump and his anti-vaccine health secretary Robert F. Kennedy Jr.—neither of whom have any scientific or medical background whatsoever—have decided to pin the blame on Tylenol, a common pain reliever and fever reducer that has no proven link to autism.

And now, Texas Attorney General Ken Paxton is suing Tylenol’s maker, Kenvue, and Johnson & Johnson, which previously sold Tylenol, claiming that they have been “deceptively marketing Tylenol” knowing that it “leads to a significantly increased risk of autism and other disorders.”

To back that claim, Paxton relies on the “considerable body of evidence… recently highlighted by the Trump Administration.”

Of course, there is no “considerable” evidence for this claim, only tenuous associations and conflicting studies. Trump and Kennedy’s justification for blaming Tylenol was revealed in a rambling, incoherent press conference last month, in which Trump spoke of a “rumor” about Tylenol and his “opinion” on the matter. Still, he firmly warned against its use, saying well over a dozen times: “don’t take Tylenol.”

“Don’t take Tylenol. There’s no downside. Don’t take it. You’ll be uncomfortable. It won’t be as easy maybe, but don’t take it if you’re pregnant. Don’t take Tylenol and don’t give it to the baby after the baby is born,” he said.

“Scientifically unfounded”

As Ars has reported previously, there are some studies that have found an association between use of Tylenol (aka acetaminophen or paracetamol) and a higher risk of autism. But many of the studies finding such an association have significant flaws. Other studies have found no link. That includes a highly regarded Swedish study that compared autism risk among siblings with different acetaminophen exposures during pregnancy, but otherwise similar genetic and environmental risks. Acetaminophen didn’t make a difference, suggesting other genetic and/or environmental factors might explain any associations. Further, even if there is a real association (aka a correlation) between acetaminophen use and autism risk, that does not mean the pain reliever is the cause of autism.


Senators move to keep Big Tech’s creepy companion bots away from kids

Big Tech says bans aren’t the answer

As the bill advances, it could change, senators and parents acknowledged at the press conference. It will likely face backlash from privacy advocates who have raised concerns that widely collecting personal data for age verification puts sensitive information at risk of a data breach or other misuse.

The tech industry has already voiced opposition. On Tuesday, Chamber of Progress, a Big Tech trade group, criticized the law as taking a “heavy-handed approach” to child safety. The group’s vice president of US policy and government relations, K.J. Bagchi, said that “we all want to keep kids safe, but the answer is balance, not bans.

“It’s better to focus on transparency when kids chat with AI, curbs on manipulative design, and reporting when sensitive issues arise,” Bagchi said.

However, several organizations dedicated to child safety online, including the Young People’s Alliance, the Tech Justice Law Project, and the Institute for Families and Technology, cheered senators’ announcement Tuesday. The GUARD Act, these groups told Time, is just “one part of a national movement to protect children and teens from the dangers of companion chatbots.”

Mourning parents are rallying behind that movement. Earlier this month, Garcia praised California for “finally” passing the first state law requiring companies to protect their users who express suicidal ideations to chatbots.

“American families, like mine, are in a battle for the online safety of our children,” Garcia said at that time.

During Tuesday’s press conference, Blumenthal noted that the chatbot ban bill was just one initiative of many that he and Hawley intend to raise to heighten scrutiny on AI firms.


Python plan to boost software security foiled by Trump admin’s anti-DEI rules

“Given the value of the grant to the community and the PSF, we did our utmost to get clarity on the terms and to find a way to move forward in concert with our values. We consulted our NSF contacts and reviewed decisions made by other organizations in similar circumstances, particularly The Carpentries,” the Python Software Foundation said.

Board voted unanimously to withdraw application

The Carpentries, which teaches computational and data science skills to researchers, said in June that it withdrew its grant proposal after “we were notified that our proposal was flagged for DEI content, namely, for ‘the retention of underrepresented students, which has a limitation or preference in outreach, recruitment, participation that is not aligned to NSF priorities.’” The Carpentries was also concerned about the National Science Foundation rule against grant recipients advancing or promoting DEI in “any” program, a change that took effect in May.

“These new requirements mean that, in order to accept NSF funds, we would need to agree to discontinue all DEI focused programming, even if those activities are not carried out with NSF funds,” The Carpentries’ announcement in June said, explaining the decision to rescind the proposal.

The Python Software Foundation similarly decided that it “can’t agree to a statement that we won’t operate any programs that ‘advance or promote’ diversity, equity, and inclusion, as it would be a betrayal of our mission and our community,” it said yesterday. The foundation board “voted unanimously to withdraw” the application.

The Python foundation said it is disappointed because the project would have offered “invaluable advances to the Python and greater open source community, protecting millions of PyPI users from attempted supply-chain attacks.” The plan was to “create new tools for automated proactive review of all packages uploaded to PyPI, rather than the current process of reactive-only review. These novel tools would rely on capability analysis, designed based on a dataset of known malware. Beyond just protecting PyPI users, the outputs of this work could be transferable for all open source software package registries, such as NPM and Crates.io, improving security across multiple open source ecosystems.”
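To illustrate the general idea behind capability analysis, the sketch below statically scans a file for imports and calls that grant risky capabilities (networking, process spawning, dynamic code execution). This is a minimal illustration of the technique, not the PSF’s actual design; the capability map and the `scan_capabilities` helper are invented for this example.

```python
# Minimal sketch of capability analysis: statically inspect Python source
# for imports and calls that grant security-relevant capabilities.
# The capability map below is invented for illustration.
import ast

RISKY_IMPORTS = {
    "socket": "network access",
    "subprocess": "process spawning",
    "ctypes": "native code loading",
}
RISKY_CALLS = {"eval", "exec"}  # dynamic code execution

def scan_capabilities(source: str) -> set[str]:
    """Return the set of risky capabilities a source file appears to use."""
    findings = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            for alias in node.names:
                root = alias.name.split(".")[0]
                if root in RISKY_IMPORTS:
                    findings.add(RISKY_IMPORTS[root])
        elif isinstance(node, ast.ImportFrom) and node.module:
            root = node.module.split(".")[0]
            if root in RISKY_IMPORTS:
                findings.add(RISKY_IMPORTS[root])
        elif isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in RISKY_CALLS:
                findings.add("dynamic code execution")
    return findings

sample = "import subprocess\nsubprocess.run(['curl', 'http://example.com'])\n"
print(scan_capabilities(sample))  # {'process spawning'}
```

A real registry-scale scanner would go much further, weighting findings against a corpus of known malware as the foundation describes, but the core move is the same: flag packages whose declared purpose doesn’t match the capabilities their code actually exercises.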

The foundation is still hoping to do that work and ended its blog post with a call for donations from individuals and companies that use Python.


Australia’s social media ban is “problematic,” but platforms will comply anyway

Social media platforms have agreed to comply with Australia’s social media ban for users under 16 years old, begrudgingly embracing the world’s most restrictive online child safety law.

On Tuesday, Meta, Snap, and TikTok confirmed to Australia’s parliament that they’ll start removing and deactivating more than a million underage accounts when the law’s enforcement begins on December 10, Reuters reported.

Firms risk fines of up to $32.5 million for failing to block underage users.

Age checks are expected to be spotty, however, and Australia is still “scrambling” to figure out “key issues around enforcement,” including detailing firms’ precise obligations, AFP reported.

An FAQ managed by Australia’s eSafety regulator noted that platforms will be expected to find the accounts of all users under 16.

Those users must be allowed to download their data easily before their account is removed.

Some platforms can otherwise allow users to simply deactivate and retain their data until they reach age 17. Meta and TikTok expect to go that route, but Australia’s regulator warned that “users should not rely on platforms to provide this option.”

Additionally, platforms must prepare to catch kids who skirt age gates, the regulator said, and must block anyone under 16 from opening a new account. Beyond that, they’re expected to prevent “workarounds” to “bypass restrictions,” such as kids faking IDs with AI, tricking face scans with deepfakes, or using virtual private networks (VPNs) to shift their apparent location to basically anywhere else in the world with less restrictive child safety policies.

Kids discovered inappropriately accessing social media should be easy to report, too, Australia’s regulator said.
