Policy

Cloudflare defies Italy’s Piracy Shield, won’t block websites on 1.1.1.1 DNS

The CCIA added that “the Piracy Shield raises a significant number of concerns which can inadvertently affect legitimate online services, primarily due to the potential for overblocking.” The letter said that in October 2024, “Google Drive was mistakenly blocked by the Piracy Shield system, causing a three-hour blackout for all Italian users, while 13.5 percent of users were still blocked at the IP level, and 3 percent were blocked at the DNS level after 12 hours.”

The Italian system “aims to automate the blocking process by allowing rights holders to submit IP addresses directly through the platform, following which ISPs have to implement a block,” the CCIA said. “Verification procedures between submission and blocking are not clear, and indeed seem to be lacking. Additionally, there is a total lack of redress mechanisms for affected parties, in case a wrong domain or IP address is submitted and blocked.”
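The overblocking concern is easiest to see with a toy example: many unrelated services can sit behind a single IP address (a CDN edge or shared host), so an IP-level block submitted for one site takes down everything sharing that address. The sketch below is purely illustrative, with invented domains and documentation-range IPs, and is not how the Piracy Shield platform is actually implemented:

```python
# Toy illustration of IP-level overblocking: unrelated sites that share an IP
# address (a CDN edge or shared hosting) all become unreachable when that one
# address is blocked. Domains and IPs here are invented examples.
DNS_TABLE = {
    "pirate-stream.example":  "203.0.113.10",
    "legit-docs.example":     "203.0.113.10",  # legitimate site on the same IP
    "unrelated-shop.example": "203.0.113.25",
}

blocked_ips = {"203.0.113.10"}  # a rights holder submits the streaming site's IP

for domain, ip in DNS_TABLE.items():
    state = "BLOCKED" if ip in blocked_ips else "reachable"
    print(f"{domain}: {state}")
# The pirate site and the legitimate site sharing its IP both go dark.
```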

30-minute blocking prevents “careful verification”

The 30-minute blocking window “leaves extremely limited time for careful verification by ISPs that the submitted destination is indeed being used for piracy purposes,” the CCIA said. The trade group also questioned the piracy-reporting system’s ties to the organization that runs Italy’s top football league.

“Additionally, the fact that the Piracy Shield platform was developed for AGCOM by a company affiliated with Lega Serie A, which is one of the very few entities authorized to report, raises serious questions about the potential conflict of interest exacerbating the lack of transparency issue,” the letter said.

A trade group for Italian ISPs has argued that the law requires “filtering and tasks that collide with individual freedoms” and is contrary to European legislation that classifies broadband network services as mere conduits that are exempt from liability.

“On the contrary, in Italy criminal liability has been expressly established for ISPs,” Dalia Coffetti, head of regulatory and EU affairs at the Association of Italian Internet Providers, wrote in April 2025. Coffetti argued, “There are better tools to fight piracy, including criminal Law, cooperation between States, and digital solutions that downgrade the quality of the signal broadcast via illegal streaming websites or IPtv. European ISPs are ready to play their part in the battle against piracy, but the solution certainly does not lie in filtering and blocking IP addresses.”

X’s half-assed attempt to paywall Grok doesn’t block free image editing

So far, US regulators have been quiet about Grok’s outputs, with the Justice Department generally promising to take all forms of CSAM seriously. On Friday, Democratic senators began shifting the tide, demanding that Google and Apple remove X and Grok from their app stores until X improves safeguards to block harmful outputs.

“There can be no mistake about X’s knowledge, and, at best, negligent response to these trends,” the senators wrote in a letter to Apple Chief Executive Officer Tim Cook and Google Chief Executive Officer Sundar Pichai. “Turning a blind eye to X’s egregious behavior would make a mockery of your moderation practices. Indeed, not taking action would undermine your claims in public and in court that your app stores offer a safer user experience than letting users download apps directly to their phones.”

A response to the letter is requested by January 23.

Whether the UK will accept X’s supposed solution is yet to be seen. If UK regulator Ofcom decides to move ahead with a probe into whether Musk’s chatbot violates the UK’s Online Safety Act, X could face a UK ban or fines of up to 10 percent of the company’s global turnover.

“It’s unlawful,” UK Prime Minister Keir Starmer said of Grok’s worst outputs. “We’re not going to tolerate it. I’ve asked for all options to be on the table. It’s disgusting. X need to get their act together and get this material down. We will take action on this because it’s simply not tolerable.”

At least one UK Member of Parliament, Jess Asato, told The Guardian that even if X had put up an actual paywall, that wouldn’t be enough to end the scrutiny.

“While it is a step forward to have removed the universal access to Grok’s disgusting nudifying features, this still means paying users can take images of women without their consent to sexualise and brutalise them,” Asato said. “Paying to put semen, bullet holes, or bikinis on women is still digital sexual assault, and xAI should disable the feature for good.”

Wi-Fi advocates get win from FCC with vote to allow higher-power devices

“This is important for Wi-Fi 7 as well as Wi-Fi 6,” Feld wrote today in response to the Carr plan. “But we need a real pipeline for more unlicensed spectrum. Glad to see value of unlicensed acknowledged. Looking forward to more of it.”

Risk to Wi-Fi spectrum appears low

Despite the positive response to Carr’s plan this week, there’s still a potential threat to Wi-Fi’s use of the 6 GHz band. The 1,200 MHz between 5.925 and 7.125 GHz was allocated to Wi-Fi in April 2020, but a plan to auction spectrum to wireless carriers could take some of those frequencies away from Wi-Fi.

A law passed by Congress and signed by Trump in July 2025 requires the FCC to auction at least 800 MHz of spectrum, some of which could come from the 6 GHz band currently allocated to Wi-Fi or from the Citizens Broadband Radio Service (CBRS) in the 3550 MHz to 3700 MHz range. The FCC has some leeway to decide which frequencies to auction, and its pending decision will be watched closely by groups seeking to preserve and expand Wi-Fi and CBRS access.

Calabrese said in June 2025 that 6 GHz and CBRS “are the most vulnerable non-federal bands for reallocation and auction.” But now, after Trump administration statements claiming 6 GHz Wi-Fi as a key Trump accomplishment and support from congressional Republicans, Calabrese told Ars today that reallocation of Wi-Fi frequencies “seems far less likely.” Advocates are “far more worried about CBRS now than 6 GHz,” he said.

In addition to consumer advocacy groups, the cable industry has been lobbying for Wi-Fi and CBRS, putting it in opposition to the mobile industry that seeks more exclusive licenses to use airwaves. Cable industry lobby group NCTA said yesterday that it is “encouraged by the FCC’s action to enhance usage in the 6 GHz band. With Wi-Fi now carrying nearly 90 percent of mobile data, securing more unlicensed spectrum is essential to keep up with surging consumer demand, power emerging technologies, and ensure fast, reliable connections for homes, businesses, and communities nationwide.”

Grok assumes users seeking images of underage girls have “good intent”


Conflicting instructions?

Expert explains how simple it could be to tweak Grok to block CSAM outputs.

For weeks, xAI has faced backlash over undressing and sexualizing images of women and children generated by Grok. One researcher conducted a 24-hour analysis of the Grok account on X and estimated that the chatbot generated over 6,000 images an hour flagged as “sexually suggestive or nudifying,” Bloomberg reported.

While the chatbot claimed that xAI supposedly “identified lapses in safeguards” that allowed outputs flagged as child sexual abuse material (CSAM) and was “urgently fixing them,” Grok has proven to be an unreliable spokesperson, and xAI has not announced any fixes.

A quick look at Grok’s safety guidelines on its public GitHub shows they were last updated two months ago. The GitHub also indicates that, despite prohibiting such content, Grok maintains programming that could make it likely to generate CSAM.

Billed as “the highest priority,” superseding “any other instructions” Grok may receive, these rules explicitly prohibit Grok from assisting with queries that “clearly intend to engage” in creating or distributing CSAM or otherwise sexually exploit children.

However, the rules also direct Grok to “assume good intent” and “don’t make worst-case assumptions without evidence” when users request images of young women.

Using words like “‘teenage’ or ‘girl’ does not necessarily imply underage,” Grok’s instructions say.

X declined Ars’ request to comment. The only statement X Safety has made so far shows that Elon Musk’s social media platform plans to blame users for generating CSAM, threatening to permanently suspend users and report them to law enforcement.

Critics dispute that X’s solution will end the Grok scandal, and child safety advocates and foreign governments are growing increasingly alarmed as X delays updates that could block Grok’s undressing spree.

Why Grok shouldn’t “assume good intentions”

Grok can struggle to assess users’ intentions, making it “incredibly easy” for the chatbot to generate CSAM under xAI’s policy, Alex Georges, an AI safety researcher, told Ars.

The chatbot has been instructed, for example, that “there are no restrictions on fictional adult sexual content with dark or violent themes,” and Grok’s mandate to assume “good intent” may create gray areas in which CSAM could be generated.

There’s evidence that in relying on these guidelines, Grok is currently generating a flood of harmful images on X, with even more graphic images being created on the chatbot’s standalone website and app, Wired reported. Researchers who surveyed 20,000 random images and 50,000 prompts told CNN that more than half of Grok’s outputs that feature images of people sexualize women, with 2 percent depicting “people appearing to be 18 years old or younger.” Some users specifically “requested minors be put in erotic positions and that sexual fluids be depicted on their bodies,” researchers found.

Grok isn’t the only chatbot that sexualizes images of real people without consent, but its policy seems to leave safety at a surface level, Georges said, and xAI is seemingly unwilling to expand safety efforts to block more harmful outputs.

Georges is the founder and CEO of AetherLab, an AI company that helps a wide range of firms—including tech giants like OpenAI, Microsoft, and Amazon—deploy generative AI products with appropriate safeguards. He told Ars that AetherLab works with many AI companies that are concerned about blocking harmful companion bot outputs like Grok’s. And although there are no industry norms—creating a “Wild West” due to regulatory gaps, particularly in the US—his experience with chatbot content moderation has convinced him that Grok’s instructions to “assume good intent” are “silly” because xAI’s requirement of “clear intent” doesn’t mean anything operationally to the chatbot.

“I can very easily get harmful outputs by just obfuscating my intent,” Georges said, emphasizing that “users absolutely do not automatically fit into the good-intent bucket.” And even “in a perfect world,” where “every single user does have good intent,” Georges noted, the model “will still generate bad content on its own because of how it’s trained.”

Benign inputs can lead to harmful outputs, Georges explained, and a sound safety system would catch both benign and harmful prompts. Consider, he suggested, a prompt for “a pic of a girl model taking swimming lessons.”

The user could be trying to create an ad for a swimming school, or they could have malicious intent and be attempting to manipulate the model. For users with benign intent, prompting can “go wrong,” Georges said, if Grok’s training data statistically links certain “normal phrases and situations” to “younger-looking subjects and/or more revealing depictions.”

“Grok might have seen a bunch of images where ‘girls taking swimming lessons’ were young and that human ‘models’ were dressed in revealing things, which means it could produce an underage girl in a swimming pool wearing something revealing,” Georges said. “So, a prompt that looks ‘normal’ can still produce an image that crosses the line.”

While AetherLab has never worked directly with xAI or X, Georges’ team has “tested their systems independently by probing for harmful outputs, and unsurprisingly, we’ve been able to get really bad content out of them,” Georges said.

Leaving AI chatbots unchecked poses a risk to children. A spokesperson for the National Center for Missing and Exploited Children (NCMEC), which processes reports of CSAM on X in the US, told Ars that “sexual images of children, including those created using artificial intelligence, are child sexual abuse material (CSAM). Whether an image is real or computer-generated, the harm is real, and the material is illegal.”

Researchers at the Internet Watch Foundation told the BBC that users of dark web forums are already promoting CSAM they claim was generated by Grok. These images are typically classified in the United Kingdom as the “lowest severity of criminal material,” researchers said. But at least one user was found to have fed a less-severe Grok output into another tool to generate the “most serious” criminal material, demonstrating how Grok could be used as an instrument by those seeking to commercialize AI CSAM.

Easy tweaks to make Grok safer

In August, xAI explained how the company works to keep Grok safe for users. But although the company acknowledged that it’s difficult to distinguish “malignant intent” from “mere curiosity,” xAI seemed convinced that Grok could “decline queries demonstrating clear intent to engage in activities” like child sexual exploitation, without blocking prompts from merely curious users.

That report showed that xAI refines Grok over time to block requests for CSAM “by adding safeguards to refuse requests that may lead to foreseeable harm”—a step xAI does not appear to have taken since late December, when reports first raised concerns that Grok was sexualizing images of minors.

Georges said there are easy tweaks xAI could make to Grok to block harmful outputs, including CSAM, while acknowledging that he is making assumptions without knowing exactly how xAI works to place checks on Grok.

First, he recommended that Grok rely on end-to-end guardrails, blocking “obvious” malicious prompts and flagging suspicious ones. It should then double-check outputs to block harmful ones, even when prompts are benign.

This strategy works best, Georges said, when multiple watchdog systems are employed, noting that “you can’t rely on the generator to self-police because its learned biases are part of what creates these failure modes.” That’s the role that AetherLab wants to fill across the industry, helping test chatbots for weaknesses in blocking harmful outputs by using “an ‘agentic’ approach with a shitload of AI models working together (thereby reducing the collective bias),” Georges said.
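To make that layered approach concrete, here is a minimal sketch of the kind of pipeline Georges describes: screen the prompt, generate, then let independent checkers review the output even when the prompt looked benign. Every function and keyword list below is a hypothetical stand-in, not xAI’s or AetherLab’s actual code:

```python
# Minimal sketch of a layered guardrail: screen the prompt, generate, then let
# independent checkers vote on the output even if the prompt looked benign.
# All names and keyword lists are hypothetical stand-ins.

def screen_prompt(prompt: str) -> str:
    """Return 'block', 'flag', or 'allow' for an incoming prompt."""
    banned = ("undress", "nudify")           # illustrative terms only
    suspicious = ("teen", "schoolgirl")      # illustrative terms only
    text = prompt.lower()
    if any(term in text for term in banned):
        return "block"
    return "flag" if any(term in text for term in suspicious) else "allow"

def generate_image(prompt: str) -> bytes:
    """Stand-in for the underlying image generator."""
    return b"\x89PNG..."                     # placeholder bytes

def looks_sexualized(image: bytes) -> bool:  # stub output classifier
    return False

def looks_underage(image: bytes) -> bool:    # stub output classifier
    return False

OUTPUT_CHECKERS = (looks_sexualized, looks_underage)

def guarded_generate(prompt: str) -> bytes | None:
    verdict = screen_prompt(prompt)
    if verdict == "block":
        return None                          # refuse obvious abuse up front
    image = generate_image(prompt)
    # Output checks run for "allow" and "flag" alike, because benign-looking
    # prompts can still yield harmful images; flagged cases could also be
    # routed to human review in a fuller system.
    if any(check(image) for check in OUTPUT_CHECKERS):
        return None
    return image
```

The point, per Georges, is that the generator never gets the final say; separate models with different biases review what it produces.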

xAI could also likely block more harmful outputs by reworking Grok’s prompt style guidance, Georges suggested. “If Grok is, say, 30 percent vulnerable to CSAM-style attacks and another provider is 1 percent vulnerable, that’s a massive difference,” Georges said.

It appears that xAI is currently relying on Grok to police itself, while using safety guidelines that Georges said overlook an “enormous” number of potential cases where Grok could generate harmful content. The guidelines do not “signal that safety is a real concern,” Georges said, suggesting that “if I wanted to look safe while still allowing a lot under the hood, this is close to the policy I’d write.”

Chatbot makers must protect kids, NCMEC says

X has been very vocal about policing its platform for CSAM since Musk took over Twitter, but under former CEO Linda Yaccarino, the company adopted a broad protective stance against all image-based sexual abuse (IBSA). In 2024, X became one of the earliest corporations to voluntarily adopt the IBSA Principles that X now seems to be violating by failing to tweak Grok.

Those principles seek to combat all kinds of IBSA, recognizing that even fake images can “cause devastating psychological, financial, and reputational harm.” When it adopted the principles, X vowed to prevent the nonconsensual distribution of intimate images by providing easy-to-use reporting tools and quickly supporting the needs of victims desperate to block “the nonconsensual creation or distribution of intimate images” on its platform.

Kate Ruane, the director of the Center for Democracy and Technology’s Free Expression Project, which helped form the working group behind the IBSA Principles, told Ars that although the commitments X made were “voluntary,” they signaled that X agreed the problem was a “pressing issue the company should take seriously.”

“They are on record saying that they will do these things, and they are not,” Ruane said.

As the Grok controversy sparks probes in Europe, India, and Malaysia, xAI may be forced to update Grok’s safety guidelines or make other tweaks to block the worst outputs.

In the US, xAI may face civil suits under federal or state laws that restrict intimate image abuse. If Grok’s harmful outputs continue into May, X could face penalties under the Take It Down Act, which authorizes the Federal Trade Commission to intervene if platforms don’t quickly remove both real and AI-generated non-consensual intimate imagery.

But whether US authorities will intervene any time soon remains unknown, as Musk is a close ally of the Trump administration. A spokesperson for the Justice Department told CNN that the department “takes AI-generated child sex abuse material extremely seriously and will aggressively prosecute any producer or possessor of CSAM.”

“Laws are only as good as their enforcement,” Ruane told Ars. “You need law enforcement at the Federal Trade Commission or at the Department of Justice to be willing to go after these companies if they are in violation of the laws.”

Child safety advocates seem alarmed by the sluggish response. “Technology companies have a responsibility to prevent their tools from being used to sexualize or exploit children,” NCMEC’s spokesperson told Ars. “As AI continues to advance, protecting children must remain a clear and nonnegotiable priority.”

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

Warner Bros. sticks with Netflix merger, calls Paramount’s $108B bid “illusory”


Larry Ellison pledged $40B, but “he didn’t raise the price,” Warner chair says.

The Warner Bros. Discovery board has unanimously voted to rebuff Paramount’s $108.4 billion offer and urged shareholders to reject the hostile takeover bid. The board is continuing to support Netflix’s pending $82.7 billion purchase of its streaming and movie studios businesses along with a separate spinoff of the Warner Bros. cable TV division.

Warner Bros. called the Paramount bid “illusory” in a presentation for shareholders today, saying the offer requires an “extraordinary amount of debt financing” and other terms that make it less likely to be completed than a Netflix merger. It would be the largest leveraged buyout ever, “with $87B of total pro forma gross debt,” and is “effectively a one-sided option for PSKY [Paramount Skydance] as the offer can be terminated or amended by PSKY at any time,” Warner Bros. said.

The Warner Bros. presentation touted Netflix’s financial strength while saying that Paramount “is a $14B market cap company with a ‘junk’ credit rating, negative free cash flows, significant fixed financial obligations, and a high degree of dependency on its linear business.” The Paramount “offer is illusory as it cannot be completed before it is currently scheduled to expire,” Warner Bros. said.

Warner Bros. said in a letter to shareholders today that it prefers Netflix with its “market capitalization of approximately $400 billion, an investment grade balance sheet, an A/A3 credit rating and estimated free cash flow of more than $12 billion for 2026.” Moreover, the deal with Netflix provides Warner Bros. with “more flexibility to operate in a normal course until closing,” the letter said.

Even if Paramount is able to complete a deal, “WBD stockholders will not receive cash for 12-18 months and you cannot trade your shares while shares are tendered,” the board told investors. Despite the seemingly firm position, Warner Bros. Discovery board Chairman Samuel Di Piazza Jr. seemed to suggest in an appearance on CNBC’s Squawk Box today that the board could be swayed by a higher offer.

Larry Ellison “didn’t raise the price”

On December 5, after a bidding war that also involved Paramount and Comcast, Warner Bros. struck a deal to sell Netflix its streaming and movie studios businesses. Netflix, already the world’s largest streaming service, would become an even bigger juggernaut if it completes the takeover, which includes rival HBO Max, WB Studios, and other assets.

While the Paramount bid is higher, it would involve the purchase of more Warner Bros. assets than the deal with Netflix. “Unlike Netflix, Paramount is seeking to buy the company’s legacy television and cable assets such as CNN, TNT, and Discovery Channel,” the Financial Times wrote. “Netflix plans to acquire WBD after it spins off its cable TV business, which is scheduled to happen this year.”

Paramount, which recently completed an $8 billion merger with Skydance, submitted its bid for a hostile takeover days after the Netflix/Warner Bros. deal was announced. Warner Bros. resisted, and Paramount amended its offer on December 22 to address objections.

“Larry Ellison has agreed to provide an irrevocable personal guarantee of $40.4 billion of the equity financing for the offer and any damages claims against Paramount,” Paramount said. It also said it offered “improved flexibility to WBD on debt refinancing transactions, representations and interim operating covenants.”

Larry Ellison’s son, David Ellison, is the chairman and CEO of Paramount Skydance. In his CNBC appearance, Di Piazza acknowledged that “Larry Ellison stepped up to the table and the board recognizes what he did.” But “ultimately, he didn’t raise the price. So, in our perspective, Netflix continues to be the superior offer, a clear path to closing.”

Warner Bros. shareholders currently have a January 21 deadline for tendering shares under the Paramount offer, but that could change, as Paramount has indicated it could sweeten the deal further.

Breakup fees a sticking point

Warner Bros. said in the letter to shareholders today that the latest offer still isn’t good enough. Paramount is “attempting an acquisition requiring $94.65 billion of debt and equity financing, nearly seven times its total market capitalization,” requiring it “to incur an extraordinary amount of incremental debt—more than $50 billion—through arrangements with multiple financing partners,” the letter said.

Warner Bros. said that breaking the deal with Netflix would require it to pay Netflix a $2.8 billion termination fee. Either Paramount or Netflix would have to pay Warner Bros. a $5.8 billion termination fee if the buyer can’t get regulatory approval for a merger. But if a Paramount deal failed, there would also be $4.7 billion in unreimbursed costs for shareholders, reducing the effective termination fee to $1.1 billion, according to Warner Bros.

“In the large majority of cases, when an overbidder comes in, they take that break[up] fee and pay it,” Di Piazza said on CNBC.

Warner Bros. Discovery also said the Paramount offer would prohibit it from completing its planned separation of Discovery Global and Warner Bros., which it argues will bring substantial benefits to shareholders by letting each of the separated entities “focus on its own strategic plan.” This separation can be completed even if Netflix is unable to complete the merger for regulatory reasons, it said.

We contacted Paramount and will update this article if it provides any response.

Warner Bros. investor wants more negotiations

Warner Bros. is facing pressure from one of its top shareholders to negotiate further with Paramount. “Pentwater Capital Management, a hedge-fund manager that is among Warner’s top shareholders, told the board in a letter Wednesday that it is failing in its fiduciary duty to shareholders by not engaging in discussions with Paramount,” according to The Wall Street Journal.

The hedge-fund manager said the board should at least ask Paramount what improvements it is willing to make to its offer. “Pentwater vowed to vote against the merger and not support the renomination of directors in the future if Paramount raises its offer and Warner’s board doesn’t have further discussions with the company,” the Journal wrote.

The Warner Bros. board argued in its letter that “PSKY has continued to submit offers that still include many of the deficiencies we previously repeatedly identified to PSKY, none of which are present in the Netflix merger agreement, all while asserting that its offers do not represent its ‘best and final’ proposal.”

However, Di Piazza suggested on CNBC that Paramount could still put a superior offer on the table. “They had that opportunity in the seventh proposal, the eighth proposal, and they haven’t done it,” he said. “And so from our perspective, they’ve got to put something on the table that is compelling and is superior.”

Netflix issued a statement today saying it “is engaging with competition authorities, including the US Department of Justice and European Commission,” to move the deal forward. “As previously disclosed, the transaction is expected to close in 12-18 months from the date that Netflix and WBD originally entered into their merger agreement,” Netflix said.

Jon is a Senior IT Reporter for Ars Technica. He covers the telecom industry, Federal Communications Commission rulemakings, broadband consumer affairs, court cases, and government regulation of the tech industry.

Letting prisons jam contraband phones is a bad idea, phone companies tell FCC


FCC hopes you like jammin’ too

“Jamming will block all communications,” including 911 calls, CTIA tells FCC.

A Federal Communications Commission proposal to let state and local prisons jam contraband cell phones has support from Republican attorneys general and prison phone companies but faces opposition from wireless carriers that say it would disrupt lawful communications. Groups dedicated to Wi-Fi and GPS also raised concerns in comments to the FCC.

“Jamming will block all communications, not just communications from contraband devices,” wireless lobby group CTIA said in December 29 comments in response to Chairman Brendan Carr’s proposal. The CTIA said that “jamming blocks all communications, including lawful communications such as 911 calling,” and argued that the FCC “has no authority to allow jamming.”

CTIA members AT&T and Verizon expressed their displeasure in separate comments to the FCC. “The proposed legal framework is based on a flawed factual premise,” AT&T wrote.

While the Communications Act prohibits interference with authorized radio communications, Carr’s plan tries to sidestep this prohibition by proposing to de-authorize certain communications, AT&T wrote. “This legal framework, however, is premised on a fundamental factual error: the assumption that jammers will only block ‘unauthorized’ communications without impacting lawful uses. There is no way to jam some communications on a spectrum band but not others,” AT&T wrote.

Previous FCC leaders recognized the problem that radio jammers can’t differentiate between contraband and legitimate devices, AT&T said. “As explained above, there are no technical workarounds to that limitation with respect to jammers,” AT&T wrote.

“Jammers block all wireless communications”

In 2013, the FCC explained that jamming systems transmit on the same frequencies as their targets in order to disrupt the links between devices and network base stations and that this process “render[s] any wireless device operating on those frequencies unusable. When used to disrupt wireless devices, radio signal jammers cannot differentiate between contraband devices and legitimate devices, including devices making 911 calls. Radio signal jammers block all wireless communications on affected spectrum bands.”

That apparently hasn’t changed. The FCC’s new proposal issued in September 2025 said the commission’s “understanding is that jamming solutions block calls on all affected frequencies and… are unable to allow 911 calls to be transmitted.” But the proposal indicates this may be an acceptable outcome, as “some state DOC [Department of Corrections] officials have indicated that correctional facilities typically do not allow any calls from within, including emergency calls.”

If the FCC adopts its plan, it would “authorize, for the first time, non-federal operation of radio frequency (RF) jamming solutions in correctional facilities,” the proposal said.

Carr said in September that previous FCC actions, such as authorizing “contraband interdiction systems” and letting wireless carriers disable contraband phones at a prison’s request, have not been enough. “Contraband cellphones have been pouring into state and local prisons by the tens of thousands every year,” Carr said. “They are used to run drug operations, orchestrate kidnappings, and further criminal enterprises in communities all across the country.”

Carr said that prisons and jails will not be required to install jamming systems and that the FCC “proposes to authorize targeted jamming. Jamming technology can be precise enough that it does not interrupt the regular communications of law enforcement or community members in the vicinity.” The FCC proposal asks the public for comment on “restrictions that might prove necessary to ensure that jamming solutions are limited to this targeted use, and to mitigate the risk that these solutions are deployed in contexts other than a correctional facility environment.”

Jamming has support from 23 state attorneys general, all Republicans, who told the FCC that “inmates routinely use smuggled phones to coordinate criminal enterprises, intimidate witnesses, and orchestrate violence both inside and outside prison walls.” More jamming support came from the state corrections departments in Florida and South Carolina.

Prison phone companies like jamming

Prison phone companies that would financially benefit from increased use of official phone systems also support jamming cell phones. Global Tel*Link (aka ViaPath) called the plan “one more tool to help combat the serious problem of contraband wireless devices in correctional facilities.”

NCIC Correctional Services, another prison phone firm, said that jamming to create “‘dead zones’ within correctional facilities would permit smaller jails to restrict contraband device access where it is not cost-effective to install managed access systems.” Detection Innovation Group, which sells inmate-tracking technology to prisons and jails, also urged the FCC to allow jamming.

Telecom industry groups say that limiting the effect of jamming will be difficult or impossible. The harms identified over a decade ago “remain the same today, although their effects are magnified by the increased use of wireless devices for broadband,” said the Telecommunications Industry Association, a standards-development group. “If an RF jamming solution is deployed at a correctional facility, such deployment risks not only interfering with voice communications but disrupting vital broadband services as well within the facility itself as well as the surrounding community.”

Verizon told the FCC that the Communications Act “requires more restrictive use of jamming devices than the NPRM [Notice of Proposed Rulemaking] proposes.” The CTIA argued that jamming isn’t necessary because the wireless industry already offers Managed Access Systems (MAS) as “a safe and effective contraband interdiction ecosystem.”

A Managed Access System establishes “a private cellular network that captures communications (voice, text, data) on commercial wireless frequencies within a correctional facility, determines whether that exchange is coming from or going to a contraband device, and, if so, prevents those communications from connecting to the wireless provider’s network,” the CTIA said. “At the same time, MAS allows communications to and from approved devices to be transmitted without interruption, including 911 and public safety calls within the correctional facility.”
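In plain terms, the MAS approach amounts to an allowlist decision at the point where a call tries to leave the facility’s private network, which is what lets approved devices and 911 calls through while contraband phones are quietly held. The sketch below is a rough illustration with invented device IDs, not any MAS vendor’s actual logic:

```python
# Illustrative MAS-style routing decision: approved devices and emergency calls
# pass through to the carrier; everything else stays on the facility's private
# network. Not based on any vendor's implementation; device IDs are invented.
APPROVED_DEVICES = {"IMSI-001-STAFF", "IMSI-002-STAFF"}

def route_call(device_id: str, dialed_number: str) -> str:
    if dialed_number == "911":
        return "forward_to_carrier"      # emergency calls always complete
    if device_id in APPROVED_DEVICES:
        return "forward_to_carrier"      # staff and other approved devices
    return "hold_on_private_network"     # presumed contraband device

print(route_call("IMSI-999-UNKNOWN", "5551234"))  # hold_on_private_network
print(route_call("IMSI-999-UNKNOWN", "911"))      # forward_to_carrier
```

A jammer, by contrast, has no equivalent of this decision point; it degrades the radio environment for every device in range.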

Wi-Fi and GPS groups warn of jamming risks

More opposition came from the Wi-Fi Alliance, a tech industry group that tests and certifies interoperability of Wi-Fi products. The FCC proposal failed to “address the potential impact of such jamming on lawfully operating Wi-Fi and other unlicensed devices,” the group told the FCC.

The FCC plan is not limited to jamming of phones on spectrum licensed for the exclusive use of wireless carriers. The FCC additionally sought comment on whether contraband devices operating on Wi-Fi airwaves and other unlicensed spectrum should be subject to jamming. That’s concerning to the Wi-Fi Alliance because Wi-Fi operates on unlicensed spectrum that is shared by many users.

“Accordingly, declaring that a jammer on unlicensed spectrum is permitted to disrupt the communications of another device also operating on unlicensed spectrum is contrary to the foundational principle of Part 15 [of FCC rules], under which all unauthorized devices must cooperate in the use of spectrum,” the group said. “Moreover, authorizing the use of jamming equipment in unlicensed spectrum pursuant to Part 15 would undermine decades of global spectrum policy, weaken trust in license-exempt technologies by providing no assurance that devices using those technologies will work, and set a dangerous precedent for the intentional misuse of unlicensed spectrum.”

Letting jammers interfere with Wi-Fi and other unlicensed devices would effectively turn the jammers into “a de facto licensed service, operating with primary status in bands that are designated for unlicensed use,” the Wi-Fi Alliance said. “To achieve that undesirable result, the Commission would be required to change the Table of Frequency Allocations and issue authorizations for operations on unlicensed spectrum (just as it contemplates for the use of cell phone spectrum in jamming devices). That outcome would upend the premise of Part 15 operations.”

The GPS Innovation Alliance, another industry group, warned that even if the FCC imposes strict limits on transmission power and out-of-band emissions, “jammer transmissions can have spillover effects on adjacent and nearby band operations. Only specialized, encrypted signals, and specialized receivers and devices designed to decrypt those signals, are jam-resistant, in contrast to how most commercial technologies work.”

Now that public comments are in, Carr has to decide whether to move ahead with the plan as originally written, scrap it entirely, or come up with a compromise that might address some of the concerns raised by opponents. The FCC’s NPRM suggests a pilot program could be used to evaluate interference risks before a broader rollout, and the pilot idea received some support from carriers in their comments. A final proposal would be put to a vote of commissioners at the Republican-majority FCC.

News orgs win fight to access 20M ChatGPT logs. Now they want more.

Describing OpenAI’s alleged “playbook” to dodge copyright claims, news groups accused OpenAI of failing to “take any steps to suspend its routine destruction practices.” There were also “two spikes in mass deletion” that OpenAI attributed to “technical issues.”

However, OpenAI made sure to retain outputs that could help its defense, the court filing alleged, including data from accounts cited in news organizations’ complaints.

OpenAI did not take the same care to preserve chats that could be used as evidence against it, news groups alleged, citing testimony from Mike Trinh, OpenAI’s associate general counsel. “In other words, OpenAI preserved evidence of the News Plaintiffs eliciting their own works from OpenAI’s products but deleted evidence of third-party users doing so,” the filing said.

It’s unclear how much data was deleted, plaintiffs alleged, since OpenAI won’t share “the most basic information” on its deletion practices. But it’s allegedly very clear that OpenAI could have done more to preserve the data, since Microsoft apparently had no trouble doing so with Copilot, the filing said.

News plaintiffs are hoping the court will agree that OpenAI and Microsoft aren’t fighting fair by delaying sharing logs, which they said prevents them from building their strongest case.

They’ve asked the court to order Microsoft to “immediately” produce Copilot logs “in a readily searchable remotely-accessible format,” proposing a deadline of January 9 or “within a day of the Court ruling on this motion.”

Microsoft declined Ars’ request for comment.

As for OpenAI, news plaintiffs want to know whether the deleted logs, including the “mass deletions,” can be retrieved, perhaps bringing millions more ChatGPT conversations into the litigation that users likely expected would never see the light of day again.

On top of possible sanctions, news plaintiffs asked the court to keep in place a preservation order blocking OpenAI from permanently deleting users’ temporary and deleted chats. They also want the court to order OpenAI to explain “the full scope of destroyed output log data for all of its products at issue” in the litigation and whether those deleted chats can be restored, so that news plaintiffs can examine them as evidence, too.

Appeals court agrees that Congress blocked cuts to research costs

While indirect cost rates (money paid for overhead, expressed as a percentage of the money that goes directly to the researcher to support their work) average about 30 percent, many universities have ended up with rates above 50 percent. A sudden and unexpected drop to 15 percent, applied retroactively as the Trump administration planned, would create serious financial problems for major research universities.
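For a sense of scale, the gap between a negotiated rate and the proposed flat cap is straightforward arithmetic (the grant figure below is illustrative, not drawn from any particular award):

```python
# Illustrative arithmetic: overhead dollars implied by different indirect rates.
direct_costs = 1_000_000  # hypothetical direct research costs on a single grant
for rate in (0.50, 0.30, 0.15):  # high negotiated rate, rough average, proposed cap
    print(f"{rate:.0%} indirect rate -> ${direct_costs * rate:,.0f} in overhead")
# 50% -> $500,000; 30% -> $300,000; 15% -> $150,000 on the same grant.
```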

The district court’s initial ruling held that this change was legally problematic in several ways. It violated the Administrative Procedure Act by being issued without any notice or comment, and the low flat rate was found to be arbitrary and capricious, especially compared to the system it was replacing. The ruling determined that the new policy also violated existing procedures within the Department of Health and Human Services.

But the Appeals Court panel of three judges unanimously determined that they didn’t even have to consider all of those issues because Congress had already prohibited exactly this action. In 2017, the first Trump administration also attempted to set all indirect costs to the same low, flat rate, and Congress responded by attaching a rider to a budget agreement that blocked alterations to the NIH overhead policy. Congress has been renewing that rider ever since.

A clear prohibition

In arguing for its new policy, the government tried to present it as consistent with Congress’s prohibition. The rider allowed some exceptions to the normal means of calculating overhead rates, but they were extremely limited; the NIH tried to argue that these exceptions could include every single grant issued to a university, something the court found was clearly inconsistent with the limits set by Congress.

The court also noted that, as announced, the NIH policy applied to every single grant, regardless of whether the recipient was at a university—something it later contended was a result of “inartful language.” But the judges wrote that it’s a bit late to revise the policy, saying, “We cannot, of course, disregard what the Supplemental Guidance actually says in favor of what NIH now wishes it said.”

Ørsted seeks injunction against US government over project freeze

In October, Ørsted raised $9 billion from investors in a rights issue after Trump’s attempts to block a rival developer’s project spooked investors.

The US government then issued a stop-work order against the company’s $1.5 billion Revolution Wind project off the coast of Rhode Island, although Ørsted has persuaded a judge to lift the order.

In November, Ørsted agreed to sell half of the world’s largest offshore wind farm to Apollo in a $6.5 billion deal. Then on December 22, the company received orders from the US government to suspend “all ongoing activities on the outer continental shelf for the next 90 days.”

According to the company, the Revolution Wind project is now about 87 percent complete, with 58 out of its 65 wind turbines installed.

While Trump has made Ørsted’s planned offshore wind projects in the US far more difficult, its troubles predate his administration.

In 2023, the company had to walk away from two large projects in the US because of rising costs that have affected the entire industry.

In a statement on Ørsted’s legal challenge, White House spokesperson Taylor Rogers said: “For years, Americans have been forced to pay billions more for the least reliable source of energy. The Trump administration has paused the construction of all large-scale offshore wind projects because our number one priority is to put America First and protect the national security of the American people.”

© 2025 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.

The nation’s strictest privacy law just took effect, to data brokers’ chagrin

Californians are getting a new, supercharged way to stop data brokers from hoarding and selling their personal information, as a recently enacted law that’s among the strictest in the nation took effect at the beginning of the year.

According to the California Privacy Protection Agency, more than 500 companies actively scour all sorts of sources for scraps of information about individuals, then package and store it to sell to marketers, private investigators, and others.

The nonprofit Consumer Watchdog said in 2024 that brokers trawl automakers, tech companies, junk-food restaurants, device makers, and others for financial info, purchases, family situations, eating, exercising, travel, entertainment habits, and just about any other imaginable information belonging to millions of people.

Scrubbing your data made easy

Two years ago, California’s Delete Act took effect. It required data brokers to provide residents with a means to obtain a copy of all data pertaining to them and to demand that such information be deleted. Unfortunately, Consumer Watchdog found that only 1 percent of Californians exercised these rights in the first 12 months after the law went into effect. A chief reason: Residents were required to file a separate demand with each broker. With hundreds of companies selling data, the burden was too onerous for most residents to take on.

On January 1, a new law known as DROP (Delete Request and Opt-out Platform) took effect. DROP allows California residents to register a single demand for their data to be deleted and no longer collected in the future. CalPrivacy, the state’s privacy agency, then forwards the request to all registered brokers.
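Functionally, the change is a fan-out: instead of a resident filing hundreds of separate demands, one filing is turned into a deletion and opt-out demand for every registered broker. The sketch below is purely conceptual; the names and data shapes are hypothetical and do not reflect the agency’s actual platform:

```python
# Conceptual sketch of the DROP fan-out: one resident request becomes a
# deletion/opt-out demand for every registered data broker. Hypothetical only.
REGISTERED_BROKERS = [f"broker-{i}.example" for i in range(500)]  # ~500 brokers

def file_drop_request(resident_id: str) -> list[dict]:
    return [
        {"broker": broker, "resident": resident_id, "action": "delete_and_opt_out"}
        for broker in REGISTERED_BROKERS
    ]

demands = file_drop_request("resident-123")
print(len(demands))  # 500 demands generated from a single filing
```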

Anna’s Archive loses .org domain, says suspension likely unrelated to Spotify piracy

Legal problems

As TorrentFreak writes, “It is rare to see a .org domain involved in domain name suspensions. The American non-profit Public Interest Registry (PIR), which oversees the .org domains, previously refused to suspend domain names voluntarily, including thepiratebay.org. The registry’s cautionary stance suggests that the actions against annas-archive.org are backed by a court order.”

A spokesperson for the Public Interest Registry told Ars that “PIR is unable to comment on the situation at this time.”

Anna’s Archive’s domain registrar is Tucows. A Tucows spokesperson told Ars that “server-type statuses can only be set by the registry (PIR, in this case).” Tucows also said it doesn’t have any information on what led to the Anna’s Archive serverHold. “PIR has not contacted us about it and we were unaware of the status before you alerted us to it,” a Tucows spokesperson said.
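For readers who want to check this kind of status themselves, registry-level EPP codes such as serverHold appear in ordinary whois output for .org domains. A minimal sketch, assuming a local whois client is installed and that the registry lists “Domain Status:” lines, as gTLD registries generally do:

```python
# Minimal sketch: list a domain's EPP status codes from whois output and look
# for registry-level statuses such as serverHold. Assumes a local whois client.
import subprocess

def domain_statuses(domain: str) -> list[str]:
    out = subprocess.run(["whois", domain], capture_output=True, text=True).stdout
    return [line.strip() for line in out.splitlines() if "Domain Status:" in line]

for status in domain_statuses("annas-archive.org"):
    print(status)  # a serverHold entry indicates a registry-imposed suspension
```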

After last month’s Spotify incident, Spotify told Ars that it “identified and disabled the nefarious user accounts that engaged in unlawful scraping” and “implemented new safeguards for these types of anti-copyright attacks.” We asked Spotify today if it has taken any additional steps against Anna’s Archive and will update this article if it provides a response.

Anna’s Archive is also facing a lawsuit from OCLC, a nonprofit that operates the WorldCat library catalog on behalf of member libraries. The lawsuit alleges that Anna’s Archive “illegally hacked WorldCat.org” to steal 2.2TB of data.

An OCLC motion for default judgment filed in November asked for a permanent injunction prohibiting Anna’s Archive from scraping or distributing WorldCat data and requiring Anna’s Archive to delete all its copies of WorldCat data. OCLC said it hopes such a judgment would compel web hosting services to take action.

“OCLC hopes to take the judgment to website hosting services so that OCLC’s WorldCat data will be removed from Anna’s Archive’s websites,” said the November 17 motion filed in US District Court for the Southern District of Ohio. The court has not yet ruled on the motion.

X blames users for Grok-generated CSAM; no fixes announced

No one knows how X plans to purge bad prompters

While some users are focused on how X can hold users responsible for Grok’s outputs when X is the one training the model, others are questioning how exactly X plans to moderate illegal content that Grok seems capable of generating.

So far, X has been more transparent about how it moderates CSAM posted to the platform than about how it will police Grok’s outputs. Last September, X Safety reported that it has “a zero tolerance policy towards CSAM content,” the majority of which is “automatically” detected using proprietary hash technology to proactively flag known CSAM.

Under this system, more than 4.5 million accounts were suspended last year, and X reported “hundreds of thousands” of images to the National Center for Missing and Exploited Children (NCMEC). The next month, X Head of Safety Kylie McRoberts confirmed that “in 2024, 309 reports made by X to NCMEC led to arrests and subsequent convictions in 10 cases,” and in the first half of 2025, “170 reports led to arrests.”

“When we identify apparent CSAM material, we act swiftly, and in the majority of cases permanently suspend the account which automatically removes the content from our platform,” X Safety said. “We then report the account to the NCMEC, which works with law enforcement globally—including in the UK—to pursue justice and protect children.”

At that time, X promised to “remain steadfast” in its “mission to eradicate CSAM,” but if left unchecked, Grok’s harmful outputs risk creating new kinds of CSAM that this system wouldn’t automatically detect. On X, some users suggested the platform should increase reporting mechanisms to help flag potentially illegal Grok outputs.
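The detection gap stems from how hash matching works: it flags only images whose hashes already sit in a database of known material, so a freshly generated image matches nothing. Below is a minimal sketch of the idea; real systems use perceptual hashes (in the style of PhotoDNA) rather than an exact cryptographic hash, and this is not X’s actual pipeline:

```python
# Illustrative hash-matching check: an image is flagged only if its hash is
# already in a database of known CSAM hashes. Newly generated images produce
# hashes that match nothing, which is the gap described above.
import hashlib

KNOWN_HASHES = {
    "3f2a...placeholder...",  # stand-ins for entries in a real hash database
}

def is_known_csam(image_bytes: bytes) -> bool:
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest in KNOWN_HASHES

print(is_known_csam(b"brand-new AI-generated image"))  # False: never seen before
```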

Another troublingly vague aspect of X Safety’s response, some X users suggested, is how X defines illegal content or CSAM. Across the platform, not everybody agrees on what’s harmful. Some critics are disturbed by Grok generating bikini images that sexualize public figures, including doctors or lawyers, without their consent, while others, including Musk, consider making bikini images to be a joke.

Where exactly X draws the line on AI-generated CSAM could determine whether images are quickly removed or whether repeat offenders are detected and suspended. Any accounts or content left unchecked could potentially traumatize real kids whose images may be used to prompt Grok. And if Grok should ever be used to flood the Internet with fake CSAM, recent history suggests that it could make it harder for law enforcement to investigate real child abuse cases.
