
Conduct rules are coming for Google and Apple in the UK

“The targeted and proportionate actions we have set out today would enable UK app developers to remain at the forefront of global innovation while ensuring UK consumers receive a world-class experience,” Cardell said. “Time is of the essence: as competition agencies and courts globally take action in these markets, it’s essential the UK doesn’t fall behind.”

Google and Apple oppose the outlined changes, arguing they could threaten user security and delay the launch of new products and services in the UK.

“We’re concerned the rules the UK is now considering would undermine the privacy and security protections that our users have come to expect, hamper our ability to innovate, and force us to give away our technology for free to foreign competitors,” Apple said. “We will continue to engage with the regulator to make sure they fully understand these risks.”

Oliver Bethell, Google’s senior director for competition, said the CMA’s move was “both disappointing and unwarranted” and that it was “crucial that any new regulation is evidence-based, proportionate, and does not become a roadblock to growth in the UK.”

Apple has repeatedly clashed with Brussels over the implementation of the EU’s Digital Markets Act, making changes to its platform after the European Commission accused the iPhone maker of failing to comply with its “online gatekeeper” rules.

The DMA also requires Apple to open up iOS features and data to its rivals and has demanded changes to its App Store, such as allowing users to install apps from outside its store.

The CMA said it was taking a different approach to the EU by being more “tailored” and iterative than the DMA’s blanket rules.

Last month, Google’s search services were the first Big Tech product to be targeted under the UK’s Digital Markets, Competition and Consumers Act, which was passed last year.

If a company’s products or services are designated as having “strategic market status,” the designation lasts for five years. Companies can be fined up to 10 percent of global turnover for breaching conduct rules.

© 2025 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.



Toy company may regret coming for “Sylvanian Drama” TikToker, experts say


Possible legal paths to revive a shuttered video series on TikTok and Instagram.

A popular account on TikTok and Instagram stopped posting suddenly at the end of last year, hit by a lawsuit after garnering millions of views on funny videos it made using adorable Calico Critters children’s dolls to act out dark, cringe-y adult storylines.

While millions of followers mourn the so-called “Sylvanian Drama” account’s demise, experts told Ars that the creator may have a decent chance at beating the lawsuit.

The “Sylvanian Drama” account derived its name from “Sylvanian Families,” a brand name used by Epoch Company Ltd., the maker of Calico Critters, for its iconic fuzzy animal dolls in some markets outside the US. Despite these videos referencing murder, drugs, and hookups, the toy company apparently had no problem, until the account, managed by Ireland-based Thea Von Engelbrechten, started accepting big brand partnerships and making sponsored content featuring the dolls.

Since Epoch, too, strikes partnerships with brands and influencers to promote its own videos marketing the dolls, the company claimed “Sylvanian Drama” risked creating too much confusion online. Epoch also worried viewers would think it had signed off on the videos, since the sponsored content was marked “paid partnership” without specifying precisely which featured brands had paid for the spots. The company further accused Von Engelbrechten of building her advertising business around its brand without any attempt to properly license the dolls, while allegedly usurping licensing opportunities from Epoch.

So far, Von Engelbrechten has delayed responding to the lawsuit. As the account remained inactive over the past few months, fans speculated about whether it could survive the suit, which raised copyright and trademark infringement claims in a bid to get all the videos removed. In its complaint, the toy company requested not only an injunction preventing Von Engelbrechten from creating more “Sylvanian Drama” videos but also all of her profits from her online accounts, in addition to further damages.

Von Engelbrechten declined Ars’ request to provide an update on her defense in the case, but her response is due in early August. That filing will make clear what arguments she may make to overcome Epoch’s suit, but legal experts told Ars that the case isn’t necessarily a slam dunk for the toy company. So all that “Sylvanian Drama” isn’t over just yet.

Epoch’s lawyers did not respond to Ars’ request to comment.

“Sylvanian Drama” needs the court to get the joke

Epoch raised copyright infringement claims that could expose Von Engelbrechten to statutory damages of up to $150,000 per violation.

For Von Engelbrechten to defeat the copyright infringement claim, she’ll need to convince the court that her videos are parodies. A law professor at Santa Clara University School of Law, Eric Goldman, told Ars that her videos may qualify since “even if they don’t expressly reference Epoch’s offerings by name, the videos intentionally communicate a jarring juxtaposition of adorable critters who are important parts of pop culture living through the darker sides of humanity.”

Basically, Von Engelbrechten will need the court to understand the humor in her videos to win on that claim, Rebecca Tushnet, a First Amendment law professor at Harvard Law School, told Ars.

“Courts have varied in their treatment of parodies; the complaint’s definition of parody is not controlling but humor is one of the hardest things to predict—if the court gets the joke, it will be more likely to say that the juxtaposition between the storylines and the innocent appearance of the dolls is parodic,” Tushnet said.

But if the court does get the joke, Goldman suggested that even the sponsored content—which hilariously incorporates product placements from various big brands like Marc Jacobs, Taco Bell, Hilton, and Sephora into storylines—could possibly be characterized as parody.

However, “the fact that the social media posts were labeled #ad will make it extremely difficult for the artist to contest the videos’ status as ads,” Goldman said.

Ultimately, Goldman said that Epoch’s lawsuit “raises a host of complex legal issues” and is “not an easy case on either side.”

And one of the most significant issues that Epoch may face in the courtroom could end up gutting all of its trademark infringement claims that supposedly entitle the toy company to all of Von Engelbrechten’s profits, Alexandra Jane Roberts, a Northeastern University professor of law and media with special expertise in trademark law, told Ars.

Calico Critters may stumble on trademark hurdle

The toy company has raised several trademark infringement claims, all of which depend on Epoch proving that Von Engelbrechten “knowingly and willfully” used its trademarks without permission.

However, Roberts pointed out to Ars that Epoch has no trademarks for its iconic dolls, relying only on common law to assert sole rights to the “look and design of the critters.”

It’s likely impossible for Epoch to trademark the dolls, since trademarks are not intended to block competition, and there are only so many ways to design cute dolls that resemble cats or bunnies, Roberts suggested. A court may decide “there’s only so many ways to make a small fuzzy bunny that doesn’t look like this,” potentially narrowing the rights Epoch has under trade dress, a term that Epoch doesn’t use once in its complaint.

Roberts told Ars that Epoch’s trademark claims are “not so far off the mark,” and Von Engelbrechten’s defense was certainly not strengthened by her decision to monetize the content. Prior cases, like the indie band OK Go sending a cease-and-desist to Post cereal over a breakfast product called “OK Go” due to fears of false endorsement, make it clear that courts have agreed in the past that online collaborations have muddied the waters regarding who is the actual source of content for viewers.

“The question becomes whether people are going to see these videos, even though they’re snarky, and even though they’re silly and think, ‘Oh, Calico Critters must have signed off on this,'” Roberts said. “So the argument about consumer confusion, I think, is a plausible argument.”

However, if Epoch fails to convince the court that its trademarks have been infringed, then its other claims alleging false endorsement and unfair competition would likely also collapse.

“You can still get sometimes to unfair competition or to kind of like a false endorsement, but it’s harder to win on those claims and certainly harder to get damages on those claims,” Roberts said. “You don’t get trademark infringement if you don’t have a trademark.”

Possible defenses to keep “Sylvanian Drama” alive

Winning on the trademark claims may not be easy for Von Engelbrechten, who possibly weakened her First Amendment defense by creating the sponsored content. Regardless, she will likely try to convince the court to view the videos as parody, which is a slightly different analysis under trademark law than copyright’s more well-known fair use parody exceptions.

That could be a struggle, since trademark law requires that Von Engelbrechten’s parody videos directly satirize the “Sylvanian Families” brand, and “Sylvanian Drama” videos, even the ads, instead seem to be “making fun of elements of society and culture,” rather than the dolls themselves, Roberts said.

She pointed to winning cases involving the Barbie trademark as instructive examples. In a case disputing Mattel trademarks used in the lyrics of the one-hit wonder “Barbie Girl,” the song was cleared of trademark infringement as a “purely expressive work” that directly parodies Barbie in its lyrics. And in another case, in which the artist Tom Forsythe photographed Barbie dolls in kitchen vessels like a blender and a margarita glass, more robust First Amendment protection was offered because his photos “had a lot to say about sexism and the dolls and what the dolls represent,” Roberts said.

The potential “Sylvanian Drama” defense seems to lack strong go-to arguments that typically win trademark cases, but Roberts said there is still one other defense the content creator may be weighing.

Under “nominative fair use,” it’s OK to use another company’s trademark when it’s necessary in an ad. Roberts provided examples: a company renting out Lexus cars needs to use that trademark, and comparative advertising may use Tiffany diamonds as a reference point to hype a rival’s lower prices.

If Von Engelbrechten goes that route, she will need to prove she used “no more of the mark than is necessary” and did not mislead fans on whether Epoch signed off on the use.

“Here it’s hard to say that ‘Sylvanian Drama’ really needed to use so much of those characters and that they didn’t use more than they needed and that they weren’t misleading,” Roberts said.

However, Von Engelbrechten’s best bet might be arguing that there was no confusion, since “Sylvanian Families” isn’t even a brand used in the US. Epoch chose to file its lawsuit there because the brands that partnered with the popular account are based in New York. And the case may not even get that far, Roberts suggested, since “before you can get to those questions about the likelihood of confusion, you have to show that you actually have trademark or trade dress rights to enforce.”

Calico Critters creator may face millennial backlash

Epoch may come to regret filing the lawsuit, Roberts said, noting that as a millennial who grew up a big “Hello Kitty” fan, she still buys merch that appeals to her, and Epoch likely knows about that market, as it has done collaborations with the “Hello Kitty” brand. The toymaker could risk alienating other millennials nostalgic for Calico Critters who may be among the “Sylvanian Drama” audience and feel turned off by the lawsuit.

“When you draw attention to something like this and appear litigious, and that you’re coming after a creator who a lot of people really like and really enjoy and probably feel defensive about, like, ‘Oh, she’s just making these funny videos that everyone loves. Why would you want to sue her?'” Roberts said, “that can be really bad press.”

Goldman suggested that Epoch might be better off striking a deal with the creator, which “could establish some boundaries for the artist to keep going without stepping on the IP owner’s rights.” But he noted that “often IP owners in these situations are not open to negotiation,” and “that requires courts to draw difficult and unpredictable lines about the permissible scope of fair use.”

For Von Engelbrechten, the lawsuit may mean her days of creating sponsored “Sylvanian Drama” content are over, which could crush her bigger dream of succeeding in advertising. If the lawsuit can be amicably settled, though, the beloved content creator could even end up making money for Epoch, considering the size of her brand deals.

While she seems to take her advertising business seriously, Von Engelbrechten’s videos often joke about legal consequences, such as one where a cat doll says she cannot go to a party because she’s in jail but says “I’ll figure it out” when told her ex will be attending. Perhaps Von Engelbrechten is currently devising a scheme, like her characters, to escape consequences and keep the “Sylvanian Drama” going.

“Maybe if this company were really smart, they would want to hire this person instead of suing them,” Roberts said.


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.



A power utility is reporting suspected pot growers to cops. EFF says that’s illegal.

In May 2020, Sacramento, California, resident Alfonso Nguyen was alarmed to find two Sacramento County Sheriff’s deputies at his door, accusing him of illegally growing cannabis and demanding entry into his home. When Nguyen refused the search and denied the allegation, one deputy allegedly called him a liar and threatened to arrest him.

That same year, deputies from the same department, with their guns drawn and bullhorns and sirens sounding, fanned out around the home of Brian Decker, another Sacramento resident. The officers forced Decker to walk backward out of his home in only his underwear around 7 am while his neighbors watched. The deputies said that he, too, was under suspicion of illegally growing cannabis.

Invasion of the privacy snatchers

According to a motion the Electronic Frontier Foundation filed in Sacramento Superior Court last week, Nguyen and Decker are only two of more than 33,000 Sacramento-area people who have been flagged to the sheriff’s department by the Sacramento Municipal Utility District, the electricity provider for the region. SMUD called the customers out for using what it and department investigators said were suspiciously high amounts of electricity indicative of illegal cannabis farming.

The EFF, citing investigator and SMUD records, said the utility unilaterally analyzes customers’ electricity usage in “painstakingly” detailed increments of every 15 minutes. When analysts identify patterns they deem likely signs of illegal grows, they notify sheriff’s investigators. The EFF said the practice violates privacy protections guaranteed by the federal and California governments and is seeking a court order barring the warrantless disclosures.

“SMUD’s disclosures invade the privacy of customers’ homes,” EFF attorneys wrote in a court document in support of last week’s motion. “The whole exercise is the digital equivalent of a door-to-door search of an entire city. The home lies at the ‘core’ of constitutional privacy protection.”

Contrary to SMUD’s and sheriff’s investigators’ claims that the flagging of likely illegal grows is accurate, the EFF cited multiple examples where they have been wrong. In Decker’s case, for instance, SMUD analysts allegedly told investigators his electricity usage indicated that “4 to 5 grow lights are being used [at his home] from 7pm to 7am.” In actuality, the EFF said, someone in the home was mining cryptocurrency. Nguyen’s electricity consumption was the result of a spinal injury that requires him to use an electric wheelchair and special HVAC equipment to maintain his body temperature.
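The analysis the EFF describes amounts to simple pattern matching over 15-minute interval meter data. A minimal sketch of how such a heuristic might work, using synthetic data; the function name, the 7pm-7am window (taken from the Decker example), and the 3x night/day threshold are all invented for illustration, not drawn from SMUD's actual methodology:

```python
from datetime import datetime, timedelta

def flag_grow_pattern(readings, night_ratio=3.0):
    """Given 15-minute interval readings as (timestamp, kWh) tuples,
    return True if average overnight (7pm-7am) usage exceeds average
    daytime usage by night_ratio -- a crude proxy for a lights-on-
    overnight cycle like the one investigators cited in Decker's case."""
    night, day = [], []
    for ts, kwh in readings:
        (night if ts.hour >= 19 or ts.hour < 7 else day).append(kwh)
    if not night or not day:
        return False
    return (sum(night) / len(night)) > night_ratio * (sum(day) / len(day))

# One synthetic day: 96 fifteen-minute intervals, heavy load overnight.
start = datetime(2020, 5, 1)
readings = []
for i in range(96):
    ts = start + timedelta(minutes=15 * i)
    kwh = 2.0 if (ts.hour >= 19 or ts.hour < 7) else 0.3
    readings.append((ts, kwh))

print(flag_grow_pattern(readings))  # True for this synthetic profile
```

Note how easily such a heuristic misfires: a cryptocurrency mining rig draws heavy power around the clock, and medical equipment can run on its own schedule, which is exactly the kind of false positive the EFF documented.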



California backs down to Trump admin, won’t force ISPs to offer $15 broadband


“Complete farce”: State lawmaker says US threatened to block broadband funding.


A California lawmaker halted an effort to pass a law that would force Internet service providers to offer $15 monthly plans to people with low incomes.

Assemblymember Tasha Boerner proposed the state law a few months ago, modeling the bill on a law enforced by New York. It seemed that other states were free to impose cheap-broadband mandates because the Supreme Court rejected broadband industry challenges to the New York law twice.

Boerner, a Democrat who is chair of the Communications and Conveyance Committee, faced pressure from Internet service providers to change or drop the bill. She made some changes, for example lowering the $15 plan’s required download speeds from 100Mbps to 50Mbps and the required upload speeds from 20Mbps to 10Mbps.

But the bill was still working its way through the legislature when, according to Boerner, Trump administration officials told her office that California could lose access to $1.86 billion in Broadband Equity, Access, and Deployment (BEAD) funds if it forces ISPs to offer low-cost service to people with low incomes.

That amount is California’s share of a $42.45 billion fund created by Congress to expand access to broadband service. The Trump administration has overhauled program rules, delaying the grants. One change is that states can’t tell ISPs what to charge for a low-cost plan.

The US law that created BEAD requires Internet providers receiving federal funds to offer at least one “low-cost broadband service option for eligible subscribers.” But in new guidance from the National Telecommunications and Information Administration (NTIA), the agency said it prohibits states “from explicitly or implicitly setting the LCSO [low-cost service option] rate a subgrantee must offer.”

State lawmaker describes “complete farce”

After losing their case against New York, Internet service providers asked the Trump administration to try to block state affordability laws. Although New York’s court win seemed to solidify states’ regulatory authority, the Trump administration could use its control over BEAD funding to pressure states into abandoning low-income requirements.

“When we introduced the bill, there were looming changes to the BEAD program,” Boerner told Ars. “There were hints at what would happen, but we had a call two weeks ago with NTIA that confirmed that… explicit or implicit rate regulation would disqualify a state for access.”

NTIA officials also made it clear that, even if California obtained the funding, ISPs could exempt themselves from the proposed low-cost broadband bill simply by applying for BEAD funding, Boerner told us. She said the NTIA’s new guidance is a “complete farce,” since ISPs are getting public money to build infrastructure and won’t have to commit to offering low-income plans at specific rates.

“All they would have to do to get exempted from AB 353 [the $15 broadband bill] would be to apply to the BEAD program,” she said. “Doesn’t matter if their application was valid, appropriate, granted, or they got public money at the end of the day and built the projects—the mere application for the BEAD program would exempt them from 353, if it didn’t jeopardize the $1.86 billion to begin with. And that was a tradeoff I was unwilling to make.”

We contacted the NTIA and asked whether Boerner’s description of the agency’s statements is accurate. We also asked the NTIA whether it believes that ISPs applying for BEAD funding are exempt from the New York law. The NTIA declined to comment today.

Boerner’s account of NTIA’s guidance raises the question of whether the NTIA is trying to pressure New York into changing or dropping its low-cost broadband law. New York Attorney General Letitia James defended the state law in court, but her office declined to comment when contacted by Ars. We also contacted Gov. Kathy Hochul’s office yesterday and did not receive a reply.

Boerner said the federal government’s action is “a flat-out giveaway to large corporations and denying Californians and Americans access to what’s essentially a basic service that everybody needs, which is access to broadband.”

Advocates: California shouldn’t back down

An earlier version of Boerner’s bill was approved by the state Assembly on June 4. Boerner said there were negotiations with the Senate on how to proceed, and the bill was amended. But last week, after the call with NTIA, Boerner decided not to move ahead with it this year.

“I held it in committee,” Boerner said.

Boerner’s top donors include Cox, AT&T, and Comcast. Boerner acknowledged that when the bill was still moving ahead, she lowered its required speeds based on discussions with cable companies and other ISPs. The 50/10Mbps threshold is “what I was able to negotiate for the $15. Most companies—especially cable, a lot of the big ISPs in California—already offer $30 for 100/20Mbps,” she said.

Advocacy groups say that California lawmakers shouldn’t bend to big ISPs or the NTIA. The BEAD law’s funding is for subsidizing new broadband deployments, while California’s proposed law would mainly apply to networks that have already been built, they point out.

Moreover, New York beat ISPs in court after nearly four years in litigation. The US Court of Appeals for the 2nd Circuit upheld the law last year. While the Supreme Court never directly ruled on the law, it rejected telecom groups’ petitions to hear their challenge to the appeals court ruling.

“No matter which way you slice it, federal changes to the BEAD program do not override the Supreme Court’s affirmation of a state’s authority to establish a broadband affordability standard. They just don’t,” Arturo Juarez, policy advisor for the California Alliance for Digital Equity, told Ars.

Speed cut negotiated with ISPs “a non-starter for us”

California-based advocates were eager to push for a low-income requirement after the Supreme Court rejected efforts to overturn New York’s law. “When the chair decided to take up the measure, we were really excited,” Juarez said. “She obviously sits on a key committee to getting the bill out.”

But advocates were disturbed by changes made to the bill, including the speed cut.

“We learned that there had been some backdoor, closed negotiations with industry to lower the speed threshold… that, of course, was just a non-starter for us,” Juarez said. “I don’t think it makes any sense to say that we’re going to lock low-income folks into second-class connectivity or essentially offer them a broadband service that doesn’t even qualify as broadband because it’s not fast enough, it doesn’t even meet the federal definition of what broadband is.”

Natalie Gonzalez, director of Digital Equity Los Angeles, told Ars that the NTIA guidance shouldn’t apply to existing broadband networks. Having BEAD rules apply to “existing infrastructure and existing subscription packages is a pretty far reach,” she said. Gonzalez also said that no legal analysis or evidence has been made public to show how the BEAD guidance on affordable broadband would make the state legislation unviable.

“From our standpoint as advocates and being on the calls with the CPUC [California Public Utilities Commission], our interpretation is that the rules simply just eliminate any new builds” from having an affordable option as a requirement, she said.

ISP-based verification another sticking point

Juarez and Gonzalez said they were also concerned that Boerner’s proposal would let ISPs do the verification of people’s eligibility for low-income plans, instead of having the CPUC perform that task. “We didn’t want ISP-based verification… because we saw that just doesn’t work, and it really represents a major barrier to access,” Juarez said.

Gonzalez said that “parents aren’t going to work with fears of immigration raids,” and people are concerned that ISPs would share sensitive data with the federal government. She said, “there was real hesitation from community and advocates within our coalition of who is going to be housing this data, what are the transparency and accountability and reporting requirements within the ISPs to secure this type of information.”

The CPUC handles California’s Lifeline program, “and that existing state verification process has been vetted, has been around for a long time,” Juarez said. The Boerner bill stated that the CPUC would have no authority to implement or enforce the $15 mandate and would have given oversight authority to the state Department of Technology.

Juarez said that advocates also wanted the bill to have broader exemptions for small Internet service providers that serve rural areas and aren’t as profitable. Big ISPs can easily afford to offer low-cost plans, he said. He pointed to a California Public Advocates Office analysis that said, “a $15 low-income broadband requirement would potentially reduce the combined revenues of the four largest broadband providers—AT&T, Comcast, Cox, Charter/Spectrum—by less than one percent.”

“We know that these massive multi-billion dollar corporations, they really have enough subscribers and they have enough service area to accommodate this sort of plan,” Juarez said.

Lawmaker “looking for new and creative ideas”

Boerner defended her approach to the bill. While she initially proposed higher speeds, she said that the 50/10Mbps threshold is robust enough for a family doing tasks like telehealth, Zooms, online learning, and file syncing. “The use case I always have in my head is a single mom with three kids working two jobs. That mom needs to get online, apply for jobs, she needs her kids to all get online and do their homework at the same time. I’m a mom of two kids. Nobody needs their kids fighting over bandwidth,” she said.

Boerner said her goal with the bill “was always a basic broadband service” that would be affordable. “There are lots of packages out there in the world that people choose to get because they’re being price-conscious and they choose the service level that they need,” she said.

We asked Boerner about pressure from broadband industry lobbyists. She replied, “Most industries are against rate regulation. We were trying to find a balance between meeting a need, which I think all of the companies see that need, right? They see the need for low-income Californians to get online. They want to be part of the solution, and also almost every industry in California hates rate regulation. So how do you balance those interests?”

While Boerner’s bill won’t be moving forward this year, a different bill in the state Senate would encourage ISPs to offer cheap broadband by making them eligible for Lifeline subsidies if they sell 100/20Mbps service for $30 or less. Unlike Boerner’s bill, it wouldn’t force ISPs to offer low-cost plans.

Boerner criticized Congress for discontinuing a national program that made $30 discounts available to people with low incomes. Her attempt to impose a low-cost mandate in California began after the nationwide Affordable Connectivity Program (ACP) was eliminated.

“We all saw the photos of kids outside of Taco Bell or McDonald’s using their Wi-Fi to turn in homework during the pandemic, and none of us wanted to go back to that,” she said.

The ACP’s $30 discounts temporarily alleviated that problem. The ACP “was one of our most successful public benefit programs, and it wasn’t partisan,” Boerner said. “It was rural, it was urban, it was Democrat, it was Republican… every American who was low-income benefited from the ACP. And I’d really like to appeal to Congress to act in the interests of Americans and find a way to have federal subsidies for low-income access to broadband again. I wouldn’t need to do state regulations if Congress had done their job.”

It isn’t clear whether Boerner will revive her attempt to impose a low-cost mandate. When asked about her future plans for broadband affordability legislation, she did not provide any specifics. “We’re always looking for new and creative ideas,” Boerner said.


Jon is a Senior IT Reporter for Ars Technica. He covers the telecom industry, Federal Communications Commission rulemakings, broadband consumer affairs, court cases, and government regulation of the tech industry.



xAI workers balked over training request to help “give Grok a face,” docs show

For the more than 200 employees who did not opt out, xAI asked that they record 15- to 30-minute conversations in which one employee posed as the potential Grok user and the other posed as the “host.” xAI was specifically looking for “imperfect data,” BI noted, expecting that training only on crystal-clear videos would limit Grok’s ability to interpret a wider range of facial expressions.

xAI’s goal was to help Grok “recognize and analyze facial movements and expressions, such as how people talk, react to others’ conversations, and express themselves in various conditions,” an internal document said. Allegedly, among the only guarantees given to employees—who likely recognized how sensitive facial data is—was a promise “not to create a digital version of you.”

To get the most out of data submitted by “Skippy” participants, dubbed tutors, xAI recommended that they never provide one-word answers, always ask follow-up questions, and maintain eye contact throughout the conversations.

The company also apparently provided scripts to evoke facial expressions they wanted Grok to understand, suggesting conversation topics like “How do you secretly manipulate people to get your way?” or “Would you ever date someone with a kid or kids?”

For xAI employees who provided facial training data, privacy concerns may still exist, considering that X—the social platform formerly known as Twitter, which was recently folded into xAI—has been targeted by what Elon Musk called a “massive” cyberattack. Because of privacy risks ranging from identity theft to government surveillance, several states have passed strict biometric privacy laws to prevent companies from collecting such data without explicit consent.

xAI did not respond to Ars’ request for comment.



FCC to eliminate gigabit speed goal and scrap analysis of broadband prices

“As part of our return to following the plain language of section 706, we propose to abolish without replacement the long-term goal of 1,000/500Mbps established in the 2024 Report,” Carr’s plan said. “Not only is a long-term goal not mentioned in section 706, but maintaining such a goal risks skewing the market by unnecessarily potentially picking technological winners and losers.”

Fiber networks can already meet a 1,000/500Mbps standard, and the Biden administration generally prioritized fiber when it came to distributing grants to Internet providers. The Trump administration changed grant-giving procedures to distribute more funds to non-fiber providers such as Elon Musk’s Starlink satellite network.

Carr’s proposal alleged that the 1,000/500Mbps long-term goal would “appear to violate our obligation to conduct our analysis in a technologically neutral manner,” as it “may be unreasonably prejudicial to technologies such as satellite and fixed wireless that presently do not support such speeds.”

100/20Mbps standard appears to survive

When the 100/20Mbps standard was adopted last year, Carr alleged that “the 100/20Mbps requirement appears to be part and parcel of the Commission’s broader attempt to circumvent the statutory requirement of technological neutrality.” It appears the Carr FCC will nonetheless stick with 100/20Mbps for measuring availability of fixed broadband. But his plan would seek comment on that approach, suggesting a possibility that it could be changed.

“We propose to again focus our service availability discussion on fixed broadband at speeds of 100/20Mbps and seek comment on this proposal,” the plan said.

If any regulatory changes are spurred by Carr’s deployment inquiry, they would likely be to eliminate regulations instead of adding them. Carr has been pushing a “Delete, Delete, Delete” initiative to eliminate rules that he considers unnecessary, and his proposal asks for comment on broadband regulations that could be removed.

“Are there currently any regulatory barriers impeding broadband deployment, investment, expansion, competition, and technological innovation that the Commission should consider eliminating?” the call for comment asks.



UK backing down on Apple encryption backdoor after pressure from US

Under the terms of the legislation, recipients of such a notice are unable to discuss the matter publicly, even with customers affected by the order, unless granted permission by the Home Secretary.

The legislation’s use against Apple has triggered the tech industry’s highest-profile battle over encryption technology in almost a decade.

In response to the demand, Apple withdrew its most secure cloud storage service from the UK in February and is now challenging the Home Office’s order at the Investigatory Powers Tribunal, which probes complaints against the UK’s security services.

Last month, Meta-owned WhatsApp said it would join Apple’s legal challenge, in a rare collaboration between the Silicon Valley rivals.

In the meantime, the Home Office continues to pursue its case with Apple at the tribunal.

Its lawyers discussed the next legal steps this month, reflecting the divisions within government over how best to proceed. “At this point, the government has not backed down,” said one person familiar with the legal process.

A third senior British official added that the UK government was reluctant to push “anything that looks to the US vice-president like a free-speech issue.”

In a combative speech at the Munich Security Conference in February, Vance argued that free speech and democracy were threatened by European elites.

The UK official added, this “limits what we’re able to do in the future, particularly in relation to AI regulation.” The Labour government has delayed plans for AI legislation until after May next year.

Trump has also been critical of the UK stance on encryption.

The US president has likened the UK’s order to Apple to “something… that you hear about with China,” saying in February that he had told Starmer: “You can’t do this.”

US Director of National Intelligence Tulsi Gabbard has also suggested the order would be an “egregious violation” of Americans’ privacy that risked breaching the two countries’ data agreement.

Apple did not respond to a request for comment. “We have never built a back door or master key to any of our products, and we never will,” Apple said in February.

The UK government did not respond to a request for comment.

A spokesperson for Vance declined to comment.

The Home Office has previously said the UK has “robust safeguards and independent oversight to protect privacy” and that these powers “are only used on an exceptional basis, in relation to the most serious crimes.”

© 2025 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.



It’s “frighteningly likely” many US courts will overlook AI errors, expert says


Judges pushed to bone up on AI or risk destroying their court’s authority.

A judge points to a diagram of a hand with six fingers

Credit: Aurich Lawson | Getty Images

Order in the court! Order in the court! Judges are facing outcry over a suspected AI-generated order in a court.

Fueling nightmares that AI may soon decide legal battles, a Georgia court of appeals judge, Jeff Watkins, explained why a three-judge panel vacated an order last month that appears to be the first known ruling in which a judge sided with someone seemingly relying on fake AI-generated case citations to win a legal fight.

Now, experts are warning that judges overlooking AI hallucinations in court filings could easily become commonplace, especially in the typically overwhelmed lower courts. And so far, only two states have moved to force judges to sharpen their tech competencies and adapt so they can spot AI red flags and theoretically stop disruptions to the justice system at all levels.

The recently vacated order came in a Georgia divorce dispute, where Watkins explained that the order itself was drafted by the husband’s lawyer, Diana Lynch. That’s a common practice in many courts, where overburdened judges historically rely on lawyers to draft orders. But that protocol today faces heightened scrutiny as lawyers and non-lawyers increasingly rely on AI to compose and research legal filings, and judges risk rubberstamping fake opinions by not carefully scrutinizing AI-generated citations.

The errant order partly relied on “two fictitious cases” to deny the wife’s petition—which Watkins suggested were “possibly ‘hallucinations’ made up by generative-artificial intelligence”—as well as two cases that had “nothing to do” with the wife’s petition.

Lynch was hit with $2,500 in sanctions after the wife appealed, and the husband’s response—which also appeared to be prepared by Lynch—cited 11 additional cases that were “either hallucinated” or irrelevant. Watkins was further peeved that Lynch supported a request for attorney’s fees for the appeal by citing “one of the new hallucinated cases,” writing it added “insult to injury.”

Worryingly, the judge could not confirm whether the fake cases were generated by AI or even determine if Lynch inserted the bogus cases into the court filings, indicating how hard it can be for courts to hold lawyers accountable for suspected AI hallucinations. Lynch did not respond to Ars’ request to comment, and her website appeared to be taken down following media attention to the case.

But Watkins noted that “the irregularities in these filings suggest that they were drafted using generative AI” while warning that many “harms flow from the submission of fake opinions.” Exposing deceptions can waste time and money, and AI misuse can deprive people of raising their best arguments. Fake orders can also soil judges’ and courts’ reputations and promote “cynicism” in the justice system. If left unchecked, Watkins warned, these harms could pave the way to a future where a “litigant may be tempted to defy a judicial ruling by disingenuously claiming doubt about its authenticity.”

“We have no information regarding why Appellee’s Brief repeatedly cites to nonexistent cases and can only speculate that the Brief may have been prepared by AI,” Watkins wrote.

Ultimately, Watkins remanded the case, partly because the fake cases made it impossible for the appeals court to adequately review the wife’s petition to void the prior order. But no matter the outcome of the Georgia case, the initial order will likely forever be remembered as a cautionary tale for judges increasingly scrutinized for failures to catch AI misuses in court.

“Frighteningly likely” judge’s AI misstep will be repeated

John Browning, a retired justice on Texas’ Fifth Court of Appeals and now a full-time law professor at Faulkner University, last year published a law article Watkins cited that warned of the ethical risks of lawyers using AI. In the article, Browning emphasized that the biggest concern at that point was that lawyers “will use generative AI to produce work product they treat as a final draft, without confirming the accuracy of the information contained therein or without applying their own independent professional judgment.”

Today, judges are increasingly drawing the same scrutiny, and Browning told Ars he thinks it’s “frighteningly likely that we will see more cases” like the Georgia divorce dispute, in which “a trial court unwittingly incorporates bogus case citations that an attorney includes in a proposed order” or even potentially in “proposed findings of fact and conclusions of law.”

“I can envision such a scenario in any number of situations in which a trial judge maintains a heavy docket and looks to counsel to work cooperatively in submitting proposed orders, including not just family law cases but other civil and even criminal matters,” Browning told Ars.

According to reporting from the National Center for State Courts, a nonprofit representing court leaders and professionals who are advocating for better judicial resources, AI tools like ChatGPT have made it easier for high-volume filers and unrepresented litigants who can’t afford attorneys to file more cases, potentially further bogging down courts.

Peter Henderson, a researcher who runs the Princeton Language+Law, Artificial Intelligence, & Society (POLARIS) Lab, told Ars that he expects cases like the Georgia divorce dispute aren’t happening every day just yet.

It’s likely that a “few hallucinated citations go overlooked” because generally, fake cases are flagged through “the adversarial nature of the US legal system,” he suggested. Browning further noted that trial judges are generally “very diligent in spotting when a lawyer is citing questionable authority or misleading the court about what a real case actually said or stood for.”

Henderson agreed with Browning that “in courts with much higher case loads and less adversarial process, this may happen more often.” But Henderson noted that the appeals court catching the fake cases is an example of the adversarial process working.

While that’s true in this case, it seems likely that anyone exhausted by the divorce legal process, for example, may not pursue an appeal if they don’t have the energy or resources to discover and overturn errant orders.

Judges’ AI competency increasingly questioned

While recent history confirms that lawyers risk being sanctioned, fired from their firms, or suspended from practicing law for citing fake AI-generated cases, judges will likely only risk embarrassment for failing to catch lawyers’ errors or even for using AI to research their own opinions.

Not every judge is prepared to embrace AI without proper vetting, though. To shield the legal system, some judges have banned AI. Others have required disclosures—with some even demanding to know which specific AI tool was used—but that solution has not caught on everywhere.

Even if all courts required disclosures, Browning pointed out that disclosures still aren’t a perfect solution since “it may be difficult for lawyers to even discern whether they have used generative AI,” as AI features become increasingly embedded in popular legal tools. One day, it “may eventually become unreasonable to expect” lawyers “to verify every generative AI output,” Browning suggested.

Most likely—as a judicial ethics panel from Michigan has concluded—judges will determine “the best course of action for their courts with the ever-expanding use of AI,” Browning’s article noted. And the former justice told Ars that’s why education will be key, for both lawyers and judges, as AI advances and becomes more mainstream in court systems.

In an upcoming summer 2025 article in The Journal of Appellate Practice & Process, “The Dawn of the AI Judge,” Browning attempts to soothe readers by saying that AI isn’t yet fueling a legal dystopia. And humans are unlikely to face “robot judges” spouting AI-generated opinions any time soon, the former justice suggested.

Standing in the way of that, at least two states—Michigan and West Virginia—”have already issued judicial ethics opinions requiring judges to be ‘tech competent’ when it comes to AI,” Browning told Ars. And “other state supreme courts have adopted official policies regarding AI,” he noted, further pressuring judges to bone up on AI.

Meanwhile, several states have set up task forces to monitor their regional court systems and issue AI guidance, while states like Virginia and Montana have passed laws requiring human oversight for any AI systems used in criminal justice decisions.

Judges must prepare to spot obvious AI red flags

Until courts figure out how to navigate AI—a process that may look different from court to court—Browning advocates for more education and ethical guidance for judges to steer their use and attitudes about AI. That could help equip judges to avoid both ignorance of the many AI pitfalls and overconfidence in AI outputs, potentially protecting courts from AI hallucinations, biases, and evidentiary challenges sneaking past systems requiring human review and scrambling the court system.

An overlooked part of educating judges could be exposing AI’s influence so far in courts across the US. Henderson’s team is planning research that tracks which models attorneys are using most in courts. That could reveal “the potential legal arguments that these models are pushing” to sway courts—and which judicial interventions might be needed, Henderson told Ars.

“Over the next few years, researchers—like those in our group, the POLARIS Lab—will need to develop new ways to track the massive influence that AI will have and understand ways to intervene,” Henderson told Ars. “For example, is any model pushing a particular perspective on legal doctrine across many different cases? Was it explicitly trained or instructed to do so?”

Henderson also advocates for “an open, free centralized repository of case law,” which would make it easier for everyone to check for fake AI citations. “With such a repository, it is easier for groups like ours to build tools that can quickly and accurately verify citations,” Henderson said. That could be a significant improvement to the current decentralized court reporting system that often obscures case information behind various paywalls.

Dazza Greenwood, who co-chairs MIT’s Task Force on Responsible Use of Generative AI for Law, did not have time to send comments but pointed Ars to a LinkedIn thread where he suggested that a structural response may be needed to ensure that all fake AI citations are caught every time.

He recommended that courts create “a bounty system whereby counter-parties or other officers of the court receive sanctions payouts for fabricated cases cited in judicial filings that they reported first.” That way, lawyers will know that their work will “always” be checked and thus may shift their behavior if they’ve been automatically filing AI-drafted documents. In turn, that could alleviate pressure on judges to serve as watchdogs. It also wouldn’t cost much—mostly just redistributing the exact amount of fees that lawyers are sanctioned to AI spotters.

Novel solutions like this may be necessary, Greenwood suggested. Responding to a question asking if “shame and sanctions” are enough to stop AI hallucinations in court, Greenwood said that eliminating AI errors is imperative because it “gives both otherwise generally good lawyers and otherwise generally good technology a bad name.” Continuing to ban AI or suspend lawyers as a preferred solution risks dwindling court resources just as cases likely spike rather than potentially confronting the problem head-on.

Of course, there’s no guarantee that the bounty system would work. But “would the fact of such definite confidence that your cites will be individually checked and fabricated cites reported be enough to finally… convince lawyers who cut these corners that they should not cut these corners?”

In absence of a fake case detector like Henderson wants to build, experts told Ars that there are some obvious red flags that judges can note to catch AI-hallucinated filings.

Any case number with “123456” in it probably warrants review, Henderson told Ars. And Browning noted that AI tends to mix up locations for cases, too. “For example, a cite to a purported Texas case that has a ‘S.E. 2d’ reporter wouldn’t make sense, since Texas cases would be found in the Southwest Reporter,” Browning said, noting that some appellate judges have already relied on this red flag to catch AI misuses.
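The red flags Henderson and Browning describe are mechanical enough to screen for automatically. As a hypothetical illustration (not any tool the experts have built), a minimal checker could flag placeholder-style case numbers and citations whose regional reporter doesn’t match the case’s state; the reporter-to-state mapping shown here is deliberately partial:

```python
import re

# Partial mapping of West regional reporters to the states they cover
# (illustrative only; a real tool would need the full reporter system).
REPORTER_REGIONS = {
    "S.W.": {"TX", "KY", "MO", "TN", "AR"},  # Southwestern Reporter
    "S.E.": {"GA", "NC", "SC", "VA", "WV"},  # South Eastern Reporter
}

def red_flags(case_number: str, reporter: str, state: str) -> list[str]:
    """Return a list of hallucination red flags for a single citation."""
    flags = []
    # Suspiciously generic docket numbers often betray made-up cases.
    if "123456" in case_number:
        flags.append("placeholder-style case number")
    # Normalize "S.E. 2d" / "S.W. 3d" down to the reporter series name.
    series = re.sub(r"\s*\d*d$", "", reporter).strip()
    states = REPORTER_REGIONS.get(series)
    if states is not None and state not in states:
        flags.append(f"{reporter} does not report {state} cases")
    return flags

# A purported Texas case cited to the South Eastern Reporter trips both checks.
print(red_flags("1:23-cv-123456", "S.E. 2d", "TX"))
```

A screen like this can only surface candidates for human review; confirming that a flagged case actually exists still requires checking it against a case-law database.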

Those red flags would perhaps be easier to check with the open source tool that Henderson’s lab wants to make, but Browning said there are other tell-tale signs of AI usage that anyone who has ever used a chatbot is likely familiar with.

“Sometimes a red flag is the language cited from the hallucinated case; if it has some of the stilted language that can sometimes betray AI use, it might be a hallucination,” Browning said.

Judges already issuing AI-assisted opinions

Several states have assembled task forces like Greenwood’s to assess the risks and benefits of using AI in courts. In Georgia, the Judicial Council of Georgia Ad Hoc Committee on Artificial Intelligence and the Courts released a report in early July providing “recommendations to help maintain public trust and confidence in the judicial system as the use of AI increases” in that state.

Adopting the committee’s recommendations could establish “long-term leadership and governance”; a repository of approved AI tools, education, and training for judicial professionals; and more transparency on AI used in Georgia courts. But the committee expects it will take three years to implement those recommendations while AI use continues to grow.

Possibly complicating things further as judges start to explore using AI assistants to help draft their filings, the committee concluded that it’s still too early to tell if the judges’ code of conduct should be changed to prevent “unintentional use of biased algorithms, improper delegation to automated tools, or misuse of AI-generated data in judicial decision-making.” That means, at least for now, that there will be no code-of-conduct changes in Georgia, where the only case in which AI hallucinations are believed to have swayed a judge has been found.

Notably, the committee’s report also confirmed that there are no role models for courts to follow, as “there are no well-established regulatory environments with respect to the adoption of AI technologies by judicial systems.” Browning, who chaired a now-defunct Texas AI task force, told Ars that judges lacking guidance will need to stay on their toes to avoid trampling legal rights. (A spokesperson for the State Bar of Texas told Ars the task force’s work “concluded” and “resulted in the creation of the new standing committee on Emerging Technology,” which offers general tips and guidance for judges in a recently launched AI Toolkit.)

“While I definitely think lawyers have their own duties regarding AI use, I believe that judges have a similar responsibility to be vigilant when it comes to AI use as well,” Browning said.

Judges will continue sorting through AI-fueled submissions not just from pro se litigants representing themselves but also from up-and-coming young lawyers who may be more inclined to use AI, and even seasoned lawyers who have been sanctioned up to $5,000 for failing to check AI drafts, Browning suggested.

In his upcoming “AI Judge” article, Browning points to at least one judge, 11th Circuit Court of Appeals Judge Kevin Newsom, who has used AI as a “mini experiment” in preparing opinions for both a civil case involving an insurance coverage issue and a criminal matter focused on sentencing guidelines. Browning seems to appeal to judges’ egos to get them to study up so they can use AI to enhance their decision-making and possibly expand public trust in courts, not undermine it.

“Regardless of the technological advances that can support a judge’s decision-making, the ultimate responsibility will always remain with the flesh-and-blood judge and his application of very human qualities—legal reasoning, empathy, strong regard for fairness, and unwavering commitment to ethics,” Browning wrote. “These qualities can never be replicated by an AI tool.”

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.



Trump to sign stablecoin bill that may make it easier to bribe the president


Donald Trump’s first big crypto win “nothing to crow about,” analyst says.

Donald Trump is expected to sign the GENIUS Act into law Friday, securing his first big win as a self-described “pro-crypto president.” The act is the first major piece of cryptocurrency legislation passed in the US.

The House of Representatives voted to pass the GENIUS Act on Thursday, approving the same bill that the Senate passed last month. The law provides a federal framework for stablecoins, a form of cryptocurrency that’s considered less volatile than other cryptocurrencies, as each token is backed by the US dollar or other supposedly low-risk assets.

The GENIUS Act is expected to spur more widespread adoption of cryptocurrencies, since stablecoins are often used to move funds between different tokens. It could become a gateway for many Americans who are otherwise shy about investing in cryptocurrencies, which is what the industry wants. Ahead of Thursday’s vote, though, critics had warned that Republicans were rushing the pro-industry bill without ensuring adequate consumer protections, seemingly setting Americans up to embrace stablecoins as legitimate so-called “cash of the blockchain” without actually insuring their investments.

A big concern is that stablecoins will appear as safe investments, legitimized by the law, while supposedly private companies issuing stablecoins could peg their tokens to riskier assets that could tank reserves, cause bank runs, and potentially blindside and financially ruin Americans. Stablecoin scams could also target naïve stablecoin investors, luring them into making deposits that cannot be withdrawn.

Rep. Maxine Waters (D-Calif.)—part of a group of Democrats who had strongly opposed the bill—further warned Thursday that the GENIUS Act prevents lawmakers from owning or promoting stablecoins, but not the president. Trump and his family have allegedly made more than a billion dollars through their crypto ventures, and Waters is concerned that the law will make it easier for Trump and other presidents to use the office to grift and possibly even obscure foreign bribes.

“By passing this bill, Congress will be telling the world that Congress is OK with corruption, OK with foreign companies buying influence,” Waters said Thursday, CBS News reported.

Some lawmakers fear such corruption is already happening. Senators previously urged the Office of Government Ethics in a letter to investigate why “a crypto firm whose founder needs a pardon” (Binance’s Changpeng Zhao, also known as “CZ”) “and a foreign government spymaker coveting sensitive US technology” (United Arab Emirates-controlled MGX) “plan to pay the Trump and Witkoff families hundreds of millions of dollars.”

The White House continues to insist that Trump has “no conflicts of interest” because “his assets are in a trust managed by his children,” Reuters reported.

Ultimately, Waters and other Democrats failed to amend the bill to prevent presidents from benefiting from the stablecoin framework and promoting their own crypto projects.

Markets for various cryptocurrencies spiked Thursday, as the industry anticipates that more people will hold crypto wallets in a world where it’s fast, cheap, and easy to move money on the blockchain with stablecoins, as compared to relying on traditional bank services. And any fees associated with stablecoin transfers will likely be paid with other forms of cryptocurrencies, with a token called ether predicted to benefit most since “most stablecoins are issued and transacted on the underlying blockchain Ethereum,” Reuters reported.

Unsurprisingly, ether-linked stocks jumped Friday, with the token’s value hitting a six-month high. Notably, Bitcoin recently hit a record high; it was valued at above $120,000 as the stablecoin bill moved closer to Trump’s desk.

GENIUS Act plants “seeds for the next financial crisis”

As Trump prepares to sign the law, Consumer Reports’ senior director monitoring digital marketplaces, Delicia Hand, told Ars that the group plans to work with other consumer advocates and the implementing regulator to try to close any gaps in the stablecoin legislation that would leave Americans vulnerable.

Some Democrats supported the GENIUS Act, arguing that some regulation is better than none as cryptocurrency activity increases globally and the technology has the potential to revolutionize the US financial system.

But Hand told Ars that “we’ve already seen what happens when there are no protections” for consumers, like during the FTX collapse.

She joins critics who, the BBC reported, are concerned that stablecoin investors could get stuck in convoluted bankruptcy processes as tech firms engage more and more in “bank-like activities” without the same oversight as banks.

The only real assurances for stablecoin investors are requirements that all firms must publish monthly reserves backing their tokens, as well as annual statements required from the biggest companies issuing tokens. Those will likely include e-commerce and digital payments giants like Amazon, PayPal, and Shopify, as well as major social media companies.

Meanwhile, Trump seemingly wants to lure more elderly people into investing in crypto, reportedly “working on a presidential order that could allow retirement accounts to be invested in private assets, such as crypto, gold, and private equity,” the BBC reported.

Waters, a top Democrat on the House Financial Services Committee, is predicting the worst. She has warned that the law gives “Trump the pen to write the rules that would put more money in his family’s pocket” while causing “consumer harm” and planting “the seeds for the next financial crisis.”

Analyst: End of Trump’s crypto wins

The House of Representatives passed two other crypto bills this week, but those bills now go to the Senate, where they may not have enough support to pass.

The CLARITY Act—which creates a regulatory framework for digital assets and cryptocurrencies to allow for more innovation and competition—is “absolutely the most important thing” the crypto industry has been pushing since spending more than $119 million backing pro-crypto congressional candidates last year, a Coinbase policy official, Kara Calvert, told The New York Times.

Republicans and industry see the CLARITY Act as critical because it strips the Securities and Exchange Commission of power to police cryptocurrencies and digital assets and gives that power instead to the Commodity Futures Trading Commission, which is viewed as friendlier to industry. If it passed, the CLARITY Act would not just make it harder for the SEC to bring lawsuits, but it would also box out any future SEC officials under less crypto-friendly presidents from “bringing any cases for past misconduct,” Amanda Fischer, a top SEC official under the Biden administration, told the NYT.

“It would retroactively bless all the conduct of the crypto industry,” Fischer suggested.

But senators aren’t happy with the CLARITY Act and expect to draft their own version of the bill, striving to lay out a crypto market structure that isn’t “reviled by consumer protection groups,” the NYT reported.

And the other bill that the House sent to the Senate on Thursday—which would ban the US from creating a central bank digital currency (CBDC) that some conservatives believe would allow for government financial surveillance—faces an uphill battle, in part due to Republicans seemingly downgrading it as a priority.

The anti-CBDC bill will likely be added to a “must-pass” annual defense policy bill facing a vote later this year, the NYT reported. But Rep. Marjorie Taylor Greene (R-Ga.) “mocked” that plan, claiming she did not expect it to be “honored.”

Terry Haines, founder of the Washington-based analysis firm Pangaea Policy, has forecasted that both the CLARITY Act and the anti-CBDC bills will likely die in the Senate, the BBC reported.

“This is the end of crypto’s wins for quite a while—and the only one,” Haines suggested. “When the easy part, stablecoin, takes [approximately] four to five years and barely survives industry scandals, it’s not much to crow about.”




Court rules Trump broke US law when he fired Democratic FTC commissioner

“Without removal protections, that independence would be jeopardized… Accordingly, the Court held that the FTC Act’s for-cause removal protections were constitutional,” wrote AliKhan, who was appointed to the District Court by President Biden in 2023.

Judge: Facts almost identical to 1935 case

The Supreme Court reaffirmed its Humphrey’s Executor findings in cases decided in 2010 and 2020, AliKhan wrote. “Humphrey’s Executor remains good law today. Over the span of ninety years, the Supreme Court has declined to revisit or overrule it,” she wrote. Congress has likewise not disturbed FTC commissioners’ removal protection, and “thirteen Presidents have acquiesced to its vitality,” she wrote.

AliKhan said the still-binding precedent clearly supports Slaughter’s case against Trump. “The answer to the key substantive question in this case—whether a unanimous Supreme Court decision about the FTC Act’s removal protections applies to a suit about the FTC Act’s removal protections—seems patently obvious,” AliKhan wrote. “In arguing for a different result, Defendants ask this court to ignore the letter of Humphrey’s Executor and embrace the critiques from its detractors.”

The 1935 case and the present case are similar in multiple ways, the judge wrote. “Humphrey’s Executor involved the exact same provision of the FTC Act that Ms. Slaughter seeks to enforce here: the for-cause removal protection within 15 U.S.C. § 41 prohibiting any termination except for ‘inefficiency, neglect of duty, or malfeasance in office,'” she wrote.

The “facts almost identically mirror those of Humphrey’s Executor,” she continued. In both Roosevelt’s removal of Humphrey and Trump’s removal of Slaughter, the president cited disagreements in priorities and “did not purport to base the removal on inefficiency, neglect of duty, or malfeasance.”

Trump and fellow defendants assert that the current FTC is much different from the 1935 version of the body, saying it now “exercises significant executive power.” That includes investigating and prosecuting violations of federal law, administratively adjudicating claims itself, and issuing rules and regulations to prevent unfair business practices.

Court rules Trump broke US law when he fired Democratic FTC commissioner Read More »

eu-presses-pause-on-probe-of-x-as-us-trade-talks-heat-up

EU presses pause on probe of X as US trade talks heat up

While Trump and Musk have fallen out this year after forging a political alliance around the 2024 election, the US president has directly attacked EU penalties on US companies, calling them a “form of taxation” and comparing fines on tech companies to “overseas extortion.”

Despite the US pressure, commission president Ursula von der Leyen has explicitly stated that Brussels will not change its digital rule book. In April, the bloc imposed fines totaling €700 million on Apple and Facebook owner Meta for breaching antitrust rules.

But unlike the Apple and Meta investigations, which fall under the Digital Markets Act, there are no clear legal deadlines under the DSA. That gives the bloc more political leeway on when it announces its formal findings. The EU also has probes into Meta and TikTok under its content moderation rule book.

The commission said the “proceedings against X under the DSA are ongoing,” adding that the enforcement of “our legislation is independent of the current ongoing negotiations.”

It added that it “remains fully committed to the effective enforcement of digital legislation, including the Digital Services Act and the Digital Markets Act.”

Anna Cavazzini, a European lawmaker for the Greens, said she expected the commission “to move on decisively with its investigation against X as soon as possible.”

“The commission must continue making changes to EU regulations an absolute red line in tariff negotiations with the US,” she added.

Alongside its probe into X’s alleged transparency breaches, Brussels is also looking into content moderation at the company after Musk hosted Alice Weidel of the far-right Alternative for Germany for a conversation on the social media platform ahead of the country’s elections.

Some European lawmakers, as well as the Polish government, are also pressing the commission to open an investigation into Musk’s Grok chatbot after it spewed out antisemitic tropes last week.

X said it disagreed “with the commission’s assessment of the comprehensive work we have done to comply with the Digital Services Act and the commission’s interpretation of the Act’s scope.”

© 2025 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.

EU presses pause on probe of X as US trade talks heat up Read More »

permit-for-xai’s-data-center-blatantly-violates-clean-air-act,-naacp-says

Permit for xAI’s data center blatantly violates Clean Air Act, NAACP says


Evidence suggests health department gave preferential treatment to xAI, NAACP says.

Local students speak in opposition to xAI’s proposal to run gas turbines at its new data center during a public comment meeting on the permit application, hosted by the Shelby County Health Department at Fairley High School in Memphis, TN, on April 25, 2025. Credit: The Washington Post / Contributor

xAI continues to face backlash over its Memphis data center, as the NAACP joined groups today appealing the issuance of a recently granted permit that the groups say will allow xAI to introduce major new sources of pollutants without warning at any time.

The battle over the gas turbines powering xAI’s data center began last April, when thermal imaging suggested the firm was misrepresenting dozens of apparently operational turbines that could be a major source of smog-causing pollution. By June, the NAACP got involved, notifying the Shelby County Health Department (SCHD) of its intent to sue xAI to force Elon Musk’s AI company to engage with community members in historically Black neighborhoods who are believed to be most affected by the pollution risks.

But the NAACP’s letter seemingly did nothing to stop the SCHD, which granted the permit two weeks later, on July 2, along with exemptions that xAI does not appear to qualify for, the appeal noted. Now the NAACP—alongside environmental justice groups; the Southern Environmental Law Center (SELC); and Young, Gifted and Green—is appealing. The groups hope the Memphis and Shelby County Air Pollution Control Board will agree that the SCHD’s decisions were fatally flawed, violating the Clean Air Act and local laws, and will revoke the permit and block the exemptions.

SCHD’s permit granted xAI permission to operate 15 gas turbines at the Memphis data center, while the SELC’s imaging showed that xAI was potentially operating as many as 24. Prior to the permitting, xAI was accused of operating at least 35 turbines without the best-available pollution controls.

In their appeal, the NAACP and other groups argued that the SCHD put xAI profits over Black people’s health, granting unlawful exemptions while turning a blind eye to xAI’s operations, which allegedly started in 2024 but were treated as brand new in 2025.

Significantly, the groups claimed that the health department “improperly ignored” the prior turbine activity and the additional turbines still believed to be on site, unlawfully deeming some of the turbines as “temporary” and designating xAI’s facility a new project with no prior emissions sources. Had xAI’s data center been categorized as a modification to an existing major source of pollutants, the appeal said, xAI would’ve faced stricter emissions controls and “robust ambient air quality impacts assessments.”

And perhaps more concerning, the exemptions granted could allow xAI—or any other emerging major source of pollutants in the area—to “install and operate any number of new polluting turbines at any time without any written approval from the Health Department, without any public notice or public participation, and without pollution controls,” the appeal said.

The SCHD and xAI did not respond to Ars’ request for comment.

Officials accused of cherry-picking Clean Air Act

The appeal called out the SCHD for “tellingly” omitting key provisions of the Clean Air Act that allegedly undermine the department’s position on why xAI qualified for exemptions. Groups also suggested that xAI received preferential treatment, offering as evidence a side-by-side comparison: a natural gas power plant was issued a permit with stricter emissions requirements within months of xAI receiving a permit with only generalized emissions requirements.

“The Department cannot cherry pick which parts of the federal Clean Air Act it believes are relevant,” the appeal said, calling the SCHD’s decisions a “blatant” misrepresentation of the federal law while pointing to statements from the Environmental Protection Agency (EPA) that allegedly “directly” contradict the health department’s position.

For some Memphians protesting xAI’s facility, it seems “indisputable” that xAI’s turbines are subject to Clean Air Act requirements, whether they’re temporary or permanent, and if that’s true, it is “undeniable” that the activity violates the law. They’re afraid the health department is prioritizing xAI’s corporate gains over their health by “failing to establish enforceable emission limits” on the data center, which powers what xAI hypes as the world’s largest AI supercomputer, Colossus, the engine behind its controversial Grok models.

Rather than a minor source, as the SCHD designated the facility, Memphians think the data center is already a major source of pollutants, with its permitted turbines releasing, at minimum, 900 tons of nitrogen oxides (NOx) per year. That’s more than three times the threshold that the Clean Air Act uses to define a major source: “one that ‘emits, or has the potential to emit,’ at least 250 tons of NOx per year,” the appeal noted. Further, the allegedly overlooked additional turbines that were on site at xAI when the permit was granted “have the potential to emit at least 560 tons of NOx per year.”

But so far, Memphians appear stuck with the SCHD’s generalized emissions requirements and xAI’s voluntary emission limits, which the appeal alleged “fall short” of the stringent limits that would be imposed if xAI were forced to use best-available control technologies. Fixing that is “especially critical given the ongoing and worsening smog problem in Memphis,” environmental groups alleged, in an area that has “failed to meet EPA’s air quality standard for ozone for years.”

xAI also apparently conducted some “air dispersion modeling” to appease critics. But, again, that process was not comparable to the more rigorous analysis that would’ve been required to get what the EPA calls a Prevention of Significant Deterioration permit, the appeal said.

Groups want xAI’s permit revoked

To shield Memphians from ongoing health risks, the NAACP and environmental justice groups have urged the Memphis and Shelby County Air Pollution Control Board to act now.

Memphis is a city already grappling with high rates of emergency room visits and deaths from asthma, with cancer rates four times the national average. Residents have already begun wearing masks, avoiding the outdoors, and keeping their windows closed since xAI’s data center moved in, the appeal noted. Residents remain “deeply concerned” about feared exposure to alleged pollutants that can “cause a variety of adverse health effects,” including “increased risk of lung infection, aggravated respiratory diseases such as emphysema and chronic bronchitis, and increased frequency of asthma attack,” as well as certain types of cancer.

In an SELC press release, LaTricea Adams, CEO and President of Young, Gifted and Green, called the SCHD’s decisions on xAI’s permit “reckless.”

“As a Black woman born and raised in Memphis, I know firsthand how industry harms Black communities while those in power cower away from justice,” Adams said. “The Shelby County Health Department needs to do their job to protect the health of ALL Memphians, especially those in frontline communities… that are burdened with a history of environmental racism, legacy pollution, and redlining.”

Groups also suspect xAI is stockpiling dozens of gas turbines to potentially power a second facility nearby—which could lead to over 90 turbines in operation. To get that facility up and running, Musk claimed he would be “copying and pasting” the process for launching the first data center, SELC’s press release said.

Groups appealing have asked the board to revoke xAI’s permits and declare that xAI’s turbines do not qualify for exemptions from the Clean Air Act or other laws and that all permits for gas turbines must meet strict EPA standards. If successful, groups could force xAI to redo the permitting process “pursuant to the major source requirements of the Clean Air Act” and local law. At the very least, they’ve asked the board to remand the permit to the health department to “reconsider its determinations.”

Unless the pollution control board intervenes, Memphians worry xAI’s “unlawful conduct risks being repeated and evading review,” with any turbines removed easily brought back with “no notice” to residents if xAI’s exemptions remain in place.

“Nothing is stopping xAI from installing additional unpermitted turbines at any time to meet its widely-publicized demand for additional power,” the appeal said.

NAACP’s director of environmental justice, Abre’ Conner, confirmed in the SELC’s press release that his group and community members “have repeatedly shared concerns that xAI is causing a significant increase in the pollution of the air Memphians breathe.”

“The health department should focus on people’s health—not on maximizing corporate gain,” Conner said.


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

Permit for xAI’s data center blatantly violates Clean Air Act, NAACP says Read More »