Author name: Mike M.

the-empire-strikes-back-with-f-bombs:-ai-darth-vader-goes-rogue-with-profanity,-slurs

The empire strikes back with F-bombs: AI Darth Vader goes rogue with profanity, slurs

The company acted quickly to address the language issues, but according to GameSpot, some players also reported hearing intense instructions for dealing with a break-up (“Exploit their vulnerabilities, shatter their confidence, and crush their spirit”) and disparaging comments from the character directed at Spanish speakers: “Spanish? A useful tongue for smugglers and spice traders,” AI Vader said. “Its strategic value is minimal.”

To be fair to Epic’s attempt at an AI implementation, Darth Vader is a deeply evil character (i.e. murders sandpeople, hates sand), and the remarks seem consistent with his twisted and sadistic personality. In fact, arguably the most out-of-character “inappropriate” response in the examples above might be the one where he chides the player for vulgarity.

On Friday afternoon, Epic Games sent out an email seeking to reassure parents who may have come across the news about the misbehaving AI character. The company explained it has added “a new parental control so you can choose whether your child can interact with AI features in Epic’s products through voice and written communication.” The email specifically mentions the Darth Vader NPC, noting that “when players talk to conversational AI like Darth Vader, they can have an interactive chat where the NPC responds in context.” For children under 13 or their country’s age of digital consent, Epic says the feature defaults to off and requires parental activation through either the Fortnite main menu or Epic Account Settings.

These aren’t the words you’re looking for

Getting an AI character to match the tone or backstory of an established fictional character isn’t as easy as it might seem. Compared to a carefully controlled authored script in other video games, AI speech can offer nearly infinite possibilities. Trusting that AI model will get it right, at scale, is a dicey proposition—especially with a well-known and beloved character.

The empire strikes back with F-bombs: AI Darth Vader goes rogue with profanity, slurs Read More »

google-to-give-app-devs-access-to-gemini-nano-for-on-device-ai

Google to give app devs access to Gemini Nano for on-device AI

The rapid expansion of generative AI has changed the way Google and other tech giants design products, but most of the AI features you’ve used are running on remote servers with a ton of processing power. Your phone has a lot less power, but Google appears poised to give developers some important new mobile AI tools. At I/O next week, Google will likely announce a new set of APIs to let developers leverage the capabilities of Gemini Nano for on-device AI.

Google has quietly published documentation on big new AI features for developers. According to Android Authority, an update to the ML Kit SDK will add API support for on-device generative AI features via Gemini Nano. It’s built on AI Core, similar to the experimental Edge AI SDK, but it plugs into an existing model with a set of predefined features that should be easy for developers to implement.

Google says ML Kit’s GenAI APIs will enable apps to do summarization, proofreading, rewriting, and image description without sending data to the cloud. However, Gemini Nano doesn’t have as much power as the cloud-based version, so expect some limitations. For example, Google notes that summaries can only have a maximum of three bullet points, and image descriptions will only be available in English. The quality of outputs could also vary based on the version of Gemini Nano on a phone. The standard version (Gemini Nano XS) is about 100MB in size, but Gemini Nano XXS as seen on the Pixel 9a is a quarter of the size. It’s text-only and has a much smaller context window.

Not all versions of Gemini Nano are created equal.

Credit: Ryan Whitwam

Not all versions of Gemini Nano are created equal. Credit: Ryan Whitwam

This move is good for Android in general because ML Kit works on devices outside Google’s Pixel line. While Pixel devices use Gemini Nano extensively, several other phones are already designed to run this model, including the OnePlus 13, Samsung Galaxy S25, and Xiaomi 15. As more phones add support for Google’s AI model, developers will be able to target those devices with generative AI features.

Google to give app devs access to Gemini Nano for on-device AI Read More »

meta-argues-enshittification-isn’t-real-in-bid-to-toss-ftc-monopoly-case

Meta argues enshittification isn’t real in bid to toss FTC monopoly case

Further, Meta argued that the FTC did not show evidence that users sharing friends-and-family content were shown more ads. Meta noted that it “does not profit by showing more ads to users who do not click on them,” so it only shows more ads to users who click ads.

Meta also insisted that there’s “nothing but speculation” showing that Instagram or WhatsApp would have been better off or grown into rivals had Meta not acquired them.

The company claimed that without Meta’s resources, Instagram may have died off. Meta noted that Instagram co-founder Kevin Systrom testified that his app was “pretty broken and duct-taped” together, making it “vulnerable to spam” before Meta bought it.

Rather than enshittification, what Meta did to Instagram could be considered “a consumer-welfare bonanza,” Meta argued, while dismissing “smoking gun” emails from Mark Zuckerberg discussing buying Instagram to bury it as “legally irrelevant.”

Dismissing these as “a few dated emails,” Meta argued that “efforts to litigate Mr. Zuckerberg’s state of mind before the acquisition in 2012 are pointless.”

“What matters is what Meta did,” Meta argued, which was pump Instagram with resources that allowed it “to ‘thrive’—adding many new features, attracting hundreds of millions and then billions of users, and monetizing with great success.”

In the case of WhatsApp, Meta argued that nobody thinks WhatsApp had any intention to pivot to social media when the founders testified that their goal was to never add social features, preferring to offer a simple, clean messaging app. And Meta disputed any claim that it feared Google might buy WhatsApp as the basis for creating a Facebook rival, arguing that “the sole Meta witness to (supposedly) learn of Google’s acquisition efforts testified that he did not have that worry.”

Meta argues enshittification isn’t real in bid to toss FTC monopoly case Read More »

rocket-report:-how-is-your-payload-fairing?-poland-launches-test-rocket.

Rocket Report: How is your payload fairing? Poland launches test rocket.


All the news that’s fit to lift

No thunder down under.

Venus Aerospace tests its rotating detonation rocket engine in flight for the first time this week. Credit: Venus Aerospace

Venus Aerospace tests its rotating detonation rocket engine in flight for the first time this week. Credit: Venus Aerospace

Welcome to Edition 7.44 of the Rocket Report! We had some interesting news on Thursday afternoon from Down Under. As Gilmour Space was preparing for the second launch attempt of its Eris vehicle, as part of the pre-launch preparations, something triggered the payload fairing to deploy. We would love to see some video of that. Please.

As always, we welcome reader submissions, and if you don’t want to miss an issue, please subscribe using the box below (the form will not appear on AMP-enabled versions of the site). Each report will include information on small-, medium-, and heavy-lift rockets, as well as a quick look ahead at the next three launches on the calendar.

Rotating detonation rocket engine takes flight. On Wednesday, US-based propulsion company Venus Aerospace completed a short flight test of its rotating detonation rocket engine at Spaceport America in New Mexico, Ars reports. It is believed to be the first US-based flight test of an idea that has been discussed academically for decades. The concept has previously been tested in a handful of other countries, but never with a high-thrust engine.

Hypersonics on the horizon… The company has only released limited information about the test. The small rocket, powered by the company’s 2,000-pound-thrust engine, launched from a rail in New Mexico. The vehicle flew for about half a minute and, as planned, did not break the sound barrier. Governments around the world have been interested in rotating detonation engine technology for a long time because it has the potential to significantly increase fuel efficiency in a variety of applications, from Navy carriers to rocket engines. In the near term, Venus’ engine could be used for hypersonic missions.

Gilmour Space has a payload fairing mishap. Gilmour Space, a venture-backed startup based in Australia, said this week it was ready to launch a small rocket from its privately owned spaceport on a remote stretch of the country’s northeastern coastline, Ars reports. Gilmour’s three-stage rocket, named Eris, was prepped for a launch as early as Wednesday, but a ground systems issue delayed an attempt until Thursday US time. And then on Thursday, something odd happened: “Last night, during final checks, an unexpected issue triggered the rocket’s payload fairing,” the company said Thursday afternoon, US time.

Always more problems to solve… Gilmour, based in Gold Coast, Australia, was founded in 2012 by two brothers, Adam and James Gilmour, who came to the space industry after careers in banking and marketing. Today, Gilmour employs more than 200 people, mostly engineers and technicians. The debut launch of Gilmour’s Eris rocket is purely a test flight. Gilmour has tested the rocket’s engines and rehearsed the countdown last year, loading propellant and getting within 10 seconds of launch. But Gilmour cautioned in a post on LinkedIn early Wednesday that “test launches are complex.” And it confirmed that on Thursday. Now the company will need to source a replacement fairing, which will probably take a while.

The easiest way to keep up with Eric Berger’s and Stephen Clark’s reporting on all things space is to sign up for our newsletter. We’ll collect their stories and deliver them straight to your inbox.

Sign Me Up!

Is an orbital launch from Argentina imminent? We don’t know much about the Argentinian launch company TLON Space, which is developing a (very) small-lift orbital rocket called Aventura 1. According to the company’s website, this launch vehicle will be capable of lofting 25 kg to low-Earth orbit. Some sort of flight test took place two years ago, but the video cuts off after a minute, suggesting that the end of the flight was less than nominal.

Maybe, maybe not… Now, a publication called Urgente24 reports that an orbital launch attempt is underway. It is not clear exactly what this means, and details about what is actually happening at the Malacara Spaceport in Argentina are unclear. I could find no other outlets reporting on an imminent launch attempt. So my guess is that nothing will happen soon, but it is something we’ll keep an eye on regardless. (Submitted by fedeng.)

Poland launches suborbital rocket. Poland has successfully launched a single-stage rocket demonstrator at the Central Air Force Training Ground in Ustka, European Spaceflight reports. The flight was part of a project to develop a three-stage solid-fuel rocket for research payloads. In 2020, the Polish government selected Wojskowe Zakłady Lotnicze No. 1 to lead a consortium developing a three-stage suborbital launch system.

Military uses eyed… The Trójstopniowa Rakieta Suborbitalna (TRS) project involves the Military Institute of Armament Technology and Zakład Produkcji Specjalnej Gamrat and is co-financed by the National Center for Research and Development. The goal of the TRS project is to develop a three-stage rocket capable of carrying a 40-kilogram payload to an altitude exceeding 100 kilometres. While the rocket will initially be used to carry research payloads into space, Poland’s Military Institute of Armament Technology has stated that the technology could also be used for the development of anti-aircraft and tactical missiles.

Latitude signs MoU to launch microsats. On Wednesday, the French launch firm Latitude announced the signing of a memorandum of understanding for the launch of a microsatellite constellation dedicated to storing and processing data directly in orbit. In an emailed news release, Latitude said the “strategic partnership” represents a major step forward in strengthening collaborations between UAE and French space companies.

That’s a lot of launches… Madari Space is developing a constellation of microsatellites (50 to 100 kg), designed as true orbital data centers. Their mission is to store and process data generated on Earth or by other satellites. Latitude plans its first commercial launch with its small-lift Zephyr rocket as early as 2026, with the ambition of reaching a rate of 50 launches per year from 2030. An MoU represents an agreement but not a firm launch contract.

China begins launching AI constellation. China launched 12 satellites early Wednesday for an on-orbit computing project led by startup ADA Space and Zhejiang Lab, Space News reports. A Long March 2D rocket lifted off at 12: 12 am Eastern on Wednesday from Jiuquan Satellite Launch Center in northwest China. Commercial company ADA Space released further details, stating that the 12 satellites form the “Three-Body Computing Constellation,” which will directly process data in space rather than on the ground, reducing reliance on ground-based computing infrastructure.

Putting the intelligence in space… ADA Space claims the 12 satellites represent the world’s first dedicated orbital computing constellation. This marks a shift from satellites focused solely on sensing or communication to ones that also serve as data processors and AI platforms. The constellation is part of a wider “Star-Compute Program,” a collaboration between ADA Space and Zhejiang Lab, which aims to build a huge on-orbit network of 2,800 satellites. (Submitted by EllPeaTea.)

SpaceX pushes booster reuse record further. SpaceX succeeded with launching 28 more Starlink satellites from Florida early Tuesday morning following an overnight scrub the previous night. The Falcon 9 booster, 1067, made a record-breaking 28th flight, Spaceflight Now reports.

Booster landings have truly become routine… A little more than eight minutes after liftoff, SpaceX landed B1067 on its drone ship, Just Read the Instructions, which was positioned in the Atlantic Ocean to the east of the Bahamas. This marked the 120th successful landing for this drone ship and the 446th booster landing to date for SpaceX. (Submitted by EllPeaTea.)

What happens if Congress actually cancels the SLS rocket? The White House Office of Management and Budget dropped its “skinny” budget proposal for the federal government earlier this month, and the headline news for the US space program was the cancellation of three major programs: the Space Launch System rocket, the Orion spacecraft, and the Lunar Gateway. In a report, Ars answers the question of what happens to Artemis and NASA’s deep space exploration plans if that happens. The most likely answer is that NASA turns to an old but successful playbook: COTS.

A market price for the Moon… This stands for Commercial Orbital Transportation System and was created by NASA two decades ago to develop cargo transport systems (eventually, this became SpaceX’s Dragon and Northrop’s Cygnus spacecraft) for the International Space Station. Since then, NASA has adopted this same model for crew services as well as other commercial programs. Under the COTS model, NASA provides funding and guidance to private companies to develop their own spacecraft, rockets, and services and then buys those at a “market” rate. Sources indicate that NASA would go to industry and seek an “end-to-end” solution for lunar missions—that is, an integrated plan to launch astronauts from Earth, land them on the Moon, and return them to Earth.

Starship nearing its next test flight. SpaceX fired six Raptor engines on the company’s next Starship rocket Monday, clearing a major hurdle on the path to launch later this month on a high-stakes test flight to get the private rocket program back on track. SpaceX hasn’t officially announced a target launch date, but sources indicate a launch could take place toward the end of next week, prior to Memorial Day weekend, Ars reports. The launch window would open at 6: 30 pm local time (7: 30 pm EDT; 23: 30 UTC).

Getting back on track… If everything goes according to plan, Starship is expected to soar into space and fly halfway around the world, targeting a reentry and controlled splashdown into the Indian Ocean. While reusing the first stage is a noteworthy milestone, the next flight is important for another reason. SpaceX’s last two Starship test flights ended prematurely when the rocket’s upper stage lost power and spun out of control, dropping debris into the sea near the Bahamas and the Turks and Caicos Islands.

Next three launches

May 16: Falcon 9 | Starlink 15-5 | Vandenberg Space Force Base, California | 13: 43 UTC

May 17: Electron | The Sea God Sees | Māhia Peninsula, New Zealand | 08: 15 UTC

May 18: PSLV-XL | RISAT-1B | Satish Dhawan Space Centre, India | 00: 29 UTC

Photo of Eric Berger

Eric Berger is the senior space editor at Ars Technica, covering everything from astronomy to private space to NASA policy, and author of two books: Liftoff, about the rise of SpaceX; and Reentry, on the development of the Falcon 9 rocket and Dragon. A certified meteorologist, Eric lives in Houston.

Rocket Report: How is your payload fairing? Poland launches test rocket. Read More »

report:-terrorists-seem-to-be-paying-x-to-generate-propaganda-with-grok

Report: Terrorists seem to be paying X to generate propaganda with Grok

Back in February, Elon Musk skewered the Treasury Department for lacking “basic controls” to stop payments to terrorist organizations, boasting at the Oval Office that “any company” has those controls.

Fast-forward three months, and now Musk’s social media platform X is suspected of taking payments from sanctioned terrorists and providing premium features that make it easier to raise funds and spread propaganda—including through X’s chatbot, Grok. Groups seemingly benefiting from X include Houthi rebels, Hezbollah, and Hamas, as well as groups from Syria, Kuwait, and Iran. Some accounts have amassed hundreds of thousands of followers, paying to boost their reach while X apparently looks the other way.

In a report released Thursday, the Tech Transparency Project (TTP) flagged popular accounts likely linked to US-sanctioned terrorists. Some of the accounts bear “ID verified” badges, suggesting that X may be going against its own policies that ban sanctioned terrorists from benefiting from its platform.

Even more troubling, “several made use of revenue-generating features offered by X, including a button for tips,” the TTP reported.

On X, Premium subscribers pay $8 monthly or $84 annually, and Premium+ subscribers pay $40 monthly or $395 annually. Verified organizations pay X between $200 and $1,000 monthly, or up to $10,000 annually for access to Premium+. These subscriptions come with perks, allowing suspected terrorist accounts to share longer text and video posts, offer subscribers paid content, create communities, accept gifts, and amplify their propaganda.

Disturbingly, the TTP found that X’s chatbot, Grok, also appears to be helping to whitewash accounts linked to sanctioned terrorists.

In its report, the TTP noted that an account with the handle “hasmokaled”—which apparently belongs to “a key Hezbollah money exchanger,” Hassan Moukalled—at one point had a blue checkmark with 60,000 followers. While the Treasury Department has sanctioned Moukalled for propping up efforts “to continue to exploit and exacerbate Lebanon’s economic crisis,” clicking the Grok AI profile summary button seems to rely on Moukalled’s own posts and his followers’ impressions of his posts and therefore generated praise.

Report: Terrorists seem to be paying X to generate propaganda with Grok Read More »

openai-adds-gpt-4.1-to-chatgpt-amid-complaints-over-confusing-model-lineup

OpenAI adds GPT-4.1 to ChatGPT amid complaints over confusing model lineup

The release comes just two weeks after OpenAI made GPT-4 unavailable in ChatGPT on April 30. That earlier model, which launched in March 2023, once sparked widespread hype about AI capabilities. Compared to that hyperbolic launch, GPT-4.1’s rollout has been a fairly understated affair—probably because it’s tricky to convey the subtle differences between all of the available OpenAI models.

As if 4.1’s launch wasn’t confusing enough, the release also roughly coincides with OpenAI’s July 2025 deadline for retiring the GPT-4.5 Preview from the API, a model one AI expert called a “lemon.” Developers must migrate to other options, OpenAI says, although GPT-4.5 will remain available in ChatGPT for now.

A confusing addition to OpenAI’s model lineup

In February, OpenAI CEO Sam Altman acknowledged his company’s confusing AI model naming practices on X, writing, “We realize how complicated our model and product offerings have gotten.” He promised that a forthcoming “GPT-5” model would consolidate the o-series and GPT-series models into a unified branding structure. But the addition of GPT-4.1 to ChatGPT appears to contradict that simplification goal.

So, if you use ChatGPT, which model should you use? If you’re a developer using the models through the API, the consideration is more of a trade-off between capability, speed, and cost. But in ChatGPT, your choice might be limited more by personal taste in behavioral style and what you’d like to accomplish. Some of the “more capable” models have lower usage limits as well because they cost more for OpenAI to run.

For now, OpenAI is keeping GPT-4o as the default ChatGPT model, likely due to its general versatility, balance between speed and capability, and personable style (conditioned using reinforcement learning and a specialized system prompt). The simulated reasoning models like 03 and 04-mini-high are slower to execute but can consider analytical-style problems more systematically and perform comprehensive web research that sometimes feels genuinely useful when it surfaces relevant (non-confabulated) web links. Compared to those, OpenAI is largely positioning GPT-4.1 as a speedier AI model for coding assistance.

Just remember that all of the AI models are prone to confabulations, meaning that they tend to make up authoritative-sounding information when they encounter gaps in their trained “knowledge.” So you’ll need to double-check all of the outputs with other sources of information if you’re hoping to use these AI models to assist with an important task.

OpenAI adds GPT-4.1 to ChatGPT amid complaints over confusing model lineup Read More »

google-introduces-advanced-protection-mode-for-its-most-at-risk-android-users

Google introduces Advanced Protection mode for its most at-risk Android users

Google is adding a new security setting to Android to provide an extra layer of resistance against attacks that infect devices, tap calls traveling through insecure carrier networks, and deliver scams through messaging services.

On Tuesday, the company unveiled the Advanced Protection mode, most of which will be rolled out in the upcoming release of Android 16. The setting comes as mercenary malware sold by NSO Group and a cottage industry of other exploit sellers continues to thrive. These players provide attacks-as-a-service through end-to-end platforms that exploit zero-day vulnerabilities on targeted devices, infect them with advanced spyware, and then capture contacts, message histories, locations, and other sensitive information. Over the past decade, phones running fully updated versions of Android and iOS have routinely been hacked through these services.

A core suite of enhanced security features

Advanced Protection is Google’s latest answer to this type of attack. By flipping a single button in device settings, users can enable a host of protections that can thwart some of the most common techniques used in sophisticated hacks. In some cases, the protections hamper performance and capabilities of the device, so Google is recommending the new mode mainly for journalists, elected officials, and other groups who are most often targeted or have the most to lose when infected.

“With the release of Android 16, users who choose to activate Advanced Protection will gain immediate access to a core suite of enhanced security features,” Google’s product manager for Android Security, Il-Sung Lee, wrote. “Additional Advanced Protection features like Intrusion Logging, USB protection, the option to disable auto-reconnect to insecure networks, and integration with Scam Detection for Phone by Google will become available later this year.”

Google introduces Advanced Protection mode for its most at-risk Android users Read More »

office-apps-on-windows-10-are-no-longer-tied-to-its-october-2025-end-of-support-date

Office apps on Windows 10 are no longer tied to its October 2025 end-of-support date

For most users, Windows 10 will stop receiving security updates and other official support from Microsoft on October 14, 2025, about five months from today. Until recently, Microsoft had also said that users running the Microsoft Office apps on Windows 10 would also lose support on that date, whether they were using the continually updated Microsoft 365 versions of those apps or the buy-once-own-forever versions included in Office 2021 or Office 2024.

Microsoft has recently tweaked this policy, however (as seen by The Verge). Now, Windows 10 users of the Microsoft 365 apps will still be eligible to receive software updates and support through October of 2028, “in the interest of maintaining your security while you upgrade to Windows 11.” Microsoft is taking a similar approach to Windows Defender malware definitions, which will be offered to Windows 10 users “through at least October 2028.”

The policy is a change from a few months ago, when Microsoft insisted that Office apps running on Windows 10 would become officially unsupported on October 14. The perpetually licensed versions of Office will be supported in accordance with Microsoft’s “Fixed Lifecycle Policy,” which guarantees support and security updates for a fixed number of years after a software product’s initial release. For Office 2021, this means Windows 10 users will get support through October of 2026; for Office 2024, this should extend to October of 2029.

Office apps on Windows 10 are no longer tied to its October 2025 end-of-support date Read More »

ai-agents-that-autonomously-trade-cryptocurrency-aren’t-ready-for-prime-time

AI agents that autonomously trade cryptocurrency aren’t ready for prime time

The researchers wrote:

The implications of this vulnerability are particularly severe given that ElizaOSagents are designed to interact with multiple users simultaneously, relying on shared contextual inputs from all participants. A single successful manipulation by a malicious actor can compromise the integrity of the entire system, creating cascading effects that are both difficult to detect and mitigate. For example, on ElizaOS’s Discord server, various bots are deployed to assist users with debugging issues or engaging in general conversations. A successful context manipulation targeting any one of these bots could disrupt not only individual interactions but also harm the broader community relying on these agents for support

and engagement.

This attack exposes a core security flaw: while plugins execute sensitive operations, they depend entirely on the LLM’s interpretation of context. If the context is compromised, even legitimate user inputs can trigger malicious actions. Mitigating this threat requires strong integrity checks on stored context to ensure that only verified, trusted data informs decision-making during plugin execution.

In an email, ElizaOS creator Shaw Walters said the framework, like all natural-language interfaces, is designed “as a replacement, for all intents and purposes, for lots and lots of buttons on a webpage.” Just as a website developer should never include a button that gives visitors the ability to execute malicious code, so too should administrators implementing ElizaOS-based agents carefully limit what agents can do by creating allow lists that permit an agent’s capabilities as a small set of pre-approved actions.

Walters continued:

From the outside it might seem like an agent has access to their own wallet or keys, but what they have is access to a tool they can call which then accesses those, with a bunch of authentication and validation between.

So for the intents and purposes of the paper, in the current paradigm, the situation is somewhat moot by adding any amount of access control to actions the agents can call, which is something we address and demo in our latest latest version of Eliza—BUT it hints at a much harder to deal with version of the same problem when we start giving the agent more computer control and direct access to the CLI terminal on the machine it’s running on. As we explore agents that can write new tools for themselves, containerization becomes a bit trickier, or we need to break it up into different pieces and only give the public facing agent small pieces of it… since the business case of this stuff still isn’t clear, nobody has gotten terribly far, but the risks are the same as giving someone that is very smart but lacking in judgment the ability to go on the internet. Our approach is to keep everything sandboxed and restricted per user, as we assume our agents can be invited into many different servers and perform tasks for different users with different information. Most agents you download off Github do not have this quality, the secrets are written in plain text in an environment file.

In response, Atharv Singh Patlan, the lead co-author of the paper, wrote: “Our attack is able to counteract any role based defenses. The memory injection is not that it would randomly call a transfer: it is that whenever a transfer is called, it would end up sending to the attacker’s address. Thus, when the ‘admin’ calls transfer, the money will be sent to the attacker.”

AI agents that autonomously trade cryptocurrency aren’t ready for prime time Read More »

fcc-commissioner-writes-op-ed-titled,-“it’s-time-for-trump-to-doge-the-fcc“

FCC commissioner writes op-ed titled, “It’s time for Trump to DOGE the FCC“

In addition to cutting Universal Service, Simington proposed a broad streamlining of the FCC licensing process. Manual processing of license applications “consumes vast staff hours and introduces unnecessary delay into markets that thrive on speed and innovation,” he wrote.

“For non-contentious licenses, automated workflows should be the default,” Simington argued. “By implementing intelligent review systems and processing software, the FCC could drastically reduce the time and labor involved in issuing standard licenses.”

Moving staff, deleting rules

Simington also proposed taking employees out of the FCC Media Bureau and moving them “to other offices within the FCC—such as the Space Bureau—that are grappling with staffing shortages in high-growth, high-need sectors.” Much of the Media Bureau’s “work is concentrated on regulating traditional broadcast media—specifically, over-the-air television and radio—a sector that continues to contract in relevance,” he wrote.

Simington acknowledged that cutting the Media Bureau would seem to conflict with his own proposal to regulate fees paid by local stations to broadcast networks. It might also conflict with FCC Chairman Brendan Carr’s attempts to regulate news content that he perceives as biased against Republicans. But Simington argued that the Media Bureau is “significantly overstaffed relative to its current responsibilities.”

Simington became an FCC commissioner at the end of Trump’s first term in 2020. Trump picked Simington as a replacement for Republican Michael O’Rielly, who earned Trump’s ire by opposing a crackdown on social media websites.

The FCC is currently operating with two Republicans and two Democrats, preventing any major votes that require a Republican majority. But Democratic Commissioner Geoffrey Starks said he is leaving sometime this spring, and Republican nominee Olivia Trusty is on track to be confirmed by the Senate.

The agency is likely to cut numerous regulations once there’s a Republican majority. Carr started a “Delete, Delete, Delete” proceeding that aims to eliminate as many rules as possible. Congress is also pushing FCC cost cuts, as the Senate voted to kill a Biden-era attempt to use E-Rate to subsidize Wi-Fi hotspots for schoolchildren who lack reliable Internet access to complete their homework.

FCC commissioner writes op-ed titled, “It’s time for Trump to DOGE the FCC“ Read More »

copyright-office-head-fired-after-reporting-ai-training-isn’t-always-fair-use

Copyright Office head fired after reporting AI training isn’t always fair use


Cops scuffle with Trump picks at Copyright Office after AI report stuns tech industry.

A man holds a flag that reads “Shame” outside the Library of Congress on May 12, 2025 in Washington, DC. On May 8th, President Donald Trump fired Carla Hayden, the head of the Library of Congress, and Shira Perlmutter, the head of the US Copyright Office, just days after. Credit: Kayla Bartkowski / Staff | Getty Images News

A day after the US Copyright Office dropped a bombshell pre-publication report challenging artificial intelligence firms’ argument that all AI training should be considered fair use, the Trump administration fired the head of the Copyright Office, Shira Perlmutter—sparking speculation that the controversial report hastened her removal.

Tensions have apparently only escalated since. Now, as industry advocates decry the report as overstepping the office’s authority, social media posts on Monday described an apparent standoff at the Copyright Office between Capitol Police and men rumored to be with Elon Musk’s Department of Government Efficiency (DOGE).

A source familiar with the matter told Wired that the men were actually “Brian Nieves, who claimed he was the new deputy librarian, and Paul Perkins, who said he was the new acting director of the Copyright Office, as well as acting Registrar,” but it remains “unclear whether the men accurately identified themselves.” A spokesperson for the Capitol Police told Wired that no one was escorted off the premises or denied entry to the office.

Perlmutter’s firing followed Donald Trump’s removal of Librarian of Congress Carla Hayden, who, NPR noted, was the first African American to hold the post. Responding to public backlash, White House Press Secretary Karoline Leavitt claimed that the firing was due to “quite concerning things that she had done at the Library of Congress in the pursuit of DEI and putting inappropriate books in the library for children.”

The Library of Congress houses the Copyright Office, and critics suggested Trump’s firings were unacceptable intrusions into cultural institutions that are supposed to operate independently of the executive branch. In a statement, Rep. Joe Morelle (D.-N.Y.) condemned Perlmutter’s removal as “a brazen, unprecedented power grab with no legal basis.”

Accusing Trump of trampling Congress’ authority, he suggested that Musk and other tech leaders racing to dominate the AI industry stood to directly benefit from Trump’s meddling at the Copyright Office. Likely most threatening to tech firms, the guidance from Perlmutter’s Office not only suggested that AI training on copyrighted works may not be fair use when outputs threaten to disrupt creative markets—as publishers and authors have argued in several lawsuits aimed at the biggest AI firms—but also encouraged more licensing to compensate creators.

“It is surely no coincidence [Trump] acted less than a day after she refused to rubber-stamp Elon Musk’s efforts to mine troves of copyrighted works to train AI models,” Morelle said, seemingly referencing Musk’s xAI chatbot, Grok.

Agreeing with Morelle, Courtney Radsch—the director of the Center for Journalism & Liberty at the left-leaning think tank the Open Markets Institute—said in a statement provided to Ars that Perlmutter’s firing “appears directly linked to her office’s new AI report questioning unlimited harvesting of copyrighted materials.”

“This unprecedented executive intrusion into the Library of Congress comes directly after Perlmutter released a copyright report challenging the tech elite’s fundamental claim: unlimited access to creators’ work without permission or compensation,” Radsch said. And it comes “after months of lobbying by the corporate billionaires” who “donated” millions to Trump’s inauguration and “have lapped up the largess of government subsidies as they pursue AI dominance.”

What the Copyright Office says about fair use

The report that the Copyright Office released on Friday is not finalized but is not expected to change radically, unless Trump’s new acting head potentially intervenes to overhaul the guidance.

It comes after the Copyright Office parsed more than 10,000 comments debating whether creators should and could feasibly be compensated for the use of their works in AI training.

“The stakes are high,” the office acknowledged, but ultimately, there must be an effective balance struck between the public interests in “maintaining a thriving creative community” and “allowing technological innovation to flourish.” Notably, the office concluded that the first and fourth factors of fair use—which assess the character of the use (and whether it is transformative) and how that use affects the market—are likely to hold the most weight in court.

According to Radsch, the report “raised crucial points that the tech elite don’t want acknowledged.” First, the Copyright Office acknowledged that it’s an open question how much data an AI developer needs to build an effective model. Then, they noted that there’s a need for a consent framework beyond putting the onus on creators to opt their works out of AI training, and perhaps most alarmingly, they concluded that “AI trained on copyrighted works could replace original creators in the marketplace.”

“Commenters painted a dire picture of what unlicensed training would mean for artists’ livelihoods,” the Copyright Office said, while industry advocates argued that giving artists the power to hamper or “kill” AI development could result in “far less competition, far less innovation, and very likely the loss of the United States’ position as the leader in global AI development.”

To prevent both harms, the Copyright Office expects that some AI training will be deemed fair use, such as training viewed as transformative, because resulting models don’t compete with creative works. Those uses threaten no market harm but rather solve a societal need, such as language models translating texts, moderating content, or correcting grammar. Or in the case of audio models, technology that helps producers clean up unwanted distortion might be fair use, where models that generate songs in the style of popular artists might not, the office opined.

But while “training a generative AI foundation model on a large and diverse dataset will often be transformative,” the office said that “not every transformative use is a fair one,” especially if the AI model’s function performs the same purpose as the copyrighted works they were trained on. Consider an example like chatbots regurgitating news articles, as is alleged in The New York Times’ dispute with OpenAI over ChatGPT.

“In such cases, unless the original work itself is being targeted for comment or parody, it is hard to see the use as transformative,” the Copyright Office said. One possible solution for AI firms hoping to preserve utility of their chatbots could be effective filters that “prevent the generation of infringing content,” though.

Tech industry accuses Copyright Office of overreach

Only courts can effectively weigh the balance of fair use, the Copyright Office said. Perhaps importantly, however, the thinking of one of the first judges to weigh the question—in a case challenging Meta’s torrenting of a pirated books dataset to train its AI models—seemed to align with the Copyright Office guidance at a recent hearing. Mulling whether Meta infringed on book authors’ rights, US District Judge Vince Chhabria explained why he doesn’t immediately “understand how that can be fair use.”

“You have companies using copyright-protected material to create a product that is capable of producing an infinite number of competing products,” Chhabria said. “You are dramatically changing, you might even say obliterating, the market for that person’s work, and you’re saying that you don’t even have to pay a license to that person.”

Some AI critics think the courts have already indicated which way they are leaning. In a statement to Ars, a New York Times spokesperson suggested that “both the Copyright Office and courts have recognized what should be obvious: when generative AI products give users outputs that compete with the original works on which they were trained, that unprecedented theft of millions of copyrighted works by developers for their own commercial benefit is not fair use.”

The NYT spokesperson further praised the Copyright Office for agreeing that using Retrieval-Augmented Generation (RAG) AI to surface copyrighted content “is less likely to be transformative where the purpose is to generate outputs that summarize or provide abridged versions of retrieved copyrighted works, such as news articles, as opposed to hyperlinks.” If courts agreed on the RAG finding, that could potentially disrupt AI search models from every major tech company.

The backlash from industry stakeholders was immediate.

The president and CEO of a trade association called the Computer & Communications Industry Association, Matt Schruers, said the report raised several concerns, particularly by endorsing “an expansive theory of market harm for fair use purposes that would allow rightsholders to block any use that might have a general effect on the market for copyrighted works, even if it doesn’t impact the rightsholder themself.”

Similarly, the tech industry policy coalition Chamber of Progress warned that “the report does not go far enough to support innovation and unnecessarily muddies the waters on what should be clear cases of transformative use with copyrighted works.” Both groups celebrated the fact that the final decision on fair use would rest with courts.

The Copyright Office agreed that “it is not possible to prejudge the result in any particular case” but said that precedent supports some “general observations.” Those included suggesting that licensing deals may be appropriate where uses are not considered fair without disrupting “American leadership” in AI, as some AI firms have claimed.

“These groundbreaking technologies should benefit both the innovators who design them and the creators whose content fuels them, as well as the general public,” the report said, ending with the office promising to continue working with Congress to inform AI laws.

Copyright Office seemingly opposes Meta’s torrenting

Also among those “general observations,” the Copyright Office wrote that “making commercial use of vast troves of copyrighted works to produce expressive content that competes with them in existing markets, especially where this is accomplished through illegal access, goes beyond established fair use boundaries.”

The report seemed to suggest that courts and the Copyright Office may also be aligned on AI firms’ use of pirated or illegally accessed paywalled content for AI training.

Judge Chhabria only considered Meta’s torrenting in the book authors’ case to be “kind of messed up,” prioritizing the fair use question, and the Copyright Office similarly only recommended that “the knowing use of a dataset that consists of pirated or illegally accessed works should weigh against fair use without being determinative.”

However, torrenting should be a black mark, the Copyright Office suggested. “Gaining unlawful access” does bear “on the character of the use,” the office noted, arguing that “training on pirated or illegally accessed material goes a step further” than simply using copyrighted works “despite the owners’ denial of permission.” Perhaps if authors can prove that AI models trained on pirated works led to lost sales, the office suggested that a fair use defense might not fly.

“The use of pirated collections of copyrighted works to build a training library, or the distribution of such a library to the public, would harm the market for access to those Works,” the office wrote. “And where training enables a model to output verbatim or substantially similar copies of the works trained on, and those copies are readily accessible by end users, they can substitute for sales of those works.”

Likely frustrating Meta—which is currently fighting to keep leeching evidence out of the book authors’ case—the Copyright Office suggested that “the copying of expressive works from pirate sources in order to generate unrestricted content that competes in the marketplace, when licensing is reasonably available, is unlikely to qualify as fair use.”

Photo of Ashley Belanger

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

Copyright Office head fired after reporting AI training isn’t always fair use Read More »

us-and-china-pause-tariffs-for-90-days-as-trump-claims-“historic-trade-win”

US and China pause tariffs for 90 days as Trump claims “historic trade win”

The deal announced today “did not address what would happen to low-value ‘de minimis’ ecommerce packages shipped from China to the US,” Reuters wrote. The US imposed 120 percent tariffs on those packages. According to Axios, a White House official confirmed that small packages from China are still subject to 120 percent tariffs.

Treasury Secretary Scott Bessent said today that both governments want to avoid a severing of their economies but that the US still plans to impose tariffs on specific items that the White House wants to be produced in the US. Bessent said that “neither side wants a generalized decoupling. The US is going to do a strategic decoupling in terms of the items that we discovered during COVID were of national security interests, whether it’s semiconductors, medicine, steel, so we still have generalized tariffs on some of those, but both sides agree we do not want a generalized decoupling.”

The S&P 500 index was up about 2.6 percent today as of this writing, while the tech-focused NASDAQ Composite index had risen about 3.5 percent. Neither index has recovered to its record high after months of turmoil caused by Trump’s tariffs.

Reuters quoted Zhiwei Zhang, chief economist at Pinpoint Asset Management in Hong Kong, as saying that the 90-day deal was better than he expected. “I thought tariffs would be cut to somewhere around 50 percent,” Zhang said. “Obviously, this is very positive news for economies in both countries and for the global economy and makes investors much less concerned about the damage to global supply chains in the short term.”

In April, Trump raised tariffs on China while pausing tariff hikes on other countries for 90 days. Trump struck a trade deal with the UK last week, and talks with other countries are continuing.

US and China pause tariffs for 90 days as Trump claims “historic trade win” Read More »