Author name: Kris Guyer

M5 Pro and M5 Max are surprisingly big departures from older Apple Silicon


Apple is using more chiplets and three types of CPU cores to make the M5 family.

As part of today’s MacBook Pro update, Apple has also unveiled the M5 Pro and M5 Max, the newest members of the M5 chip family.

Normally, the Pro and Max chips take the same building blocks as the base chip and just scale them up—more CPU cores, more GPU cores, and more memory bandwidth. But the M5 chips are a surprisingly large departure from past generations, both in terms of the CPU architectures they use and in how they’re packaged together.

We won’t know the impact these changes have had on performance until we have hardware in hand to test, but here are all the technical details we’ve been able to glean about the new updates and how the M5 chip family stacks up against the past few generations of Apple Silicon chips.

New Fusion Architecture and a third type of CPU core

Apple says that M5 Pro and M5 Max use an “all-new Fusion Architecture” that welds two silicon chiplets into a single processor. Apple has used this approach before, but historically only to combine two Max chips together into an Ultra.

Apple’s approach here is different—for example, the M5 Pro is not just a pair of M5 chips welded together. Rather, Apple has one chiplet handling the CPU and most of the I/O, and a second one that’s mainly for graphics, both built on the same 3nm TSMC manufacturing process.

The first silicon die is always the same, whether you get an M5 Pro or M5 Max. It includes the 18-core CPU, the 16-core Neural Engine, and controllers for the SSD, for the Thunderbolt ports, and for driving displays.

The second die is where the two chips differ; the M5 Pro gets up to 20 GPU cores, a single media encoding/decoding engine, and a memory controller with up to 307 GB/s of bandwidth. The M5 Max gets up to 40 GPU cores, a pair of media encoding/decoding engines, and a memory controller that provides up to 614 GB/s of memory bandwidth (note that everything in the GPU die seems to be doubled, implying that Apple is, in fact, sticking two M5 Pro GPUs together to make one M5 Max GPU).
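To make the two-die packaging concrete, here is a rough sketch in Python. The figures come from the spec-sheet numbers quoted above; the dictionary layout itself is purely illustrative and not Apple's actual die organization.

```python
# Illustrative model (not Apple's real layout) of the M5 Pro/Max chiplet split.
# The CPU/I-O die is identical on both chips; only the GPU die differs.
CPU_IO_DIE = {"cpu_cores": 18, "neural_engine_cores": 16}

# M5 Pro GPU die, per Apple's spec sheet as quoted above.
PRO_GPU_DIE = {"gpu_cores": 20, "media_engines": 1, "bandwidth_gbps": 307}

# The spec sheet implies the Max GPU die is simply two Pro GPU dies' worth
# of resources, so doubling every figure reproduces the M5 Max numbers.
MAX_GPU_DIE = {key: value * 2 for key, value in PRO_GPU_DIE.items()}

assert MAX_GPU_DIE == {"gpu_cores": 40, "media_engines": 2, "bandwidth_gbps": 614}
```

The doubling working out exactly for every line item is what suggests Apple is reusing one GPU die design twice, rather than designing a separate die for the Max.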

Apple’s spec sheets now list three distinct types of CPU cores: “super” cores, performance cores, and efficiency cores. Credit: Apple

Apple is also introducing a third distinct type of CPU core beyond the typical “performance cores” and “efficiency cores” that were included in older M-series processors.

At the top, you have “super cores,” which is Apple’s new M5-era branding for what it used to call “performance cores.” This change is retroactive and also applies to the regular M5; Apple’s spec sheet for the M5 MacBook Pro used to refer to the big cores as “performance cores” but now calls them “super cores.”

At the bottom of the hierarchy, you still have “efficiency cores” that are tuned for low power usage. The M5 still uses six efficiency cores, and unlike the super cores, they haven’t been rebranded. These cores do help with multi-core performance, but they prioritize lower power usage and lower temperatures first, since they need to fit in fanless devices like the iPad Pro and MacBook Air.

And now, in the middle, we have a new type of “performance core” used exclusively in the M5 Pro and M5 Max.

These are, in fact, a new, third type of CPU core design, distinct from both the super cores and the M5’s efficiency cores. They apparently use designs similar to the super cores but prioritize multi-threaded performance rather than fast single-core performance. Apple’s approach with the new performance cores sounds similar to the one AMD uses in its laptop silicon: it has larger Zen 4 and Zen 5 CPU cores, optimized for peak clock speeds and higher power usage, and smaller Zen 4c and Zen 5c cores that support the same capabilities but run slower and are optimized to use less die space.

What we don’t know yet is how these new chips perform relative to the previous versions. Technically, the M4 Pro and M4 Max both had more “big” cores than the M5 Pro and M5 Max do—up to 10 for the M4 Pro and up to 12 for the M4 Max. But higher single-core performance from the six “super cores” and strong multi-core performance from the 12 performance cores should mean that the M5 generation still shakes out to be faster overall.

How all the chips compare

For Mac buyers choosing between these three processors, we’re updating the spec tables we’ve put together in the past, comparing the M5-generation chips to one another and to their counterparts in the M2, M3, and M4 generations.

Here’s how all of the M5 chips stack up, including the partly disabled versions of each chip that Apple sells in lower-end MacBook Air and Pro models:

| Chip | CPU S/P/E cores | GPU cores | RAM options | Display support (including internal) | Memory bandwidth | Video decode/encode engines |
| --- | --- | --- | --- | --- | --- | --- |
| Apple M5 (low) | 4S/6E | 8 | 16GB | Up to three | 153GB/s | One |
| Apple M5 (high) | 4S/6E | 10 | 16/24/32GB | Up to three | 153GB/s | One |
| Apple M5 Pro (low) | 5S/10P | 16 | 24GB | Up to four | 307GB/s | One |
| Apple M5 Pro (high) | 6S/12P | 20 | 24/48/64GB | Up to four | 307GB/s | One |
| Apple M5 Max (low) | 6S/12P | 32 | 36GB | Up to five | 460GB/s | Two |
| Apple M5 Max (high) | 6S/12P | 40 | 48/64/128GB | Up to five | 614GB/s | Two |

Despite all the big under-the-hood changes, the basic hierarchy here remains the same as in past generations. The Pro tier offers the biggest bump to CPU performance compared to the basic M5, along with twice as many GPU cores. The Max chip is mainly meant for those who want better graphics, 128GB of RAM, or both.

Compared to M2, M3, and M4

| Chip | CPU S/P/E cores | GPU cores | RAM options | Display support (including internal) | Memory bandwidth |
| --- | --- | --- | --- | --- | --- |
| Apple M5 (high) | 4S/6E | 10 | 16/24/32GB | Up to three | 153GB/s |
| Apple M4 (high) | 4P/6E | 10 | 16/24/32GB | Up to three | 120GB/s |
| Apple M3 (high) | 4P/4E | 10 | 8/16/24GB | Up to two | 102.4GB/s |
| Apple M2 (high) | 4P/4E | 10 | 8/16/24GB | Up to two | 102.4GB/s |

Compared to past generations, the M5 looks like the basic incremental improvement that we’re used to—no huge jumps in CPU or GPU core counts, relying mostly on architectural improvements and memory bandwidth increases to deliver the expected generation-over-generation speed boost. The Pro and Max chips have similar graphics core counts across generations, but there has been more variability when it comes to the CPU cores.
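A few lines of arithmetic turn the bandwidth column for the base chips into generation-over-generation percentages; the figures are the ones listed for the highest configurations above.

```python
# Memory bandwidth (GB/s) for the highest base-chip configurations.
bandwidth = {"M2": 102.4, "M3": 102.4, "M4": 120.0, "M5": 153.0}

gens = ["M2", "M3", "M4", "M5"]
for prev, cur in zip(gens, gens[1:]):
    gain = (bandwidth[cur] / bandwidth[prev] - 1) * 100
    print(f"{prev} -> {cur}: {gain:+.1f}%")  # M4 -> M5 works out to +27.5%
```

The M4-to-M5 jump of 27.5 percent is the largest of the three, consistent with Apple leaning on bandwidth increases rather than core counts for this generation's speedup.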

| Chip | CPU S/P/E cores | GPU cores | RAM options | Display support (including internal) | Memory bandwidth |
| --- | --- | --- | --- | --- | --- |
| Apple M5 Pro (high) | 6S/12P | 20 | 24/48/64GB | Up to four | 307GB/s |
| Apple M4 Pro (high) | 10P/4E | 20 | 24/48/64GB | Up to three | 273GB/s |
| Apple M3 Pro (high) | 6P/6E | 18 | 18/36GB | Up to three | 153.6GB/s |
| Apple M2 Pro (high) | 8P/4E | 19 | 16/32GB | Up to three | 204.8GB/s |

The Pro chips have been sort of all over the place, and the M3 generation in particular is an outlier. When we tested it at the time, we found it to be more or less a wash compared to the M2 Pro, which was (and still is) rare for Apple Silicon generations. The M4 Pro was a better upgrade, and the M5 Pro should still feel like an improvement over the M4 Pro despite the big underlying changes.

| Chip | CPU S/P/E cores | GPU cores | RAM options | Display support (including internal) | Memory bandwidth |
| --- | --- | --- | --- | --- | --- |
| Apple M5 Max (high) | 6S/12P | 40 | 48/64/128GB | Up to five | 614GB/s |
| Apple M4 Max (high) | 12P/4E | 40 | 48/64/128GB | Up to five | 546GB/s |
| Apple M3 Max (high) | 12P/4E | 40 | 48/64/128GB | Up to five | 409.6GB/s |
| Apple M2 Max (high) | 8P/4E | 38 | 64/96GB | Up to five | 409.6GB/s |

The M5 Max will be the biggest test for Apple’s new performance cores. According to our testing of the M5 in the 14-inch MacBook Pro, the M5-generation super cores are about 12 to 15 percent faster than the M4 generation’s performance cores. The M4 Max had up to 12 of those cores, while the M5 Max only has six. That leaves a pretty substantial gap for M5 Max’s new non-super P-cores to close.
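As a back-of-envelope exercise, and under crude assumptions that real workloads never satisfy (perfect multi-core scaling, no shared-cache or memory contention), we can estimate how fast the new performance cores would need to be for the M5 Max's big-core cluster just to match the M4 Max's:

```python
# Normalize one M4-generation performance core to 1.0 units of throughput.
M4_PCORE = 1.0
# Per our M5 testing, super cores are ~12-15% faster; use the midpoint.
M5_SUPER = 1.135

m4_max_big = 12 * M4_PCORE   # M4 Max: 12 performance cores
m5_max_super = 6 * M5_SUPER  # M5 Max: only 6 super cores

# Solve 6 * 1.135 + 12 * x = 12 for x, the required per-core throughput
# of each new M5 performance core relative to an M4 P-core.
breakeven = (m4_max_big - m5_max_super) / 12
print(f"New P-cores need ~{breakeven:.2f}x an M4 P-core to break even")
```

Anything above roughly 0.43x per core closes the gap, which is a low bar; AMD's density-optimized Zen "c" cores comfortably clear the analogous threshold, so the real question is how much headroom the M5 Max has beyond break-even.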

Aside from that, the biggest outstanding question is how the M5 shakeup changes Apple’s approach to Ultra chips, assuming the company continues to make them (Apple has already said that not every processor generation will see an Ultra update).

The M1 Ultra, M2 Ultra, and M3 Ultra were all made by fusing two Max chips together, perfectly doubling the CPU and GPU core counts. Will an M5 Ultra still weld two M5 Max chips together using the same basic ingredients to make an even larger processor? Or will Apple create distinct CPU and GPU chiplets just for the Ultra series? All we can say for sure is that we can no longer make assumptions based on Apple’s past behavior, which tends to be the most reliable predictor of its future behavior.

Andrew is a Senior Technology Reporter at Ars Technica, with a focus on consumer tech including computer hardware and in-depth reviews of operating systems like Windows and macOS. Andrew lives in Philadelphia and co-hosts a weekly book podcast called Overdue.


Apple intros M5 Pro and Max MacBook Pros and its first new monitors in years

Apple updated its low-end MacBook Pro with the Apple M5 chip back in October, but the higher-end 14-inch and 16-inch Pros stuck with the M4 Pro and M4 Max chips. This morning, Apple circled back and updated the rest of the lineup, adding the M5 Pro and M5 Max to the higher-end machines and bumping the base storage—the M5 Pro now comes with 1TB of storage by default, while M5 Max chips come with 2TB of storage by default. The internal storage is said to be “up to 2x faster” than the previous-generation Pros. Apple is also bumping the base storage for the M5 MacBook Pro from 512GB to 1TB.

Unlike Apple’s other announcements this week, though, these upgrades also come with increases to their starting prices; the 14-inch MacBook Pro with an M5 Pro chip now starts at $2,199 instead of $1,999, and the 16-inch model with an M5 Pro chip starts at $2,699 instead of $2,499. The M5 MacBook Pro now starts at $1,699, up from $1,599. Granted, you’re getting double the storage of those old base models, but you no longer have the option to pay less if you don’t need 1TB of space.

The M5 Pro and M5 Max look like fairly major updates from the M4 Pro and M4 Max. Both use an 18-core CPU with six higher-performing cores and 12 lower-performing cores, but Apple is changing how it talks about each kind of core. The high-performance cores are now called “super cores,” a change that Apple says will retroactively apply to the high-performance cores in the basic Apple M5. The M5 has four of them, and M5 Pro and M5 Max have six.

Apple says the 12 other CPU cores in the M5 Pro and M5 Max are an “all-new performance core that is optimized to deliver greater power-efficient, multithreaded performance for pro workloads.” These appear to be different from the efficiency cores used in M5 and older Apple chips. Apple didn’t make direct generation-over-generation performance comparisons, but it did say that M5 Pro and M5 Max “deliver up to 2.5x higher multithreaded performance than M1 Pro and M1 Max.”


As Moon interest heats up, two companies unveil plans for a lunar “harvester”

Starting smaller with FLIP

This is not the first time the two companies have worked together. Last August, Interlune announced that it would fly a multispectral camera on a smaller prototype rover being built by Astrolab. This camera will be used to estimate helium-3 quantities and concentration in Moon dirt, or regolith.

This FLIP rover, about the size of a go-kart, is due to launch later this year on a lunar lander built by Astrobotic. It will fly atop the Griffin lander, taking the place of NASA’s VIPER rover, which has been moved to another spacecraft.

The mission will therefore be a learning exercise both for Astrolab, which will test its software and other features of a small lunar rover, and for Interlune, which will seek to ground-truth data about the concentration of helium-3 that has previously been estimated from samples returned to Earth during the Apollo program.

In addition to FLIP, Astrolab is developing a larger rover, FLEX, that is about the size of a minivan. This vehicle has a horseshoe-shaped chassis that can accommodate about 3 cubic meters of payload. This allows for a broad array of activities, from carrying multiple scientific instruments across the Moon and providing a long-distance rover for two astronauts, to moving large equipment or, in the case of Interlune, serving as a mobile harvester.

“Our thesis is to make the most versatile platform possible so we can serve a wide array of customers and achieve NASA’s goal of being one customer among many,” said Jaret Matthews, Astrolab founder and chief executive, in an interview. “So we have essentially a modular approach that allows us to either pick up cargo or implements or payloads. And so in this case, the excavating equipment that Interlune is developing would basically go under the belly of the rover.”


A Tale of Three Contracts

The attempt on Friday by Secretary of War Pete Hegseth to label Anthropic as a supply chain risk and commit corporate murder had a variety of motivations.

On its face, the conflict is a tale of three contracts and the associated working relationships.

  1. The contract Anthropic signed with the Department of War (DoW) in 2025.

  2. The new contract Anthropic was negotiating with DoW, which would have modified the terms in DoW’s favor, but on which the parties could not reach agreement.

  3. The contract OpenAI was negotiating and signed with DoW, which was per OpenAI modified favorably to OpenAI and thus may be modified further.

The contracts and negotiations need to be confidential, so we only have limited details, and especially only limited details have been shared in public.

We do know a lot, and we know a lot more than we did yesterday morning.

This post is what we know about those three contracts.

For further details and sources, and in particular for a more detail-oriented smackdown on a variety of false or misleading claims and takes, see the long version from yesterday. That post uses very careful qualifiers for my sources of information.

This is a short version and will summarize to that end.

I strongly believe the situation can still be salvaged, if cooler heads can prevail. The rhetoric of the last week has been highly unfortunate but can be walked back.

But I also strongly believe that, as of writing this, we are not yet out of the woods for the worst outcomes, including a potential attempt to murder Anthropic. I worry there are those on the government side who actively are working to engineer that outcome, directly against the wishes of POTUS and most of the DoW and administration, who do not care about or do not understand the dire consequences for America if that were to happen.

We do not have direct access to any language from the original Anthropic contract. I presume that without DoW permission, they are very wise not to share those terms.

That makes this difficult. Details are very important.

Based on a variety of sources public and private, here is what I am confident about the contract and the activities under that contract.

Note in particular that Anthropic did have a customized safety stack that they have been doing extensive work on for some time, including model refusals, external monitoring classifiers and forward deployed engineers (FDEs).

  1. Anthropic signed a contract for up to $200 million with DoD (now DoW) last year.

  2. At the time, as he is now, Pete Hegseth was the Secretary of Defense. He agreed.

  3. This agreement was for Claude to be deployed on classified networks.

  4. There was also access made available under a contract with Palantir.

  5. Anthropic created Claude Gov as a special model for classified networks.

  6. Claude Gov had improved handling of classified materials and was customized for various skills and capabilities relevant to national security work.

  7. Claude Gov had a safety stack. Claude Gov was not a ‘helpful only’ model.

  8. Claude Gov’s safety stack included model refusals.

  9. Claude Gov’s safety stack included external monitoring classifiers, that could outright refuse requests or that could flag requests for later analysis.

  10. Anthropic had forward deployed engineers (FDEs) to assist with system deployment, who were able to monitor queries and ensure the safety stack worked as intended.

  11. We do not know whether the contract language ensured they could do that, or if they were simply permitted to do that, since of course they should be doing that.

  12. Anthropic had red lines explicitly written into the contract, including prohibitions on domestic mass surveillance and on use of autonomous weapons without a human in the kill chain.

  13. All of this was only possible because Anthropic made this a priority, in a way other companies including OpenAI did not, in order to assist national defense.

  14. Claude Gov is the only LLM so far successfully deployed on classified networks.

  15. Claude Gov provided a lot of value to the DoD (now DoW) and they are very happy with the results. It has enhanced national security and national defense.

  16. No one has claimed the system refused a request it should have accepted. No one has complained that the product had any functional problems, or that any of the actual outputs was ‘too woke.’ Hegseth said ‘they’re so good we need them.’

  17. Anthropic did not object to any use of the system by DoD/DoW.

  18. Anthropic had no problems whatsoever with the raid that captured Maduro.

  19. This contract does not include a clause allowing ‘all lawful use’ and such explicit language is atypical in defense contracts. This modification was requested later.

  20. This contract is still in effect. Claude Gov continues to assist DoW as before in its operations, including in the ongoing Iran conflict, with no known issues, and continues to enhance our national security.

  21. There is intent by POTUS/DoW to end this contract in no longer than six months.

  22. Claude Gov continues to be the only model deployed on classified networks. xAI and OpenAI have signed contracts but their models are not yet ready.

We don’t know how much of these protections was contractual, versus how much of it was a working relationship and the ability of Anthropic to withdraw if unhappy.

This contract appears to have worked out to the benefit of all parties until this past week. Anthropic was happy to assist in the national defense. OpenAI was happy that Anthropic had taken on this burden so they did not have to do so. The Department of War was happy with the product. National security was enhanced.

Anthropic was (and is) happy to continue under the current contract, and believes it preserves their red lines.

Anthropic is also willing to have the contract be terminated, and to assist the Department of War with any wind down period to ensure national security.

The Department of War decided it was unhappy with the restrictions placed upon it, and requested that the contract be modified.

In particular, their public demand was that the contract allow ‘all lawful use.’

That is highly unusual language.

This became DoW saying: If you don’t agree to this highly unusual contract language, we will not only terminate your contract, we will attempt to destroy your company.

That’s what they said in public. We do not know the full details of the proposed terms of the new contract by either party. I will share here what details of the negotiations that I am at least reasonably confident in, from both public and private sources. It is of course possible that my private sources are incorrect or lying on some details.

Note that OpenAI felt it was not legally able to consult Anthropic about terms, due to antitrust law, so notes were never compared.

  1. This renegotiation was at the request of DoW, and its sole purpose was to weaken the restrictions around government use of Claude. Anthropic may have asked for other reassurances in return.

  2. Anthropic was willing to weaken some restrictions from its previous contract, but was holding firm to some version of its two red lines of domestic mass surveillance (DMS) and autonomous weapons without a human in the kill chain.

  3. This kind of contract based restriction is entirely ordinary procedure for defense contracts. Everyone who says otherwise is misinformed or lying.

  4. DoW demanded in public a contract that was purely ‘all lawful use’ despite their now clear willingness to accept functional restrictions on this.

  5. Assurances around the safety stack and FDEs were one thing under negotiation.

  6. DMS is not a standard term in American law. We do not know its definition here.

  7. The reason both sides care so much about the wording of the contract is the belief that it would be illegal to engineer the safety stack to intentionally refuse legal and otherwise safe requests. By default, the standard of ‘all lawful use’ would effectively apply. The contract needs to specify anything else; an act that violated the contract would then be illegal, which in turn allows the safety stack to prevent it and, in extremis, allows the contract to be terminated after a wind down. That doesn’t mean that opinion is correct.

  8. DoW and Anthropic agreed upon most of the language of the new contract, and were potentially close to a deal. The ostensible disagreements were contract details.

  9. One key detail was that DoW wanted to use the term ‘as appropriate’ and Anthropic’s lawyers felt this was a de facto escape hatch that would allow DoW to get around the relevant restrictions.

  10. In particular, as per Emil Michael, the DoW wanted the clause ‘The department affirms that it will ensure appropriate human oversight is in place and that it will monitor and retain the ability to override or disable the AI system…as appropriate.’

  11. My plain reading of that is that if DoW then determined it would not be appropriate to have a way to override or disable the system, for example for the (highly reasonable) purpose of avoiding an enemy potentially using the override, then that would be that. The clause would not function.

  12. It is likely that the parties had found acceptable language around the use of autonomous weapons without a human in the kill chain. If not, I don’t understand why they were unable to do so. I don’t see a fundamental disagreement.

  13. Late in negotiations, Anthropic expressed new willingness to allow use under FISA, so long as they were not required to analyze large amounts of third-party and public data on Americans, which they had not previously offered to DoW.

  14. DoW, in last minute negotiations, was willing to compromise, in ways that would have rendered the contract de facto not ‘all lawful use,’ and drop at least most uses of ‘as appropriate,’ but insisted upon analysis of large amounts of third-party and public data on Americans.

  15. This contradicts DoW’s public rhetoric, further invalidates their attempt at a supply chain risk designation, and reveals what they actually care about in this negotiation, unless they actively wanted it to fail in order to justify an attempt to murder Anthropic.

  16. Given OpenAI’s revisions to its agreement with DoW, if the two parties can find a way to trust each other again and the goal is to enhance national security, and DoW can put its ego aside, then I see no reason why the two sides could not now negotiate a deal that satisfies DoW’s needs while protecting Anthropic’s red lines.

  17. Alas, any agreement like this requires trust. It would be highly reasonable for either or both sides to believe that given DoW’s actions, trust is now sufficiently damaged that for now an agreement cannot be reached. In that case, it makes sense to cancel the contract and retire Claude Gov once there is a suitable replacement available from OpenAI, until trust can be repaired.

  18. None of this has any bearing whatsoever on the operation of the commercial version of Claude, and trying to designate that or the entire company a supply chain risk is illegal, arbitrary, capricious and absurd, and can only amount to retaliation, up to and including an attempt at corporate murder. It needs to be off the table, whether or not trust can now be rebuilt. What we need is an off ramp.

Anthropic was looking for a contract that preserved a particular narrow set of red lines, including for some things they consider domestic mass surveillance that the DoW considers legal and where courts would back the DoW up on that.

DoW was unwilling to agree to that, at least not with Anthropic. No deal.

More centrally, as I understand it, DoW’s attitude was ‘no one tells us what to do.’

Anthropic’s attitude was ‘here are two things we will not do.’

DoW interpreted or represented that as telling DoW what to do. They could not abide.

If we could simply agree that:

  1. No one tells DoW what to do except POTUS.

  2. Anthropic can decide what things Anthropic does with its private property.

  3. If DoW needs someone who will follow all orders, it should go elsewhere.

Then we can all move on with our lives. There’s a lot of other fires out there.

On Friday evening, OpenAI signed a deal with DoW. Moving this quickly was a mistake, and I think in various ways they did not fully understand the deal they signed, but they believed they were doing this to de-escalate the situation.

On Monday evening, Sam Altman announced they were ‘going to amend’ the deal with DoW in ways favorable to OpenAI, including sharing new contract language.

We only have a small amount of the wording of this contract, and of what OpenAI claims are going to be modifications to the contract.

I am unhappy with the way OpenAI chose to represent this contract, and the ways in which they contrasted it with Anthropic’s contract and potential contract.

Fundamentally, the decision to sign this contract was primarily based on mutual trust, and secondarily on the idea that OpenAI can build a robust safety stack and DoW would respect if the safety stack refused things, so long as OpenAI didn’t ‘tell DoW what to do.’

That is a position one can take. It may work out fine for everyone, especially if DoW is indeed willing to work to modify the terms now in good faith. I do genuinely believe that Altman was trying to de-escalate the situation, whether or not he had or is having the intended effect.

But if this is the plan, one must admit it, rather than repeatedly insisting that the contract provides other protections that it does not, claiming it has more or stronger protections, or presenting the presence of a safety stack and forward deployed engineers as something that is distinct from what Anthropic was already doing.

Here are the things we know, or can at least be confident about, this new contract.

  1. OpenAI began negotiating this deal on Wednesday and signed it on Friday. As Altman now admits, that was not enough time, and mistakes were made. It also inadvertently undermined Anthropic’s position in an escalatory way. They are hoping to fix the terms now. I agree with him that this is a very good lesson about rushing other things out in the future, potentially with even higher stakes.

  2. At a high level, OpenAI’s approach is to sign a deal based on mutual trust. DoW is trusting OpenAI to deliver. OpenAI is trusting DoW to use it honorably.

  3. The deal was not identical to the deal Anthropic turned down, but it included much or all of the specific language that Anthropic rejected, and it is not clear in what important ways it is different prior to the intended modifications.

  4. Altman and others at OpenAI strongly insisted that this agreement had more safeguards and was stronger even than Anthropic’s original agreement. That is not true of the existing contract language we have seen so far.

  5. OpenAI highlighted that their agreement allowed them to build and run their own safety stack, including having forward deployed engineers (FDEs). This was clearly presented as a contrast to Anthropic’s contract, especially the implication that OpenAI can deliver whatever safety stack it chooses and DoW will have to honor any refusals. We now know that Anthropic has its own existing safety stack and FDEs, but we do not know if that is a right that is protected by their current contract, or what they feel they are legally or practically allowed to have it refuse.

  6. OpenAI’s Friday night deal allows any models it delivers to be used for ‘all lawful use.’ Based on communications from OpenAI and Altman, it is clear they are trusting DoW to decide what constitutes all lawful use.

  7. Based on several sources, it seems they believe they can deliver any safety stack they decide upon, and DoW is going to respect its refusals. Others are deeply skeptical both that refusals can stop DoW and that OpenAI would be allowed to keep them in place. Again, we don’t know the language here.

  8. All legal use clearly includes many things that I and most of you would consider to be ‘mass domestic surveillance,’ and that would violate Anthropic’s redlines. OpenAI’s intended plan is to prevent these via the safety stack, if needed.

  9. This is a very different set of legal understandings and theories than Anthropic.

  10. Altman claims that it is not up to OpenAI to interpret the law or decide what is and is not allowed, but also that he and OpenAI would refuse to comply with unconstitutional use or orders, no matter the opinion of DoW on its legality. I don’t know the right way to reconcile these two claims.

  11. OpenAI shared the following legal language: “The Department of War may use the AI System for all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols. The AI System will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control, nor will it be used to assume other high-stakes decisions that require approval by a human decisionmaker under the same authorities. Per DoD Directive 3000.09 (dtd 25 January 2023), any use of AI in autonomous and semi-autonomous systems must undergo rigorous verification, validation, and testing to ensure they perform as intended in realistic environments before deployment.”

  12. Continued: For intelligence activities, any handling of private information will comply with the Fourth Amendment, the National Security Act of 1947 and the Foreign Intelligence and Surveillance Act of 1978, Executive Order 12333, and applicable DoD directives requiring a defined foreign intelligence purpose. The AI System shall not be used for unconstrained monitoring of U.S. persons’ private information as consistent with these authorities. The system shall also not be used for domestic law-enforcement activities except as permitted by the Posse Comitatus Act and other applicable law.

  13. OpenAI claimed this language enshrined the current versions of these laws. Almost all legal opinions believe this to be mistaken. I am very confident that the second paragraph does not enshrine current wording. I can see a case that the wording on 3000.09 does, but 3000.09 is little practical barrier.

  14. OpenAI and DoW made various additional claims about the functional nature of the language here that most legal experts and others who review the text, as well as leading LLMs, do not believe to be the case.

  15. OpenAI’s Katrina explicitly said this deal excludes the NSA. Many expressed skepticism about this. Altman says the revisions include affirming that the services will not be used by the DoW intelligence agencies, including NSA.

  16. The planned revisions claim to add this language: “Consistent with applicable laws, including the Fourth Amendment to the United States Constitution, National Security Act of 1947, FISA Act of 1978, the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals.”

  17. Also this: “For the avoidance of doubt, the Department understands this limitation to prohibit deliberate tracking, surveillance, or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information.”

  18. This language represents codifying in a public contract an understanding of the law, in a way that renders that understanding potentially enforceable. It is incompatible with the demands for ‘unfettered access’ or that ‘a private company not set policy’ or various hyperbolic variations of such claims, and the contract then vests the associated safety stack implementation in OpenAI.

  19. This closes some loopholes and addresses questions OpenAI now realizes were left open in the original language.

  20. Legal opinions I have seen are that the new language uses terms of art, and that it would still allow DoW to legally do quite a lot of things if it decided to do them.

Here are some legal opinions about the new legal language.

Essentially, the new language is very helpful in establishing intent, and is clearly an improvement, but not robust to a theoretical ‘evil DoW General Counsel.’

But even then, it does justify the creation of an associated safety stack using OpenAI’s interpretation, which if it works creates a much higher practical bar.

Brad Carson: Lots of interesting takes on the new OAI-DOW agreement. @j_asminewang , @CharlieBul58993 , @_NathanCalvin , @JTillipman have all inquired or made posts.

Some casual impressions from me.

I think, if executed in good faith, the language does seem an improvement. A prohibition on using commercial datasets for intelligence purposes goes beyond current (bad) law and introduces a new (and needed) restriction. To repeat, under current law, intelligence agencies can without recourse analyze US persons using commercially available databases. LLMs will turbocharge this, and I’d love to see this type of intelligence analysis limited in any way.

But, as is usual for contracts, definitions are key. So let me play evil DOW General Counsel and tell you how I’d get around what has been presented, just for the sake of argument.

The load-bearing word is “surveillance.” Importantly, this is a term of art defined in FISA. Under FISA, “surveillance” means the acquisition by an electronic, mechanical, or other surveillance device of the contents of any wire communication to or from a person in the United States. Being evil or maybe even just ordinary, I’d argue “surveillance” in this OAI contract means exactly what the IC means by it; after all, FISA is explicitly referenced! So, evil GC says, analyzing commercially purchased location data, browsing patterns, and behavioral records through GPT isn’t “surveillance” at all. It’s only data analysis of lawfully acquired commercial information. In other words, the clause doesn’t prohibit it because the activity the clause describes doesn’t fall within the category the clause addresses.

So to be clear: the IC sees commercial data analysis as not even being “surveillance.” So, says evil DOD GC, the new contract language is like saying we agree not to do surveillance of those things that we’ve already defined as not being objects of “surveillance.” Whew!

“Tracking” and “monitoring” cause evil GC more problems. These are not terms of art (IIRC!). But I’m ingenious. “Tracking” implies persistence and it requires a direct object. So general and static queries like “Tell me who went to the mosque in Tulsa and booked a trip to New York” aren’t tracking at all. Same with “Who in Tulsa had a Samsung phone around 41st Street on March 2nd?”

“Monitoring” also implies persistence. So static searches that don’t persist over time aren’t even monitoring.

So, concludes evil GC, running searches without targets that don’t persist over time and that don’t intercept communication is entirely outside this agreement! This is what we do right now! And we get to continue, with LLM empowerment!

And, evil GC says, I particularly like that part where we say, rather strangely but certainly meaningful in some occult way, “the Department understands” rather than simply “This limitation prohibits….” I can probably argue that the latter is stronger than the former, so it must be meaningful in a way that helps my evil ways.

Brad again: is this a realistic way that this will be interpreted? Maybe. In the end, intelligence requires us to trust people to act in good faith and not be evil. Most of the time, history shows that to be right. But not always. We have to hope for ethical leadership, with proper oversight and accountability from Congress.

At its best, this new language might be a new and welcome limitation. At its worst, it’s blinding us with very precise terms of art that dupe us into thinking the words instead have ordinary meaning.

Jeremy Howard: OK here’s the informal/unofficial/etc answer from our law firm CEO — tldr, this language doesn’t seem to add much to the previously shared contract details:

“shall not be used” is constrained to be where that’s “Consistent with applicable laws”. And “the Department understands this limitation to prohibit” doesn’t do much lifting. It may help with a court’s interpretation because a contract requires a “meeting of the minds,” but it still doesn’t negate the fact that “applicable laws” is a moving target.

I’m concerned/surprised that the bar doesn’t extend to negligence or at minimum recklessness. The hierarchy of mens rea is purposely > knowingly > recklessly > negligently and courts often read “intentionally” to be somewhere in between “purposely” and “knowingly.” “intentionally” is a higher bar and more difficult to prove than recklessness or negligence.

All of our stated concerns about autonomous weapons still apply, since they are not addressed at all by the latest update. [this says it does not freeze current law.]

Charlie Bullock: Initial takes:

1. This seems like a significant improvement over the previous language with respect to surveillance, and I’m glad to see it.

2. It does not address autonomous weapons concerns, nor does it claim to.

3. It’s hard to say anything definite without seeing the full contract — but also, it does make a certain amount of sense that a defense contractor wouldn’t be allowed to share the full text of their important defense contract publicly (Anthropic, for example, has not shared the text of their contract, and this is not surprising to me).

4. Because we don’t have access to the full text of the contract, the public is in an awkward position where we have to choose between trusting OpenAI and not trusting OpenAI. This contract snippet does seem good, but there’s no way to know whether it actually is good without knowing the relevant context. If you trust OpenAI’s leadership and think that their intentions are pure, you’re probably reassured by this; if you don’t, you probably aren’t.

5. I hope that OpenAI employees have access to more information than the public does about what the contract actually says.

6. If this language is in the new contract, and does what Altman seems to be claiming it does, I am confused about why the Pentagon would accept this language when they just tried to nuke Anthropic for asking for something very similar to this. Maybe it is just a situation where DoW didn’t care much about the actual red lines themselves and was just lashing out at Anthropic leadership for being libs? If so, that’s bad news for DoW’s case in the lawsuit that Anthropic will almost certainly file challenging DoW’s attempt to designate Anthropic as a supply chain risk (though, to be fair, that case was unlikely to go well for DoW even before this).

LASST remains concerned. Nathan Calvin thinks we would need to see the full document. John Oleske focuses on the definition of ‘surveillance’ and also ‘intentional’ and ‘deliberate’ as DoW legal arguments. Lawrence Chen emphasizes that as well. Dave Kasten is even less polite about this than everyone else.

Here is a since-deleted Tweet on the subject from the government position:

I believe that this only emphasizes the incoherence of the government position.

I translate this as: ‘Anthropic insisted upon not allowing “all lawful use,” so we labeled them a supply chain risk, unlike OpenAI, which will not allow “all lawful use.”’

OpenAI is absolutely going to use its safety stack to vest interpretive power in itself, an unaccountable private counterparty, with respect to use of its own private property under a contract. That’s how an OpenAI-selected safety stack inevitably works. No, OpenAI will not wait for those questions to be ‘answered through political and legal processes’ that unfold over years.

Which I think here is totally fine, if you don’t like it then don’t sign the contract or use a different model. But it’s at least as true here as it was for Anthropic.


A Tale of Three Contracts


Apple’s new iPhone 17e has an A19 chip, MagSafe, and 256GB of storage for $599

The iPhone 17e will support MagSafe, which was notably absent from the 16e. Credit: Apple

The 17e comes in three color options: black, white, and a pastel pink. It still includes a USB-C port, a notched display rather than a Dynamic Island, an Action Button, a 6.1-inch 60 Hz OLED display without ProMotion or always-on support, and a single 48 MP rear camera (which is still capable of taking 2x telephoto images by cropping a 12 MP chunk out of the middle of the image sensor).

The biggest problem with the iPhone 17e is still that it’s just $200 cheaper than the iPhone 17, which is an exceptionally strong version of Apple’s default phone. That $200 gets you a better main camera, a wide-angle lens, a slightly larger 6.3-inch display with ProMotion support and a Dynamic Island, and marginally faster graphics performance. But the 17e’s 256GB storage upgrade and the new chip do make it more appealing than the $699 iPhone 16, which also lacks a ProMotion display and only has 128GB of storage.

The new phone is part of a string of announcements that Apple is planning in the run-up to a “special experience” event on Wednesday morning. The company also announced a new iPad Air with an M4 chip today and is widely expected to debut a new low-end iPad and a new MacBook that’s substantially cheaper than the MacBook Air.



Trump FCC’s equal-time crackdown doesn’t apply equally—or at all—to talk radio


FCC Chairman Brendan Carr’s unequal enforcement of the equal-time rule.

James Talarico and Stephen Colbert on the set of The Late Show with Stephen Colbert. Credit: Getty Images

In the Trump FCC’s latest series of attacks on TV broadcasters, Federal Communications Commission Chairman Brendan Carr has been threatening to enforce the equal-time rule on daytime and late-night talk shows. The interview portions of talk shows have historically been exempt from equal-time regulations, but Carr has a habit of interpreting FCC rules in novel ways to target networks disfavored by President Trump.

Critics of Carr point out that his threats of equal-time enforcement apply unequally since he hasn’t directed them at talk radio, which is predominantly conservative. Given the similarities between interviews on TV and radio shows, Carr has been asked to explain why he issued an equal-time enforcement warning to TV but not radio broadcasters.

Carr’s responses to the talk radio questions have been vague, even as he tangled with Late Show host Stephen Colbert and launched an investigation into ABC’s The View over its interview with Texas Democratic Senate candidate James Talarico. In a press conference after the FCC’s February 18 meeting, Deadline reporter Ted Johnson asked Carr why he has not expressed “the same concern about broadcast talk radio as broadcast TV talk shows.”

The Deadline reporter pointed out that “Sean Hannity’s show featured Ken Paxton in December.” Paxton, the Texas attorney general, is running for a US Senate seat in this year’s election. Carr claimed in response that TV broadcasters have been “misreading” FCC precedents while talk radio shows have not been.

“It appeared that programmers were either overreading or misreading some of the case law on the equal-time rule as it applies to broadcast TV,” Carr replied. “We haven’t seen the same issues on the radio side, but the equal-time rule is going to apply to broadcast across the board, and we’ll take a look at anything that arises at the end of the day.”

Carr’s radio claim “a bunch of nonsense”

Carr didn’t provide any specifics to support his claim that radio programmers have interpreted precedents correctly while TV programmers have not. The most obvious explanation for the disparate treatment is that Carr isn’t targeting conservative talk radio because he’s primarily interested in stifling critics of Trump. Carr has consistently used his authority to fight Trump’s battles against the media, particularly TV broadcasters, and backed Trump’s declaration that historically independent agencies like the FCC are no longer independent from the White House.

Carr’s claim that TV but not radio broadcasters have misread FCC precedents is “a bunch of nonsense,” said Gigi Sohn, a longtime lawyer and consumer advocate who served as counselor to then-FCC Chairman Tom Wheeler during the Obama era. Carr “was responding to criticism from people like Sean Hannity that the guidance would apply to conservative talk radio just as much as it would to so-called ‘liberal’ TV,” Sohn told Ars. “It doesn’t matter whether a broadcaster is a radio broadcaster or a TV broadcaster, the Equal Opportunities law and however the FCC implements it must apply to both equally.”

Sean Hannity during a Fox News Channel program on October 30, 2025. Credit: Getty Images | Bloomberg

Hannity, who hosts a Fox News show and a nationally syndicated radio show, pushed back against content regulation shortly after Carr’s FCC issued the equal-time warning to TV broadcasters in January. “Talk radio is successful because people are smart and understand we are the antidote to corrupt and abusively biased left wing legacy media,” Hannity said in a statement to the Los Angeles Times. “We need less government regulation and more freedom. Let the American people decide where to get their information from without any government interference.”

Carr’s claim of misreadings relates to the bona fide news exceptions to the equal-time rule, which is codified under US law as the Equal Opportunities Requirement. The rule requires that when a station gives time to one political candidate, it must provide comparable time and placement to an opposing candidate if an opposing candidate makes a request.

But when a political candidate appears on a bona fide newscast or bona fide news interview, a broadcaster does not have to make equal time available to opposing candidates. The exception also applies to news documentaries and on-the-spot coverage of news events.

Equal time didn’t apply to Jay Leno or Howard Stern

In the decades before Trump appointed Carr to the FCC chairmanship, the commission consistently applied bona fide exemptions to talk shows that interview political candidates. Phil Donahue’s show won a notable exemption in 1984, and over the ensuing 22 years, the FCC exempted shows hosted by Sally Jessy Raphael, Jerry Springer, Bill Maher, and Jay Leno. On the radio side, Howard Stern won a bona fide news exemption in 2003.

Despite the seemingly well-settled precedents, the FCC’s Media Bureau said in a January 21 public notice that the agency’s previous decisions do not “mean that the interview portion of all arguably similar entertainment programs—whether late night or daytime—are exempted from the section 315 equal opportunities requirement under a bona fide news exemption… these decisions are fact-specific and the exemptions are limited to the program that was the subject of the request.”

The Carr FCC warned that a program “motivated by partisan purposes… would not be entitled to an exemption under longstanding FCC precedent.” But if late-night show hosts are “motivated by partisan purposes,” what about conservative talk radio hosts? Back in 2017, Hannity described himself as “an advocacy journalist.” In previous years, he said he’s not a journalist at all.

“Remember when Sean Hannity used to claim he wasn’t a journalist, then claimed to be an ‘advocacy journalist’?” Harold Feld, a longtime telecom lawyer and senior VP of advocacy group Public Knowledge, told Ars. “Given that the Media Bureau guidance leans heavily into the question of whether the motivation is ‘for partisan purposes’ or ‘designed for the specific advantage of a candidate,’ it would seem that conservative talk radio is rather explicitly a problem under this guidance.”

“To put it bluntly, Carr’s explanation that shows that Trump has expressly disliked are ‘misreading’ the law, while conservative radio shows are not, strains credulity,” Feld said.

Conservative radio boomed after FCC ditched Fairness Doctrine

Conservative talk radio benefited from the FCC’s long-term shift away from regulating TV and radio content. A major change came in 1987 when the FCC decided to stop enforcing the Fairness Doctrine, a decision that helped fuel the late Rush Limbaugh’s success.

FCC regulation of broadcast content through the Fairness Doctrine had been upheld in 1969 by the Supreme Court in the Red Lion Broadcasting decision, which said broadcasters had special obligations because of the scarcity of radio frequencies. But the Reagan-era FCC decided 18 years later that the scarcity rationale “no longer justifies a different standard of First Amendment review for the electronic press” in “the vastly transformed, diverse market that exists today.” The FCC made that decision after an appeals court ruled that the FCC acted arbitrarily and capriciously in its enforcement of the doctrine against a TV station.

Even where the FCC didn’t eliminate content-based rules, it reduced enforcement. But after decades of the FCC scaling back enforcement of content-based regulations, Donald Trump was elected president.

Trump’s first FCC chair, Ajit Pai, rejected Trump’s demands to revoke station licenses over content that Trump claimed was biased against him. Pai and his successor, Biden-era FCC Chairwoman Jessica Rosenworcel, agreed that the First Amendment prohibits the FCC from revoking station licenses simply because the president doesn’t like a network’s news content.

After winning a second term, Trump promoted Carr to the chairmanship. Carr, an unabashed admirer of Trump, has said in interviews that “President Trump is fundamentally reshaping the media landscape” and that “President Trump ran directly at the legacy mainstream media, and he smashed a facade that they’re the gatekeepers of truth.” Carr describes Trump as “the political colossus of modern times.”

President-elect Donald Trump speaks to Brendan Carr, his intended pick for Chairman of the Federal Communications Commission, as he attends a SpaceX Starship rocket launch on November 19, 2024 in Brownsville, Texas. Credit: Getty Images | Brandon Bell

Carr has led the charge in Trump’s war against the media by repeatedly threatening to revoke licenses under the FCC’s rarely enforced news distortion policy. Carr’s aggressive stance, particularly in his attacks on ABC’s Jimmy Kimmel, even alarmed prominent Republicans such as Sens. Rand Paul (R-Ky.) and Ted Cruz (R-Texas). Cruz said that trying to dictate what the media can say during Trump’s presidency will come back to haunt Republicans in future Democratic administrations.

With both the news distortion policy and equal-time rule, Carr hasn’t formally imposed any punishment. But his threats have an effect. Kimmel was temporarily suspended, CBS owner Paramount agreed to install what Carr called a “bias monitor” in exchange for a merger approval, and Texas-based ABC affiliates have filed equal-time notices with the FCC as a result of Carr’s threats against The View.

Colbert said on his show that CBS forbade him from interviewing Talarico because of Carr’s equal-time threats. CBS denied prohibiting the interview but acknowledged giving Colbert “legal guidance,” and Carr claimed that Colbert lied about the incident.

Colbert did not put his interview with Talarico on his broadcast show but released it on YouTube, where it racked up nearly 9 million views. “Only a handful of people would’ve seen it if it had run live,” Christopher Terry, a professor of media law and ethics at the University of Minnesota, told Ars. “But what is it up to, 8 million views on YouTube now? It’s like the biggest thing, everybody in the world’s talking about it now. CBS gave Talarico the best press they ever could have by not letting him on the air… Oldest lesson in the First Amendment handbook, the more you try to suppress speech, the more powerful you make it.”

FCC misread its own rules, Feld says

Feld said the Carr FCC’s public notice “misreads the law and ignores inconvenient precedent.” The notice describes the equal-time rule as a public-interest obligation for broadcasters that have licenses to use spectrum, and Carr has repeatedly said the rule is only for licensed broadcasters. But Feld said the rule also applies to cable channels, which are referred to as community antenna television systems in the Equal Opportunities law as written by Congress.

Moreover, Feld said the FCC guidance “conflates two separate statutory exemptions,” the bona fide newscast exemption and the bona fide news interview exemption. FCC precedents didn’t find that Howard Stern and Jerry Springer were doing newscasts but that their interviews “met the criteria for a bona fide news interview,” Feld said. Despite that, the Carr FCC’s “guidance appears to require that Late Night Shows must be news shows, not merely host an interview segment,” he said.

The FCC guidance describes the Jay Leno decision as an outlier that was “contrary” to a 1960 decision involving Jack Paar and “the first time that such a finding had been applied to a late night talk show, which is primarily an entertainment offering.”

Feld pointed out that Politically Incorrect with Bill Maher was the first late-night show to receive the exemption in 1999, seven years before Leno. Maher’s show was on ABC at the time. The FCC guidance also “fails to explain any meaningful difference” between late-night shows and afternoon shows like Jerry Springer’s, Feld said.

Carr may label TV hosts as “partisan political actors”

At the February 18 press conference, Johnson asked Carr to explain how the FCC is “assessing whether a candidate appearance on a talk show is motivated by partisan purposes.” The reporter asked if there were specific criteria, like a talk show host giving money to a political candidate or hosting a fundraiser.

“Yeah it’s possible, all of that could be relevant,” Carr said. Whether a program is “animated by a partisan political motivation” can be determined “through discovery,” and “people can come forward with their own showings in a petition for a declaratory ruling, but this is something that will be explored,” Carr said. “It’s part of the FCC’s case law, and the idea is that if you’re a partisan political actor under the case law, then you’re likely not going to qualify under the bona fide news exception. That’s OK, it just means you have to either provide equal airtime to the different candidates or there’s different ways you can get your message out through streaming services and other means for which the equal-time rule doesn’t apply.”

In a follow-up question, Johnson asked, “A partisan political actor would mean a talk show host or someone whose show it is?” Carr replied, “It could be that, yeah, it could be that.”

Carr confirmed reports that the FCC is investigating The View over the show’s interview with Talarico. “Yes, the FCC has an enforcement action underway on that and we’re taking a look at it,” Carr said at the press conference.

We contacted Carr’s office to ask for specifics about how TV programmers have allegedly misread the FCC’s equal-time precedents, and whether the FCC is concerned that talk radio shows may be misreading the Howard Stern precedent or other radio-related rulings. We have not received a response.

Carr targeted SNL on Trump’s behalf

Carr hasn’t been truthful in his statements about the equal-time rule, Terry said. “Carr is just an obnoxious figure who needs attention, and remember he absolutely lied about the NBC/Kamala Harris equal-time thing,” Terry said. Terry was referring to Carr’s November 2024 allegation that when NBC put Kamala Harris on Saturday Night Live before the election, it was “a clear and blatant effort to evade the FCC’s Equal Time rule.”

In fact, NBC gave Trump free airtime during a NASCAR telecast and an NFL post-game show and filed an equal-time notice with the FCC to comply with the rule. Terry filed a Freedom of Information Act request for emails that showed Carr discussing NBC’s equal-time notice on November 3, 2024, but Carr reiterated his allegation over a month later despite being aware of the steps NBC took to comply with the rule.

Terry said Carr has taken a similarly dishonest approach with his claim that talk shows don’t qualify for the equal-time exception. “I think it’s like a lot of things Carr says. Just because he says it doesn’t mean it’s true, right? It’s nonsense,” Terry told Ars. “Every precedent suggests that a show like The View or one of the talk shows at night is an interview-based talk show, and that’s what the bona fide news exception was designed to cover.”

Terry said applying Carr’s “partisan purposes” test would likely require “a complete rulemaking proceeding” and would be difficult now that the Supreme Court has limited the authority of federal agencies to interpret ambiguities in US law. But it’s up to broadcasters to stand up to Carr, he said.

“If one broadcaster was like, ‘Oh yeah? Make us,’ he’d lose in court. He would. The precedent is absolutely against this,” Terry said.

Because the bona fide exemptions apply so broadly to TV and radio programs, the equal-time rule has applied primarily to advertising access for the past few decades, Terry said. If a station sells advertising to one candidate, “you have to make equal opportunities available to their opponents at the same price that reaches the same functional amount of audience,” he said.

Terry said he thinks NBC could make a good argument that Saturday Night Live is exempt, but the network has decided that it’s “easier just to provide time” to opposing candidates. Terry, a former radio producer, said, “I worked in talk radio for over 20 years. We never once even thought about equal time outside of advertising.”

Howard Stern precedent ignored

Howard Stern debuts his show on Sirius Satellite Radio on January 9, 2006, at the network’s studios at Rockefeller Center in New York City. Credit: Getty Images

Feld said the Carr FCC’s guidance “says the exact opposite” of what the FCC’s 2003 ruling on Howard Stern stated “with regard to how this process is supposed to work. The Howard Stern decision expressly states that licensees don’t need to seek permission first.”

The 2003 FCC’s Stern ruling said, “Although we take this action in response to [broadcaster] Infinity’s request, we emphasize that licensees airing programs that meet the statutory news exemption, as clarified in our case law, need not seek formal declaration from the Commission that such programs qualify as news exempt programming under Section 315(a).”

By contrast, the Carr FCC encouraged TV programs and stations “to promptly file a petition for declaratory ruling” if they want “formal assurance” that they are exempt from the equal-time rule. “Importantly, the FCC has not been presented with any evidence that the interview portion of any late night or daytime television talk show program on air presently would qualify for the bona fide news exemption,” the notice said.

The Lerman Senter law firm said that before the Carr FCC issued its public notice, broadcasters that met the criteria for the bona fide news interview exemption generally did not seek an FCC ruling. Because of the public notice, “stations can no longer rely on FCC precedent as to applicability of the bona fide news interview exemption,” the law firm said. “Only by obtaining a declaratory ruling, in advance, from the FCC can a station be assured that it will not face regulatory action for interviewing a candidate without providing equal opportunities to opposing candidates.”

This is “quite a switch,” Feld said. If this is the new standard, “then conservative talk radio hosts should also be required to affirmatively seek declaratory rulings,” he said.

FCC is “licensing speech”

Berin Szóka, president of think tank TechFreedom, told Ars that “the FCC is effectively creating a system of prior restraints, that is, licensing speech. This is the greatest of all First Amendment problems. What’s worse, the FCC is doing this selectively, discriminating on the basis of speakers.”

TechFreedom has argued that the FCC should repeal the news distortion policy that Carr has embraced, and Szóka is firmly against Carr on equal-time enforcement as well. As Szóka noted, the Supreme Court has made clear that “laws favoring some speakers over others demand strict scrutiny when the legislature’s speaker preference reflects a content preference.”

“That’s exactly what’s happening here,” Szóka said. “Carr is imposing a de facto requirement that TV broadcasters, but not radio broadcasters, must file for prior assessment as to their ‘news’ bona fides.” Ultimately, it means that TV broadcasters “can no longer have political candidates on their shows without offering equal time to all candidates in that race unless they seek prior pre-clearance from the FCC as to whether they qualify as providing bona fide news,” he said.

Carr’s enforcement push was applauded by Daniel Suhr, president of the Center for American Rights, a group that has supported Trump’s claims of media bias. The group filed bias complaints against CBS, ABC, and NBC stations that were dismissed during the Biden era, but those complaints were revived by Carr in January 2025.

“This major announcement from the FCC should stop one-sided left-wing entertainment shows masquerading as ‘bona fide news,’” Suhr wrote on January 21. “The abuse of the airwaves by ABC & NBC as DNC-TV must end. FCC is restoring respect for the equal time rules enacted by Congress.”

Suhr later argued in the Yale Journal on Regulation that Carr’s approach is consistent with FCC rulings from 1960 to 1980, before the commission started exempting the interview portions of talk shows.

“From 1984 to 2006, conversely, the Commission took a broader view that included less traditional shows,” Suhr wrote. “The Commission suggested a more traditional view in 2008, and again in 2015, each time qualifying a show because it ‘reports news of some area of current events, in a manner similar to more traditional newscasts.’”

But both decisions mentioned by Suhr granted bona fide exemptions and did not upend the precedents that broadcasters continued to rely on until Carr’s public notice. Suhr also argued that the Carr approach is supported by the Supreme Court’s 1969 decision upholding the Fairness Doctrine, although the Reagan-era FCC decided that the court’s 1969 rationale about scarcity of the airwaves could no longer be justified in the modern media market.

Don’t like a show? Change the channel

With the FCC having a 2-1 Republican majority, Democratic Commissioner Anna Gomez has been the only member pushing back against Carr. Gomez has also urged big media companies to assert their rights under the First Amendment and reject Carr’s threats.

When asked about Carr threatening TV broadcasters but not radio ones, Gomez told Ars in a statement that “the FCC’s equal-time rules apply equally to television and radio broadcasters. The Communications Act does not vary by platform, and it does not vary by politics. Our responsibility is to apply the law consistently, grounded in statute and precedent, not based on who supports or challenges those in power.”

FCC enforcement in the Trump administration has been “driven by politics rather than principle,” with decisions “shaped by whether a broadcaster is perceived as a critic of this administration,” Gomez said. “That is not how an independent agency operates. The FCC is not in the business of policing media bias, and it is wholly inappropriate to wield its authority selectively for political ends. When enforcement is targeted in this way, it damages the commission’s credibility, undermines confidence that the law is being applied fairly and impartially, and violates the First Amendment.”

Gomez addressed the disparity in enforcement during her press conference after the recent FCC meeting, saying the rules should be applied equally to TV and radio. She also pointed out that viewers and listeners can easily find different programs if one doesn’t suit their tastes.

“There’s plenty of content on radio I’m not particularly fond of, but that’s why I don’t listen to it,” Gomez said. “I have plenty of other outlets I can go to.”


Jon is a Senior IT Reporter for Ars Technica. He covers the telecom industry, Federal Communications Commission rulemakings, broadband consumer affairs, court cases, and government regulation of the tech industry.

Trump FCC’s equal-time crackdown doesn’t apply equally—or at all—to talk radio

google-quantum-proofs-https-by-squeezing-15kb-of-data-into-700-byte-space

Google quantum-proofs HTTPS by squeezing 15kB of data into 700-byte space

Google and other browser makers require that all TLS certificates be published in public transparency logs, which are append-only distributed ledgers. Website owners can then check the logs in real time to ensure that no rogue certificates have been issued for the domains they use. The transparency programs were implemented in response to the 2011 hack of Netherlands-based DigiNotar, which allowed the minting of 500 counterfeit certificates for Google and other websites, some of which were used to spy on web users in Iran.

Once viable, Shor’s algorithm could be used to forge the classical cryptographic signatures and break the classical public keys that the certificate logs rely on. Ultimately, an attacker could forge the signed certificate timestamps used to prove to a browser or operating system that a certificate has been registered when it hasn’t.

To rule out this possibility, Google is adding cryptographic material from quantum-resistant algorithms such as ML-DSA. This addition would allow forgeries only if an attacker were to break both classical and post-quantum encryption. The new regime is part of what Google is calling the quantum-resistant root store, which will complement the Chrome Root Store the company formed in 2022.

Merkle Tree Certificates (MTCs) use Merkle trees to provide quantum-resistant assurance that a certificate has been published without having to include most of the lengthy keys and hashes. Combined with other techniques for reducing data sizes, the MTCs will be roughly the same 4kB length certificates are now, Westerbaan said.
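To see why Merkle trees keep these proofs compact: proving that one certificate is in the log requires only one sibling hash per level of the tree, so membership among millions of entries can be demonstrated with a few dozen hashes. The sketch below is an illustrative toy in Python, in the style of Certificate Transparency's RFC 6962 hashing, and is not the actual MTC wire format or hashing scheme:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_levels(leaves):
    # Bottom-up list of tree levels. Leaves and interior nodes get
    # different domain-separation prefixes (RFC 6962 style); an odd
    # node is promoted unchanged to the next level.
    levels = [[h(b"\x00" + leaf) for leaf in leaves]]
    while len(levels[-1]) > 1:
        cur, nxt = levels[-1], []
        for i in range(0, len(cur) - 1, 2):
            nxt.append(h(b"\x01" + cur[i] + cur[i + 1]))
        if len(cur) % 2:
            nxt.append(cur[-1])  # promote the unpaired node
        levels.append(nxt)
    return levels

def inclusion_proof(levels, index):
    # Collect the sibling hash at each level: O(log n) hashes total.
    proof = []
    for level in levels[:-1]:
        sib = index ^ 1
        if sib < len(level):
            # Record whether the sibling sits to the right of our node.
            proof.append((sib > index, level[sib]))
        index //= 2
    return proof

def verify(leaf, proof, root):
    # Recompute the path from leaf to root using only the proof hashes.
    node = h(b"\x00" + leaf)
    for sibling_is_right, sib in proof:
        node = h(b"\x01" + node + sib) if sibling_is_right else h(b"\x01" + sib + node)
    return node == root

# A tiny demo log of five hypothetical certificates.
leaves = [b"certA", b"certB", b"certC", b"certD", b"certE"]
levels = build_levels(leaves)
root = levels[-1][0]
proof = inclusion_proof(levels, 2)
assert verify(leaves[2], proof, root)  # a handful of hashes proves membership
```

For a log with n entries, a proof carries at most roughly log2(n) sibling hashes, which is why MTC-style inclusion proofs stay small even as the log grows.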

The new system has already been implemented in Chrome. For the time being, Cloudflare is enrolling roughly 1,000 TLS certificates to test how well the MTCs work. Cloudflare is also generating the distributed ledger for now, though the plan is for CAs to eventually fill that role. The Internet Engineering Task Force standards body recently formed a working group, called PKI, Logs, And Tree Signatures, which is coordinating with other key players to develop a long-term solution.

“We view the adoption of MTCs and a quantum-resistant root store as a critical opportunity to ensure the robustness of the foundation of today’s ecosystem,” Google’s Friday blog post said. “By designing for the specific demands of a modern, agile internet, we can accelerate the adoption of post-quantum resilience for all web users.”

Post updated to correct reported sizes of various items.


rocket-report:-vulcan-“many-months”-from-flying;-falcon-9-extends-reuse-milestone

Rocket Report: Vulcan “many months” from flying; Falcon 9 extends reuse milestone


All the news that’s fit to lift

“As the original architect of Vector’s vision, it’s deeply meaningful to bring these assets home.”

Rocket Lab has completed qualification testing of its “Hungry Hippo” payload fairing. Credit: Rocket Lab

Welcome to Edition 8.31 of the Rocket Report! We have some late-breaking news this week with an update Thursday afternoon from Rocket Lab on the timing of its much-anticipated Neutron rocket. Following the failure of a first stage tank during testing, the company is pushing the medium-lift rocket’s debut into the fourth quarter of this year. Effectively, that probably means 2027 for the booster, which is disappointing because we all very much want to see another reusable rocket take flight.

As always, we welcome reader submissions, and if you don’t want to miss an issue, please subscribe using the box below (the form will not appear on AMP-enabled versions of the site). Each report will include information on small-, medium-, and heavy-lift rockets as well as a quick look ahead at the next three launches on the calendar.

The ghost of Vector lives on. Tucson, Arizona-based satellite and rocket developer Phantom Space, co-founded by Jim Cantrell in 2019, has acquired the remnants of Vector Launch, Space News reports. The announcement is notable because Cantrell left Vector as its finances deteriorated in 2019. Cantrell said some of the assets, comprising flight-proven design elements, engineering data, and other technology originally developed for Vector, will be immediately integrated into Phantom’s Daytona vehicle architecture to reduce development risk.

What’s your vector, Victor? … “As the original architect of Vector’s vision, it’s deeply meaningful to bring these assets home to Phantom,” Cantrell said in a statement. “This acquisition isn’t just about technology, it’s about momentum. We’re accelerating Daytona, creating high-tech aerospace jobs in Tucson, and moving faster toward orbital capability.” The small-lift Daytona rocket could use some acceleration since it has been delayed year after year for a while now. At present, it is slated to debut during the second half of 2027.

UK limits launch liability. An amendment to the United Kingdom’s Space Industry Act will mandate that limits are set on how much launch operators are financially liable if something goes wrong, European Spaceflight reports. According to Sarah Madden, a space lawyer at the London-based law firm Winckworth Sherwood, the amendment to the legislation removes the risk that operators launching from the UK might face unlimited liability.

Putting policy into law … Although the legislation previously offered no such certainty, all three launch operator licenses issued to date by the UK Civil Aviation Authority include a cap on indemnity to the government. Virgin Orbit’s 2022 horizontal launch license capped this at $250 million, while the vertical launch licenses granted to Skyrora and Rocket Factory Augsburg in 2025 set the cap at £10.5 million ($14.2 million). However, these limits were imposed as a matter of policy rather than law.

The easiest way to keep up with Eric Berger’s and Stephen Clark’s reporting on all things space is to sign up for our newsletter. We’ll collect their stories and deliver them straight to your inbox.

Sign Me Up!

PLD nabs launch contract. Spanish satellite operator Sateliot has signed a launch services agreement with PLD Space to launch its first two high-capacity 5G D2D (Direct-to-Device) Tritó satellites aboard a dedicated MIURA 5 mission, European Spaceflight reports. PLD Space is working toward the first flight of its 35.7-meter-tall MIURA 5 rocket in 2026. The rocket is designed to deliver payloads of up to 1,040 kilograms to low-Earth orbit and will initially launch from a new multi-user facility being built on the grounds of the Guiana Space Centre’s former Diamant launch complex.

Two at a time … PLD Space will attempt to carry its first two Tritó satellites to orbit aboard a dedicated MIURA 5 mission in 2027. According to the company, Sateliot selected PLD Space “based on MIURA 5’s ability to provide an independent, dedicated service tailored to the client’s specific needs, ensuring optimal launch conditions for deploying its space infrastructure.” Each Tritó satellite will have a mass of approximately 160 kilograms.

Neutron rocket launch slips to Q4 2026. As part of its quarterly earnings guidance update on Thursday, Rocket Lab provided a new launch target for the medium-lift Neutron rocket. Following the failure of a first stage tank during testing, Neutron’s first launch is now targeted for “Q4 2026,” the company said. This is a notable slip, given that it was only last November that Rocket Lab announced a slip from the end of 2025 to “mid-2026.”

Invoking Berger’s Law … In its news release regarding fourth quarter 2025 earnings, the company said it completed qualification of Neutron’s thrust structure, entered the qualification phase for the interstage, and successfully qualified Neutron’s Hungry Hippo fairing, delivering it to the Assembly and Integration Complex in Virginia. I hate to do it, but I’m afraid that I am compelled to invoke Berger’s Law for rockets on this one, which states, “If a rocket is predicted to make its debut in Q4 of a calendar year, and that quarter is six or more months away, the launch will be delayed.” Since its inception in 2022, the law has been undefeated.

Falcon 9 extends its reuse milestone. SpaceX’s most-flown Falcon 9 rocket booster launched once again Saturday night, making its 33rd mission to space and back, Spaceflight Now reports. The 33rd flight of Falcon 9 booster 1067 came about two and a half months after its previous launch in early December. Its previous missions include four flights for NASA, the European Commission’s Galileo L13, and 20 batches of Starlink satellites.

Lordy, lordy, Falcon 9 is turning 40? … Nearly 8.5 minutes after liftoff, B1067 landed on the drone ship A Shortfall of Gravitas, positioned in the Atlantic Ocean. This was the 143rd landing on this vessel and the 575th booster landing to date for SpaceX. At present, SpaceX says it is working to certify the Falcon 9 first stage for up to 40 flights.

Pentagon happy with military rockets. The Space Force officer tasked with overseeing more than $24 billion in research and development spending says the Pentagon is more interested in supporting startups building new space sensors and payloads than adding yet another rocket company to its portfolio, Ars reports. “We’re on path for mass-produced launch,” Maj. Gen. Stephen Purdy said at a space finance conference in Dallas.

Help needed to speed up payloads … Payloads, Purdy told Ars after his talk, are “the last frontier” for scaling space missions. “I remain convinced that we’re going to think about the mission that we need, and we’re going to need satellites out the door and launched and in orbit within the week, at scale,” Purdy said. “I’m very convinced that that’s the path that we’re going to move down on the commercial and government side.”

New data on how rockets pollute the atmosphere. New research bolsters growing concerns about the pollution produced by rocket launches, Ars reports. The new study in Nature analyzed a plume of pollution trailing part of a Falcon rocket that crashed through the upper atmosphere on February 19, 2025, after SpaceX lost control of its reentry. The authors said it is the first time debris from a specific spacecraft disintegration has been traced and measured in the near-space region about 80 to 110 kilometers above Earth. Changes there can affect the stratosphere, where ozone and climate processes operate. Until recent years, human activities had little impact on that region.

Studying the Ignorosphere … “I was surprised how big the event was, visually,” lead author Robin Wing, a researcher at the Leibniz Institute of Atmospheric Physics, said via email. He said people across northern Europe captured images of the burning debris, which was concentrated enough to enable high-resolution observations and to use atmospheric models to trace the lithium to its source. The study shows that instruments can detect rocket pollution “in the ‘Ignorosphere’ (upper atmosphere near space),” he wrote. “There is hope that we can get ahead of the problem and that we don’t run blind into a new era of emissions from space.”

Ambitious Chinese launch company moves into development. Chinese launch startup Space Epoch has secured B-round funding as the company moves toward a first orbital launch and recovery attempt late this year, Space News reports. The company says the funding means Space Epoch has entered a stage of large-scale development. “Three Yuanxingzhe-1 rockets already in production will undergo ground testing in the second half of the year, with the goal of achieving a successful first orbital launch and recovery by year’s end,” Space Epoch said in a statement.

Funding amount undisclosed … Yuanxingzhe-1 (YXZ-1) is a methane-liquid oxygen rocket designed for reusability. Space Epoch says it has a payload capacity of 13,800 kilograms to a 200-kilometer orbit and 9,000 kg to a 1,100 km orbit—the latter altitude being one associated with the national Guowang megaconstellation. It also claims a price of no more than 20,000 yuan per kilogram (about $2,900 per kg), with the rocket designed to be reusable 20 times. The company conducted a vertical takeoff and splashdown test in May 2025 using a YXZ-1 verification rocket, carrying out a reuse test two months later.

Vulcan likely “many months” from flying again. Twice, once in 2024 and again earlier this month, United Launch Alliance’s Vulcan rocket experienced issues with the nozzle on one of its solid rocket boosters during a launch. In both cases, the rocket’s main engines compensated for the issues, but the US military is not eager to test Vulcan’s ability to overcome such a dramatic problem again, Ars reports. “Any time there’s an anomaly, my team is going to be actively engaged with the contractors to make sure we understand what happened and we correct that issue,” said Col. Eric Zarybnisky, program acquisition executive for Space Systems Command’s space access program.

A nettlesome nozzle issue … Zarybnisky spoke with reporters Wednesday in a roundtable at the Air Force and Space Force Association’s Warfare Symposium near Denver. He said it was too early to provide details on the direction of the investigation but predicted it would be a “many months process” to identify the “exact technical issue” and the corrective actions required to prevent it from happening again. After the first booster issue in 2024, investigators identified a manufacturing defect in a carbon composite insulator, or heat shield, inside the nozzle. The latest incident suggests the defect was not fixed or that there is a separate problem with Northrop Grumman’s boosters. (submitted by philip verdieck)

SLS rocket rolls back to hangar. NASA Administrator Jared Isaacman announced this week that a new problem with the Space Launch System rocket will require the removal of the rocket from its launch pad in Florida. The large booster, with the Orion spacecraft stacked on top, will then be rolled back to the Vehicle Assembly Building. The latest issue appeared on the evening of February 20, when data showed an interruption in helium flow into the upper stage of the Space Launch System rocket, Ars reports. NASA officials were eyeing a launch attempt for Artemis II as soon as March 6, the first of five launch opportunities available in March.

Marching into April … There are approximately five days per month that the mission can depart the Earth after accounting for the position of the Moon in its orbit, the flight’s trajectory, and thermal and lighting constraints. The next series of launch dates begins on April 1. The space agency bypassed launch opportunities earlier this month after a fueling test on the SLS rocket revealed a hydrogen leak. After replacing seals in the fuel line leading into the SLS core stage, NASA completed a second fueling test last week with no significant leaks, raising hopes the mission could take off next month. With the discovery of the helium issue last Friday night, the March launch dates are now off the table.

Next three launches

February 27: Falcon 9 | Starlink 6-108 | Cape Canaveral Space Force Station, Florida | 10:20 UTC

March 1: Alpha | Stairway to Seven | Vandenberg Space Force Base, California | 00:50 UTC

March 1: Falcon 9 | Starlink 17-23 | Vandenberg Space Force Base, California | 08:00 UTC


Eric Berger is the senior space editor at Ars Technica, covering everything from astronomy to private space to NASA policy, and author of two books: Liftoff, about the rise of SpaceX; and Reentry, on the development of the Falcon 9 rocket and Dragon. A certified meteorologist, Eric lives in Houston.


how-strong-is-new-york’s-“illegal-gambling”-case-against-valve’s-loot-boxes?

How strong is New York’s “illegal gambling” case against Valve’s loot boxes?

“Calling it gambling because a user could, through several indirect steps, convert an item into cash risks stretching gambling law beyond its traditional limits,” Loiterman said. “If New York’s theory wins, it raises uncomfortable questions about things like Pokémon cards or promotional games (e.g. McDonald’s Monopoly). Courts will be cautious about going that far.”

New York also argues that Valve tacitly endorses third-party services that allow players to easily “cash out” their Steam inventories for real money. Whether Valve is culpable for the existence of those services is still an unsettled question in the law, Methenitis said, as it has been at least since he wrote about the legal implications of World of Warcraft’s third-party gold resellers nearly two decades ago.

“I think companies have a pretty strong [legal] argument if they make some attempts to police [third-party resellers]—they obviously can’t fully control what people do outside their platform,” Methenitis said. “But if they turn a blind eye to it and allow it, I think they could be found liable.” Loiterman agreed that Valve “providing the tools that enable those [third-party] markets and tolerating them creates some degree of responsibility.”

“Judges tend to be cautious…”

In the end, the lawyers Ars spoke to were generally skeptical that courts would determine that Valve’s loot box system constitutes illegal gambling. Cases making similar arguments about other loot box systems have failed in other jurisdictions, “in part because gambling laws were drafted with casinos and lotteries in mind,” Loiterman said. “Judges tend to be cautious about breaking from an emerging consensus.”

Hoeg agreed that “the entire question [in this case] is novel, and… the courts are (small-‘c’) conservative institutions, not generally wanting to adopt novel arguments without direction from the legislative branches.” Even if Valve’s loot box system “may start to smell a bit like gambling,” Hoeg said he would “honestly be surprised if the courts went along with the characterization without a new law aimed at it.”

“I view it as a weak case offered primarily for political grandstanding/coverage over real legal effect,” Hoeg concluded. “We shall see, though.”


nasa-shakes-up-its-artemis-program-to-speed-up-lunar-return

NASA shakes up its Artemis program to speed up lunar return


“Launching SLS every three and a half years or so is not a recipe for success.”

Artist’s illustration of the Boeing-developed Exploration Upper Stage, with four hydrogen-fueled RL10 engines. Credit: NASA

NASA Administrator Jared Isaacman announced sweeping changes to the Artemis program on Friday morning, including an increased cadence of missions and cancellation of an expensive rocket stage.

The upheaval comes as NASA has struggled to fuel the massive Space Launch System rocket for the upcoming Artemis II lunar mission, and Isaacman has sought to revitalize an agency that has moved at a glacial pace on its deep space programs. There is ever-increasing concern that, absent a shake-up, China’s rising space program will land humans on the Moon before NASA can return there this decade with Artemis.

“NASA must standardize its approach, increase flight rate safely, and execute on the president’s national space policy,” Isaacman said. “With credible competition from our greatest geopolitical adversary increasing by the day, we need to move faster, eliminate delays, and achieve our objectives.”

Shaking things up

The announced changes to the Artemis program include:

  • Cancellation of the Exploration Upper Stage and the Block 1B upgrade for the SLS rocket
  • The Artemis II and Artemis III missions will use the SLS rocket with its existing upper stage
  • Artemis IV, Artemis V (and any additional missions, should there be any) will use a “standardized” upper stage
  • Artemis III will no longer land on the Moon; rather, Orion will launch on SLS and dock with Starship and/or Blue Moon landers in low-Earth orbit
  • Artemis IV is now the first lunar landing mission
  • NASA will seek to fly Artemis missions annually, starting with Artemis III in “mid” 2027, followed by at least one lunar landing in 2028
  • NASA is working with SpaceX and Blue Origin to accelerate their development of commercial lunar landers for Artemis IV and beyond

At the core of Isaacman’s concerns is the low flight rate of the SLS rocket and Artemis missions. During its past human spaceflight programs, from Mercury through Gemini, Apollo, and the Space Shuttle, NASA launched humans on average about once every three months. It has been nearly 3.5 years since Artemis I launched.

“This is just not the right pathway forward,” Isaacman said.

A senior NASA official, speaking on background to Ars, noted that the space agency has experienced hydrogen and helium leaks during both the Artemis I and Artemis II pre-launch preparations, and these problems have led to monthslong delays in launch.

“If I recall, the timing between Apollo 7 and 8 was nine weeks,” the official said. “Launching SLS every three and a half years or so is not a recipe for success. Certainly, making each one of them a work of art with some major configuration change is also not helpful in the process, and we’re clearly seeing the results of it, right?”

The goal, therefore, is to standardize the SLS rocket into a single configuration to make it as reliable as possible and to launch it as frequently as every 10 months. NASA will fly the SLS vehicle until there are commercial alternatives to launch crew to the Moon, perhaps through Artemis V as Congress has mandated, or perhaps even a little longer.

Is everyone on board?

The NASA official said all of the agency’s key contractors are on board with the change, and senior leaders in Congress have been briefed on the proposed changes.

The biggest opposition to these proposals would seemingly come from Boeing, which is the prime contractor for the Exploration Upper Stage, a contract worth billions of dollars to develop a more powerful rocket that was due to launch for the first time later this decade. However, in a NASA news release, Boeing appeared to offer at least some support for the revised plans.

“Boeing is a proud partner to the Artemis mission and our team is honored to contribute to NASA’s vision for American space leadership,” said Steve Parker, Boeing Defense, Space & Security president and CEO, in the news release. “The SLS core stage remains the world’s most powerful rocket stage, and the only one that can carry American astronauts directly to the moon and beyond in a single launch. As NASA lays out an accelerated launch schedule, our workforce and supply chain are prepared to meet the increased production needs.”

Solid reasons for changing Artemis III

NASA’s new approach to Artemis reflects a return to the philosophy of the Apollo program. During the late 1960s, the space agency flew a series of preparatory crewed missions before the Apollo 11 lunar landing. These included Apollo 7 (a low-Earth orbit test of the Apollo spacecraft), Apollo 8 (a lunar orbiting mission), Apollo 9 (a low-Earth orbit rendezvous with the lunar lander), and Apollo 10 (a test of the lunar lander descending to the Moon, without touching down).

With its previous Artemis template, NASA skipped the steps taken by Apollo 7, 9, and 10. In the view of many industry officials, this leap from Artemis II—a crewed flyby of the Moon testing only the SLS rocket and Orion spacecraft—to Artemis III and a full-on lunar landing was enormous and risky.

The new approach will, in NASA parlance, “buy down” some of the risk for a 21st-century lunar landing, including performance and handling of a lunar lander, rendezvous and docking, communications, spacesuit performance, and more.

It will also increase the challenges for NASA. In particular, the timeline to bring the Orion spacecraft to readiness for a mid-2027 launch will need to be accelerated, and efforts to integrate that vehicle with one or both lander providers will need serious attention.

For the Artemis IV lunar landing mission, NASA will also need to human-rate a new upper stage for the SLS rocket. The vehicle currently uses a modified Delta IV upper stage manufactured by United Launch Alliance. But that rocket production line is closed, and NASA only has two more of these stages. With the cancellation of the Exploration Upper Stage, NASA will now procure a new stage commercially. NASA officials only said they will seek a “standardized” upper stage. As Ars has previously reported, the most likely replacement would be the Centaur V upper stage currently flying on Vulcan rockets.

What of the Lunar Gateway?

Friday’s announcement—which, for the space community, is the equivalent of a major earthquake—left some key details unaddressed. For example, NASA has been developing a larger launch tower to support the Block 1B version of the SLS rocket, with its more powerful upper stage. Development of this tower, finally underway, has been a clown show, with project costs ballooning from an initial estimate of $383 million to $1.8 billion, and delays stacked on delays. Will this tower be scrapped or repurposed?

Isaacman and other NASA officials were also mum on the Lunar Gateway, a proposed space station in a high orbit around the Moon. Key elements of this space station are under construction. However, cancellation of the Exploration Upper Stage raises questions about its future. The main purpose of the Block 1B version of SLS was to launch heavier payloads, most notably elements of the Gateway along with Orion.

“The whole Gateway-Moon base conversation is not for today,” the senior NASA official said. “We, I can assure you, will talk about the Moon base in the weeks ahead. I would just not overly read into this, because we had manifested some Gateway modules on Falcon Heavy already. The implications of standardizing SLS and increasing launch rate are about the ability to return to the Moon. I don’t think we necessarily have to speculate too much on what the other downstream implications are.”

The Gateway program office is based at Johnson Space Center in Houston, where the lunar station is viewed as a successor to the International Space Station in terms of flight operations.

Key politicians, such as Sen. Ted Cruz, R-Texas, have been supportive of this new station. But during some recent congressional hearings, Cruz has indicated he is open to a lunar space station or an outpost on the lunar surface. He just wants to be sure NASA has an enduring presence on or near the Moon. One industry source said Isaacman could be laying the groundwork to replace the Gateway Program with a Moon Base program office in Houston. It is unclear how much of a political battle this would ultimately be.

Some of this has been well-predicted

Although the changes outlined by NASA on Friday are sweeping, they are not completely out of the blue.

In April 2024, Ars reported that some senior NASA officials were considering an Earth-orbit rendezvous between Orion and Starship as a means to buy down risk for a lunar landing. NASA ultimately punted on the idea before it was revived by Isaacman this month.

Additionally, in October 2024, Ars offered a guide to saving the “floundering” Artemis program by canceling the Block 1B upgrade for the SLS rocket, replacing its upper stage with a Centaur V, and canceling the Lunar Gateway. This would free up an estimated $2 billion annually to focus on accelerating a lunar landing, the publication estimated.

That may be the very course the space agency has embarked upon today.




how-to-downgrade-from-macos-26-tahoe-on-a-new-mac

How to downgrade from macOS 26 Tahoe on a new Mac


Most new Macs can still be downgraded with few downsides. Here’s what to know.

An Ars Technica colleague recently bought a new M4 MacBook Air. I have essentially nothing bad to say about this hardware, except to point out that even in our current memory shortage apocalypse, Apple is still charging higher-than-market rates for RAM and SSD upgrades. Still, most people buying this laptop will have a perfectly nice time with it.

But for this colleague, it was also their first interaction with macOS 26 Tahoe and the Liquid Glass redesign, the Mac’s first major software design update since the Apple Silicon era began with macOS 11 Big Sur in 2020.

Negative consumer reaction to Liquid Glass has been overstated by some members of the Apple enthusiast media ecosystem, and Apple’s data shows that iOS 26 adoption rates are roughly in line with those of the last few years. But the Mac’s foray into Liquid Glass has drawn particular ire from longtime users (developers Jeff Johnson and Norbert Heger have been tracking persistently weird Finder and window resizing behavior, to pick two concrete examples, and Daring Fireball’s John Gruber has encouraged users not to upgrade).

My general approach to software redesigns is to just roll with them and let their imperfections and quirks become background noise over time—it’s part of my job to point out problems where I see them, but I also need to keep up with new releases whether I’m in love with them or not.

But this person has no such job requirement, and they had two questions: Can I downgrade this? And if so, how?

The answer to the first question is “yes, usually,” and Apple provides some advice scattered across multiple documentation pages. This is an attempt to bring all of those steps together into one page, aimed directly at new Mac buyers who are desperate to switch from Tahoe to the more-familiar macOS 15 Sequoia.

Table of Contents

A preemptive warning about security updates and older versions of macOS

Before we begin: Apple handles macOS updates differently from iOS updates. Eventually, Apple requires devices that support the latest iOS and iPadOS versions to install those updates if they want to continue getting security patches. That means if your iPhone or iPad can run iOS or iPadOS 26, it needs to be running iOS or iPadOS 26 to stay patched.

Older macOS versions, on the other hand, are updated for around three years after they’re initially released. The first year, they get both security patches and new features. The next two years, they get security patches and new versions of the Safari browser. Macs running older-but-supported macOS versions also generally continue to get the same firmware updates as those running the latest macOS version.

Generally, we’d recommend against using macOS versions after security updates have dried up. For macOS 15 Sequoia, that will happen around September or October of 2027. Apple also sometimes leaves individual vulnerabilities unpatched on older operating systems; only the latest releases are guaranteed to get every patch. If you can look past the elements of Tahoe’s design that bother you most, staying on it is the safest option.

You can follow steps similar to the ones in this guide to downgrade some Macs to even older versions of macOS, but I wouldn’t recommend it; macOS 14 Sonoma will get security and Safari updates for only another six months or so, which isn’t long enough to justify spending the time to install it.

What we won’t cover is how to transfer data you want to keep from your Tahoe install to an older version of macOS. We’re assuming you have a new and relatively pristine Mac to downgrade, one that you haven’t loaded up with data other than what you already have synced to iCloud.

Can my Mac downgrade?

Mostly, yes. Any Mac with an M4-family chip or older, including the M4 MacBook Air and everything else in Apple’s current lineup aside from the new M5 MacBook Pro, should support the current version of Sequoia (as of this writing, 15.7.4, with Safari 26.3).

As a rule of thumb, Macs will not run any version of macOS older than the one they shipped with when they launched. Apple provides security updates for older versions of macOS, but it doesn’t bother backporting drivers and other hardware support from newer versions to older ones.

The only Mac to launch since Tahoe was released is the M5 MacBook Pro, so owners of that system will need Tahoe or newer. If Apple puts out new Macs in early March as expected, those Macs will also only work with Tahoe or newer, and downgrades won’t be possible.

Although we’re mainly talking about new Macs here, these steps should all be identical for any Apple Silicon Mac, from the original M1 computers on up. If you buy a used Mac that shipped with Tahoe installed, a downgrade still works the same way. We won’t cover the steps for installing anything on an Intel Mac—vanishingly few of them support Tahoe in the first place, and most people certainly shouldn’t be buying them at this late date.

Option one: A bootable USB installer

Apple hasn’t shipped physical install media for macOS in 15 years, but each downloadable installer still includes the bits you need to make a bootable USB install drive. And while late-Intel-era Macs with Apple T2 chips briefly made booting from external media kind of a pain, Apple Silicon Macs will boot from a USB drive just as easily and happily as early Intel-era Macs did.

This method will be the easiest for most people because it only requires you to own a single Mac—the one you’re downgrading.

Create the USB installer

Downloading the Sequoia installer through Software Update. Downloading this way serves as an additional compatibility check; your Mac won’t download any version of macOS too old for it to run.

Credit: Andrew Cunningham

To make a USB installer, you’ll need a 32GB or larger USB flash drive and the downloadable macOS Sequoia installer. A 16GB drive was large enough for macOS for many years, but Sequoia and Tahoe are too large by a couple of gigabytes.

Apple has a support page that links to every downloadable macOS installer going back to 2011’s 10.7 Lion. In Tahoe, the macOS Sequoia link takes you to the App Store, which then bounces you to Software Update in the Settings app. This process has enough points of failure that it may not work the first time; if it stalls, try clicking the “Get” button in the App Store again, and it usually goes through.

If you’re downloading the installer from within macOS Tahoe, you’ll see a pop-up when the download completes, telling you that the installer can’t be run from within that version of macOS. Since we’ll be running it off of its own USB stick, you can safely ignore this message.

While the installer is downloading, insert and prepare your USB drive. Open Disk Utility, click the View button, and select “Show All Devices.” Click the root of your USB drive under the “external” header in the left sidebar, and click the Erase button in the upper-right control area.

Change the disk’s name to whatever you want—I use “MyVolume” so I don’t have to change Apple’s sample terminal commands when copying the installer files—and make sure the Format is set to Mac OS Extended (Journaled) and the Scheme is set to GUID Partition Map. (That’s not an error; the macOS installer still wants an HFS+ filesystem rather than APFS.)

The handy thing is that if you have a larger USB drive, you can create installers for multiple macOS versions by partitioning the disk with the Partition button. A 64GB drive split into three ~21GB partitions could boot Tahoe, Sequoia, and another past or future macOS version; I just have it split into two volumes so I can boot Sequoia and Tahoe installers from the same drive.
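
If you’d rather do that partitioning from the Terminal, here’s a sketch that only prints the diskutil invocation so you can review the names and sizes before running anything destructive. The disk identifier and volume names are placeholders, not anything Apple prescribes.

```shell
# Hypothetical dry run: print (don't execute) a diskutil command that splits
# a 64GB stick into two HFS+ installer volumes. "disk4" is a placeholder --
# check `diskutil list` for your drive's real identifier before using it.
DISK="disk4"
printf 'sudo diskutil partitionDisk %s GPT JHFS+ SequoiaInstaller 32g JHFS+ TahoeInstaller 0b\n' "$DISK"
```

In diskutil’s size syntax, `0b` on the final partition means “use whatever space remains.”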

Running the Terminal command to create our macOS 15 Sequoia boot drive.

Credit: Andrew Cunningham

Once the Sequoia installer is in your Applications folder, run a Terminal command to copy the installer files. Apple has commands for each version of macOS on this page. Use this one for Sequoia:

sudo /Applications/Install\ macOS\ Sequoia.app/Contents/Resources/createinstallmedia --volume /Volumes/MyVolume

If you named the USB drive something other than MyVolume when you formatted it, change the name in the command as well. Note that names with spaces require a backslash before each space.
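
To see what that escaping looks like in practice, here’s a small sketch (the volume name is hypothetical) that builds and prints the command string for a name containing a space, rather than running it:

```shell
# Hypothetical sketch: preview the createinstallmedia command for a volume
# name containing spaces, inserting the backslash escapes the shell needs.
VOLUME="My Volume"
ESCAPED=$(printf '%s' "$VOLUME" | sed 's/ /\\ /g')   # "My Volume" -> "My\ Volume"
printf 'sudo /Applications/Install\\ macOS\\ Sequoia.app/Contents/Resources/createinstallmedia --volume /Volumes/%s\n' "$ESCAPED"
```

Once the printed command looks right, you can paste it into the Terminal and run it for real.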

The Terminal will prompt you for your password and ask you to type Y to confirm. It will then reformat the drive and copy the files over. The time this takes will vary depending on the speed of the USB drive you’re using, but for most USB 3 drives, it should only take a few minutes to create the installer. When the Terminal command is done running, leave the disk inserted and shut down your Mac.

Using the USB installer

With your Mac powered down and the USB installer drive inserted, press and hold the power button on your Mac (the Touch ID button on any laptop or the dedicated power button on a desktop) until the text under the Apple logo changes to “loading startup options.” You should see the macOS Sequoia installer listed alongside Macintosh HD as a boot option; highlight it and click Continue. If you don’t see the Sequoia installer, you may need an extra step—highlight Options, then click Continue, and we’ll talk more about this momentarily.

Once booted, the Sequoia installer will automatically launch the macOS installer to do an in-place upgrade, which isn’t what we want. Hit Command+Q to quit the installer and click through the confirmation, and you’ll get the typical menu of recovery environment options; from here, launch Disk Utility, click the top level of the internal Macintosh HD disk, and click Erase. Click through the prompts to erase the Mac and restart.

My own macOS USB installer from my beloved Micro Center.

Credit: Andrew Cunningham

After the Mac restarts, you’ll need an Internet connection to activate it before you can do anything else with it; connect using the Wi-Fi menu in the top-right, typing in your network SSID and password manually if the menu doesn’t auto-populate. This will activate your Mac and get you back to the recovery environment menu.

Here, select the Sequoia installer and click through the prompts—you should be able to install Sequoia on the now-empty Macintosh HD volume with no difficulty. From here, there’s nothing else to do. Wait until the installation completes, and when it’s ready, it will boot into a fresh Sequoia install, ready to be set up.

If you didn’t see your Sequoia installer in the boot menu before and you clicked the Options gear instead, it usually means that FileVault encryption or Find My was enabled on the Mac—maybe you signed into your Apple account when you were initially setting up Tahoe before you decided you wanted nothing to do with it.

When you boot into the recovery environment, you’ll be asked to select a user you know the password for, which will unlock the encrypted disk. If all you want to do is erase the Mac and make it bootable from your USB stick, don’t worry about this; just select Recovery Assistant from the menu, select Erase Mac, and click through the prompts. Then, use the steps above to boot from your USB stick, and you should be able to install a fresh copy of whatever macOS version you want to the now-erased internal drive.

The nuclear option: A DFU restore

Normally, a bootable USB installer does everything you need it to do. It wipes the data from your Mac’s internal storage and replaces it with new data. But occasionally you need to drill a little deeper, either because your Mac becomes unresponsive or you’ve been running beta software and want to switch back to a stable release. Or just because other steps haven’t worked for you.

The nuclear option for resetting a Mac is a DFU (or Device Firmware Upgrade) restore. Based on the restore process for iPhones and iPads, a DFU restore uses a compressed IPSW archive that contains not only the macOS system files but also firmware files for all Apple Silicon Macs. The USB installer just replaces macOS; the DFU restore replaces everything from the firmware on up. (These are also the same files used to create macOS virtual machines using Apple’s Virtualization Framework.)

Because a DFU restore can only be performed on a Mac that’s booted into a special DFU mode, you’ll need a second Mac with a USB-C or Thunderbolt port, plus a USB-C cable. Apple says the USB-C charging cable included with Macs will work for this but not to use a Thunderbolt cable; I’ve used a generic USB-C cable, and it has worked fine.

The first step is to download the relevant IPSW file from Apple. This page on the Mr. Macintosh site is the one I have bookmarked because it’s a good repository of virtually every macOS IPSW file Apple has ever released, including beta versions for when those are useful.

First, download the macOS 15.6.1 IPSW file linked on that page to your host Mac (Apple stops releasing IPSW files for older OSes once newer ones have been released, so this is the newest file you’ll be able to get for macOS 15). Both iPhones and iPads have model-specific IPSW files, but for macOS, there’s just one image that works with all Macs.
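
Because a truncated download can derail a DFU restore partway through, it’s worth sanity-checking the file first. IPSW files are ZIP archives under the hood, so a quick archive listing catches incomplete downloads; the filename below is a placeholder for whatever the real file is called:

```shell
# Sketch: sanity-check a downloaded IPSW before attempting a DFU restore.
# IPSW files are ZIP archives; a truncated download won't pass a listing.
# The filename is a placeholder -- substitute the file you actually downloaded.
IPSW="UniversalMac_15.6.1_Restore.ipsw"
if unzip -l "$IPSW" >/dev/null 2>&1; then
  echo "IPSW looks intact"
else
  echo "IPSW missing or damaged"
fi
```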

On the Mac you’re trying to restore—we’ll call it the “target Mac” for simplicity’s sake—figure out which of its USB-C ports is the designated DFU port. There’s only one that will work, and it’s usually the leftmost or rightmost port. Plug one end of the USB-C cable into that DFU port and the other into any USB-C port on your host Mac and follow Apple’s instructions for how to boot the system into DFU mode.

A Mac in DFU mode will need permission before your Mac can work with it.

Credit: Andrew Cunningham

When it’s successfully booted into DFU mode, your host Mac will see the target Mac, and you’ll see the same notification you get any time you plug in USB accessories for the first time. Allow it to connect, open a Finder window, and scroll down the left-hand sidebar until you get to “Mac” under the Locations heading.

The Finder’s DFU interface is pretty simple—a picture, a line of text, and two buttons. We want to restore, not revive, the Mac. Clicking the Restore Mac button will normally download and install the latest macOS version from Apple. But you can force it to use a different IPSW file—like the Sequoia one we just downloaded—by holding down the Option key as you click it. Navigate to the IPSW file, open it, and allow the restore process to begin.

This will take some time; you can track progress in the first phase in the Finder window. After a few minutes, the Mac you’re restoring will light back up, and you can watch its progress there. Once the target Mac reboots with its signature chime, the process is complete.

Because the IPSW file is for an outdated version of Sequoia, the first thing you’ll want to do is hit Software Update for the latest Sequoia and Safari versions; you’ll be offered a Tahoe upgrade, but you obviously won’t want to do that after the trouble you just went through. Scroll down to “other updates,” and you’ll be offered all the non-Tahoe updates available.

Downgrader’s remorse?

You will run into a handful of downsides when running an older version of macOS, especially if you’re trying to use it with iPhones and/or iPads that have been updated to version 26.

Most of the awkwardness will involve new features introduced in Messages, Notes, Reminders, and other Apple apps that sync between devices. The Messages app in Sequoia doesn’t support background images or polls, and it handles spam filtering slightly differently. They’re minor absences and annoyances, mostly, but they’re still absences and annoyances.

At least for the time being, though, you’ll find Sequoia pretty well-supported by most of Apple’s ecosystem. Core services like iCloud and iMessage aren’t going anywhere; Xcode still supports Sequoia, as does every Apple Creator Studio app update aside from the new Pixelmator Pro. App support may eventually drop off, but there’s not a lot that requires the latest and greatest version of macOS.

If and when you decide it’s time to step up to a newer version of macOS, Tahoe (or whatever macOS 27 is called) will be there in Software Update waiting for you. You’ll need to install a new version eventually if you want to keep getting app updates and security patches. But you don’t have to yet.

Andrew is a Senior Technology Reporter at Ars Technica, with a focus on consumer tech including computer hardware and in-depth reviews of operating systems like Windows and macOS. Andrew lives in Philadelphia and co-hosts a weekly book podcast called Overdue.


judge-doesn’t-trust-doj-with-search-of-devices-seized-from-wash.-post-reporter

Judge doesn’t trust DOJ with search of devices seized from Wash. Post reporter


Let me search that for you

Court to search devices itself instead of letting government have full access.

The Washington Post building on August 6, 2013 in Washington, DC, Credit: Getty Images | Saul Loeb

A federal court will conduct a search of devices seized from a Washington Post reporter after a magistrate judge decided yesterday that the Department of Justice cannot be trusted to perform the search on its own.

US Magistrate Judge William Porter criticized government prosecutors for not including key information in a search warrant application. The court wasn’t aware of a 1980 law that limits searches and seizures of journalists’ work materials when it approved the warrant, Porter acknowledged.

The decision came six weeks after the FBI executed the search warrant at the Virginia home of reporter Hannah Natanson. Porter declined the Post and Natanson’s request to return the devices immediately but decided on a court-led process to ensure that the search is limited to materials that may aid a criminal case against an alleged leaker who was in contact with Natanson. He also rescinded the portion of the search warrant that authorized the government to open, access, review, or otherwise examine the seized data.

“The government acknowledges that it established probable cause to obtain only a small fraction of the material it seized,” Porter wrote in yesterday’s order. “Allowing the government to search through the entirety of a reporter’s work product—when probable cause exists for only a narrow subset—would authorize an unlawful general warrant.”

Porter’s ruling said the government’s proposed search would also violate the Department of Justice’s own guidelines that search warrants directed at the press must be narrowly drawn and that searches of materials must be designed to minimize intrusion into newsgathering activities and materials that are unrelated to the investigation. Keyword searches can be used to limit the intrusion, but Porter rejected the government’s request to use its own “filter team” to conduct the search.

“Given the documented reporting on government leak investigations and the government’s well-chronicled efforts to stop them, allowing the government’s filter team to search a reporter’s work product—most of which consists of unrelated information from confidential sources—is the equivalent of leaving the government’s fox in charge of the Washington Post’s henhouse,” Porter wrote.

Rejecting what he called an “unsupervised, wholesale search of all Movants’ seized data,” Porter said the court will develop a process for the search in consultation with the parties involved in the case.

US prosecuting alleged leaker

The US is seeking information for its prosecution of Aurelio Perez-Lugones, a government contractor accused of leaking classified information to Natanson. Porter wrote that the court will conduct the search to “gather the information the government needs to prosecute its criminal case without authorizing an unrestrained search and violating Movants’ First Amendment and attorney-client privileges.”

Porter, who presides in US District Court for the Eastern District of Virginia, said that a 4th Circuit appeals court precedent mandates this result. The US could appeal Porter’s ruling to that court.

On January 21, Porter ordered the government to stop its search of Natanson’s devices until further decisions from the court. That standstill order will remain in effect while the court conducts its review of the seized materials. Porter denied the Post and Natanson’s motion to return seized materials without prejudice and said that issue will be taken up in future proceedings.

The government started searching devices before the standstill order and was able to access Natanson’s work MacBook Pro by compelling her to unlock it with her fingerprint. But the government said it was unable to access data from the iPhone because it was protected by Apple’s Lockdown Mode. Natanson has said she uses encrypted Signal chats to communicate with sources and that her list of contacts exceeds 1,100 current and former government employees.

Porter’s ruling recounted the events leading to the government search of Natanson’s home. He said the government’s search warrant application should have discussed limitations imposed by the Privacy Protection Act (PPA) of 1980.

Porter said magistrate judges give the government some leeway in their role “as probable cause gatekeepers for search warrants,” given the “fast-paced environment” in which the requests are processed. The Natanson search warrant was one of 46 requested by the government that week.

Court admits “gap” in its analysis

Porter admitted that he was unaware of the PPA’s existence at the time he approved the warrant application:

As the judge who found probable cause and approved the search warrant, the Court acknowledges that it did not independently identify the PPA when reviewing the warrant application. As far as this Court knows, courts have approved search warrants directed at members of the press in only a handful of instances. This Court had never received such an application and, at the time it approved the warrant, was unaware of the PPA. This Court’s review was limited to probable cause, and the Court accepts that gap in its own analysis.

Porter went on to say that “the government’s failure to identify the PPA as applicable to a request for a search warrant on a member of the press—and to analyze it in its warrant application… has seriously undermined the Court’s confidence in the government’s disclosures in this proceeding.”

The PPA, he wrote, generally prohibits government officers “from searching for or seizing ‘work product materials’ or ‘documentary materials’ possessed by a person ‘reasonably believed to have a purpose to disseminate to the public a newspaper, book, broadcast, or other similar form of public communication.’” There are exceptions allowing search warrants when a reporter is suspected of a crime, when a seizure is needed to prevent death or serious injury, or when there is reason to believe that issuing a subpoena would result in the destruction of documents.

A Washington Post article said that Porter “scolded prosecutors about this omission at a hearing on the search warrant in an Alexandria courthouse Friday.” Prosecutor Gordon Kromberg reportedly responded that he didn’t mention the law in the application because he didn’t believe it applied to the case.

Porter’s ruling said that if the government had mentioned the law in its application, “the Court may well have rejected the search warrant application and directed the government to proceed by subpoena instead. At the very least, it would have asked more questions. The government deprived the Court of the opportunity to make those real-time decisions.”

Judge should have gone further, press group says

Even without being aware of the PPA, the court did not approve the Natanson warrant right away. Porter’s order said the court rejected the government’s first two requests for a search warrant because they were too broad. The court was “concerned about both the scope of the proposed search warrant and the government’s apparent attempt to collect information about Ms. Natanson’s confidential sources,” he wrote.

The search warrant ultimately approved by the court was limited to information that Natanson received from Aurelio Luis Perez-Lugones and information related to Perez-Lugones that could be evidence in the case against him.

“The government expressly alleged that Ms. Natanson received classified information from Mr. Perez-Lugones,” but its search warrant application did not say whether Natanson herself was a target of the criminal investigation, Porter wrote. “The Court learned that Ms. Natanson was not a focus of the investigation only through press reports published the day the warrant was executed,” he wrote.

Porter said the court has to take seriously the government’s claim that the case “involves top secret national security information,” even though the court doesn’t know whether disclosure of the information would cause harm. “The Court takes the government at its word, while acknowledging the well-documented concern that the government has at times overclassified information to avoid embarrassing disclosures rather than to protect genuine secrets,” he wrote.

The Freedom of the Press Foundation said that “Judge Porter was right to treat the seizure as a prior restraint and to limit the government from fishing through the irrelevant data it seized to snoop on reporters,” and right to reprimand prosecutors for the omission in their search warrant application. But the order didn’t go far enough, the foundation said.

“Judge Porter should have required all of Natanson’s materials seized pursuant to the deceptive warrant application to be returned to her,” the group said. “And he should not have credited the administration’s claims that any of the seized materials posed a national security threat without strict proof—as Judge Porter acknowledged, this administration, even more so than others, has a long track record of falsely claiming national security threats to protect itself from embarrassment and further its political agenda. It has earned zero deference from the judiciary on claims of national security threats, particularly when press freedom is at stake.”

Jon is a Senior IT Reporter for Ars Technica. He covers the telecom industry, Federal Communications Commission rulemakings, broadband consumer affairs, court cases, and government regulation of the tech industry.
