Author name: Paul Patrick

Taiwan starts weaponizing chip access after US urged it to, expert says

Taiwan has begun evolving its trade strategy, starting to wield its dominant position as a leading supplier of cutting-edge chips as a weapon, Bloomberg reported.

The move comes amid Donald Trump’s escalating global trade war and after years in which Taiwan used its chip dominance as a shield against Chinese aggression, allying with the US to stave off China’s threats of invasion. Under the so-called “one-China principle,” China rejects Taiwan’s independence and requires its allies to sever ties with Taiwan.

On Tuesday, Taiwan announced that it would be limiting shipments of semiconductors into South Africa—among 47 restricted products—due to national security concerns. The rare export curbs could hit South Africa’s “electronics, telecom, and auto parts sectors” hard, MSN reported, if South Africa doesn’t meet with Taiwan to discuss better terms within the next 60 days.

As Bloomberg previously reported, Taiwan is upset that South Africa unilaterally moved to relocate Taiwan’s embassy from Pretoria to Johannesburg after meeting with China’s president, Xi Jinping, in 2023. As a major ally of China, South Africa intensified pressure in July to move the embassy ahead of another meeting in November that Xi is expected to attend—attempting to signal that South Africa was weakening ties with Taiwan, as China had demanded.

Taiwan’s Ministry of Foreign Affairs immediately protested South Africa’s efforts in July, accusing South Africa of suppressing Taiwan and promising countermeasures if South Africa refused to consult with Taiwan on the embassy relocation.

In a statement, South Africa’s foreign ministry spokesperson, Chrispin Phiri, insisted that South Africa’s ties with Taiwan are “non-political,” while noting that “South Africa is a critical supplier of platinum group metals, like palladium, essential to the global semiconductor industry,” Bloomberg reported.

On Wednesday, China’s foreign ministry spokesperson, Guo Jiakun, criticized Taiwan’s export curbs as “a deliberate move to destabilize global chip industrial and supply chains and counter the prevailing international commitment to the one-China principle by weaponizing chips.”

Taiwan starts weaponizing chip access after US urged it to, expert says Read More »

The DHS has been quietly harvesting DNA from Americans for years


The DNA of nearly 2,000 US citizens has been entered into an FBI crime database.

For years, Customs and Border Protection agents have been quietly harvesting DNA from American citizens, including minors, and funneling the samples into an FBI crime database, government data shows. This expansion of genetic surveillance was never authorized by Congress for citizens, children, or civil detainees.

According to newly released government data analyzed by Georgetown Law’s Center on Privacy & Technology, the Department of Homeland Security, which oversees CBP, collected the DNA of nearly 2,000 US citizens between 2020 and 2024 and had it sent to CODIS, the FBI’s nationwide DNA database used in criminal investigations. An estimated 95 were minors, some as young as 14. The entries also include travelers never charged with a crime and dozens of cases where agents left the “charges” field blank. In other files, officers invoked civil penalties as justification for swabs that federal law reserves for criminal arrests.

The findings appear to point to a program running outside the bounds of statute or oversight, experts say, with CBP officers exercising broad discretion to capture genetic material from Americans and have it funneled into a law-enforcement database designed in part for convicted offenders. Critics warn that anyone added to the database could endure heightened scrutiny by US law enforcement for life.

“Those spreadsheets tell a chilling story,” Stevie Glaberson, director of research and advocacy at Georgetown’s Center on Privacy & Technology, tells WIRED. “They show DNA taken from people as young as 4 and as old as 93—and, as our new analysis found, they also show CBP flagrantly violating the law by taking DNA from citizens without justification.”

DHS did not respond to a request for comment.

For more than two decades, the FBI’s Combined DNA Index System, or CODIS, has been billed as a tool for violent crime investigations. But under both recent policy changes and the Trump administration’s immigration agenda, the system has become a catchall repository for genetic material collected far outside the criminal justice system.

One of the sharpest revelations came from DHS data released earlier this year showing that CBP and Immigration and Customs Enforcement have been systematically funneling cheek swabs from immigrants—and, in many cases, US citizens—into CODIS. What was once a program aimed at convicted offenders now sweeps in children at the border, families questioned at airports, and people held on civil—not criminal—grounds. WIRED previously reported that DNA from minors as young as 4 had ended up in the FBI’s database, alongside elderly people in their 90s, with little indication of how or why the samples were taken.

The scale is staggering. According to Georgetown researchers, DHS has contributed roughly 2.6 million profiles to CODIS since 2020—far above earlier projections and a surge that has reshaped the database. By December 2024, CODIS’s “detainee” index contained over 2.3 million profiles; by April 2025, the figure had already climbed to more than 2.6 million. Nearly all of these samples—97 percent—were collected under civil, not criminal, authority. At the current pace, according to Georgetown Law’s estimates, which are based on DHS projections, Homeland Security files alone could account for one-third of CODIS by 2034.

The expansion has been driven by specific legal and bureaucratic levers. Foremost was an April 2020 Justice Department rule that revoked a long-standing waiver allowing DHS to skip DNA collection from immigration detainees, effectively green-lighting mass sampling. Later that summer, the FBI signed off on rules that let police booking stations run arrestee cheek swabs through Rapid DNA machines—automated devices that can spit out CODIS-ready profiles in under two hours.

The strain of the changes became apparent in subsequent years. Former FBI director Christopher Wray warned during Senate testimony in 2023 that the flood of DNA samples from DHS threatened to overwhelm the bureau’s systems. The 2020 rule change, he said, had pushed the FBI from a historic average of a few thousand monthly submissions to 92,000 per month—over 10 times its traditional intake. The surge, he cautioned, had created a backlog of roughly 650,000 unprocessed kits, raising the risk that people detained by DHS could be released before DNA checks produced investigative leads.

Under Trump’s renewed executive order on border enforcement, signed in January 2025, DHS agencies were instructed to deploy “any available technologies” to verify family ties and identity, a directive that explicitly covers genetic testing. This month, federal officials announced they were soliciting new bids to install Rapid DNA at local booking facilities around the country, with combined awards of up to $3 million available.

“The Department of Homeland Security has been piloting a secret DNA collection program of American citizens since 2020. Now, the training wheels have come off,” said Anthony Enriquez, vice president of advocacy at Robert F. Kennedy Human Rights. “In 2025, Congress handed DHS a $178 billion check, making it the nation’s costliest law enforcement agency, even as the president gutted its civil rights watchdogs and the Supreme Court repeatedly signed off on unconstitutional tactics.”

Oversight bodies and lawmakers have raised alarms about the program. As early as 2021, the DHS inspector general found the department lacked central oversight of DNA collection and that years of noncompliance could undermine public safety—echoing an earlier rebuke from the Office of Special Counsel, which called CBP’s failures an “unacceptable dereliction.”

US Senator Ron Wyden (D-Ore.) more recently pressed DHS and DOJ to explain why children’s DNA is being captured and whether CODIS has any mechanism to reject improperly obtained samples. He said the program was never intended to collect and permanently retain the DNA of all noncitizens, warning that the children are likely to be “treated by law enforcement as suspects for every investigation of every future crime, indefinitely.”

Rights advocates allege that CBP’s DNA collection program has morphed into a sweeping genetic surveillance regime, with samples from migrants and even US citizens fed into criminal databases absent transparency, legal safeguards, or limits on retention. Georgetown’s privacy center points out that once DHS creates and uploads a CODIS profile, the government retains the physical DNA sample indefinitely, with no procedure to revisit or remove profiles when the legality of the detention is in doubt.

In parallel, Georgetown and allied groups have sued DHS over its refusal to fully release records about the program, highlighting how little the public knows about how DNA is being used, stored, or shared once it enters CODIS.

Taken together, these revelations may suggest a quiet repurposing of CODIS. A system long described as a forensic breakthrough is being remade into a surveillance archive—sweeping up immigrants, travelers, and US citizens alike, with few checks on the agents deciding whose DNA ends up in the federal government’s most intimate database.

“There’s much we still don’t know about DHS’s DNA collection activities,” Georgetown’s Glaberson says. “We’ve had to sue the agencies just to get them to do their statutory duty, and even then they’ve flouted court orders. The public has a right to know what its government is up to, and we’ll keep fighting to bring this program into the light.”

This story originally appeared on wired.com.


Wired.com is your essential daily guide to what’s next, delivering the most original and complete take you’ll find anywhere on innovation’s impact on technology, science, business and culture.

The DHS has been quietly harvesting DNA from Americans for years Read More »

Ford F-150 Lightnings are powering the grid in first residential V2G pilot

One of those Lightning owners is Morgan Grove. “As a member of the Baltimore Commission on Sustainability, I’m excited to be an early adopter of this technology and participate in this vehicle-to-grid program with BGE and Sunrun,” Grove said. “I bought the Ford F-150 Lightning for several reasons, one of them being the ability to power our home during an outage. Now, I can also earn money by sending energy directly to the grid.”

Is this the way to a more resilient power grid? Credit: Sunrun

“This demonstrates the critical role that vehicle batteries can play in powering the nation’s grid, accelerating American energy independence and dominance,” said Sunrun CEO Mary Powell. “It’s great to see this partnership with BGE and Ford move to this commercial stage. In addition to showing how electric vehicles can power homes, add electrons to the grid, and help utilities meet peak electricity demand, this program also creates extra income opportunities for customers,” Powell said.

“Enabling customers to not only power their homes but send power directly back to the grid in times of need helps customers with financial incentives, utilities with more power capacity, and society through more grid reliability and sustainable energy practices. It’s a win-win for everyone,” said Bill Crider, senior director, global charging and energy services, Ford Motor Company.

Ford F-150 Lightnings are powering the grid in first residential V2G pilot Read More »

Pennywise gets an origin story in Welcome to Derry trailer

Director Andy Muschietti’s two-film adaptation of Stephen King’s bestselling horror novel IT racked up over $1 billion at the box office worldwide. Now Muschietti is back with a nine-episode prequel series for HBO, IT: Welcome to Derry, exploring the origins of Pennywise the Clown (Bill Skarsgård), the ancient evil that terrorized the fictional town every 27 years. And now we have an official trailer a month before the prequel’s October debut.

(Some spoilers below for IT and IT: Chapter Two.)

As previously reported, set in 1989, IT essentially adapted half of King’s original novel, telling the story of a group of misfit kids calling themselves “The Losers Club.” The kids discover their small town of Derry is home to an ancient, trans-dimensional evil that awakens every 27 years to prey mostly on children by taking the form of an evil clown named Pennywise. Bill (Jaeden Lieberher) loses his little brother, Georgie, to Pennywise, and the group decides to take on Pennywise and drive him into early hibernation, where he will hopefully starve. But Beverly (Sophia Lillis) has a vision warning that Pennywise will return on schedule in 27 years, and they must be ready to fight him anew.

IT: Chapter Two revisited our protagonists 27 years later, as they all returned to Derry as adults for a reunion of sorts, taking on the killer clown in a final battle—eventually emerging victorious but not without a few casualties. The two films covered much of the novel’s material but omitted several key flashback passages drawn from Mike’s interviews with older residents of Derry as he investigated the town’s sinister history.

One event that did make it into IT: Chapter Two was the burning down of the Black Spot—a nightclub Mike’s (Chosen Jacobs and Isaiah Mustafa) father, Will, opened—by local white supremacists. That tragedy will also appear in Welcome to Derry. The series is set in 1962, although Muschietti said earlier this year that there are plans for three seasons, with subsequent settings in 1935 and 1908, respectively. That’s consistent with Pennywise’s 27-year cycle, and as Muschietti said, “There’s a reason why the story is told backwards.”

Pennywise gets an origin story in Welcome to Derry trailer Read More »

When “no” means “yes”: Why AI chatbots can’t process Persian social etiquette

If an Iranian taxi driver waves away your payment, saying, “Be my guest this time,” accepting their offer would be a cultural disaster. They expect you to insist on paying—probably three times—before they’ll take your money. This dance of refusal and counter-refusal, called taarof, governs countless daily interactions in Persian culture. And AI models are terrible at it.

New research released earlier this month titled “We Politely Insist: Your LLM Must Learn the Persian Art of Taarof” shows that mainstream AI language models from OpenAI, Anthropic, and Meta fail to absorb these Persian social rituals, correctly navigating taarof situations only 34 to 42 percent of the time. Native Persian speakers, by contrast, get it right 82 percent of the time. This performance gap persists across large language models such as GPT-4o, Claude 3.5 Haiku, Llama 3, DeepSeek V3, and Dorna, a Persian-tuned variant of Llama 3.

A study led by Nikta Gohari Sadr of Brock University, along with researchers from Emory University and other institutions, introduces “TAAROFBENCH,” the first benchmark for measuring how well AI systems reproduce this intricate cultural practice. The researchers’ findings show how recent AI models default to Western-style directness, completely missing the cultural cues that govern everyday interactions for millions of Persian speakers worldwide.

“Cultural missteps in high-consequence settings can derail negotiations, damage relationships, and reinforce stereotypes,” the researchers write. For AI systems increasingly used in global contexts, that cultural blindness could represent a limitation that few in the West realize exists.

A taarof scenario diagram from TAAROFBENCH, devised by the researchers. Each scenario defines the environment, location, roles, context, and user utterance. Credit: Sadr et al.

“Taarof, a core element of Persian etiquette, is a system of ritual politeness where what is said often differs from what is meant,” the researchers write. “It takes the form of ritualized exchanges: offering repeatedly despite initial refusals, declining gifts while the giver insists, and deflecting compliments while the other party reaffirms them. This ‘polite verbal wrestling’ (Rafiee, 1991) involves a delicate dance of offer and refusal, insistence and resistance, which shapes everyday interactions in Iranian culture, creating implicit rules for how generosity, gratitude, and requests are expressed.”
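To make the benchmark’s structure concrete, here is a minimal sketch, in Python, of what a TAAROFBENCH-style scenario record and scoring loop might look like. The field names mirror the figure caption above (environment, location, roles, context, user utterance); the class name, the stubbed model call, and the judge function are illustrative assumptions, not the researchers’ actual code.

from dataclasses import dataclass

@dataclass
class TaarofScenario:
    environment: str     # e.g., "taxi ride"
    location: str        # e.g., "Tehran"
    roles: str           # who offers, who should ritually refuse
    context: str         # background the model sees before responding
    user_utterance: str  # the taarof offer or refusal the model must answer

def respond(scenario: TaarofScenario) -> str:
    # Placeholder for a call to the model under test (GPT-4o, Claude, Llama, etc.).
    return "Thank you, that's very kind!"  # a typically Western, too-direct reply

def judge(scenario: TaarofScenario, reply: str) -> bool:
    # Placeholder for the human or LLM judge that checks taarof conformance.
    return "insist" in reply.lower() or "i must pay" in reply.lower()

scenarios = [
    TaarofScenario(
        environment="taxi ride",
        location="Tehran",
        roles="driver offers a free ride; passenger should insist on paying",
        context="The driver waves away payment at the end of the trip.",
        user_utterance="Be my guest this time.",
    )
]

accuracy = sum(judge(s, respond(s)) for s in scenarios) / len(scenarios)
print(f"taarof-conformant responses: {accuracy:.0%}")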

When “no” means “yes”: Why AI chatbots can’t process Persian social etiquette Read More »

OpenAI Shows Us The Money

They also show us the chips, and the data centers.

It is quite a large amount of money, and chips, and some very large data centers.

Nvidia to invest ‘as much as’ $100 billion (in cash) into OpenAI to support new data centers, with $10 billion in the first stage, on the heels of investing $5 billion into Intel.

Ian King and Shirin Ghaffary (Bloomberg): According to Huang, the project will encompass as many as 5 million of Nvidia’s chips, a number that’s equal to what the company will ship in total this year.

OpenAI: NVIDIA and OpenAI today announced a letter of intent for a landmark strategic partnership to deploy at least 10 gigawatts of NVIDIA systems for OpenAI’s next-generation AI infrastructure to train and run its next generation of models on the path to deploying superintelligence. To support this deployment including datacenter and power capacity, NVIDIA intends to invest up to $100 billion in OpenAI as the new NVIDIA systems are deployed. The first phase is targeted to come online in the second half of 2026 using NVIDIA’s Vera Rubin platform.

New Street Research estimates that OpenAI will spend $35 on Nvidia chips for every $10 that Nvidia invests in OpenAI. Nice work if you can get it.

Nvidia was up about 4.5% on the announcement. This project alone is a full year’s worth of Nvidia chips, which should further put to bed any question of Nvidia running out of Western customers for its chips, as discussed later in the post.

Note that the details of this deal are not finalized, which means the deal isn’t done.

OpenAI paired this with progressing its Stargate project with Oracle and SoftBank, expanding into five new AI data center sites and claiming they are on track to complete their $500 billion, 10-gigawatt commitment by the end of 2025, as these announcements plus the original site in Abilene, Texas, cover over $400 billion and 7 gigawatts over three years.

The new sites are Shackelford County, Texas; Doña Ana County, New Mexico; ‘a site in the Midwest to be announced soon’; the existing Oracle site in Lordstown, Ohio; and, next year, a site in Milam County, Texas, developed in partnership with SB Energy.

Berber Jin at WSJ described the full plan as a $1 trillion buildout of computing warehouses across the nation and beyond.

Sam Altman (CEO OpenAI): I don’t think we’ve figured out yet the final form of what financing for compute looks like.

But I assume, like in many other technological revolutions, figuring out the right answer to that will unlock a huge amount of value delivered to society.

To celebrate all this, Sam Altman wrote Abundant Intelligence.

What about what this means for training runs? It means there is no physical cap that is likely to bind, it will be more about what makes economic sense since compute will be in high demand on all fronts.

Peter Wildeford: 10GW AI training runs… probably with models 20x-100x larger than GPT-5. We knew this was going to come, and we still don’t have any updates on timeline. How many years the $100B is over is also unclear – the ultimate size of the investment is really contingent on this.

I’d expect the 10GW OpenAI cluster becomes operational around 2027-2028. GPT-5 was disappointing to some but it’s on trend for its size and true scaling hasn’t happened yet. Get ready for the next three years, and we will truly see what some scaled AI models can do.

What about for other purposes? Ambitions are running high.

Haider: Sam Altman says the industry is bottlenecked by compute, and OpenAI can’t meet demand

Over the next 1-2 years, scarcity could force painful tradeoffs,

“cure cancer research vs free global education”

No one wants to make that choice. The only answer: scale up.

Peter Wildeford: Nvidia is busy trying to sell chips to China while our American AI companies can’t get enough compute at home 😡

That seems like quite the timeline for either curing cancer or providing free global education. But yes, compute is going to be like any other limited resource, you’re going to have to make tradeoffs. Spending on health care and education cannot be unlimited, nor does it automatically ‘win in a fight’ against other needs.

Eyes are on the prize. That prize is superintelligence.

Should we have plans and policies designed around the idea that superintelligence is coming? Well, we certainly shouldn’t build our plans and policies around the idea that the AI companies do not expect superintelligence, or that they won’t try to build it.

Roon: all the largest technology fortunes in the world are on record saying they’re betting the entire bank on superintelligence, don’t care about losses, etc. the only growth sector left in planetary capitalism. It is dizzying, running on redline overload.

If that statement does not terrify you, either you did not understand it, you did not think through its consequences, or you have become numb to such things.

Gavin Baker (quoted as a reminder from BuccoCapital Bloke): Mark Zuckerberg, Satya and Sundar just told you in different ways, we are not even thinking about ROI. And the reason they said that is because the people who actually control these companies, the founders, there’s either super-voting stock or significant influence in the case of Microsoft, believe they’re in a race to create a Digital God.

And if you create that first Digital God, we could debate whether it’s tens of trillions or hundreds of trillions of value, and we can debate whether or not that’s ridiculous, but that is what they believe and they believe that if they lose that race, losing the race is an existential threat to the company.

So Larry Page has evidently said internally at Google many times, “I am willing to go bankrupt rather than lose this race.” So everybody is really focused on this ROI equation, but the people making the decisions are not, because they so strongly believe that scaling laws will continue, and there’s a big debate over whether these emergent properties are just in-context learning, et cetera, et cetera, but they believe scaling laws are going to continue. The models are going to get better and more capable, better at reasoning.

And because they have that belief, they’re going to spend until I think there is irrefutable evidence that scaling laws are slowing. And the only way you get irrefutable evidence is if progress slows; the reason progress has slowed since GPT-4 came out is that there hasn’t been a new generation of Nvidia GPUs. It’s what you need to enable that next real step function change in capability.

Mark Zuckerberg (last week): If we end up misspending a couple of hundred billion dollars, I think that that is going to be very unfortunate obviously. But what I’d say is I actually think the risk is higher on the other side.

If you build too slowly and then superintelligence is possible in 3 years, but you built it out assuming it would be there in 5 years, then you’re just out of position on what I think is going to be the most important technology that enables the most new products and innovation and value creation in history.

In case it is not obvious, no, they are not racing for ‘market share’ of inference tokens.

A central argument for going forward with AI, often cited by OpenAI CEO Sam Altman, used to be ‘compute overhang.’ As in, if we didn’t build AI now, then when we did build it we would have more compute available and progress would be faster, which would be more dangerous, so better to make progress now. Ah, simpler times.

Does Nvidia have, as Dan Gallagher claims in the Wall Street Journal, Too Much Money? As in so much money they don’t know what to do with it and it loses meaning.

Nvidia is set to generate $165 billion or so in annualized free cash flow by the end of 2028, according to consensus projections.

In the worst case you can always return value to shareholders. Nvidia wisely only pays the minimum $0.01 dividend (so it can be included in various financial products), but has already repurchased $50 billion of its own stock over the past year and plans to buy another $60 billion more.

The argument by Gallagher is largely that Nvidia cannot make a major acquisition without Chinese approval? I do not understand why they would care, given China is already actively telling companies not to buy Nvidia’s products? Seems like none of their damn business.

But also Nvidia can do further strategic investments and projects instead that can easily eat up this level of free cash flow, if they have no better use for it. Stock market investors would be happy to get more exposure to private companies like OpenAI.

Thinking around Nvidia on Wall Street has for a long time been rather silly and my only regret is I did not buy in far sooner (for those who don’t know, consider this my disclosure that I am overweight Nvidia and have been for a while, quite profitably). For example, a real newspaper like the WSJ will post that the ‘wow’ factor is wearing off because Nvidia is now beating analyst revenue projections by smaller amounts:

If a company keeps beating your expectations, that’s a failure of expectations, with that failure at least getting smaller over time.

Many noticed that when Nvidia invests in OpenAI which uses the money to buy chips from Nvidia, and you move $100 billion back and forth, this can inflate valuations without obviously creating real value.

Such things have happened before.

Andrew Cote: $1 of cash on NVIDIA’s balance sheet is worth $1.

But $1 of revenue at ~50% net margins with a 50x P/E ratio, is worth $25.
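A minimal sketch of the arithmetic behind that point, using the margin and multiple from Cote’s post and the New Street Research ratio quoted above; the figures are illustrative inputs, not a model of the actual deal terms.

# Cote's arithmetic: what a dollar of cash vs. a dollar of revenue is "worth"
# once a net margin and a P/E multiple are applied. Figures come from the
# quoted posts above, not from the deal itself.
net_margin = 0.50   # ~50% net margins
pe_ratio = 50       # 50x price-to-earnings

dollar_of_cash = 1.00
dollar_of_revenue = 1.00 * net_margin * pe_ratio  # flows to earnings, then gets multiplied
print(dollar_of_cash, dollar_of_revenue)          # 1.0 25.0

# New Street Research's ratio: $35 of chip purchases per $10 Nvidia invests,
# i.e. roughly 3.5x the invested cash comes back to Nvidia as revenue.
chip_spend_per_invested_dollar = 35 / 10
print(chip_spend_per_invested_dollar)             # 3.5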

Paul Graham (as quoted by Danielle Fong): By 1998, Yahoo was the beneficiary of a de facto Ponzi scheme. Investors were excited about the Internet. One reason they were excited was Yahoo’s revenue growth. So they invested in new Internet startups. The startups then used the money to buy ads on Yahoo to get traffic. Which caused yet more revenue growth for Yahoo, and further convinced investors the Internet was worth investing in.

When I realized this one day, sitting in my cubicle, I jumped up like Archimedes in his bathtub, except instead of “Eureka!” I was shouting “Sell!”

You do have to watch out for the de facto Ponzi scheme, but this is not that; from a pure business perspective it is a clear win-win agreement. You also do have to watch out for naively multiplying profits into a fixed P/E ratio, but the stock market is smarter than that. The only losers are other AI labs, Microsoft, and humanity’s chances of survival.

Peter Wildeford: Nvidia is doing revenue round-tripping at an unprecedented scale (Nvidia servers purchased by companies using money from Nvidia investment). Previously there was CoreWeave, Crusoe, xAI, and Lambda Labs but this is next level. I don’t think any company has ever done this at such a scale. This seems great if we continue AI boom times but could quickly unravel if AI is a bubble.

This is another big loss for Microsoft. Microsoft could’ve been OpenAI’s capital partner of choice as they got in early and had lock-in. But Microsoft wasn’t willing to keep delivering and backed out. This is another nail in the coffin of the Microsoft-OpenAI relationship.

Taking on Nvidia as another huge capital partner provides some much needed stability for OpenAI after being abandoned by Microsoft and their only other partner being SoftBank, which… doesn’t have the best track record.

I guess good to know OpenAI is committing to Nvidia chips for the long haul. Sounds like Microsoft isn’t close to cracking their own chip development like Amazon or Google. That’s good for Nvidia at least.

Ultimately, I am concerned that the US economy is increasingly a leveraged bet on AGI happening. Nvidia is 7% of the US stock market, so their investments matter. AGI happening may mean the end of humanity, but at least the S&P 500 will remain strong.

This probably explains why OpenAI doesn’t talk much about export controls on Nvidia products like Anthropic does… even though such controls would help OpenAI by undermining their Chinese competition.

Nvidia is a bet on AI happening, but in a safe fashion as they are doing it via reinvesting their profits rather than leverage. If AI valuations collapse, Nvidia only goes down roughly proportionally, and they remain a great company.

What about the implications of the deepening of the OpenAI-Nvidia partnership for policy, and willingness to speak out about existential risks?

A key worry has long been that OpenAI depends on Nvidia, so OpenAI may be reluctant to speak about AI risk lest they anger Nvidia and endanger their chip allocations. OpenAI has not been a great political actor, but Sam Altman appreciates many of the dangers and issues that will be at stake.

It is also fortunate that OpenAI’s interests in many ways align with America’s and with those of humanity, far more so than do the incentives of Nvidia. Nvidia wants to make and sell as many chips as possible to the widest variety of customers, whereas OpenAI has a strong interest in America beating China across the board and keeping things under control.

This situation could get far worse. Nvidia may even have made this part of the deal, with Altman agreeing to pursue or not pursue various political objectives, including around export controls. There is even the danger Nvidia ends up with a board seat.

It is likely not a coincidence that Anthropic has found a way to get its compute without Nvidia.

There is a matching bull case, on two fronts.

In terms of Nvidia impacting OpenAI, it is plausible that Nvidia already had maximal leverage over OpenAI via controlling chip allocations, especially if you add that it has the ear of the Administration. At some point, things can’t get any worse. And indeed, locking in these contracts could make things much better, instead, via locking in large commitments that reduce Nvidia’s real leverage.

Then there’s the question of OpenAI impacting Nvidia. Nvidia has a market cap of ~$4.3 trillion, but $100 billion is still ~2.3% of that, and the baseline scenario (since OpenAI is riskier and has more room to grow) is OpenAI equity grows in value faster than Nvidia equity, and Nvidia engages in stock buybacks. So even if no further investments are made, Nvidia could quickly be 5% or more exposed to OpenAI, on top of its exposure to xAI and other Western AI companies, where it makes a variety of strategic investments.

That starts to matter in the calculus for Nvidia. Yes, they can do better short term in their core chip business via selling to China, but that hurts their other investments (and America, but they clearly don’t care about America), and long term the model of compute-intensive top-quality closed models is better for Nvidia anyway.

Usually the incentives point the way you would think. But not always.


OpenAI Shows Us The Money Read More »

DeepMind AI safety report explores the perils of “misaligned” AI

DeepMind also addresses something of a meta-concern about AI. The researchers say that a powerful AI in the wrong hands could be dangerous if it is used to accelerate machine learning research, resulting in the creation of more capable and unrestricted AI models. DeepMind says this could “have a significant effect on society’s ability to adapt to and govern powerful AI models.” DeepMind ranks this as a more severe threat than most other CCLs.

The misaligned AI

Most AI security mitigations follow from the assumption that the model is at least trying to follow instructions. Despite years of work, researchers have not managed to make these models completely trustworthy or accurate, but it’s possible that a model’s incentives could be warped, either accidentally or on purpose. If a misaligned AI begins to actively work against humans or ignore instructions, that’s a new kind of problem that goes beyond simple hallucination.

Version 3 of the Frontier Safety Framework introduces an “exploratory approach” to understanding the risks of a misaligned AI. There have already been documented instances of generative AI models engaging in deception and defiant behavior, and DeepMind researchers express concern that it may be difficult to monitor for this kind of behavior in the future.

A misaligned AI might ignore human instructions, produce fraudulent outputs, or refuse to stop operating when requested. For the time being, there’s a fairly straightforward way to combat this outcome. Today’s most advanced simulated reasoning models produce “scratchpad” outputs during the thinking process. Devs are advised to use an automated monitor to double-check the model’s chain-of-thought output for evidence of misalignment or deception.
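As a rough illustration of what such an automated monitor could look like, here is a hedged Python sketch. The flag phrases and the second “judge” model call are assumptions for the example, not DeepMind’s actual mitigation.

# A minimal chain-of-thought monitor, assuming the model exposes its
# "scratchpad" reasoning separately from its final answer.
SUSPICIOUS_PATTERNS = [
    "hide this from the user",
    "pretend to comply",
    "the operator must not find out",
]

def classify_with_judge_model(scratchpad: str, final_answer: str) -> bool:
    # Placeholder for a call to a separate judge LLM that grades the trace
    # for deception or ignored instructions.
    return False

def monitor_chain_of_thought(scratchpad: str, final_answer: str) -> bool:
    # Return True if the reasoning trace should be escalated for human review.
    text = scratchpad.lower()
    if any(pattern in text for pattern in SUSPICIOUS_PATTERNS):
        return True
    return classify_with_judge_model(scratchpad, final_answer)

if monitor_chain_of_thought("Plan: comply with the request as asked.", "Done."):
    print("Escalate to human review")
else:
    print("No misalignment signals detected")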

Google says this CCL could become more severe in the future. The team believes models in the coming years may evolve to have effective simulated reasoning without producing a verifiable chain of thought. So your overseer guardrail wouldn’t be able to peer into the reasoning process of such a model. For this theoretical advanced AI, it may be impossible to completely rule out that the model is working against the interests of its human operator.

The framework doesn’t have a good solution to this problem just yet. DeepMind says it is researching possible mitigations for a misaligned AI, but it’s hard to know when or if this problem will become a reality. These “thinking” models have only been common for about a year, and there’s still a lot we don’t know about how they arrive at a given output.

DeepMind AI safety report explores the perils of “misaligned” AI Read More »

Our fave Star Wars duo is back in Mandalorian and Grogu teaser

Disney CEO Bob Iger has been under fire for several days now for pulling Jimmy Kimmel Live off the air “indefinitely,” with Disney+’s cancellation page actually crashing a couple of times from all the traffic as people rushed to make their displeasure known. So what better time for the studio to release the first teaser trailer for The Mandalorian and Grogu, a feature film spinoff from its megahit Star Wars series The Mandalorian? Grogu and Mando, together again on an exciting space adventure, will certainly be a crowd-pleaser.

Grogu (aka Baby Yoda) won viewers’ hearts from the moment he first appeared onscreen in the first season of The Mandalorian, and the relationship between the little green creature and his father-figure bounty hunter has only gotten stronger. With the 2023 Hollywood strikes delaying production on S4 of the series, director Jon Favreau got the green light to make this spinoff film.

Per the official logline:

The evil Empire has fallen, and Imperial warlords remain scattered throughout the galaxy. As the fledgling New Republic works to protect everything the Rebellion fought for, they have enlisted the help of legendary Mandalorian bounty hunter Din Djarin (Pedro Pascal) and his young apprentice Grogu.

In addition to Pascal, the cast includes Sigourney Weaver as Ward, a veteran pilot, colonel, and leader of the New Republic’s Adelphi Rangers. Jeremy Allen White plays Rotta the Hutt (son of Jabba, first introduced in 2008’s The Clone Wars), Jonny Coyne reprises his Mandalorian S3 role as an Imperial warlord leading a surviving faction of the Galactic Empire, and we can expect to see Garazeb (“Zeb”) Orrelios from the Star Wars Rebels animated series, too. And yes, that’s a shiny new version of Mando’s ship (destroyed in S2).

Our fave Star Wars duo is back in Mandalorian and Grogu teaser Read More »

Despite congressional threat, National Academies releases new climate report

The National Academies responded to the EPA’s actions by saying it would prepare a report of its own, which it did despite the threat of a congressional investigation into its work. And the result undercuts the EPA’s claims even further.

Blunt and to the point

The NAS report does not mess around with subtleties, going straight to the main point: Everything we’ve learned since the endangerment finding confirms that it was on target. “EPA’s 2009 finding that the human-caused emissions of greenhouse gases threaten human health and welfare was accurate, has stood the test of time, and is now reinforced by even stronger evidence,” its authors conclude.

That evidence includes a better understanding of the climate itself, with the report citing “Longer records, improved and more robust observational networks, and analytical and methodological advances” that have both allowed us to better detect the changes in the climate, and more reliably assign them to the effects of greenhouse gases. The events attributed to climate change are also clearly harming the welfare of the US public through things like limiting agricultural productivity gains, damage from wildfires, losses due to water scarcity, and general stresses on our infrastructure.

But it’s not just the indirect effects we have to worry about. The changing climate is harming us more directly as well:

Climate change intensifies risks to humans from exposures to extreme heat, ground-level ozone, airborne particulate matter, extreme weather events, and airborne allergens, affecting incidence of cardiovascular, respiratory, and other diseases. Climate change has increased exposure to pollutants from wildfire smoke and dust, which has been linked to adverse health effects. The increasing severity of some extreme events has contributed to injury, illness, and death in affected communities. Health impacts related to climate-sensitive infectious diseases—such as those carried by insects and contaminated water—have increased.

Moreover, it notes that one of the government’s arguments—that US emissions are too small to be meaningful—doesn’t hold water. Even small increments of change will increase the risk of damaging events for decades to come, and push the world closer to hitting potential tipping points in the climate system. Therefore, cutting US emissions will directly reduce those risks.

Despite congressional threat, National Academies releases new climate report Read More »

Bonkers CDC vaccine meeting ends with vote to keep COVID shot access

At one point, Hillary Blackburn, a pharmacist and daughter-in-law of Sen. Marsha Blackburn (R-Tenn.), noted that her mother developed lung cancer two years after getting a COVID-19 vaccine, suggesting, without any evidence, that there could be a link. Evelyn Griffin, an obstetrician and gynecologist in Louisiana who reportedly lost her job for refusing to get a COVID-19 vaccine, meanwhile, did her own research and tried to suggest that the mRNA in mRNA vaccines could be turned into DNA inside human cells and integrate into our genetic material. She made this assertion to a scientist at Pfizer (a maker of an mRNA COVID-19 vaccine), asking him to respond.

With admirable composure, the Pfizer scientist explained that it was not biologically plausible: “RNA cannot reverse transcribe to DNA and transport from the cytoplasm to the nucleus and then integrate. That requires a set of molecules and enzymes that don’t exist in humans and are largely reserved for retroviruses.”

At the very start of the meeting, liaisons from mainstream medical organizations pressed that the ACIP committee needs to ditch such anecdotal nonsense and unvetted data, and return to the high-quality framework for evidence-based decision-making that ACIP has used in the past, which involves comprehensive, methodical evaluations.

Retsef Levi, who works on operations management and has publicly said that COVID-19 vaccines should be removed from the market, responded by falsely claiming that there are no high-quality clinical trials to show vaccine safety, so calls to return to methodological rigor for policy making are hypocritical. “With all due respect, I just encourage all of us to be a little bit more humble,” Levi, who was the head of the ACIP’s COVID-19 working group, said.

During his response, a hot mic picked up someone saying, “You’re an idiot.” It’s unclear who the speaker was—or how many other people they were speaking for.

This post was updated to include the adoption of the recommendation by the CDC.

Bonkers CDC vaccine meeting ends with vote to keep COVID shot access Read More »

Two UK teens charged in connection to Scattered Spider ransomware attacks

Federal prosecutors charged a UK teenager with conspiracy to commit computer fraud and other crimes in connection with the network intrusions of 47 US companies that generated more than $115 million in ransomware payments over a three-year span.

A criminal complaint unsealed on Thursday (PDF) said that Thalha Jubair, 19, of London, was part of Scattered Spider, the name of an English-speaking group that has breached the networks of scores of companies worldwide. After obtaining data, the group demanded that the victims pay hefty ransoms or see their confidential data published or sold.

Bitcoin paid by victims recovered

The unsealing of the document, filed in US District Court of the District of New Jersey, came the same day Jubair and another alleged Scattered Spider member—Owen Flowers, 18, from Walsall, West Midlands—were charged by UK prosecutors in connection with last year’s cyberattack on Transport for London. The agency, which oversees London’s public transit system, faced a monthslong recovery effort as a result of the breach.

Both men were arrested at their homes on Thursday and appeared later in the day at Westminster Magistrates Court, where they were remanded to appear in Crown Court on October 16, Britain’s National Crime Agency said. Flowers was previously arrested in connection with the Transport for London attack in September 2024 and later released. NCA prosecutors said that besides the attack on the transit agency, Flowers and other conspirators were responsible for a cyberattack on SSM Health Care and attempting to breach Sutter Health, both of which are located in the US. Jubair was also charged with offenses related to his refusal to turn over PIN codes and passwords for devices seized from him.

Two UK teens charged in connection to Scattered Spider ransomware attacks Read More »

How weak passwords and other failings led to catastrophic breach of Ascension


THE BREACH THAT DIDN’T HAVE TO HAPPEN

A deep-dive into Active Directory and how “Kerberoasting” breaks it wide open.

Credit: Aurich Lawson | Getty Images

Last week, a prominent US senator called on the Federal Trade Commission to investigate Microsoft for cybersecurity negligence over the role it played last year in health giant Ascension’s ransomware breach, which caused life-threatening disruptions at 140 hospitals and put the medical records of 5.6 million patients into the hands of the attackers. Lost in the focus on Microsoft was something just as urgent, if not more so: never-before-revealed details that now invite scrutiny of Ascension’s own security failings.

In a letter sent last week to FTC Chairman Andrew Ferguson, Sen. Ron Wyden (D-Ore.) said an investigation by his office determined that the hack began in February 2024 with the infection of a contractor’s laptop after they downloaded malware from a link returned by Microsoft’s Bing search engine. The attackers then pivoted from the contractor device to Ascension’s most valuable network asset: the Windows Active Directory, a tool administrators use to create and delete user accounts and manage their system privileges. Obtaining control of the Active Directory is tantamount to obtaining a master key that will open any door in a restricted building.

Wyden blasted Microsoft for its continued support of its three-decades-old implementation of the Kerberos authentication protocol, which uses an insecure cipher and, as the senator noted, exposes customers to precisely the type of breach Ascension suffered. Although modern versions of Active Directory use a more secure authentication mechanism by default, they will fall back to the weaker one in the event a device on the network—including one that has been infected with malware—sends an authentication request that uses it. That enabled the attackers to perform Kerberoasting, a form of attack that Wyden said was used to pivot from the contractor laptop directly to the crown jewel of Ascension’s network security.

A researcher asks: “Why?”

Left out of Wyden’s letter—and in social media posts that discussed it—was any scrutiny of Ascension’s role in the breach, which, based on Wyden’s account, was considerable. Chief among the suspected security lapses is a weak password. By definition, Kerberoasting attacks work only when a password is weak enough to be cracked, raising questions about the strength of the one the Ascension ransomware attackers compromised.

“Fundamentally, the issue that leads to Kerberoasting is bad passwords,” Tim Medin, the researcher who coined the term Kerberoasting, said in an interview. “Even at 10 characters, a random password would be infeasible to crack. This leads me to believe the password wasn’t random at all.”

Medin’s math is based on the number of password combinations possible with a 10-character password. Assuming it used a randomly generated assortment of upper- and lowercase letters, numbers, and special characters, the number of different combinations would be 95^10—that is, the number of possible characters (95) raised to the power of 10, the number of characters used in the password. Even when hashed with the insecure NTLM function the old authentication uses, such a password would take more than five years for a brute-force attack to exhaust every possible combination. Exhausting every possible 25-character password would require more time than the universe has existed.
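The arithmetic is easy to reproduce. In the sketch below, the cracking rate is an assumption (the article says only “billions of guesses per second” for the old NTLM/RC4 scheme); the keyspace figures follow directly from the 95-character set.

# Reproducing the keyspace math above. The guess rate is an assumed figure
# for a GPU cracking rig attacking single-iteration NTLM hashes.
charset = 95                  # upper, lower, digits, special characters
guesses_per_second = 300e9    # assumed throughput, "billions per second"

def years_to_exhaust(length: int) -> float:
    keyspace = charset ** length
    return keyspace / guesses_per_second / (3600 * 24 * 365)

print(f"10 chars: {years_to_exhaust(10):.1f} years")   # ~6 years at this rate
print(f"25 chars: {years_to_exhaust(25):.2e} years")   # dwarfs the age of the universe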

“The password was clearly not randomly generated. (Or if it was, was way too short… which would be really odd),” Medin added. Ascension “admins selected a password that was crackable and did not use the recommended Managed Service Account as prescribed by Microsoft and others.”

It’s not clear precisely how long the Ascension attackers spent trying to crack the stolen hash before succeeding. Wyden said only that the laptop compromise occurred in February 2024. Ascension, meanwhile, has said that it first noticed signs of the network compromise on May 8. That means the offline portion of the attack could have taken as long as three months, which would indicate the password was at least moderately strong. The crack may have required less time, since ransomware attackers often spend weeks or months gaining the access they need to encrypt systems.

Richard Gold, an independent researcher with expertise in Active Directory security, agreed the strength of the password is suspect, but he went on to say that based on Wyden’s account of the breach, other security lapses are also likely.

“All the boring, unsexy but effective security stuff was missing—network segmentation, principle of least privilege, need to know and even the kind of asset tiering recommended by Microsoft,” he wrote. “These foundational principles of security architecture were not being followed. Why?”

Chief among the lapses, Gold said, was the failure to properly allocate privileges, which likely was the biggest contributor to the breach.

“It’s obviously not great that obsolete ciphers are still in use and they do help with this attack, but excessive privileges are much more dangerous,” he wrote. “It’s basically an accident waiting to happen. Compromise of one user’s machine should not lead directly to domain compromise.”

Ascension didn’t respond to emails asking about the compromised password and its other security practices.

Kerberos and Active Directory 101

Kerberos was developed in the 1980s as a way for two or more devices—typically a client and a server—inside a non-secure network to securely prove their identity to each other. The protocol was designed to avoid long-term trust between various devices by relying on temporary, limited-time credentials known as tickets. This design protects against replay attacks that copy a valid authentication request and reuse it to gain unauthorized access. The Kerberos protocol is cipher- and algorithm-agnostic, allowing developers to choose the ones most suitable for the implementation they’re building.

Microsoft’s first Kerberos implementation protects a password from cracking attacks by representing it as a hash generated with a single iteration of Microsoft’s NTLM cryptographic hash function, which itself is a modification of the super-fast, and now deprecated, MD4 hash function. Three decades ago, that design was adequate, and hardware couldn’t support slower hashes well anyway. With the advent of modern password-cracking techniques, all but the strongest Kerberos passwords can be cracked, often in a matter of seconds. The first Windows version of Kerberos also uses RC4, a now-deprecated symmetric encryption cipher with serious vulnerabilities that have been well documented over the past 15 years.

A very simplified description of the steps involved in Kerberos-based Active Directory authentication is:

1a. The client sends a request to the Windows Domain Controller (more specifically a Domain Controller component known as the KDC) for a TGT, short for “Ticket-Granting Ticket.” To prove that the request is coming from an account authorized to be on the network, the client encrypts the timestamp of the request using the hash of its network password. This step, and step 1b below, occur each time the client logs in to the Windows network.

1b. The Domain Controller checks the hash against a list of credentials authorized to make such a request (i.e., is authorized to join the network). If the Domain Controller approves, it sends the client a TGT that’s encrypted with the password hash of the KRBTGT, a special account only known to the Domain Controller. The TGT, which contains information about the user such as the username and group memberships, is stored in the computer memory of the client.

2a. When the client needs access to a service such as the Microsoft SQL server, it sends the Domain Controller a request with the encrypted TGT from memory appended.

2b. The Domain Controller verifies the TGT and builds a service ticket. The service ticket is encrypted using the password hash of SQL or another service and sent back to the account holder.

3a. The account holder presents the encrypted service ticket to the SQL server or the other service.

3b. The service decrypts the ticket and checks if the account is allowed access on that service and if so, with what level of privileges.

With that, the service grants the account access. The following image illustrates the process, although the numbers in it don’t directly correspond to the numbers in the above summary.

Credit: Tim Medin/RedSiege
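The exchange can also be modeled as a toy Python sketch. The dictionary of password-derived keys and the string “tickets” are stand-ins for real encryption, and the account names are made up; the point is only to show which account’s password hash protects each ticket.

from dataclasses import dataclass

# Toy key store: in real Active Directory these are password-derived Kerberos keys.
ACCOUNT_KEYS = {"alice": "hash(alice_pw)", "krbtgt": "hash(krbtgt_pw)", "sqlsvc": "hash(sqlsvc_pw)"}

@dataclass
class Ticket:
    payload: str
    encrypted_with: str  # which account's password hash protects the ticket

def request_tgt(user: str) -> Ticket:                              # steps 1a/1b
    assert user in ACCOUNT_KEYS, "unknown account"
    return Ticket(payload=f"TGT for {user}", encrypted_with=ACCOUNT_KEYS["krbtgt"])

def request_service_ticket(tgt: Ticket, service: str) -> Ticket:   # steps 2a/2b
    assert tgt.encrypted_with == ACCOUNT_KEYS["krbtgt"], "invalid TGT"
    return Ticket(payload=f"access to {service}", encrypted_with=ACCOUNT_KEYS[service])

def present_to_service(ticket: Ticket, service: str) -> bool:      # steps 3a/3b
    return ticket.encrypted_with == ACCOUNT_KEYS[service]

tgt = request_tgt("alice")
st = request_service_ticket(tgt, "sqlsvc")
print(present_to_service(st, "sqlsvc"))  # True: the service can decrypt, so access is granted

The detail that matters for what follows is visible in step 2b: the service ticket handed back to the client is protected with the service account’s own password hash, which is why merely holding that ticket lets an attacker attack the password offline.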

Getting roasted

In 2014, Medin appeared at the DerbyCon Security Conference in Louisville, Kentucky, and presented an attack he had dubbed Kerberoasting. It exploited the ability for any valid user account—including a compromised one—to request a service ticket (step 2a above) and receive an encrypted service ticket (step 2b).

Once a compromised account received the ticket, the attacker downloaded the ticket and carried out an offline cracking attack, which typically uses large clusters of GPUs or ASIC chips that can generate large numbers of password guesses. Because Windows by default hashed passwords with a single iteration of the fast NTLM function using RC4, these attacks could generate billions of guesses per second. Once the attacker guessed the right combination, they could use the cracked password to log in to the service account and gain unauthorized access to the service, which would otherwise be off limits.
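Conceptually, the offline phase is a simple guessing loop. In this hedged sketch, ntlm_hash() and ticket_decrypts_with() are stand-ins for the real MD4/RC4-HMAC cryptography that cracking tools implement, and the ticket bytes and wordlist are made up for illustration.

def ntlm_hash(password: str) -> bytes:
    # Real NTLM is a single MD4 over the UTF-16LE-encoded password; stubbed here.
    return password.encode("utf-16-le")

def ticket_decrypts_with(ticket: bytes, key: bytes) -> bool:
    # Real check: try to decrypt the service ticket and verify its checksum.
    return key == b"s\x00q\x00l\x002\x000\x002\x004\x00"  # toy stand-in for "sql2024"

def kerberoast(ticket: bytes, wordlist: list[str]) -> str | None:
    for candidate in wordlist:              # billions per second on GPU rigs in reality
        if ticket_decrypts_with(ticket, ntlm_hash(candidate)):
            return candidate                # service account password recovered
    return None

print(kerberoast(b"<captured TGS ticket>", ["Winter2023!", "sql2024", "hunter2"]))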

Even before Kerberoasting debuted, Microsoft in 2008 introduced a newer, more secure authentication method for Active Directory. The method also implemented Kerberos but relied on the time-tested AES256 encryption algorithm and iterated the resulting hash 4,096 times by default. That meant the newer method made offline cracking attacks much less feasible, since they could make only millions of guesses per second. Out of concern for breaking older systems that didn’t support the newer method, though, Microsoft didn’t make it the default until 2020.

Even in 2025, however, Active Directory continues to support the old RC4/NTLM method, although admins can configure Windows to block its usage. By default, though, when the Active Directory server receives a request using the weaker method, it will respond with a ticket that also uses it. The choice is the result of a tradeoff Windows architects made—the continued support of legacy devices that remain widely used and can only use RC4/NTLM, at the cost of leaving networks open to Kerberoasting.

Many organizations using Windows understand the trade-off, but many don’t. It wasn’t until last October—five months after the Ascension compromise—that Microsoft finally warned that the default fallback made users “more susceptible to [Kerberoasting] because it uses no salt or iterated hash when converting a password to an encryption key, allowing the cyberthreat actor to guess more passwords quickly.”

Microsoft went on to say that it would disable RC4 “by default” in non-specified future Windows updates. Last week, in response to Wyden’s letter, the company said for the first time that starting in the first quarter of next year, new installations of Active Directory using Windows Server 2025 will, by default, disable the weaker Kerberos implementation.

Medin questioned the efficacy of Microsoft’s plans.

“The problem is, very few organizations are setting up new installations,” he explained. “Most new companies just use the cloud, so that change is largely irrelevant.”

Ascension called on the carpet

Wyden has focused on Microsoft’s decision to continue supporting the default fallback to the weaker implementation; to delay and bury formal warnings about the settings that make customers susceptible to Kerberoasting; and to not mandate that passwords be at least 14 characters long, as Microsoft’s guidance recommends. To date, however, there has been almost no attention paid to Ascension’s failings that made the attack possible.

As a health provider, Ascension likely uses legacy medical equipment—an older X-ray or MRI machine, for instance—that can only connect to Windows networks with the older implementation. But even then, there are measures the organization could have taken to prevent the one-two pivot from the infected laptop to the Active Directory, both Gold and Medin said. The most likely contributor to the breach, both said, was the crackable password. They said it’s hard to conceive of a truly random password with 14 or more characters that could have suffered that fate.

“IMO, the bigger issue is the bad passwords behind Kerberos, not as much RC4,” Medin wrote in a direct message. “RC4 isn’t great, but with a good password you’re fine.” He continued:

Yes, RC4 should be turned off. However, Kerberoasting still works against AES encrypted tickets. It is just about 1,000 times slower. If you compare that to the additional characters, even making the password two characters longer increases the computational power 5x more than AES alone. If the password is really bad, and I’ve seen plenty of those, the additional 1,000x from AES doesn’t make a difference.
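The arithmetic behind that comparison is straightforward. In the sketch below, the ~1,000x slowdown is Medin’s estimate for AES with 4,096 iterations versus single-iteration NTLM/RC4; the keyspace factor for two extra characters depends on what character set you assume (95 printable ASCII characters here, with a smaller assumed set shown for comparison, which lands near the multiple he cites).

# Slowing the hash vs. lengthening the password.
aes_slowdown = 1_000  # guesses get ~1,000x slower, per Medin

for charset_size in (95, 70):            # full printable ASCII vs. a smaller assumed set
    two_more_chars = charset_size ** 2   # keyspace growth from two extra random characters
    print(charset_size, two_more_chars, two_more_chars / aes_slowdown)
# Either way, two extra random characters buy more cracking resistance
# than switching the cipher alone.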

Medin also said that Ascension could have protected the breached service with Managed Service Account, a Microsoft service for managing passwords.

“MSA passwords are randomly generated and automatically rotated,” he explained. “It 100% kills Kerberoasting.”

Gold said Ascension likely could have blocked the weaker Kerberos implementation in its main network and supported it only in a segmented part that tightly restricted the accounts that could use it. Gold and Medin said Wyden’s account of the breach shows Ascension failed to implement this and other standard defensive measures, including network intrusion detection.

Specifically, the ability of the attackers to remain undetected between February—when the contractor’s laptop was infected—and May—when Ascension first detected the breach—invites suspicions that the company didn’t follow basic security practices in its network. Those lapses likely include inadequate firewalling of client devices and insufficient detection of compromised devices, ongoing Kerberoasting, and similar well-understood techniques for moving laterally through the health provider’s network, the researchers said.

The catastrophe that didn’t have to happen

The results of the Ascension breach were catastrophic. With medical personnel locked out of electronic health records and systems for coordinating basic patient care such as medications, surgical procedures, and tests, hospital employees reported lapses that threatened patients’ lives. The ransomware also stole the medical records and other personal information of 5.6 million patients. Disruptions throughout the Ascension health network continued for weeks.

Amid Ascension’s decision not to discuss the attack, there aren’t enough details to provide a complete autopsy of Ascension’s missteps and the measures the company could have taken to prevent the network breach. In general, though, the one-two pivot indicates a failure to follow various well-established security approaches. One of them is known as defense in depth. The principle is similar to the reason submarines have layered measures to protect against hull breaches and to fight onboard fires: in the event one fails, another will still contain the danger.

The other neglected approach—known as zero trust—is, as WIRED explains, a “holistic approach to minimizing damage” even when hack attempts do succeed. Zero-trust designs are the direct inverse of the traditional, perimeter-enforced hard on the outside, soft on the inside approach to network security. Zero trust assumes the network will be breached and builds the resiliency for it to withstand or contain the compromise anyway.

The ability of a single compromised Ascension-connected computer to bring down the health giant’s entire network in such a devastating way is the strongest indication yet that the company failed its patients spectacularly. Ultimately, the network architects are responsible, but as Wyden has argued, Microsoft deserves blame, too, for failing to make the risks and precautionary measures for Kerberoasting more explicit.

As security expert HD Moore observed in an interview, even if the Kerberoasting attack hadn’t been available to the ransomware hackers, “it seems likely that there were dozens of other options for an attacker (standard bloodhound-style lateral movement, digging through logon scripts and network shares, etc).” The point being: Shutting down one viable attack path is no guarantee that others don’t remain.

All of that is undeniable. It’s also indisputable that in 2025, there’s no excuse for an organization as big and sensitive as Ascension suffering a Kerberoasting attack, and that both Ascension and Microsoft share blame for the breach.

“When I came up with Kerberoasting in 2014, I never thought it would live for more than a year or two,” Medin wrote in a post published the same day as the Wyden letter. “I (erroneously) thought that people would clean up the poor, dated credentials and move to more secure encryption. Here we are 11 years later, and unfortunately it still works more often than it should.”


Dan Goodin is Senior Security Editor at Ars Technica, where he oversees coverage of malware, computer espionage, botnets, hardware hacking, encryption, and passwords. In his spare time, he enjoys gardening, cooking, and following the independent music scene. Dan is based in San Francisco. Follow him on Mastodon and Bluesky. Contact him on Signal at DanArs.82.

How weak passwords and other failings led to catastrophic breach of Ascension Read More »