Author name: Paul Patrick


Is the Dream Chaser space plane ever going to launch into orbit?

“We wanted to have a fuel system that was green instead of using hypergolics, so we could land it on a runway and we could walk up to the vehicle without being in hazmat suits,” Tom Vice, then Sierra’s chief executive, told Ars in late 2023. “That was hard, I have to say.”

Apparently it still is: according to Weigel, the process of finishing propulsion system testing and certifying it for an uncrewed spaceflight remains ongoing.

“We still have some of our integrated safety reviews to do, and we’re in the process with updating both of our schedules to try to understand where does that really put us,” she said. “And so Sierra’s working on that, and so I need to wait and just get information back from them to see where they think some of that work lines out.”

First mission may not berth with ISS

According to one source, Sierra is considering a modification to its first mission to shorten the certification period.

The company had planned to fly the vehicle close enough to the space station such that it could be captured and berthed to the orbiting laboratory. One option now under consideration is a mission that would bring Dream Chaser close enough to the station to test key elements of the vehicle in flight but not have it berth.

This would increase confidence in the spacecraft’s propulsion system and provide the data NASA and partner space agencies need to clear the vehicle to approach and berth with the station on its second flight. However, this would require a modification of the company’s contract with NASA, and a final decision has not yet been reached on whether to perform a flyby mission before an actual berthing.

For these technical reasons, it appears highly unlikely that Dream Chaser will be ready for its debut spaceflight this year. Another challenge is the availability of its Vulcan launch vehicle. After years of delays, Vulcan is finally due to make its first national security launch as early as this coming Sunday. Assuming this launch is successful, Vulcan has a busy manifest in the coming months for the US Space Force.

Given this, it is uncertain when a Vulcan launch vehicle will become available for Dream Chaser, which was initially designated to fly on Vulcan’s second flight. However, because Dream Chaser was not ready last fall, that rocket flew with a mass simulator on its second launch, back in October 2024.



Four radioactive wasp nests found on South Carolina nuclear facility

According to the DOE, the site produced 165 million gallons of radioactive liquid waste, which has been evaporated to 34 million gallons. The site has 51 waste tanks, eight of which have been operationally closed, with the remaining 43 in various states of the closure process.

Outside experts have been quick to point out critical information missing from the DOE’s nest report, including the absolute level of radioactivity found in the nest, the specific isotopes that were found, and the type of wasps that built the nest. Some wasps build their nests from mud, while others might use chewed-up pulp from wood.

Timothy Mousseau, a biologist at the University of South Carolina who studies organisms and ecosystems in radioactive regions, told the Times that the DOE’s explanation that the wasps gathered legacy contamination for their homes is not unreasonable. “There’s some legacy radioactive contamination sitting around in the mud in the bottom of the lakes, or, you know, here and there,” he said.

“The main concern relates to whether or not there are large areas of significant contamination that have escaped surveillance in the past,” Mousseau said. “Alternatively, this could indicate that there is some new or old radioactive contamination that is coming to the surface that was unexpected.”

The DOE report on the first wasp nest said that the nest was sprayed to kill the wasps, then bagged as radioactive waste. The ground and area around where the nest had been showed no further contamination.

In a statement to the Aiken Standard, officials working at the DOE site noted that the wasps themselves pose little risk to the community—they likely have lower contamination on them and generally don’t stray more than a few hundred yards from their nests.

However, the Times pointed out a report from 2017, when officials at SRS found radioactive bird droppings on the roof of a building at the site. Birds can carry radioactive material long distances, Mousseau said.



Idaho has become the wild frontier of vaccination policy and public health


Idaho charts a new path as trust in public health craters.

Some 280,000 people live in the five northernmost counties of Idaho. One of the key public officials responsible for their health is Thomas Fletcher, a retired radiologist who lives on a 160-acre farm near Sandpoint.

Fletcher grew up in Texas and moved to Idaho in 2016, looking for a place where he could live a rural life alongside likeminded conservatives. In 2022, he joined the seven-member board of health of the Panhandle Health District, the regional public health authority, and he was appointed chairman last summer.

PHD handles everything from cancer screenings to restaurant hygiene inspections, and the business of the board is often mundane, almost invisible. Then, this February, Fletcher issued a short announcement online. Parents, he wrote, should be informed of the potential harms of common childhood vaccines. It was time for the board to discuss how best to communicate those risks, rather than “withholding information contra the CDC narrative.” Fletcher invited everyone who believes in “full disclosure and transparency when providing informed consent on childhood vaccines” to attend the next monthly meeting of the board, on a Thursday afternoon.

PHD board meetings tend to be sparsely attended. This one was standing-room only—the start of a monthslong debate over vaccine safety and the question of what, exactly, it means to provide informed consent.

Versions of that debate are playing out across the United States in the aftermath of the COVID-19 pandemic, which many Americans believe was badly mismanaged. The backlash has upended longstanding norms in public health: The nation’s top health official, Robert F. Kennedy Jr., publicly questions the value of common vaccines. Prominent vaccine skeptics now sit on a key advisory committee that shapes immunization practices nationwide. Polls suggest that trust in health authorities is politically polarized — and perhaps historically low. Immunization rates are dropping across the country. And many advocates are promoting a vision of public health that’s less dependent on mandates and appeals to authority, and more deferent to individuals’ beliefs.

Much of that energy has been reflected in Kennedy’s Make America Healthy Again, or MAHA, movement. The coalition is diverse — and has sometimes fractured over vaccination issues — but often channels a long-running argument that Americans should have more freedom to choose or reject vaccines and other health measures.

The backlash against traditional health authorities, said Columbia University medical historian James Colgrove, is unprecedented in recent US history. “It’s been a very, very long time since we’ve been in a place like this,” he said.

Perhaps more than anywhere else in the country, Idaho has experienced these shifts—an ongoing experiment that shows what it looks like to put a vision of individual health freedom into practice. And places like the Panhandle Health District have become testing grounds for big questions: What happens when communities move away from widespread and mandated vaccination? And what does it mean to turn MAHA principles into local public health policy?

During a recent visit to Idaho, Kennedy described the state as “the home of medical freedom.” In April, Gov. Brad Little signed the Idaho Medical Freedom Act, which bans schools, businesses, and government agencies from requiring people to participate in medical interventions, such as mask-wearing or vaccination, in order to receive services. It’s the first legislation of its kind in the country. The bill has a carveout that keeps school vaccine requirements in place, but those requirements are already mostly symbolic: The state’s exemption policy is so broad that, as one Idaho pediatrician told Undark, “you can write on a napkin, ‘I don’t want my kids to get shots because of philosophical reasons,’ and they can go to kindergarten.” Overall, reported vaccination rates for kindergarteners in Idaho are now lower than in any other state that reported data to the federal government—especially in the Panhandle Health District, where fewer than two-thirds arrive with records showing that they are up-to-date on common shots.

“It’s really kind of like watching a car accident in slow motion,” said Ted Epperly, a physician and the CEO of Full Circle Health, which operates a network of clinics in the Boise area.

A view of Sandpoint, Idaho, which sits on the shores of Lake Pend Oreille. The city, a part of Bonner County, is served by the Panhandle Health District.

Credit: Kirk Fisher/iStock/Getty Images Plus

Public health leaders often ascribe the low vaccination rates to the work of bad-faith actors who profit from falsehoods, to the spread of misinformation, or to failures of communication: If only leaders could better explain the benefits of vaccination, this thinking goes, more people would get shots.

In interviews and public statements, health freedom advocates in Idaho describe a far deeper rift: They do not believe that public health institutions are competent or trustworthy. And restoring that trust, they argue, will require radical changes.

Fletcher, for his part, describes himself as an admirer of RFK Jr. and the Make America Healthy Again movement. With the recent appointment of a new member, he said, MAHA supporters now hold a majority on the board, where they are poised to reimagine public health work in the district.

Local public health

In the US, public health is mostly local. Agencies like the Centers for Disease Control and Prevention conduct research and issue influential recommendations. But much of the actual power rests with the country’s thousands of state, local, and tribal public health authorities—with institutions, in other words, like the Panhandle Health District, and with leaders like Fletcher and his fellow PHD board of health member Duke Johnson.

Johnson says he grew up in Coeur d’Alene, Idaho, in the 1960s, the descendant of homesteaders who arrived in the 19th century. He attended medical school at the University of California, Los Angeles and eventually returned to Idaho, where he runs a family medical practice and dietary supplement business in the town of Hayden.

In Idaho, health boards are appointed by elected county commissioners. The commissioners of Kootenai County gave Johnson the nod in July 2023. Johnson took the role, he said, in order to restore trust in a medical system that he characterized as beholden to rigid dogmas and protocols rather than independent thinking.

In interviews and public statements, health freedom advocates in Idaho describe a far deeper rift: They do not believe that public health institutions are competent or trustworthy.

Last winter, Johnson took a tour of one of the PHD clinics. Among other services, it provides routine childhood immunizations, especially for families with limited access to health care. As is standard in pediatric practices, the clinic hands out flyers from the CDC that review the potential side effects of common vaccines, including “a very remote chance” of severe outcomes. Johnson was unimpressed with the CDC writeup. “I thought: This isn’t completely covering all of the risk-benefit ratio,” Johnson said. He felt families could be better informed about what he sees as the substantial risks of common shots.

Johnson is an outlier among physicians. The overwhelming majority of laboratory scientists, epidemiologists, and pediatricians who have devoted their lives to the study of childhood disease say that routine immunizations are beneficial, and that serious side effects are rare. Large-scale studies have repeatedly failed to find purported links between the measles-mumps-rubella, or MMR, vaccine and autism, or to identify high rates of severe side effects for other routine childhood immunizations. The introduction of mass vaccinations in the US in the 1950s and 1960s was followed by dramatic declines in the rates of childhood diseases like polio and measles that once killed hundreds of American children each year, and sent tens of thousands more to the hospital. Similar declines have been recorded around the world.

Children can suffer side effects from common shots like the MMR vaccine, ranging from mild symptoms like a rash or fever to rare fatal complications. Public health agencies and vaccine manufacturers study and track those side effects. But today, many Americans simply do not trust that those institutions are being transparent about the risks of vaccination.

Johnson shares some of those concerns. The website for his clinic, Heart of Hope Health, describes offering services for “injection-injured” patients, encouraging them to receive a $449 heart scan, and advertises “no forced masks or vaccinations.” (During a PHD board meeting, Johnson said that one of his own children suffered an apparent bad reaction to a vaccine many years ago.) “The lack of trust in established medicine is probably 10 times bigger than the people at Harvard Medical School realize,” Johnson told Undark during an evening phone call, after a long day seeing patients. Top medical institutions have brilliant scientists on staff, he continued. But, he suggested, those experts have lost touch with how they’re seen by much of the public: “I think sometimes you can spend so much time talking to the same people who agree with you that you’re not reaching the people on the street who are the ones who need the care. And I’m in the trenches.”

Many public health experts agree that restoring trust is an urgent priority, and they are convinced that it will come through better communication, a reduction in the circulation of misinformation, and a rebuilding of relationships. Johnson and others in the health freedom movement frequently adopt the language of restoring trust, too. But for them, the process tends to mean something different: an overhaul of public health institutions and a frank accounting of their perceived failures.

At the board meeting in February, Johnson laid out the proposal for a change in policy: What if the board wrote up its own document for parents, explaining the evidence behind specific vaccines, and laying out the risks and benefits of the shots? The goal, he told Undark, was “to make sure that the people that we’re responsible for in our district can make an informed decision.”

Fletcher was also hoping to change the way PHD communicated about vaccines. Why did a push for informed consent appeal to him? “I can summarize the answer to that question with one word,” Fletcher said. “COVID.”

Nobody’s telling me what to do

Idaho is ideologically diverse, with blue pockets in cities like Boise, and texture to its overwhelming Republican majority. (Latter-Day Saint conservatives in East Idaho, for example, may not always be aligned with government-skeptical activists clustered in the north.) Parts of the state have a reputation for libertarian politics—and for resistance to perceived excesses of government authority.

People came West because “they wanted to get out to a place where nobody would tell them what to do,” said Epperly, the Boise-area physician and administrator. That libertarian ethos, he said, can sometimes translate into a skepticism of things like school vaccination requirements, even as plenty of Idahoans, including Epperly, embrace them.

Like all US states, Idaho technically requires vaccination for children to attend school. But it is relatively easy to opt out of the requirement. In 2021, Idaho lawmakers went further, instructing schools to be proactive and notify parents they had the option to claim an exemption.

“Idaho has some of the strongest languages in the US when it comes to parental rights and vaccine exemptions,” the vaccine-skeptical advocacy group Health Freedom Idaho wrote in 2021. In the 2024–2025 school year, more than 15 percent of kindergarten parents in the state claimed a non-medical exemption, the highest percentage, by far, of the states that reported data.

The pandemic, Epperly and other Idaho health care practitioners said, accelerated many of these trends. In his view, much of that backlash was about authority and control. “The pandemic acted as a catalyst to increase this sense of governmental overreach, if you will,” he said. The thinking, he added, was: “‘How dare the federal government mandate that we wear masks, that we socially distance, that we hand-wash?’”

Recently, advocates have pushed to remove medical mandates in the state altogether through the Idaho Medical Freedom Act, which curtails the ability of local governments, businesses, and schools to impose things like mask mandates or vaccine requirements.

The author of the original bill is Leslie Manookian, an Idaho activist who has campaigned against what she describes as the pervasive dangers of some vaccines, and who leads a national nonprofit, the Health Freedom Defense Fund. In testimony to an Idaho state Senate committee this February, she described feeling shocked by mitigation measures during the COVID-19 pandemic. “Growing up, I could have never, ever imagined that Idaho would become a place that locked its people down, forced citizens to cover their faces, stand on floor markers 6 feet apart, or produce proof of vaccination in order to enter a venue or a business,” Manookian told the senators.

“Idaho has some of the strongest languages in the US when it comes to parental rights and vaccine exemptions.”

Where some public health officials saw vital interventions for the public’s well-being, Manookian saw a form of government overreach, based on scant evidence. Her home state, she argued, could be a leader in building a post-COVID vision of public health. “Idaho wants to be the shining light on the Hill, that leads the way for the rest of the nation in understanding that we and we alone are sovereign over our bodies, and that our God-given rights belong to us and to no one else,” Manookian said during the hearing. A modified version of the bill passed both houses with large majorities, and became law in April.

Epperly, like many physicians and public health workers in the state, has watched these changes with concern. The family medicine specialist grew up in Idaho. During the pandemic, he was a prominent local figure advocating for masking and COVID-19 vaccinations. When the pandemic began, he had been serving on the board of the Boise-area Central District Health department for more than a decade. Then, in 2021, Ada County commissioners declined to renew his appointment, selecting a physician and vocal opponent of COVID-19 vaccines instead.

A transformative experience

For Thomas Fletcher, the Panhandle Health District board of health chair, the experience of the pandemic was transformative. Fletcher has strong political views; he moved away from Texas, in part, over concerns that the culture there was growing too liberal, and out of a desire to live in a place that was, as he put it, “more representative of America circa 1950.” But before the pandemic, he said, although he was a practicing physician, he rarely thought about public health.

Then COVID-19 arrived, and it felt to him that official messaging was disconnected from reality. In early 2020, the World Health Organization said that COVID-19 was not an airborne virus. (There’s a scientific consensus today that it actually is.) Prominent scientists argued that it was a conspiracy theory to say that COVID-19 emerged from a lab. (The issue is still hotly debated, but many scientists now acknowledge that a lab leak is a real possibility.) The World Health Organization appeared to indicate that the fatality rate of COVID-19 was upwards of 3 percent. (It’s far lower.)

Many people today understand these reversals as the results of miscommunications, evolving evidence, or good-faith scientific error. Fletcher came to believe that Anthony Fauci—a member of the White House Coronavirus Task Force during the pandemic—and other public health leaders were intentionally, maliciously misleading the public. Fletcher reads widely on the platform Substack, particularly writers who push against the medical establishment, and he concluded that COVID-19 vaccines were dangerous, too—a toxic substance pushed by pharma, and backed knowingly by the medical elite. “They lied to us,” he said.

That shift ultimately led the retired physician to question foundational ideas in his field. “Once you realize they’re lying to us, then you ask the question, ‘Well, where else are they lying?’” Fletcher said during one of several lengthy phone conversations with Undark. “I was a card-carrying allopathic physician,” he said. “I believed in the gospel.” But he soon began to question the evidence behind cholesterol medication, and then antidepressants, and then the childhood vaccination schedule.

In 2022, Bonner County commissioners appointed Fletcher to the board of health. Last year, he took the helm of the board, which oversees an approximately 90-person agency with a $12 million budget.

“As Chairman of Panhandle Health, I feel a certain urge to restore the trust—public trust in public health—because that trust has been violated,” he said.

The informed consent measure seemed like one way to get there.

Conversations around informed consent

On a February afternoon, in a conference room at the health district office in Hayden, a few dozen attendees and board members gathered to discuss vaccination policy and informed consent in the district.

During the lengthy public comment periods, members of the public spoke about their experiences with vaccination. One woman described witnessing the harms of diseases that have been suppressed by vaccination, noting that her mother has experienced weakness in her limbs as the result of a childhood polio infection. Several attendees reported firsthand encounters with what they understood to be vaccine side effects; one cited rising autism rates. They wanted parents to hear more about those possibilities before getting shots.

In response, some local pediatrics providers insisted they already facilitated informed consent, through detailed conversations with caregivers. They also stressed the importance of routine shots; one brought up the measles outbreak emerging in Texas, which would go on to be implicated in the deaths of two unvaccinated children.

“Once you realize they’re lying to us, then you ask the question, ‘Well, where else are they lying?’”

Johnson, defending the measure, proposed a document that listed both pros and cons for vaccination. The PHD Board, he argued, “would have a much better chance of providing good information than the average person on the Internet.”

The conversation soon bogged down over what, exactly, the document should look like. “If the vote is yay or nay for informed consent, I’m all in with two hands,” said board member Jessica Jameson, an anesthesiologist who ultimately voted against the measure. “But my concern is that we have to be very careful about the information we present and the way that it’s presented.” The board members, she added, were neither “the subject matter experts nor the stakeholders,” and studies that seemed strong at first glance could be subject to critique.

Marty Williams, a nurse practitioner in Coeur d’Alene who works in pediatrics, had heard about the meeting that morning, as materials about the measure circulated online.

Williams is a former wildland firefighter, a father of five, and a Christian; he snowboards and bowhunts in his free time, and speaks with the laid-back affect of someone who has spent years coaching anxious parents through childhood scrapes and illnesses. A document associated with the proposal looked to him less like an attempt at informed consent, and more like a bid to talk parents out of giving their children immunizations. “If you read this, you would be like, ‘Well, I would never vaccinate my child,’” he recalled. “It was beyond informed consent. It seemed to be full of bias.”

He and his practice partner, Jeanna Padilla, canceled appointments in order to attend the meeting and speak during a public comment period. “The thought of it coming from our public health department made me sick,” Williams said. “We’re in the business of trying to prevent disease, and I had a strong feeling that this was going to bring more fear onto an already anxiety-provoking subject.” The issue felt high-stakes to him: That winter, he had seen more cases of pertussis, a vaccine-preventable illness, than at any point in his 18-year career.

Williams has always encountered some parents who are hesitant about vaccination. But those numbers began to rise during the COVID-19 pandemic. Trust in public health was dropping, and recommendations to vaccinate children against COVID-19, in particular, worried him. “Is this going to push people over the edge, where they just withdraw completely from vaccines?” he wondered at the time. Something did shift, he said: “We have families that historically have vaccinated their children, and now they have a new baby, and they’re like, ‘Nope, we’re not doing it. Nope, nope, nope.’”

In his practice, Williams described a change in how he’s approached parents. “I don’t say, ‘Well, you know, it’s time for Junior’s two months shots. Here’s what we’re going to do.’ I don’t approach it that way anymore, because greater than 40 or 50 percent of people are going to say, ‘Well, no, I’m not doing vaccines,’ and they get defensive right away,” he said. Instead, he now opens up a conversation, asking families whether they’ve thought about vaccination, answering their questions, providing resources, talking about his personal experiences treating illness—even inviting them to consider the vaccine schedules used in Denmark or Sweden, which recommend shots for fewer diseases, if they are adamant about not following CDC guidelines.

The approach can be effective, he said, but also time-consuming and draining. “It’s emotional for me too, because there’s a piece of this that’s being questioned every single day in regard to the standard of care, as if you’re harming children,” he said.

“If you read this, you would be like, ‘Well, I would never vaccinate my child.’ It was beyond informed consent. It seemed to be full of bias.”

Williams doubts his comments at the February meeting achieved much. “I was shocked by what I was hearing, because it was so one-sided,” he said. What seemed to be missing, he said, was an honest account of the alternatives: “There was no discussion of, OK, then, if we don’t vaccinate children, what is our option? How else are we going to protect them from diseases that our grandparents dealt with that we don’t have to deal with in this country?”

The board punted: They’d discuss the issue again down the road.

This isn’t new

Versions of this debate have played out across Idaho—and across the country — since the end of COVID-19’s emergency phase. In an apparent national first, one Idaho health district banned COVID-19 vaccines altogether. In Louisiana, Surgeon General Ralph Abraham told public health departments to stop recommending specific vaccines. “Government should admit the limitations of its role in people’s lives and pull back its tentacles from the practice of medicine,” Abraham and his deputy wrote in a statement explaining the decision. “The path to regaining public trust lies in acknowledging past missteps, refocusing on unbiased data collection, and providing transparent, balanced information for people to make their own health decisions.”

In several states, Republican lawmakers have moved to make it easier for people to opt out of vaccines. Not all those efforts have been successful: In West Virginia this past March, for example, the Republican-dominated legislature rejected a bill that would have made it easier to obtain exemptions. Keith Marple, a Republican lawmaker who voted against the measure, cited his personal experiences with people who had been left disabled by polio. “West Virginia needs to look after its children,” he said, according to the news site West Virginia Watch.

In an apparent national first, one Idaho health district banned COVID-19 vaccines altogether.

In Idaho, like many states, vaccination rates have dropped. In the 2023–2024 school year, a bit more than 65 percent of kindergarten families in the Panhandle Health District furnished records showing their children had received the MMR vaccine and five other common immunizations, down from just over 69 percent in the 2019–2020 school year. (State officials note that some children may have received shots, but their parents did not submit the paperwork to prove it.) Such figures, infectious disease modelers say, leave the area vulnerable to outbreaks of measles and other illnesses.

During an interview with Undark earlier this year, Sarah Leeds, who directs the immunization program for the Idaho Department of Health and Welfare, noted her colleagues across the country are reporting resistance to their work. “Sometimes it’s hard when you might be feeling like people think we’re the villain,” she said. “But I know our team and our leadership knows we do good work, and it’s based on sound science, and it’s important work for the community. And we just keep that at the front of our minds.”

When the board reconvened in early March, more advocates for the informed consent policy came out to back it. Among them was Rick Kirschner, a retired naturopathic doctor, author, and speaker. (His best-known book is titled “Dealing With People You Can’t Stand.”) Kirschner lived for decades in Ashland, Oregon. Early in 2020, he began to diverge from his neighbors over COVID-19 policies. He and his wife visited north Idaho that summer, and bought a home there weeks later. Compared to pandemic-conscious Oregon, it felt like a different reality. That Thanksgiving, he said during a recent Zoom interview, they attended a celebration “with 10 families and all their kids running around. It just was, ‘Oh, we’re Americans again.’ And it was just terrific.”

At the meeting in March, several people said that it was necessary to restore trust in public health institutions. But what, exactly, did that mean? Kirschner argued that it required more information, including more detailed accountings of all the ways public health interventions like vaccination could cause harm, and more detail on where the scientific literature falls short. “Denying information risks backfiring when risks that were hidden become known and trust in authorities craters,” he said during the hearing.

“I find that people are smarter than these public health people give them credit for,” he said during his call with Undark. There was a tendency in public health, he felt, to treat people like cattle. “The mindset of public health is, ‘They’re dummies, and we need to direct them and to what we think is in their interest,’” he said.

Others at the meeting pushed back against suggestions that public health workers and clinicians were not already providing detailed information to patients. “It’s not like Panhandle Health is against informed consent, or does not have that as part of the process,” said Peggy Cuvala, a member of the board. Cuvala has personal experience with the issue: She spent more than three decades as a public health nurse and nurse practitioner with the Panhandle Health District. “I would never force anyone into vaccination,” she said in a phone interview.

Cuvala is well aware that vaccine side effects happen—one of her own children, she said, suffered an adverse reaction to a shot—but she’s also seen transformative benefits. For years, she had to fill out reports on cases of Haemophilus influenzae that had caused meningitis in young children, including one case in which an infant died. Then a vaccine arrived. “Within a year of that vaccine coming out, I didn’t have to do those reports anymore,” she told Undark.

Cuvala describes herself as feeling perplexed by the recent direction of the board. “I think protecting and promoting the health and well being of the residents in North Idaho is critical,” she wrote in an email. “This work should be directed by the board collectively without political bias.”

During the meeting, legal questions came up, too: What were the liability implications of drawing up a custom PHD vaccine safety document?

In a previous meeting, Fletcher had pushed for a document that just gave basic details on the duration and scope of the randomized controlled trials that common vaccines had been subjected to. Such information, he argued, would demonstrate how poorly vetted the shots were—and show how they could be dangerous, even fatal. After that, he said in an interview, it was the parent’s choice. “If some mom wants her kid to get it, fine, give it to him,” Fletcher said. The ultimate arbiter of who was correct would be the brutal process of natural selection: “Let Darwin figure it out.”

In the March meeting, the board voted against creating a subcommittee to explore how to draft the document. “It’s dead,” said Fletcher during a phone call in early May.

A matter of trust

The discussion around the informed consent measure, though, was not entirely gone. On a Saturday morning in early May, the board held a lengthy public planning session at a government building in Coeur d’Alene. During a visioning session, attendees put stickers on pieces of paper next to words describing opportunities for the district. At the bottom of the page, someone wrote in large capital letters: “TRUST.”

Kirschner spoke again at the meeting, urging the board to revive the measure. So did a handful of other attendees, including Ron Korn, a county commissioner.

In a short interview at the meeting, PHD spokesperson Katherine Hoyer expressed some uncertainty about what substantive differences, precisely, the measure would offer over what’s already taking place in clinics. “What they’re proposing is that we provide patients with information on medical practices and vaccines,” she said. “That is happening.”

Fletcher sees opportunities ahead. In July, the board unanimously reelected him as chair. And, he said, he has a new ally in the push for an informed consent policy. Jessica Jameson, one of the board members who opposed the measure, recently resigned. Fletcher described her successor, a naturopathic doctor who was appointed to the board last month, as aligned with the MAHA movement. That brings the total MAHA-aligned members, by his count, to four — securing a majority on the seven-member board. “My plan is unfolding just as I wanted,” he said during a call in late July.

During an earlier conversation, Fletcher had reflected on the strange position of RFK Jr., who is perched atop the Department of Health and Human Services, which is staffed by many of the people he spent his career opposing. “He has hundreds of thousands of employees; 99.99 percent of them think he’s full of shit,” Fletcher said. Fletcher, in some ways, has his own miniature version of that problem: An antagonist of institutional public health, overseeing a public health organization.

The precise informed consent measure, he acknowledged, may not come to pass. But the debate itself has merit, he said: “Even if we lose, whatever lose means, even if we don’t make any positive forward motion — you never know. Every time you talk about this, you maybe change someone’s sentiment. You maybe move things forward a little bit. Which is why I do it.”

Fletcher’s role is small. But, he suggested, added together, the cumulative efforts of local politicking could amount to a revolution. “Robert Kennedy needs as many people putting their oar in the water and stroking in the same direction,” Fletcher said. “He can’t do it alone. So if there are 10,000 Thomas Fletchers out there, all going in the same direction, then maybe we can have hope.”

Rajah Bose contributed reporting from Idaho.

This article was originally published on Undark. Read the original article.

BMW’s next EV is its most sustainable car yet—here’s why

Sadly, the US is unlikely to get the Econeer trim, which uses a seat fabric made entirely from recycled PET bottles (instead, we should be getting an eco vinyl option).

Of course, you need to do more than just pick better materials, some of which have been recycled, if you want to seriously dent the carbon footprint of your new vehicle. That’s especially true if it’s electric—for all their benefits, EVs remain significantly more energy-intensive to build than new internal combustion engine vehicles. And automakers do need to make serious dents in their carbon footprints: BMW has to slash its carbon emissions from a 2019 level of 150 million tons down to 109 million tons by 2030. For 2024, the figure was down to 135 million tons, the company told us.

Fishing nets are turned into plastic granules, then used to make bits of the car.

The Neue Klasse is essential to meeting that goal. The factory in Debrecen, Hungary, is powered entirely by renewable energy, including an entirely electric paint shop, and it generates two-thirds the CO2 of one of BMW’s established factories. And the battery pack, which uses an all-new BMW cylindrical cell, has a 42 percent smaller carbon footprint per kWh than the prismatic cells used in BMW’s current 5th-generation EVs.

We can’t say much about the expected efficiency of the new 6th-gen powertrain until later this month, but we can say that BMW calculates the iX3 can reach its carbon break-even point with an ICE vehicle within about a year of driving. Charged entirely with renewable electricity, the iX3 is on par with an ICE vehicle after just 10,900 miles (17,500 km); using the normal European energy generation mix, that crossover comes at a little more than 13,300 miles (21,000 km).

At 124,000 miles (200,000 km), the iX3 should have a lifetime carbon footprint of 23 tons (or 14.6 tons exclusively using renewable energy); by contrast, a conventionally powered BMW X3 crossover would have a footprint of 52.8 tons.
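The break-even figures above come from amortizing an EV’s extra manufacturing CO2 against its lower per-distance emissions. A minimal sketch of that arithmetic, using round placeholder inputs rather than BMW’s published per-vehicle figures:

```python
# Sketch of a lifetime-CO2 break-even calculation. All inputs here are
# illustrative placeholders, not BMW's actual per-vehicle numbers.

def break_even_km(ev_mfg_tons, ice_mfg_tons, ev_kg_per_km, ice_kg_per_km):
    """Distance at which the EV's cumulative footprint matches the ICE car's.

    Each vehicle's total footprint is its manufacturing CO2 plus per-km
    driving CO2 times distance; setting the two totals equal and solving
    for distance gives the break-even point.
    """
    extra_mfg_kg = (ev_mfg_tons - ice_mfg_tons) * 1000.0
    savings_kg_per_km = ice_kg_per_km - ev_kg_per_km
    if savings_kg_per_km <= 0:
        raise ValueError("EV must emit less per km to ever break even")
    return extra_mfg_kg / savings_kg_per_km

# Placeholder example: 3 extra tons of manufacturing CO2, recovered at
# 0.15 kg of CO2 saved per km driven -> roughly 20,000 km to break even.
print(break_even_km(9.0, 6.0, 0.05, 0.20))
```

Plugging in a cleaner grid (a smaller `ev_kg_per_km`) raises the per-km savings and pulls the break-even distance down, which is why the renewable-only crossover lands earlier than the European-mix one.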

Check back on August 25, when we can tell you what else we learned about BMW’s next EV crossover.

ChatGPT users shocked to learn their chats were in Google search results

Faced with mounting backlash, OpenAI removed a controversial ChatGPT feature that caused some users to unintentionally allow their private—and highly personal—chats to appear in search results.

Fast Company exposed the privacy issue on Wednesday, reporting that thousands of ChatGPT conversations were found in Google search results and likely only represented a sample of chats “visible to millions.” While the indexing did not include identifying information about the ChatGPT users, some of their chats did share personal details—like highly specific descriptions of interpersonal relationships with friends and family members—perhaps making it possible to identify them, Fast Company found.

OpenAI’s chief information security officer, Dane Stuckey, explained on X that all users whose chats were exposed opted in to indexing their chats by clicking a box after choosing to share a chat.

Fast Company noted that users often share chats on WhatsApp or select the option to save a link to visit the chat later. But as Fast Company explained, users may have been misled into sharing chats due to how the text was formatted:

“When users clicked ‘Share,’ they were presented with an option to tick a box labeled ‘Make this chat discoverable.’ Beneath that, in smaller, lighter text, was a caveat explaining that the chat could then appear in search engine results.”

At first, OpenAI defended the labeling as “sufficiently clear,” Fast Company reported Thursday. But Stuckey confirmed that “ultimately,” the AI company decided that the feature “introduced too many opportunities for folks to accidentally share things they didn’t intend to.” According to Fast Company, that included chats about their drug use, sex lives, mental health, and traumatic experiences.

Carissa Veliz, an AI ethicist at the University of Oxford, told Fast Company she was “shocked” that Google was logging “these extremely sensitive conversations.”

OpenAI promises to remove Google search results

Stuckey called the feature a “short-lived experiment” that OpenAI launched “to help people discover useful conversations.” He confirmed that the decision to remove the feature also included an effort to “remove indexed content from the relevant search engine” through Friday morning.

Citing “market conditions,” Nintendo hikes prices of original Switch consoles

Slowed tech progress, inflation, and global trade wars are doing a number on game console pricing this year, and the bad news keeps coming. Nintendo delayed preorders of the Switch 2 in the US and increased accessory prices, and Microsoft gave its Series S and X consoles across-the-board price hikes in May.

Today, Nintendo is back for more, increasing prices on the original Switch hardware, as well as some Amiibo, the Alarmo clock, and some Switch and Switch 2 accessories. The price increases will formally take effect on August 3.

The company says that there are currently no price increases coming for the Switch 2 console, Nintendo Switch Online memberships, or physical and digital Switch 2 games. But it didn’t take future price increases off the table, noting that “price adjustments may be necessary in the future.”

Nintendo didn’t announce how large the price increases would be, but some retailers were already listing higher prices as of Friday. Target now lists the Switch Lite for $229.99, up from $199.99; the original Switch for $339.99, up from $299.99; and the OLED model of the Switch for a whopping $399.99, up from $349.99 and just $50 less than the price of the much more powerful Switch 2 console.

The military’s squad of satellite trackers is now routinely going on alert


“I hope this blows your mind because it blows my mind.”

A Long March 3B rocket carrying a new Chinese Beidou navigation satellite lifts off from the Xichang Satellite Launch Center on May 17, 2023. Credit: VCG/VCG via Getty Images

This is Part 2 of our interview with Col. Raj Agrawal, the former commander of the Space Force’s Space Mission Delta 2.

If it seems like there’s a satellite launch almost every day, the numbers will back you up.

The US Space Force’s Mission Delta 2 is a unit that reports to Space Operations Command, with the job of sorting out the nearly 50,000 trackable objects humans have launched into orbit.

Dozens of satellites are being launched each week, primarily by SpaceX to continue deploying the Starlink broadband network. The US military has advance notice of these launches—most of them originate from Space Force property—and knows exactly where they’re going and what they’re doing.

That’s usually not the case when China or Russia (and occasionally Iran or North Korea) launches something into orbit. With rare exceptions, like human spaceflight missions, Chinese and Russian officials don’t publish any specifics about what their rockets are carrying or what altitude they’re going to.

That creates a problem for military operators tasked with monitoring traffic in orbit and breeds anxiety among US forces responsible for making sure potential adversaries don’t gain an edge in space. Will this launch deploy something that can destroy or disable a US satellite? Will this new satellite have a new capability to surveil allied forces on the ground or at sea?

Of course, this is precisely the point of keeping launch details under wraps. The US government doesn’t publish orbital data on its most sensitive satellites, such as spy craft collecting intelligence on foreign governments.

But you can’t hide in low-Earth orbit, a region extending hundreds of miles into space. Col. Raj Agrawal, who commanded Mission Delta 2 until earlier this month, knows this all too well. Agrawal handed over command to Col. Barry Croker as planned after a two-year tour of duty at Mission Delta 2.

Col. Raj Agrawal, then-Mission Delta 2 commander, delivers remarks to audience members during the Mission Delta 2 redesignation ceremony in Colorado Springs, Colorado, on October 31, 2024. Credit: US Space Force

Some space enthusiasts have made a hobby of tracking US and foreign military satellites as they fly overhead, stringing together a series of observations over time to create fairly precise estimates of an object’s altitude and inclination.

Commercial companies are also getting in on the game of space domain awareness. But most are based in the United States or allied nations and have close partnerships with the US government. Therefore, they only release information on satellites owned by China and Russia. This is how Ars learned of interesting maneuvers underway with a Chinese refueling satellite and suspected Russian satellite killers.

Theoretically, there’s nothing to stop a Chinese company, for example, from taking a similar tack on revealing classified maneuvers conducted by US military satellites.

The Space Force has an array of sensors scattered around the world to detect and track satellites and space debris. The 18th and 19th Space Defense Squadrons, which were both under Agrawal’s command at Mission Delta 2, are the units responsible for this work.

Preparing for the worst

One of the most dynamic times in the life of a Space Force satellite tracker is when China or Russia launches something new, according to Agrawal. His command pulls together open source information, such as airspace and maritime warning notices, to know when a launch might be scheduled.

This is not unlike how outside observers, like hobbyist trackers and space reporters, get a heads-up that something is about to happen. These notices tell you when a launch might occur, where it will take off from, and which direction it will go. What’s different for the Space Force is access to top-secret intelligence that might clue military officials in on what the rocket is actually carrying. China, in particular, often declares that its satellites are experimental, when Western analysts believe they are designed to support military activities.

That’s when US forces swing into action. Sometimes, military forces go on alert. Commanders develop plans to detect, track, and target the objects associated with a new launch, just in case they are “hostile,” Agrawal said.

We asked Agrawal to take us through the process his team uses to prepare for and respond to one of these unannounced, or “non-cooperative,” launches. This portion of our interview is published below, lightly edited for brevity and clarity.

Ars: Let’s say there’s a Russian or Chinese launch. How do you find out there’s a launch coming? Do you watch for NOTAMs (Notices to Airmen), like I do, and try to go from there?

Agrawal: I think the conversation starts the same way that it probably starts with you and any other technology-interested American. We begin with what’s available. We certainly have insight through intelligence means to be able to get ahead of some of that, but we’re using a lot of the same sources to refine our understanding of what may happen, and then we have access to other intel.

The good thing is that the Space Force is a part of the Intelligence Community. We’re plugged into an entire Intelligence Community focused on anything that might be of national security interest. So we’re able to get ahead. Maybe we can narrow down NOTAMs; maybe we can anticipate behavior. Maybe we have other activities going on in other domains or on the Internet, the cyber domain, and so on, that begin to tip off activity.

Certainly, we’ve begun to understand patterns of behavior. But no matter what, it’s not the same level of understanding as those who just cooperate and work together as allies and friends. And if there’s a launch that does occur, we’re not communicating with that launch control center. We’re certainly not communicating with the folks that are determining whether or not the launch will be safe, if it’ll be nominal, how many payloads are going to deploy, where they’re going to deploy to.

I certainly understand why a nation might feel that they want to protect that. But when you’re fielding into LEO [low-Earth orbit] in particular, you’re not really going to hide there. You’re really just creating uncertainty, and now we’re having to deal with that uncertainty. We eventually know where everything is, but in that meantime, you’re creating a lot of risk for all the other nations and organizations that have fielded capability in LEO as well.

Find, fix, track, target

Ars: Can you take me through what it’s like for you and your team during one of these launches? When one comes to your attention, through a NOTAM or something else, how do you prepare for it? What are you looking for as you get ready for it? How often are you surprised by something with one of these launches?

Agrawal: Those are good questions. Some of it, I’ll be more philosophical on, and others I can be specific on. But on a routine basis, our formation is briefed on all of the launches we’re aware of, to varying degrees, with the varying levels of confidence, and at what classifications have we derived that information.

In fact, we also have a weekly briefing where we go into depth on how we have planned against some of what we believe to be potentially higher threats. How many organizations are involved in that mission plan? Those mission plans are done at a very tactical level by captains and NCOs [non-commissioned officers] that are part of the combat squadrons that are most often presented to US Space Command…

That integrated mission planning involves not just Mission Delta 2 forces but also presented forces by our intelligence delta [Space Force units are called deltas], by our missile warning and missile tracking delta, by our SATCOM [satellite communications] delta, and so on—from what we think is on the launch pad, what we think might be deployed, what those capabilities are. But also what might be held at risk as a result of those deployments, not just in terms of maneuver but also what might these even experimental—advertised “experimental”—capabilities be capable of, and what harm might be caused, and how do we mission-plan against those potential unprofessional or hostile behaviors?

As you can imagine, that’s a very sophisticated mission plan for some of these launches based on what we know about them. Certainly, I can’t, in this environment, confirm or deny any of the specific launches… because I get access to more fidelity and more confidence on those launches, the timing and what’s on them, but the precursor for the vast majority of all these launches is that mission plan.

That happens at a very tactical level. That is now posturing the force. And it’s a joint force. It’s not just us, Space Force forces, but it’s other services’ capabilities as well that are posturing to respond to that. And the truth is that we even have partners, other nations, other agencies, intel agencies, that have capability that have now postured against some of these launches to now be committed to understanding, did we anticipate this properly? Did we not?

And then, what are our branch plans in case it behaves in a way that we didn’t anticipate? How do we react to it? What do we need to task, posture, notify, and so on to then get observations, find, fix, track, target? So we’re fulfilling the preponderance of what we call the kill chain, for what we consider to be a non-cooperative launch, with a hope that it behaves peacefully but anticipating that it’ll behave in a way that’s unprofessional or hostile… We have multiple chat rooms at multiple classifications that are communicating in terms of “All right, is it launching the way we expected it to, or did it deviate? If it deviated, whose forces are now at risk as a result of that?”

A spectator takes photos before the launch of the Long March 7A rocket carrying the ChinaSat 3B satellite from the Wenchang Space Launch Site in China on May 20, 2025. Credit: Meng Zhongde/VCG via Getty Images

Now, we even have down to the fidelity of what forces on the ground or on the ocean may not have capability… because of maneuvers or protective measures that the US Space Force has to take in order to deviate from its mission because of that behavior. The conversation, the way it was five years ago and the way it is today, is very, very different in terms of just a launch because now that launch, in many cases, is presenting a risk to the joint force.

We’re acting like a joint force. So that Marine, that sailor, that special operator on the ground who was expecting that capability now is notified in advance of losing that capability, and we have measures in place to mitigate those outages. And if not, then we let them know that “Hey, you’re not going to have the space capability for some period of time. We’ll let you know when we’re back. You have to go back to legacy operations for some period of time until we’re back into nominal configuration.”

I hope this blows your mind because it blows my mind in the way that we now do even just launch processing. It’s very different than what we used to do.

Ars: So you’re communicating as a team in advance of a launch and communicating down to the tactical level, saying that this launch is happening, this is what it may be doing, so watch out?

Agrawal: Yeah. It’s not as simple as a ballistic missile warning attack, where it’s duck and cover. Now, it’s “Hey, we’ve anticipated the things that could occur that could affect your ability to do your mission as a result of this particular launch with its expected payload, and what we believe it may do.” So it’s not just a general warning. It’s a very scoped warning.

As that launch continues, we’re able to then communicate more specifically on which forces may lose what, at what time, and for how long. And it’s getting better and better as the rest of the US Space Force, as they present capability trained to that level of understanding as well… We train this together. We operate together and we communicate together so that the tactical user—sometimes it’s us at US Space Force, but many times it’s somebody on the surface of the Earth that has to understand how their environment, their capability, has changed as a result of what’s happening in, to, and from space.

Ars: The types of launches where you don’t know exactly what’s coming are getting more common now. Is it normal for you to be on this alert posture for all of the launches out of China or Russia?

Agrawal: Yeah. You see it now. The launch manifest is just ridiculous, never mind the ones we know about. The ones that we have to reach out into the intelligence world and learn about, that’s getting ridiculous, too. We don’t have to have this whole machine postured this way for cooperative launches. So the amount of energy we’re expending for a non-cooperative launch is immense. We can do it. We can keep doing it, but you’re just putting us on alert… and you’re putting us in a position where we’re getting ready for bad behavior with the entire general force, as opposed to a cooperative launch, where we can anticipate. If there’s an anomaly, we can anticipate those and work through them. But we’re working through it with friends, and we’re communicating.

We’re not having to put tactical warfighters on alert every time … but for those payloads that we have more concern about. But still, it’s a very different approach, and that’s why we are actively working with as many nations as possible in Mission Delta 2 to get folks to sign on with Space Command’s space situational awareness sharing agreements, to go at space operations as friends, as allies, as partners, working together. So that way, we’re not posturing for something higher-end as a result of the launch, but we’re doing this together. So, with every nation we can, we’re getting out there—South America, Africa, every nation that will meet with us, we want to meet with them and help them get on the path with US Space Command to share data, to work as friends, and use space responsibly.

A Long March 3B carrier rocket carrying the Shijian 21 satellite lifts off from the Xichang Satellite Launch Center on October 24, 2021. Credit: Li Jieyi/VCG via Getty Images

Ars: How long does it take you to sort out and get a track on all of the objects for an uncooperative launch?

Agrawal: That question is a tough one to answer. We can move very, very quickly, but there are times when we have made a determination of what we think something is, what it is and where it’s going, and intent; there might be some lag to get it into a public catalog due to a number of factors, to include decisions being made by combatant commanders, because, again, our primary objective is not the public-facing catalog. The primary objective is, do we have a risk or not?

If we have a risk, let’s understand, let’s figure out to what degree do we think we have to manage this within the Department of Defense. And to what degree do we believe, “Oh, no, this can go in the public catalog. This is a predictable elset (element set)”? What we focus on with (the public catalog) are things that help with predictability, with spaceflight safety, with security, spaceflight security. So you sometimes might see a lag there, but that’s because we’re wrestling with the security aspect of the degree to which we need to manage this internally before we believe it’s predictable. But once we believe it’s predictable, we put it in the catalog, and we put it on space-track.org. There’s some nuance in there that isn’t relative to technology or process but more on national security.

On the flip side, what used to take hours and days is now getting down to seconds and minutes. We’ve overhauled—not 100 percent, but to a large degree—and got high-speed satellite communications from sensors to the centers of SDA (Space Domain Awareness) processing. We’re getting higher-end processing. We’re now duplicating the ability to process, duplicating that capability across multiple units. So what used to just be human labor intensive, and also kind of dial-up speed of transmission, we’ve now gone to high-speed transport. You’re seeing a lot of innovation occur, and a lot of data fusion occur, that’s getting us to seconds and minutes.
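For context on what the public catalog Agrawal describes actually contains: entries published on space-track.org are distributed as two-line element sets (TLEs), a fixed-width text format in which each line ends with a checksum digit, computed by summing every digit in the line, counting each minus sign as 1, and taking the result modulo 10. A minimal sketch of that checksum rule:

```python
def tle_checksum(line):
    """Checksum for a TLE line, excluding its final (checksum) digit.

    Per the two-line element set format: add up every digit, count each
    '-' character as 1, ignore everything else, and take modulo 10.
    """
    total = 0
    for ch in line:
        if ch.isdigit():
            total += int(ch)
        elif ch == "-":
            total += 1
    return total % 10

# Synthetic fragment: digits 1, 2, 3 plus one minus sign -> (1+2+1+3) % 10
print(tle_checksum("1 2-3"))  # 7
```

Tools that ingest the public catalog typically validate this digit before trusting a line, which is how corrupted downloads get caught.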

Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.

The Week in AI Governance

There was enough governance-related news this week to spin it out.

Anthropic, Google, OpenAI, Mistral, Aleph Alpha, Cohere and others commit to signing the EU AI Code of Practice. Google has now signed. Microsoft says it is likely to sign.

xAI signed the AI safety chapter of the code but is refusing to sign the others, citing overreach, especially as it pertains to copyright.

The only company that said it would not sign at all is Meta.

This was the underreported story. All the important AI companies other than Meta have gotten behind the safety section of the EU AI Code of Practice. This represents a considerable strengthening of their commitments, and introduces an enforcement mechanism. Even Anthropic will be forced to step up parts of their game.

That leaves Meta as the rogue state defector that once again gives zero anythings about safety, as in whether we all die, and also safety in its more mundane forms. Lol, we are Meta, indeed. So the question is, what are we going to do about it?

xAI took a middle position. I see the safety chapter as by far the most important, so as long as xAI is signing that and taking it seriously, great. Refusing the other parts is a strange flex, and I don’t know exactly what their problem is since they didn’t explain. They simply called it ‘unworkable,’ which is odd when Google, OpenAI and Anthropic all declared they found it workable.

Then again, xAI finds a lot of things unworkable. Could be a skill issue.

This is a sleeper development that could end up being a big deal. When I say ‘against regulations’ I do not mean against AI regulations. I mean against all ‘regulations’ in general, no matter what, straight up.

From the folks who brought you ‘figure out who we technically have the ability to fire and then fire all of them, and if something breaks maybe hire them back, this is the Elon way, no seriously’ and also ‘whoops we misread something so we cancelled PEPFAR and a whole lot of people are going to die,’ Doge is proud to give you ‘if a regulation is not technically required by law it must be an unbridled bad thing we can therefore remove, I wonder why they put up this fence.’

Hannah Natanson, Jeff Stein, Dan Diamond and Rachel Siegel (WaPo): The tool, called the “DOGE AI Deregulation Decision Tool,” is supposed to analyze roughly 200,000 federal regulations to determine which can be eliminated because they are no longer required by law, according to a PowerPoint presentation obtained by The Post that is dated July 1 and outlines DOGE’s plans.

Roughly 100,000 of those rules would be deemed worthy of trimming, the PowerPoint estimates — mostly through the automated tool with some staff feedback. The PowerPoint also suggests the AI tool will save the United States trillions of dollars by reducing compliance requirements, slashing the federal budget and unlocking unspecified “external investment.”

The conflation here is absolute. There are two categories of regulations: The half ‘required by law,’ and the half ‘worthy of trimming.’ Think of the trillions you can save.

They then try to hedge and claim that’s not how it is going to work.

Asked about the AI-fueled deregulation, White House spokesman Harrison Fields wrote in an email that “all options are being explored” to achieve the president’s goal of deregulating government.

No decisions have been completed on using AI to slash regulations, a HUD spokesperson said.

The spokesperson continued: “The intent of the developments is not to replace the judgment, discretion and expertise of staff but be additive to the process.”

That would be nice. I’m far more ‘we would be better off with a lot less regulations’ than most. I think it’s great to have an AI tool that splits off the half we can consider cutting from the half we are stuck with. I still think that ‘cut everything that a judge wouldn’t outright reverse if you tried cutting it’ is not a good strategy.

I find the ‘no we will totally consider whether this is a good idea’ talk rather hollow, both because of track record and also they keep telling us what the plan is?

“The White House wants us higher on the leader board,” said one of the three people. “But you have to have staff and time to write the deregulatory notices, and we don’t. That’s a big reason for the holdup.”

That’s where the AI tool comes in, the PowerPoint proposes. The tool will save 93 percent of the human labor involved by reviewing up to 500,000 comments submitted by the public in response to proposed rule changes. By the end of the deregulation exercise, humans will have spent just a few hours to cancel each of the 100,000 regulations, the PowerPoint claims.

They then close by pointing out that the AI makes mistakes even on the technical level it is addressing. Well, yeah.

Also, welcome to the future of journalism:

China has its own AI Action Plan and is calling for international cooperation on AI. Wait, what do they mean by that? If you look in the press, that depends who you ask. All the news organizations will be like ‘the Chinese released an AI Action Plan’ and then not link to the actual plan, I had to have o3 dig it up.

Here’s o3’s translation of the actual text. This is almost all general gestures in the direction of capabilities, diffusion, infrastructure and calls for open models. It definitely is not an AI Action Plan in the sense that America offered an AI Action Plan, which had lots of specific actionable proposals. This is more of a general outline of a plan and statement of goals, at best. At least it doesn’t talk about or call for a ‘race,’ but a call for everything to be open and accelerated is not obviously better.

  • Seize AI opportunities together. Governments, international organizations, businesses, research institutes, civil groups, and individuals should actively cooperate, accelerate digital‑infrastructure build‑out, explore frontier AI technologies, and spread AI applications worldwide, fully unlocking AI’s power to drive growth, achieve the UN‑2030 goals, and tackle global challenges.

  • Foster AI‑driven innovation. Uphold openness and sharing, encourage bold experimentation, build international S‑and‑T cooperation platforms, harmonize policy and regulation, and remove technical barriers to spur continuous breakthroughs and deep “AI +” applications.

  • Empower every sector. Deploy AI across manufacturing, consumer services, commerce, healthcare, education, agriculture, poverty reduction, autonomous driving, smart cities, and more; share infrastructure and best practices to supercharge the real economy.

  • Accelerate digital infrastructure. Expand clean‑energy grids, next‑gen networks, intelligent compute, and data centers; create interoperable AI infrastructure and unified compute‑power standards; support especially the Global South in accessing and applying AI.

  • Build a pluralistic open‑source ecosystem. Promote cross‑border open‑source communities and secure platforms, open technical resources and interfaces, improve compatibility, and let non‑sensitive tech flow freely.

  • Supply high‑quality data. Enable lawful, orderly, cross‑border data flows; co‑create top‑tier datasets while safeguarding privacy, boosting corpus diversity, and eliminating bias to protect cultural and ecosystem diversity.

  • Tackle energy and environmental impacts. Champion “sustainable AI,” set AI energy‑ and water‑efficiency standards, promote low‑power chips and efficient algorithms, and scale AI solutions for green transition, climate action, and biodiversity.

  • Forge standards and norms. Through ITU, ISO, IEC, and industry, speed up standards on safety, industry, and ethics; fight algorithmic bias and keep standards inclusive and interoperable.

  • Lead with public‑sector adoption. Governments should pioneer reliable AI in public services (health, education, transport), run regular safety audits, respect IP, enforce privacy, and explore lawful data‑trading mechanisms to upgrade governance.

  • Govern AI safety. Run timely risk assessments, create a widely accepted safety framework, adopt graded management, share threat intelligence, tighten data‑security across the pipeline, raise explainability and traceability, and prevent misuse.

  • Implement the Global Digital Compact. Use the UN as the main channel, aim to close the digital divide—especially for the Global South—and quickly launch an International AI Scientific Panel and a Global AI Governance Dialogue under UN auspices.

  • Boost global capacity‑building. Through joint labs, shared testing, training, industry matchmaking, and high‑quality datasets, help developing countries enhance AI innovation, application, and governance while improving public AI literacy, especially for women and children.

  • Create inclusive, multi‑stakeholder governance. Establish public‑interest platforms involving all actors; let AI firms share use‑case lessons; support think tanks and forums in sustaining global technical‑policy dialogue among researchers, developers, and regulators.

What does it have to say about safety or dealing with downsides? We have ‘forge standards and norms’ with a generic call for safety and ethics standards, which seems to mostly be about interoperability and ‘bias.’

Mainly we have ‘Govern AI safety,’ which is directionally nice to see I guess but essentially content free and shows no sign that the problems are being taken seriously on the levels we care about. Most concretely, in the ninth point, we have a call for regular safety audits of AI models. That all sounds like ‘the least you could do.’

Here’s one interpretation of the statement:

Brenda Goh (Reuters): China said on Saturday it wanted to create an organisation to foster global cooperation on artificial intelligence, positioning itself as an alternative to the U.S. as the two vie for influence over the transformative technology.

Li did not name the United States but appeared to refer to Washington’s efforts to stymie China’s advances in AI, warning that the technology risked becoming the “exclusive game” of a few countries and companies.

China wants AI to be openly shared and for all countries and companies to have equal rights to use it, Li said, adding that Beijing was willing to share its development experience and products with other countries, particularly the “Global South”. The Global South refers to developing, emerging or lower-income countries, mostly in the southern hemisphere.

The foreign ministry released online an action plan for global AI governance, inviting governments, international organisations, enterprises and research institutions to work together and promote international exchanges including through a cross-border open source community.

As in, we notice you are ahead in AI, and that’s not fair. You should do everything in the open so you let us catch up in all the ways you are ahead, so we can bury you using the ways in which you are behind. That’s not an unreasonable interpretation.

Here’s another.

The Guardian: Chinese premier Li Qiang has proposed establishing an organisation to foster global cooperation on artificial intelligence, calling on countries to coordinate on the development and security of the fast-evolving technology, days after the US unveiled plans to deregulate the industry.

Li warned Saturday that artificial intelligence development must be weighed against the security risks, saying global consensus was urgently needed.

“The risks and challenges brought by artificial intelligence have drawn widespread attention … How to find a balance between development and security urgently requires further consensus from the entire society,” the premier said.

Li said China would “actively promote” the development of open-source AI, adding Beijing was willing to share advances with other countries, particularly developing ones in the global south.

So that’s a call to keep security in mind, but every concrete reference is mundane and deals with misuse, and then they call for putting everything out into the open, with the main highlighted ‘risk’ to coordinate on being that America might get an advantage, and encouraging us to give it away via open models to ‘safeguard multilateralism.’

A third here, from the Japan Times, frames it as a call for an alliance to take aim at an American AI monopoly.

Director Michael Kratsios: China’s just-released AI Action Plan has a section that drives at a fundamental difference between our approaches to AI: whether the public or private sector should lead in AI innovation.

I like America’s odds of success.

He quotes point nine, which his translation has as ‘the public sector takes the lead in deploying applications.’ Whereas o3’s translation says ‘governments should pioneer reliable AI in public services (health, education, transport), run regular safety audits, respect IP, enforce privacy, and explore lawful data‑trading mechanisms to upgrade governance.’

Even in Michael’s preferred translation, this is saying government should aggressively deploy AI applications to improve government services. The American AI Action Plan, correctly, fully agrees with this. Nothing in the Chinese statement says to hold the private sector back. Quite the contrary.

The actual disagreement we have with point nine is the rest of it, where the Chinese think we should run regular safety audits, respect IP and enforce privacy. Those are not parts of the American AI Action Plan. Do you think we were right not to include those provisions, sir? If so, why?

Suppose in the future, we learned we were in a lot more danger than we think we are in now, and we did want to make a deal with China and others. Right now the two sides would be very far apart but circumstances could quickly change that.

Could we do it in a way that could be verified?

It wouldn’t be easy, but we do have tools.

This is the sort of thing we should absolutely be preparing to be able to do, whether or not we ultimately decide to do it.

Mauricio Baker: For the last year, my team produced the most technically detailed overview so far. Our RAND working paper finds: strong verification is possible—but we need ML and hardware research.

You can find the paper here and on arXiv. It includes a 5-page summary and a list of open challenges.

In the Cold War, the US and USSR used inspections and satellites to verify nuclear weapon limits. If future, powerful AI threatens to escape control or endanger national security, the US and China would both be better off with guardrails.

It’s a tough challenge:

– Verify narrow restrictions, like “no frontier AI training past some capability,” or “no mass-deploying if tests show unacceptable danger”

– Catch major state efforts to cheat

– Preserve confidentiality of models, data, and algorithms

– Keep overhead low

Still, reasons for optimism:

– No need to monitor all computers—frontier AI needs thousands of specialized AI chips.

– We can build redundant layers of verification. A cheater only needs to be caught once.

– We can draw from great work in cryptography and ML/hardware security.

One approach is to use existing chip security features like Confidential Computing, built to securely verify chip activities. But we’d need serious design vetting, teardowns, and maybe redesigns before the US could strongly trust Huawei’s chip security (or frankly, NVIDIA’s).

“Off-chip” mechanisms could be reliable sooner: network taps or analog sensors (vetted, limited use, tamper evident) retrofitted onto AI data centers. Then, mutually secured, airgapped clusters could check if claimed compute uses are reproducible and consistent with sensor data.

Add approaches “simple enough to work”: whistleblower programs, interviews of personnel, and intelligence activities. Whistleblower programs could involve regular in-person contact—carefully set up so employees can anonymously reveal violations, but not much more.

We could have an arsenal of tried-and-tested methods to confidentially verify a US-China AI treaty. But at the current pace, in three years, we’ll just have a few speculative options. We need ML and hardware researchers, new RFPs by funders, and AI company pilot programs.

Jeffrey Ladish: Love seeing this kind of in-depth work on AI treaty verification. A key fact is verification doesn’t have to be bullet proof to be useful. We can ratchet up increasingly robust technical solutions while using other forms of HUMINT and SIGINT to provide some level of assurance.

Remember, the AI race is a mixed-motive conflict, per Schelling. Both sides have an incentive to seek an advantage, but also have an incentive to avoid mutually awful outcomes. Like with nuclear war, everyone loses if any side loses control of superhuman AI.

This makes coordination easier, because even if both sides don’t like or trust each other, they have an incentive to cooperate to avoid extremely bad outcomes.

It may turn out that even with real efforts there are not good technical solutions. But I think it is far more likely that we don’t find the technical solutions due to lack of trying, rather than that the problem is so hard that it cannot be done.
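To make the ‘tamper evident’ part of the off-chip proposal concrete, here is a minimal sketch (all names and readings are made up for illustration, not from the RAND paper) of a hash-chained sensor log: each entry commits to the entire history before it, so an inspector who periodically records the latest hash can later detect any rewriting of earlier readings.

```python
import hashlib
import json

def chain_hash(prev_hash: str, record: dict) -> str:
    """Hash the previous link together with the new record."""
    payload = prev_hash + json.dumps(record, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

class TamperEvidentLog:
    """Append-only log: entry N's hash commits to entries 0..N."""
    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []  # list of (record, hash) pairs

    def append(self, record: dict) -> str:
        prev = self.entries[-1][1] if self.entries else self.GENESIS
        h = chain_hash(prev, record)
        self.entries.append((record, h))
        return h

    def verify(self) -> bool:
        """Recompute the chain; an edited or deleted entry breaks it."""
        prev = self.GENESIS
        for record, h in self.entries:
            if chain_hash(prev, record) != h:
                return False
            prev = h
        return True

log = TamperEvidentLog()
log.append({"t": 0, "power_kw": 18450})  # hypothetical sensor reading
log.append({"t": 1, "power_kw": 18510})
assert log.verify()

# Rewriting history while keeping the stored hash is detectable,
# because the recomputed chain no longer matches.
log.entries[0] = ({"t": 0, "power_kw": 900}, log.entries[0][1])
assert not log.verify()
```

This is only the logging layer, of course; the hard parts the paper points at are vetting the sensors themselves and keeping the checked data confidential.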

The reaction to the AI Action Plan was almost universally positive, including here from Nvidia and AMD. My own review, focused on the concrete proposals within, also reflected this. It far exceeded my expectations on essentially all fronts, so much so that I would be actively happy to see most of its proposals implemented rather than nothing be done.

I and others focused on the concrete policy, and especially concrete policy relative to expectations and what was possible in context, for which it gets high praise.

But a document like this might have a lot of its impact due to the rhetoric instead, even if it lacks legal force, or cause people to endorse the approach as ideal in absolute terms rather than being the best that could be done at the time.

So, for example, the actual proposals for open models were almost reasonable, but if the takeaway is lots more rhetoric of ‘yay open models’ like it is in this WSJ editorial, where the central theme is very clearly ‘we must beat China, nothing else matters, this plan helps beat China, so the plan is good’ then that’s really bad.

Another important example: Nothing in the policy proposals here makes future international cooperation harder. The rhetoric? A completely different story.

The same WSJ article also noticed the same obvious contradictions with other Trump policies that I did – throttling renewable energy and high-skilled immigration and even visas are incompatible with our goals here, the focus on ‘woke AI’ could have been much worse but remains a distraction, also I would add, what is up with massive cuts to STEM research if we are taking this seriously? If we are serious about winning and worry that one false move would ‘forfeit the race’ then we need to act like it.

Of course, none of that is up to the people who were writing the AI Action Plan.

What the WSJ editorial board didn’t notice, or mention at all, is the possibility that there are other risks or downsides at play here, and it dismisses outright the possibility of any form of coordination or cooperation. That’s a very wrong, dangerous and harmful attitude, one it shares with many in or lobbying the government.

A worry I have on reflection, that I wasn’t focusing on at the time, is that officials and others might treat the endorsements of the good policy proposals here as an endorsement of the overall plan presented by the rhetoric, especially the rhetoric at the top of the plan, or of the plan’s sufficiency, and as agreement that it is okay to ignore and not speak about what the plan ignores and does not speak about.

That rhetoric was alarmingly (but unsurprisingly) terrible, as it is the general administration plan of emphasizing whenever possible that we are in an ‘AI race’ that will likely go straight to AGI and superintelligence even if those words couldn’t themselves be used in the plan, where ‘winning’ is measured in the mostly irrelevant ‘market share.’

And indeed, the inability to mention AGI or superintelligence in the plan leads to exactly the standard David Sacks lines that toxically center the situation on ‘winning the race’ by ‘exporting the American tech stack.’

I will keep repeating, if necessary until I am blue in the face, that this is effectively a call (the motivations for which I do not care to speculate) for sacrificing the future and getting us all killed in order to maximize Nvidia’s market share.

There is no ‘tech stack’ in the meaningful sense of necessary integration. You can run most any AI model on most any advanced chip, and switch on an hour’s notice.

It does not matter who built the chips. It matters who runs the chips and for whose benefit. Supply is constrained by manufacturing capacity, so every chip we sell is one less chip we have. The idea that failure to hand over large percentages of the top AI chips to various authoritarians, or even selling H20s directly to China as they currently plan to do, would ‘forfeit’ ‘the race’ is beyond absurd.

Indeed, both the rhetoric and actions discussed here do the exact opposite. It puts pressure on others especially China to push harder towards ‘the race’ including the part that counts, the one to AGI, and also the race for diffusion and AI’s benefits. And the chips we sell arm China and others to do this important racing.

There is later talk acknowledging that ‘we do not intend to ignore the risks of this revolutionary technological power.’ But Sacks frames this as entirely about the risk that AI will be misused or stolen by malicious actors. Which is certainly a danger, but far from the primary thing to worry about.

That’s what happens when you are forced to pretend AGI, ASI, potential loss of control and all other existential risks do not exist as possibilities. The good news is that there are some steps in the actual concrete plan to start preparing for those problems, even if they are insufficient and it can’t be explained, but it’s a rough path trying to sustain even that level of responsibility under this kind of rhetorical oppression.

The vibes and rhetoric were accelerationist throughout, especially at the top, and completely ignored the risks and downsides of AI, and the dangers of embracing a rhetoric based on an ‘AI race’ that we ‘must win,’ and where that winning mostly means chip market share. Going down this path is quite likely to get us all killed.

I am happy to make the trade of allowing the rhetoric to be optimistic, and to present the Glorious Transhumanist Future as likely to be great even as we have no idea how to stay alive and in control while getting there, so long as we can still agree to take the actions we need to take in order to tackle that staying alive and in control bit – again, the actions are mostly the same even if you are highly optimistic that it will work out.

But if you dismiss the important dangers entirely, then your chances get much worse.

So I want to be very clear that I hate that rhetoric, I think it is no good, very bad rhetoric both in terms of what is present and what (often with good local reasons) is missing, while reiterating that the concrete particular policy proposals were as good as we could reasonably have hoped for on the margin, and the authors did as well as they could plausibly have done with people like Sacks acting as veto points.

That includes the actions on ‘preventing Woke AI,’ which have convinced even Sacks to frame this as preventing companies from intentionally building DEI into their models. That’s fine, I wouldn’t want that either.

Even outlets like Transformer weighed in positively, with them calling the plan ‘surprisingly okay’ and noting its ability to get consensus support, while ignoring the rhetoric. They correctly note the plan is very much not adequate. It was a missed opportunity to talk about or do something about various risks (although I understand why), and there was much that could have been done that wasn’t.

Seán Ó hÉigeartaigh: Crazy to reflect on the three global AI competitions going on right now:

– 1. US political leadership have made AI a prestige race, echoing the Space Race. It’s cool and important and strategic, and they’re going to Win.

– 2. For Chinese leadership AI is part of economic strength, soft power and influence. Technology is shared, developing economies will be built on Chinese fundamental tech, the Chinese economy and trade relations will grow. Weakening trust in a capricious US is an easy opportunity to take advantage of.

– 3. The AGI companies are racing something they think will out-think humans across the board, that they don’t yet know how to control, and think might literally kill everyone.

Scariest of all is that it’s not at all clear to decision-makers that these three things are happening in parallel. They think they’re playing the same game, but they’re not.

I would modify the US political leadership position. I think to a lot of them it’s literally about market share, primarily chip market share. I believe this because they keep saying, with great vigor, that it is literally about chip market share. But yes, they think this matters because of prestige, and because this is how you get power.

My guess is, mostly:

  1. The AGI companies understand these are three distinct things.

    1. They are using the confusions of political leadership for their own ends.

  2. The Chinese understand there are two distinct things, but not three.

    1. As in, they know what US leadership is doing, and they know what they are doing, and they know these are distinct things.

    2. They do not feel the AGI and understand its implications.

  3. The bulk of the American political class cannot differentiate between the US and Chinese strategies, or strategic positions, or chooses to pretend not to, cannot imagine things other than ordinary prestige, power and money, and cannot feel the AGI.

    1. There are those within the power structure who do feel the AGI, to varying extents, and are trying to sculpt actions (including the action plan) accordingly with mixed success.

    2. An increasing number of them, although still small, do feel the AGI to varying extents but have yet to cash that out into anything except ‘oh ’.

  4. There is of course a fourth race or competition, which is to figure out how to build it without everyone dying.

The actions one would take in each of these competitions are often very similar, especially the first three and often the fourth as well, but sometimes are very different. What frustrates me most is when there is an action that is wise on all levels, yet we still don’t do it.

Also, on the ‘preventing Woke AI’ question, the way the plan and order are worded seems designed to make compliance easy and not onerous, but given other signs from the Trump administration lately, I think we have reason to worry…

Fact Post: Trump’s FCC Chair says he will put a “bias monitor” in place who will “report directly” to Trump as part of the deal for Sky Dance to acquire CBS.

Ari Drennen: The term that the Soviet Union used for this job was “apparatchik” btw.

I was willing to believe that firing Colbert was primarily a business decision. This is very different. Imagine the headline in reverse: “Harris’s FCC Chair says she will put a “bias monitor” in place who will “report directly” to Harris as part of the deal for Sky Dance to acquire CBS.”

Now imagine it is 2029, and the headline is ‘AOC appoints new bias monitor for CBS.’ Now imagine it was FOX. Yeah. Maybe don’t go down this road?

Director Kratsios has now given us his view on the AI Action Plan. This is a chance to see how much it is viewed as terrible rhetoric versus its good policy details, and to what extent overall policy is going to be guided by good details versus terrible rhetoric.

Peter Wildeford offers his takeaway summary.

Peter Wildeford: Winning the Global AI Race

  1. The administration’s core philosophy is a direct repudiation of the previous one, which Kratsios claims was a “fear-driven” policy “manically obsessed” with hypothetical risks that stifled innovation.

  2. The plan is explicitly called an “Action Plan” to signal a focus on immediate execution and tangible results, not another government strategy document that just lists aspirational goals.

  3. The global AI race requires America to show the world a viable, pro-innovation path for AI development that serves as an alternative to the EU’s precautionary, regulation-first model.

He leads with hyperbolic slander, which is par for the course, but yes concrete action plans are highly useful and the EU can go too far in its regulations.

There are kind of two ways to go with this.

  1. You could label any attempt to do anything to ensure we don’t die as ‘fear-driven’ and ‘manically obsessed’ with ‘hypothetical’ risks that ‘stifle’ innovation, and thus you probably die.

  2. You could label the EU and Biden Administration as ‘fear-driven’ and ‘manically obsessed’ with ‘hypothetical’ risks that ‘stifle’ innovation, contrasting that with your superior approach, and then having paid this homage do reasonable things.

The AI Action Plan as written was the second one. But you have to do that on purpose, because the default outcome is to shift to the first one.

Executing the ‘American Stack’ Export Strategy

  1. The strategy is designed to prevent a scenario where the world runs on an adversary’s AI stack by proactively offering a superior, integrated American alternative.

  2. The plan aims to make it simple for foreign governments to buy American by promoting a “turnkey solution”—combining chips, cloud, models, and applications—to reduce complexity for the buyer.

  3. A key action is to reorient US development-finance institutions like the DFC and EXIM to prioritize financing for the export of the American AI stack, shifting their focus from traditional hard infrastructure.

The whole ‘export’ strategy is either nonsensical, or an attempt to control capital flow, because I heard a rumor that it is good to be the ones directing capital flow.

Once again, the ‘tech stack’ thing is not, as described here, what’s the word? Real.

The ‘adversary’ does not have a ‘tech stack’ to offer, they have open models people can run on the same chips. They don’t have meaningful chips to even run their own operations, let alone export. And the ‘tech’ does not ‘stack’ in a meaningful way.

Turnkey solutions and package marketing are real. I don’t see any reason for our government to be so utterly obsessed with them, or even involved at all. That’s called marketing and serving the customer. Capitalism solves this. Microsoft and Amazon and Google and OpenAI and Anthropic and so on can and do handle it.

Why do we suddenly think the government needs to be prioritizing financing this? Given that it includes chip exports, how is it different from ‘traditional hard infrastructure’? Why do we need financing for the rest of this illusory stack when it is actually software? Shouldn’t we still be focusing on ‘traditional hard infrastructure’ in the places we want it, and then whenever possible exporting the inference?

Refining National Security Controls

  1. Kratsios argues the biggest issue with export controls is not the rules themselves but the lack of resources for enforcement, which is why the plan calls for giving the Bureau of Industry and Security (BIS) the tools it needs.

  2. The strategy is to maintain strict controls on the most advanced chips and critical semiconductor-manufacturing components, while allowing sales of less-advanced chips under a strict licensing regime.

  3. The administration is less concerned with physical smuggling of hardware and more focused on preventing PRC front companies from using legally exported hardware for large-scale, easily flaggable training runs.

  4. Proposed safeguards against misuse are stringent “Know Your Customer” (KYC) requirements paired with active monitoring for the scale and scope of compute jobs.

It is great to see the emphasis on enforcement. It is great to hear that the export control rules are not the issue.

In which case, can we stop waiving them, such as with H20 sales to China? Thank you. There is of course a level at which chips can be safely sold even directly to China, but the experts all agree the H20 is past that level.

The lack of concern about smuggling is a blind eye in the face of overwhelming evidence of widespread smuggling. I don’t much care if they are claiming to be concerned, I care about the actual enforcement, but we need enforcement. Yes, we should stop ‘easily flaggable’ PRC training runs and use KYC techniques, but this is saying we should look for our keys under the streetlight and then if we don’t find the keys assume we can start our car without them.

Championing ‘Light-Touch’ Domestic Regulation

  1. The administration rejects the idea of a single, overarching AI law, arguing that expert agencies like the FDA and DOT should regulate AI within their specific domains.

  2. The president’s position is that a “patchwork of regulations” across 50 states is unacceptable because the compliance burden disproportionately harms innovative startups.

  3. While using executive levers to discourage state-level rules, the administration acknowledges that a durable solution requires an act of Congress to create a uniform federal standard.

Yes, a ‘uniform federal standard’ would be great, except they have no intention of even pretending to meaningfully pursue one. They want each federal agency to do its thing in its own domain, as in a ‘use case’ based AI regime which when done on its own is the EU approach and doomed to failure.

I do acknowledge the step down from ‘kill state attempts to touch anything AI’ (aka the insane moratorium) to ‘discourage’ state-level rules using ‘executive levers,’ at which point we are talking price. One worries the price will get rather extreme.

Addressing AI’s Economic Impact at Home

  1. Kratsios highlights that the biggest immediate labor need is for roles like electricians to build data centers, prompting a plan to retrain Americans for high-paying infrastructure jobs.

  2. The technology is seen as a major productivity tool that provides critical leverage for small businesses to scale and overcome hiring challenges.

  3. The administration issued a specific executive order on K-12 AI education to ensure America’s students are prepared to wield these tools in their future careers.

Ahem, immigration, ahem, also these things rarely work, but okay, sure, fine.

Prioritizing Practical Infrastructure Over Hypothetical Risk

  1. Kratsios asserts that chip supply is no longer a major constraint; the key barriers to the AI build-out are shortages of skilled labor and regulatory delays in permitting.

  2. Success will be measured by reducing the time from permit application to “shovels in the ground” for new power plants and data centers.

  3. The former AI Safety Institute is being repurposed to focus on the hard science of metrology—developing technical standards for measuring and evaluating models, rather than vague notions of “safety.”

It is not the only constraint, but it is simply false to say that chip supply is no longer a major constraint.

Defining success in infrastructure in this way would, if taken seriously, lead to large distortions in the usual obvious Goodhart’s Law ways. I am going to give the benefit of the doubt and presume this ‘success’ definition is local, confined to infrastructure.

If the only thing America’s former AISI can now do is set formal, measurable technical standards, then that is at least a useful thing it can hopefully do well, but yeah, it basically rules out at the conceptual level the idea of actually addressing the most important safety issues, by dismissing them as ‘vague.’

This goes beyond ‘that which is measured is managed’ to an open plan of ‘that which is not measured is not managed, it isn’t even real.’ Guess how that turns out.

Defining the Legislative Agenda

  1. While the executive branch has little power here, Kratsios identifies the use of copyrighted data in model training as a “quite controversial” area that Congress may need to address.

  2. The administration would welcome legislation that provides statutory cover for the reformed, standards-focused mission of the Center for AI Standards and Innovation (CAISI).

  3. Continued congressional action is needed for appropriations to fund critical AI-related R&D across agencies like the National Science Foundation.

TechCrunch: 20 national security experts urge Trump administration to restrict Nvidia H20 sales to China.

The letter says the H20 is a potent accelerator of China’s frontier AI capabilities and could be used to strengthen China’s military.

Americans for Responsible Innovation: The H20 and the AI models it supports will be deployed by China’s PLA. Under Beijing’s “Military-Civil Fusion” strategy, it’s a guarantee that H20 chips will be swiftly adapted for military purposes. This is not a question of trade. It is a question of national security.

It would be bad enough if this was about selling the existing stock of H20s, that Nvidia has taken a writedown on, even though it could easily sell them in the West instead. It is another thing entirely that Nvidia is using its capacity on TSMC machines to make more of them, choosing to create chips to sell directly to China instead of creating chips for us.

Ruby Scanlon: Nvidia placed orders for 300,000 H20 chipsets with contract manufacturer TSMC last week, two sources said, with one of them adding that strong Chinese demand had led the US firm to change its mind about just relying on its existing stockpile.

It sounds like we’re planning on feeding what would have been our AI chips to China. And then maybe you should start crying? Or better yet tell them they can’t do it?

I share Peter Wildeford’s bafflement here:

Peter Wildeford: “China is close to catching up to the US in AI so we should sell them Nvidia chips so they can catch up even faster.”

I never understand this argument from Nvidia.

The argument is also false, and Nvidia is lying, but I don’t understand it even if it were true.

There is only a 50% premium to buy Nvidia B200 systems within China, which suggests quite a lot of smuggling is going on.

Tao Burga: Nvidia still insists that there’s “no evidence of any AI chip diversion.” Laughable. All while lobbying against the data center chip location verification software that would provide the evidence. Tell me, where does the $1bn [in AI chips smuggled to China] go?

Rob Wiblin: Nvidia successfully campaigning to get its most powerful AI chips into China has such “the capitalists will sell us the rope with which we will hang them” energy.

Various people I follow keep emphasizing that China is smuggling really a lot of advanced AI chips, including B200s and such, and perhaps we should be trying to do something about it, because it seems rather important.

Chipmakers will always oppose any proposal to track chips or otherwise crack down on smuggling and call it ‘burdensome,’ where the ‘burden’ is ‘if you did this they would not be able to smuggle as many chips, and thus we would make less money.’

Reuters Business: Demand in China has begun surging for a business that, in theory, shouldn’t exist: the repair of advanced artificial intelligence chipsets that the US has banned the export of to its trade and tech rival.

Peter Wildeford: Nvidia position: “datacenters from smuggled products is a losing proposition […] Datacenters require service and support, which we provide only to authorized NVIDIA products.”

Reality: Nvidia AI chip repair industry booms in China for banned products.

Scott Bessent warns that TSMC’s $40 billion Arizona fab, which could meet 7% of American chip demand, keeps getting delayed, and blames inspectors and red tape. There’s confusion in the headline, which reads as if he is warning it would ‘only’ meet 7% of demand, but 7% of demand would be amazing for one plant, and the article’s text reflects this.

Bessent criticized regulatory hurdles slowing construction of the $40 billion facility. “Evidently, these chip design plants are moving so quickly, you’re constantly calling an audible and you’ve got someone saying, ‘Well, you said the pipe was going to be there, not there. We’re shutting you down,’” he explained.

It does also mean that if we want to meet 100% or more of demand we will need a lot more plants, but we knew that.

Epoch reports that Chinese hardware is behind American hardware, and is ‘closing the gap’ but faces major obstacles in chip manufacturing capability.

Epoch: Even if we exclude joint ventures with U.S., Australian, or U.K. institutions (where the developers can access foreign silicon), the clear majority of homegrown models relied on NVIDIA GPUs. In fact, it took until January 2024 for the first large language model to reportedly be trained entirely on Chinese hardware, arguably years after the first large language models.

Probably the most important reason for the dominance of Western hardware is that China has been unable to manufacture these AI chips in adequate volumes. Whereas Huawei reportedly manufactured 200,000 Ascend 910B chips in 2024, estimates suggest that roughly one million NVIDIA GPUs were legally delivered to China in the same year.

That’s right. For every top level Huawei chip manufactured, Nvidia sold five to China. No, China is not about to export a ‘full Chinese tech stack’ for free the moment we turn our backs. They’re offering downloads of r1 and Kimi K2, to be run on our chips, and they use all their own chips internally because they still have a huge shortage.
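The ‘five to one’ figure is just the two Epoch estimates above divided against each other (a trivial check; variable names are illustrative):

```python
# 2024 figures quoted from the Epoch report above.
huawei_ascend_910b_made = 200_000  # Huawei Ascend 910B chips manufactured
nvidia_gpus_delivered = 1_000_000  # Nvidia GPUs legally delivered to China

# Nvidia chips sold into China per top-level Huawei chip made.
ratio = nvidia_gpus_delivered / huawei_ascend_910b_made
print(ratio)  # → 5.0
```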

Put bluntly, we don’t see China leaping ahead on compute within the next few years. Not only would China need to overcome major obstacles in chip manufacturing and software ecosystems, they would also need to surpass foreign companies making massive investments into hardware R&D and chip fabrication.

Unless export controls erode or Beijing solves multiple technological challenges in record time, we think that China will remain at least one generation behind in hardware. This doesn’t prevent Chinese developers from training and running frontier AI models, but it does make it much more costly.

Overall, we think these costs are large enough to put China at a substantial disadvantage in AI scaling for at least the rest of the decade.

Beating China may or may not be your number one priority. We do know that taking export controls seriously is the number one priority for ‘beating China.’

Intel will cancel 14A and following nodes, essentially abandoning the technological frontier, if it cannot win a major external customer.


The Week in AI Governance


China claims Nvidia built backdoor into H20 chip designed for Chinese market

The CAC did not specify which experts had found a back door in Nvidia’s products or whether any tests in China had uncovered the same results. Nvidia did not immediately respond to a request for comment.

Lawmakers in Washington have expressed concern about chip smuggling and introduced a bill that would require chipmakers such as Nvidia to embed location tracking into export-controlled hardware.

Beijing has issued informal guidance to major Chinese tech groups to increase purchases of domestic AI chips in order to reduce reliance on Nvidia and support the evolution of a rival domestic chip ecosystem.

Chinese tech giant Huawei and smaller groups including Biren and Cambricon have benefited from the push to localize chip supply chains.

Nvidia said it would take nine months from restarting manufacturing to shipping the H20 to clients. Industry insiders said there was considerable uncertainty among Chinese customers over whether they would be able to take delivery of any orders if the US reversed its decision to allow its sale.

The Trump administration has faced heavy criticism, including from security experts and former officials, who argue that the H20 sales would accelerate Chinese AI development and threaten US national security.

“There are strong factions on both sides of the Pacific that don’t like the idea of renewing H20 sales,” said Triolo. “In the US, the opposition is clear, but also in China voices are saying that it will slow transition to the alternative ecosystem.”

© 2025 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.



Google tool misused to scrub tech CEO’s shady past from search

Capital F for “Frustrating”

Upon investigating, FPF found that its article on Blackman was completely absent from Google results, even through a search with the exact title. Poulson later realized that two of his own Substack articles were similarly affected. The Foundation was led to the Refresh Outdated Content tool upon checking its search console.

Google’s tool doesn’t just take anyone’s word for it when they suggest the removal of search results. However, a bug in the tool made it an ideal way to suppress information in search results. When inputting a URL, the tool allowed users to change the capitalization in the URL slug. The Foundation’s article was titled “Anatomy of a censorship campaign: A tech exec’s crusade to stifle journalism,” but the requests logged in Google’s tool included variations like “AnAtomy” and “censorSHip.”

Because the Refresh Outdated Content tool seemingly matched URLs case-insensitively, the crawler would fetch the case-mangled URL, encounter a 404 error (web servers typically treat URL paths as case-sensitive), and then de-index the working URL. Investigators determined this method was used by Blackman, or someone with a suspicious interest in his online profile, dozens of times between May and June 2025. Amusingly, since leaving Premise, Blackman has landed the CEO role at online reputation management firm The Transparency Company.
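A minimal sketch of that failure mode, assuming the reported behavior (the slug, data structures, and function names here are hypothetical illustrations, not Google’s actual implementation): the server matches paths case-sensitively, so a case-mangled slug 404s, while the removal tool matches its index entry case-insensitively.

```python
# A case-sensitive "web server": only the exact slug exists.
PUBLISHED = {"/anatomy-of-a-censorship-campaign"}

def server_status(path: str) -> int:
    """Return 200 if the exact path exists, else 404 (case-sensitive)."""
    return 200 if path in PUBLISHED else 404

# A search index keyed by lowercased slug (the case-insensitive side).
search_index = {"/anatomy-of-a-censorship-campaign": "indexed"}

def refresh_outdated_content(submitted_url: str) -> None:
    """Sketch of the flawed flow: crawl the submitted URL exactly as
    given, but match the index entry case-insensitively."""
    if server_status(submitted_url) == 404:
        # The bug: a 404 on the mangled URL de-indexes the working
        # lowercase URL, even though that URL still serves a 200.
        search_index.pop(submitted_url.lower(), None)

# An attacker submits a case-mangled variant of a live article URL.
refresh_outdated_content("/AnAtomy-of-a-censorship-campaign")
print(search_index)  # → {} : the legitimate article is gone from the index
```

The fix is to make both sides agree: only de-index an entry whose stored URL is byte-for-byte identical to the URL the crawler actually saw return 404.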

If you go looking for the Freedom of the Press Foundation article or Poulson’s own reporting, it should appear normally in Google’s search results. The FPF contacted Google about the issue, and the company confirmed the bug. It issued a fix with unusual swiftness, telling the Foundation that the bug affected “a tiny fraction of websites.”

It is unclear whether Google was aware of the bug previously or if its exploitation was widespread. The Internet is vast, and those who seek to maliciously hide information are not prone to publicizing their methods. It’s somewhat unusual for Google to admit fault so readily, but at least it addressed the issue.

The Refresh Outdated Content tool doesn’t log who submits requests, but whoever was behind this disinformation campaign may want to look into the Streisand Effect.



VPN use soars in UK after age-verification laws go into effect

Also on Friday, the Windscribe VPN service posted a screenshot on X claiming to show a spike in new subscribers. The makers of the AdGuard VPN claimed that they have seen a 2.5X increase in install rates from the UK since Friday.

Nord Security, the company behind the NordVPN app, says it has seen a “1,000 percent increase in purchases” of subscriptions from the UK since the day before the new laws went into effect. “Such spikes in demand for VPNs are not unusual,” Laura Tyrylyte, Nord Security’s head of public relations, tells WIRED. She adds in a statement that “whenever a government announces an increase in surveillance, Internet restrictions, or other types of constraints, people turn to privacy tools.”

People living under repressive governments that impose extensive Internet censorship—like China, Russia, and Iran—have long relied on circumvention tools like VPNs and other technologies to maintain anonymity and access blocked content. But as countries that have long claimed to champion the open Internet and access to information, like the United States, begin considering or adopting age verification laws meant to protect children, the boundaries for protecting digital rights online quickly become extremely murky.

“There will be a large number of people who are using circumvention tech for a range of reasons” to get around age verification laws, the ACLU’s Kahn Gillmor says. “So then as a government you’re in a situation where either you’re obliging the websites to do this on everyone globally, that way legal jurisdiction isn’t what matters, or you’re encouraging people to use workarounds—which then ultimately puts you in the position of being opposed to censorship-circumvention tools.”

This story originally appeared on wired.com.



Tesla picks LGES, not CATL, for $4.3 billion storage battery deal

Tesla has a new battery cell supplier. Although the automaker is vertically integrated to a degree not seen in the automotive industry for decades, when it comes to battery cells it’s mostly dependent upon suppliers. Panasonic cells can be found in many Teslas, with the cheaper, sturdier lithium iron phosphate (LFP) battery cells being supplied by CATL. Now Tesla has a new source of LFP cells thanks to a deal just signed with LG Energy Solutions.

According to The Korea Economic Daily, the contract between Tesla and LGES is worth $4.3 billion. LGES will supply Tesla with cells from next August until at least the end of July 2030, with provisions to extend the contract if necessary.

The LFP cells probably aren’t destined for life on the road, however. Instead, they’ll likely be used in Tesla’s energy storage products, which both Tesla and LGES hope will soak up demand now that EV sales prospects look so weak in North America.

The deal also reduces Tesla’s reliance on Chinese suppliers. LGES will produce the LFP cells at its factory in Michigan, says Reuters, and so they will not be subject to the Trump trade war tariffs, unlike Chinese-made cells from CATL.

Although Tesla CEO Elon Musk has boasted about the size of the energy storage market, its contribution to Tesla’s financials remains meagre, and actually shrank during the last quarter.
