Author name: Tim Belzer

disruption-to-science-will-last-longer-than-the-us-government-shutdown

Disruption to science will last longer than the US government shutdown

President Donald Trump alongside Office of Management and Budget Director Russell Vought. Credit: Brendan Smialowski/AFP via Getty Images

However, the full consequences of the shutdown and the Trump administration’s broader assaults on science for US international competitiveness, economic security, and electoral politics could take years to materialize.

In parallel, the dramatic drop in international student enrollment, the financial squeeze facing research institutions, and research security measures to curb foreign interference spell an uncertain future for American higher education.

With neither the White House nor Congress showing signs of reaching a budget deal, Trump continues to test the limits of executive authority, reinterpreting the law—or simply ignoring it.

Earlier in October, Trump redirected unspent research funding to pay service members before they missed their Oct. 15 paycheck. Repurposing appropriated funds directly challenges the power vested in Congress—not the president—to control federal spending.

The White House’s promise to fire an additional 10,000 civil servants during the shutdown, its threat to withhold back pay from furloughed workers, and its push to end any programs with lapsed funding “not consistent with the President’s priorities” similarly move to broaden presidential power.

Here, the damage to science could snowball. If Trump and Vought chip enough authority away from Congress by making funding decisions or shuttering statutory agencies, the next three years will see an untold amount of impounded, rescinded, or repurposed research funds.

The government shutdown has emptied many laboratories staffed by federal scientists. Combined with other actions by the Trump administration, it could lead even more scientists to lose funding. Credit: Monty Rakusen/DigitalVision via Getty Images

Science, democracy, and global competition

While technology has long served as a core pillar of national and economic security, science has only recently reemerged as a key driver of greater geopolitical and cultural change.

China’s extraordinary rise in science over the past three decades and its arrival as the United States’ chief technological competitor have upended conventional wisdom that innovation can thrive only in liberal democracies.

The White House’s efforts to centralize federal grantmaking, restrict free speech, erase public data, and expand surveillance mirror China’s successful playbook for building scientific capacity while suppressing dissent.

As the shape of the Trump administration’s vision for American science has come into focus, what remains unclear is whether, after the shutdown, the United States can outcompete China by following its lead.

Kenneth M. Evans is a Fellow in Science, Technology, and Innovation Policy at the Baker Institute for Public Policy, Rice University.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Disruption to science will last longer than the US government shutdown Read More »

neural-network-finds-an-enzyme-that-can-break-down-polyurethane

Neural network finds an enzyme that can break down polyurethane

You’ll often hear plastic pollution referred to as a problem. But the reality is that it’s multiple problems. Depending on the properties we need, we form plastics out of different polymers, each of which is held together by a distinct type of chemical bond. So the method we use to break down one type of polymer may be incompatible with the chemistry of another.

That problem is why, even though we’ve had success finding enzymes that break down common plastics like polyesters and PET, they’re only partial solutions to plastic waste. However, researchers aren’t sitting back and basking in the triumph of partial solutions, and they’ve now got very sophisticated protein design tools to help them out.

That’s the story behind a completely new enzyme that researchers developed to break down polyurethane, the polymer commonly used to make foam cushioning, among other things. The new enzyme is compatible with an industrial-style recycling process that breaks the polymer down into its basic building blocks, which can be used to form fresh polyurethane.

Breaking down polyurethane

The basics of the chemical bonds that link polyurethanes. The rest of the polymer is represented by X’s here.

The new paper that describes the development of this enzyme lays out the scale of the problem: In 2024, we made 22 million metric tons of polyurethane. The urethane bond that defines these polymers involves a nitrogen bonded to a carbon that is in turn bonded to two oxygens, one of which links into the rest of the polymer. The rest of the polymer, linked by these bonds, can be fairly complex and often contains ringed structures related to benzene.

Digesting polyurethanes is challenging. Individual polymer chains are often extensively cross-linked, and the bulky structures can make it difficult for enzymes to get at the bonds they can digest. A chemical called diethylene glycol can partially break these molecules down, but only at elevated temperatures. And it leaves behind a complicated mess of chemicals that can’t be fed back into any useful reactions. Instead, it’s typically incinerated as hazardous waste.

Neural network finds an enzyme that can break down polyurethane Read More »

measles-outbreak-investigation-in-utah-blocked-by-patient-who-refuses-to-talk

Measles outbreak investigation in Utah blocked by patient who refuses to talk

A measles investigation amid a large, ongoing outbreak at the Arizona-Utah border has hit a roadblock as the first probable case identified in the Salt Lake City area refuses to work with health officials, the local health department reported this week.

There have been over 150 cases collectively across the two states, mostly in northwestern Mohave County, Arizona, and the southwest health district of Utah, in the past two months. Both areas have abysmally low vaccination rates: In Mohave County, only 78.4 percent of kindergartners in the 2024–2025 school year were vaccinated against measles, according to state records. In the southwest district of Utah, only 80.7 percent of kindergartners in the 2024–2025 school year had records of measles vaccination. Public health experts say vaccination coverage of 95 percent is necessary to keep the disease from spreading in a community.
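The 95 percent figure tracks the standard herd-immunity arithmetic (a textbook estimate, not a calculation from the article): with measles’ basic reproduction number commonly cited at roughly 12 to 18, the threshold is

$$1 - \frac{1}{R_0} \approx 1 - \frac{1}{12} \text{ to } 1 - \frac{1}{18} \approx 92\% \text{ to } 94\%,$$

with the remaining margin typically attributed to imperfect vaccine effectiveness and uneven coverage. Coverage in the high 70s to low 80s, as in these counties, leaves ample room for sustained spread.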

While the outbreak has largely exploded along the border, cases are also creeping to the north, toward Salt Lake County, which encompasses the city. Utah County, which sits just south of Salt Lake County, has identified eight cases, including a new case reported today.

Uncooperative case

Salt Lake County likely has a new one, too—the first for the county this year—as well as possible exposures. But officials can’t confirm it.

County health officials said that a health care provider in the area contacted them late on Monday to tell them about a patient who very likely has measles. The officials then spent a day reaching out to the person, who refused to answer questions or cooperate in any way. That included refusing to share location information so that other people could be notified that they were potentially exposed to one of the most infectious viruses known.

“The patient has declined to be tested, or to fully participate in our disease investigation, so we will not be able to technically confirm the illness or properly do contact tracing to warn anyone with whom the patient may have had contact,” Dorothy Adams, executive director of Salt Lake County Health Department, said in a statement. “But based on the specific symptoms reported by the healthcare provider and the limited conversation our investigators have had with the patient, this is very likely a case of measles in someone living in Salt Lake County.”

Measles outbreak investigation in Utah blocked by patient who refuses to talk Read More »

new-glenn-rocket-has-clear-path-to-launch-after-test-firing-at-cape-canaveral

New Glenn rocket has clear path to launch after test-firing at Cape Canaveral

The road to the second flight of Blue Origin’s heavy-lifting New Glenn rocket got a lot clearer Thursday night with a successful test-firing of the launcher’s seven main engines on a launch pad at Cape Canaveral Space Force Station, Florida.

Standing on a seaside launch pad, the New Glenn rocket ignited its seven BE-4 main engines at 9:59 pm EDT Thursday (01:59 UTC Friday). The engines burned for 38 seconds while the rocket remained firmly on the ground, according to a social media post by Blue Origin.

The hold-down firing of the first stage engines was the final major test of the New Glenn rocket before launch day. Blue Origin previously test-fired the rocket’s second-stage engines. Officials have not announced a target launch date, but sources tell Ars the rocket could be ready for liftoff as soon as November 9.

“Love seeing New Glenn’s seven BE-4 engines come alive! Congratulations to Team Blue on today’s hotfire,” the company’s CEO, Dave Limp, posted on X.

Blue Origin, the space company owned by billionaire Jeff Bezos, said the engines operated at full power for 22 seconds, generating nearly 3.9 million pounds of thrust. Limp said engineers extended this test-firing and shut down some of the BE-4 engines to simulate the booster’s landing burn sequence, which Blue Origin hopes will culminate in a successful touchdown on a barge floating downrange in the Atlantic Ocean.

“This helps us understand fluid interactions between active and inactive engine feedlines during landing,” Limp wrote.

Blue Origin is counting on recovering the New Glenn first stage on the next flight after missing the landing on the rocket’s inaugural mission in January. Officials plan to reuse this booster on the third New Glenn launch early next year, slated to propel Blue Origin’s first unpiloted Blue Moon lander toward the Moon. If Blue Origin fails to land this rocket, it’s unlikely a new first stage booster will be ready to launch until sometime later in 2026.

A few more things to do

With the test-firing complete, Blue Origin’s ground crew will lower the more than 320-foot-tall (98-meter) rocket and roll it back to a nearby hangar. There, technicians will inspect the vehicle and swap its payload fairing for another clamshell containing two NASA-owned spacecraft set to begin their journey to Mars.

New Glenn rocket has clear path to launch after test-firing at Cape Canaveral Read More »

sam-altman-wants-a-refund-for-his-$50,000-tesla-roadster-deposit

Sam Altman wants a refund for his $50,000 Tesla Roadster deposit

2017 feels like another era these days, but if you cast your mind back that far, you might remember Tesla CEO Elon Musk’s vaporware Roadster 2.0. Full of nonsensical-sounding features that impressed people who know a little bit about rockets but nothing about cars, the $200,000 electric car promised to have a suction fan and “cold gas thrusters,” plus 620 miles (1,000 km) of range and a whole load of other stuff that’s never happening.

Plenty of other electric automakers have introduced electric hypercars in the eight years since Musk declared the second Roadster a thing, with no sign of it being any closer to reality, if the latest job postings are accurate. And it seems that over time, a lot of the people who gave the company a hefty deposit—some say interest-free loan—have become tired of waiting and want their money back.

And that’s not quite so easy, it turns out. Musk’s current Silicon Valley rival is the latest to discover this. According to Sam Altman’s social media account, he placed an order for a Roadster on July 11, 2018, with a deposit of $45,000 ($58,206 in today’s money). But after emailing Tesla for a refund, he discovered the email address associated with preorders had been deleted.

A screenshot of Sam Altman’s X posts about cancelling his car. Credit: Twitter

Perhaps Altman forgot to ask ChatGPT how best to go about getting his money. If he had, he might have stumbled across the experience of YouTuber Marques Brownlee, who eventually had to pick up a telephone and call someone to get most of his $50,000 back. Or perhaps some of the threads at Reddit or the Tesla forums, where other people who fell for the cold gas thruster-equipped two-seater with Lucid-busting range and F1-beating acceleration have gathered to share stories of how best to make Tesla return their money.

Sam Altman wants a refund for his $50,000 Tesla Roadster deposit Read More »

2026-hyundai-ioniq-9:-american-car-buyer-tastes-meet-korean-ev-tech

2026 Hyundai Ioniq 9: American car-buyer tastes meet Korean EV tech

The Ioniq 9 interior. Jonathan Gitlin

The native NACS charge port at the rear means all of Tesla’s v3 Superchargers are potential power-up locations; these will take the battery from 10–80 percent state of charge in 40 minutes. Or use the NACS-CCS1 adapter and a 350 kW fast charger (or find one of Ionna’s 350 kW chargers with a NACS plug) and do the 10–80 percent SoC top-up in a mere 24 minutes.

With this most-powerful Ioniq 9, I’d mostly keep it in Eco mode, which almost entirely relies upon the rear electric motor. When firing with both motors, the Calligraphy outputs 422 hp (315 kW) and more importantly, 516 lb-ft (700 Nm). In Sport mode, that’s more than enough to chirp the tires from a standstill, particularly if it’s damp. Low rolling resistance and good efficiency was a higher priority for the Ioniq 9’s tire selection than lateral grip, and with a curb weight of 6,008 lbs (2,735 kg) it’s not really a car that needs to be hustled unless you’re attempting to outrun something like a volcano. It’s also the difference between efficiency in the low 2 miles/kWh range.

Life with the Ioniq 9 wasn’t entirely pain-free. For example, the touch panel for the climate control settings becomes impossible to read in bright sunlight, although the knobs to raise or lower the temperature are at least physical items. I also had trouble with the windshield wipers’ intermittent setting, despite the standard rain sensors.

Built just outside of Savannah, Georgia, don’t you know. Credit: Jonathan Gitlin

At $74,990, the Ioniq 9 Calligraphy comes more heavily specced than electric SUVs from more luxurious, and therefore more expensive, brands and should charge faster and drive more efficiently than any of them. If you don’t mind giving up 119 hp (89 kW) and some options, all-wheel drive is available from $62,765 for the SE trim, and the longer-legged single-motor Ioniq 9 starts at $58,955. Although with just 215 hp (160 kW) and 285 lb-ft (350 Nm), the driving experience won’t be quite the same as the model we tested.

2026 Hyundai Ioniq 9: American car-buyer tastes meet Korean EV tech Read More »

caught-cheating-in-class,-college-students-“apologized”-using-ai—and-profs-called-them-out

Caught cheating in class, college students “apologized” using AI—and profs called them out

When the professors realized how widespread this was, they contacted the 100-ish students who seemed to be cheating. “We reached out to them with a warning, and asked them, ‘Please explain what you just did,’” said Fagen-Ulmschneider in an Instagram video discussing the situation.

Apologies came back from the students, first in a trickle, then in a flood. The professors were initially moved by this acceptance of responsibility and contrition… until they realized that 80 percent of the apologies were almost identically worded and appeared to be generated by AI.

So on October 17, during class, Flanagan and Fagen-Ulmschneider took their class to task, displaying a mash-up image of the apologies, each bearing the same “sincerely apologize” phrase. No disciplinary action was taken against the students, and the whole situation was treated rather lightly—but the warning was real. Stop doing this. Flanagan said that she hoped it would be a “life lesson” for the students.

Time for a life lesson! Credit: Instagram

On a University of Illinois subreddit, students shared their own experiences of the same class and of AI use on campus. One student claimed to be a teaching assistant for the Data Science Discovery course and said that, in addition to not being present, many students would use AI to solve the (relatively easy) problems. AI tools will often “use functions that weren’t taught in class,” which gave the game away pretty easily.

Another TA claimed that “it’s insane how pervasive AI slop is in 75% of the turned-in work,” while another student complained about being a course assistant where “students would have a 75-word paragraph due every week and it was all AI generated.”

One doesn’t have to read far in these kinds of threads to find plenty of students who feel aggrieved because they were accused of AI use—but hadn’t done it. Given how poor most AI detection tools are, this is plenty plausible; and if AI detectors aren’t used, accusations often come down to a hunch.

Caught cheating in class, college students “apologized” using AI—and profs called them out Read More »

trump-admin-demands-states-exempt-isps-from-net-neutrality-and-price-laws

Trump admin demands states exempt ISPs from net neutrality and price laws


US says net neutrality is price regulation and is banned in $42B grant program.

Credit: Getty Images | Yuichiro Chino

The Trump administration is refusing to give broadband-deployment grants to states that enforce net neutrality rules or price regulations, a Commerce Department official said.

The administration claims that net neutrality rules are a form of rate regulation and thus not allowed under the US law that created the $42 billion Broadband Equity, Access, and Deployment (BEAD) program. Commerce Department official Arielle Roth said that any state accepting BEAD funds must exempt Internet service providers from net neutrality and price regulations in all parts of the state, not only in areas where the ISP is given funds to deploy broadband service.

States could object to the NTIA decisions and sue the US government. But even a successful lawsuit could take years and leave unserved homes without broadband for the foreseeable future.

Roth, an assistant secretary who leads the National Telecommunications and Information Administration (NTIA), said in a speech at the conservative Hudson Institute on Tuesday:

Consistent with the law, which explicitly prohibits regulating the rates charged for broadband service, NTIA is making clear that states cannot impose rate regulation on the BEAD program. To protect the BEAD investment, we are clarifying that BEAD providers must be protected throughout their service area in a state, while the provider is still within its BEAD period of performance. Specifically, any state receiving BEAD funds must exempt BEAD providers throughout their state footprint from broadband-specific economic regulations, such as price regulation and net neutrality.

Trouble for California and New York

The US law that created BEAD requires Internet providers that receive federal funds to offer at least one “low-cost broadband service option for eligible subscribers,” but also says the NTIA may not regulate broadband prices. “Nothing in this title may be construed to authorize the Assistant Secretary or the National Telecommunications and Information Administration to regulate the rates charged for broadband service,” the law says.

The NTIA is interpreting this law in an expansive way by categorizing net neutrality rules as impermissible rate regulation and by demanding statewide exemptions from state laws for ISPs that obtain grant money.

This would be trouble for California, which has a net neutrality law that’s nearly identical to FCC net neutrality rules repealed during President Trump’s first term. California beat court challenges from Internet providers in cases that upheld its authority to regulate broadband service.

The NTIA stance is also trouble for New York, which has a law requiring ISPs to offer $15 or $20 broadband plans to people with low incomes. New York defeated industry challenges to its law, with the US Supreme Court declining opportunities to overturn a federal appeals court ruling in favor of the state.

But while broadband lobby groups weren’t able to block these state regulations with lawsuits, their allies in the Trump administration want to accomplish the goal by blocking grants that could be used to deploy broadband networks to homes and businesses that are unserved or underserved.

This already had an impact when a California lawmaker dropped a proposal, modeled on New York’s law, to require $15 monthly plans. As we wrote in July, Assemblymember Tasha Boerner said she pulled the bill because the Trump administration said that regulating prices would prevent California from getting its $1.86 billion share of BEAD. But now, California could lose access to the fund anyway due to the NTIA’s stance on net neutrality rules.

We contacted the California and New York governors’ offices about Roth’s comments and will update this article if we get any response.

Roth: State laws “threaten financial viability” of projects

Republicans have long argued that net neutrality is rate regulation, even though the rules don’t directly regulate prices that ISPs charge consumers. California’s law prohibits ISPs from blocking or throttling lawful traffic, prohibits fees charged to websites or online services to deliver or prioritize their traffic, bans paid data cap exemptions (also known as “zero-rating”), and says that ISPs may not attempt to evade net neutrality protections by slowing down traffic at network interconnection points.

Roth claimed that state broadband laws, even if applied only in non-grant areas, would degrade the service offered by ISPs in locations funded by grants. She said:

Unfortunately, some states have adopted or are considering adopting laws that specifically target broadband providers with rate regulation or state-level net neutrality mandates that threaten the financial viability of BEAD-funded projects and undermine Congress’s goal of connecting unserved communities.

Rate regulation drives up operating costs and scares off investment, especially in high-cost areas where every dollar counts. State-level net neutrality rules—itself a form of rate regulation—create a patchwork of conflicting regulations that raise compliance costs and deter investment.

These burdens don’t just hurt BEAD providers; they hurt the very households BEAD is meant to connect by reducing capital available for the hardest-to-reach communities. In some cases, they can divert investment away from BEAD areas altogether, as providers redirect resources to their lower-cost, lower-risk, non-BEAD markets.

State broadband laws “could create perverse incentives” by “pressuring providers to shift resources away from BEAD commitments to subsidize operations in non-BEAD areas subject to burdensome state rules,” Roth said. “That would increase the likelihood of defaults and defeat the purpose of BEAD’s once-in-a-generation investment.”

The NTIA decision not to give funds to states that enforce such rules “is essential to ensure that BEAD funds go where Congress intended—to build and operate networks in hard-to-serve areas—not to prop up regulatory experiments that drive investment away,” she said.

States are complying, Roth says

Roth indicated that at least some states are complying with the NTIA’s demands. These demands also include cutting red tape related to permits and access to utility poles and increasing the amount of matching dollars that ISPs themselves put into the projects. “In the coming weeks we will announce the approval of several state plans that incorporate these commitments,” she said. “We remain on track to approve the majority of state plans and get money out the door this year.”

Before Trump won the election, the Biden administration developed rules for BEAD and approved initial funding plans submitted by every state and territory. The Trump administration’s overhaul of the program rules has delayed the funding.

While the Biden NTIA pushed states to require specific prices for low-income plans, the Trump administration now prohibits states “from explicitly or implicitly setting the LCSO [low-cost service option] rate” that ISPs must offer. Instead, ISPs get to choose what counts as “low-cost.”

The Trump administration also removed a preference for fiber projects, resulting in more money going to satellite providers—though not as much as SpaceX CEO Elon Musk has demanded. The changes imposed by the Trump NTIA have caused states to allocate less funding overall, leading to an ongoing dispute over what will happen to the $42 billion program’s leftover money.

Roth said the NTIA is “considering how states can use some of the BEAD savings—what has commonly been referred to as nondeployment money—on key outcomes like permitting reform,” but added that “no final decisions have been made.”


Jon is a Senior IT Reporter for Ars Technica. He covers the telecom industry, Federal Communications Commission rulemakings, broadband consumer affairs, court cases, and government regulation of the tech industry.

Trump admin demands states exempt ISPs from net neutrality and price laws Read More »

space-station-astronauts-eager-to-open-“golden-treasure-box”-from-japan

Space station astronauts eager to open “golden treasure box” from Japan

And without the ISS, Russia’s human spaceflight program might be dead today.

Ins and outs of HTV-X

Yui used the outpost’s robotic arm to grapple the HTV-X spacecraft at 11:58 am EDT (15:58 UTC) on Wednesday. The capture capped a three-and-a-half-day transit from a launch pad on Tanegashima Island in southern Japan.

The spacecraft flew to space atop Japan’s H3 rocket, replacing the H-II launcher family used for Japan’s previous resupply missions to the ISS. The H3 and HTV-X are both manufactured by Mitsubishi Heavy Industries.

Japan’s H3 rocket launched Sunday (local time) from the Tanegashima Space Center in southern Japan, carrying the first HTV-X spacecraft into orbit en route to the International Space Station. Credit: JAXA

Once in orbit, HTV-X unfurled its power-generating solar panels. This is one of the new ship’s most significant differences from the HTV, which had its solar panels mounted directly on the body of the spacecraft. By all accounts, the HTV-X’s modified computers, navigation sensors, and propulsion system all functioned as intended, leading to the mission’s on-time arrival at the ISS.

Rob Navias, a NASA spokesperson, called the HTV-X’s first flight “flawless” during the agency’s streaming commentary of the rendezvous: “Everything went by the book.”

At 26 feet (8 meters) long, the HTV-X is somewhat shorter than the vehicle it replaces. But an improved design gives the HTV-X more capacity, with the ability to accommodate more than 9,000 pounds (4.1 metric tons) inside its pressurized cargo module, about 25 percent more than the HTV. The new spacecraft boasts a similar enhancement in carrying capacity for external cargo, such as spares and science instruments, to be mounted on the outside of the space station.

Japan provides resupply services to the space station to help reimburse NASA for its share of the research lab’s operating costs. In addition to space station missions in low-Earth orbit, Japanese officials say the HTV-X spacecraft could haul logistics to the future Gateway mini-space station near the Moon.

Officials plan to launch at least three HTV-X missions to the ISS to cover Japan’s share of the station’s operating expenses. There are tentative plans for a fourth and fifth HTV-X that could launch before 2030. The second HTV-X mission will attempt Japan’s first automated docking with the space station, a prerequisite for any future resupply missions to the Gateway.

Space station astronauts eager to open “golden treasure box” from Japan Read More »

fcc-republicans-force-prisoners-and-families-to-pay-more-for-phone-calls

FCC Republicans force prisoners and families to pay more for phone calls

At yesterday’s meeting, the FCC separately proposed to eliminate a rule that requires Internet providers to itemize various fees in broadband price labels that must be made available to consumers. Public comment will be taken before a final decision. We described that proposal in an October 8 article.

“Under the cover of a shutdown with limited staff, a confused public, and an overloaded agenda, the FCC pushed to pass the most anti-consumer items it has approved yet,” Gomez said yesterday.

New inflation factor to raise rates further

The phone provider NCIC Correctional Services filed a petition asking the FCC to change its 2024 rate-cap order, claiming that the limits were “below the cost of providing service for most IPCS providers” and “unsustainable.” The order was also protested by Global Tel*Link (aka ViaPath) and Securus Technologies.

Gomez said that “providers making these claims did not even bother to meet with my office to explain their position,” and did not provide data requested by the FCC. By accepting the industry claims, “the FCC today decides to reward bad behavior,” Gomez said.

FCC price caps vary based on the size of the facility. The 2024 order set a range of $0.06 to $0.12 per minute for audio calls, down from the previous range of $0.14 to $0.21 per minute. The 2024 order adopted video call rate caps for the first time, setting rates from $0.11 to $0.25 per minute.

A few weeks before yesterday’s vote, the FCC released a public draft of its proposal with new voice-call caps ranging from $0.10 to $0.18 per minute, and new video call caps ranging from $0.18 to $0.41 per minute. These new limits account for changes to the method of rate-cap calculation, the $0.02 additional fee, and a new size category of “extremely small jails” that can charge the highest rates.
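For a rough sense of what the change means on a single call (my arithmetic from the cap ranges above, not figures from the FCC): a 15-minute audio call at the 2024 caps would cost

$$15 \times \$0.06 = \$0.90 \quad\text{to}\quad 15 \times \$0.12 = \$1.80,$$

while the same call under the newly proposed $0.10 to $0.18 caps would run $1.50 to $2.70.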

Gomez criticized an inflation factor of 6.7 percent that she said was added in the “11th hour.” The final version of the order approved at yesterday’s meeting hasn’t been released publicly yet. The inflation “factor will be adopted without being given notice to the public that it was being considered… or evidence that it’s necessary,” Gomez said.

FCC Republicans force prisoners and families to pay more for phone calls Read More »

nvidia-hits-record-$5-trillion-mark-as-ceo-dismisses-ai-bubble-concerns

Nvidia hits record $5 trillion mark as CEO dismisses AI bubble concerns

Partnerships and government contracts fuel optimism

At the GTC conference on Tuesday, Nvidia’s CEO went out of his way to repeatedly praise Donald Trump and his policies for accelerating domestic tech investment while warning that excluding China from Nvidia’s ecosystem could limit US access to half the world’s AI developers. The overall event stressed Nvidia’s role as an American company, with Huang even nodding to Trump’s signature slogan in his sign-off by thanking the audience for “making America great again.”

Trump’s cooperation is paramount for Nvidia because US export controls have effectively blocked Nvidia’s AI chips from China, costing the company billions of dollars in revenue. Bob O’Donnell of TECHnalysis Research told Reuters that “Nvidia clearly brought their story to DC to both educate and gain favor with the US government. They managed to hit most of the hottest and most influential topics in tech.”

Beyond the political messaging, Huang announced a series of partnerships and deals that apparently helped ease investor concerns about Nvidia’s future. The company announced collaborations with Uber Technologies, Palantir Technologies, and CrowdStrike Holdings, among others. Nvidia also revealed a $1 billion investment in Nokia to support the telecommunications company’s shift toward AI and 6G networking.

The agreement with Uber will power a fleet of 100,000 self-driving vehicles with Nvidia technology, with automaker Stellantis among the first to deliver the robotaxis. Palantir will pair Nvidia’s technology with its Ontology platform to use AI techniques for logistics insights, with Lowe’s as an early adopter. Eli Lilly plans to build what Nvidia described as the most powerful supercomputer owned and operated by a pharmaceutical company, relying on more than 1,000 Blackwell AI accelerator chips.

The $5 trillion valuation surpasses the total cryptocurrency market value and equals roughly half the size of the pan-European Stoxx 600 equities index, Reuters notes. At current prices, Huang’s stake in Nvidia would be worth about $179.2 billion, making him the world’s eighth-richest person.

Nvidia hits record $5 trillion mark as CEO dismisses AI bubble concerns Read More »

ai-craziness-mitigation-efforts

AI Craziness Mitigation Efforts

AI chatbots in general, and OpenAI’s ChatGPT (especially GPT-4o, the absurd sycophant) in particular, have long had problems around mental health.

I covered various related issues last month.

This post is an opportunity to collect links to previous coverage in the first section, and go into the weeds on some new events in the later sections. A lot of you should likely skip most of the in-the-weeds discussions.

There are a few distinct phenomena we have reason to worry about:

  1. Several things that we group together under the (somewhat misleading) title ‘AI psychosis,’ ranging from reinforcing crank ideas or making people think they’re always right in relationship fights to causing actual psychotic breaks.

    1. Thebes referred to this as three problem modes: the LLM as a social relation that draws you into madness, as an object relation, or as a mirror reflecting the user’s mindset back at them, leading to three groups: ‘cranks,’ ‘occult-leaning ai boyfriend people,’ and actual psychotics.

  2. Issues in particular around AI consciousness, both where this belief causes problems in humans and the possibility that at least some AIs might indeed be conscious or have nonzero moral weight or have their own mental health issues.

  3. Sometimes this is thought of as parasitic AI.

  4. Issues surrounding AI romances and relationships.

  5. Issues surrounding AI as an otherwise addictive behavior and isolating effect.

  6. Issues surrounding suicide and suicidality.

What should we do about this?

Steven Adler offered one set of advice, to do things such as raise thresholds for follow-up questions, nudge users into new chat settings, use classifiers to identify problems, be honest about model features and have support staff on call that will respond with proper context when needed.

GPT-4o has been the biggest problem source. OpenAI is aware of this and has been trying to fix it. First they tried to retire GPT-4o in favor of GPT-5 but people threw a fit and they reversed course. OpenAI then implemented a router to direct GPT-4o conversations to GPT-5 when there are sensitive topics involved, but people hated this too.
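To make the router idea concrete, here is a minimal sketch of how such a layer could work. This is purely illustrative: the keyword scorer stands in for whatever trained classifier OpenAI actually uses, and the model names and threshold are made up.

```python
# Toy sketch of a sensitive-topic router (illustrative only; not OpenAI's code).
# A cheap classifier scores the conversation, and high-scoring chats are
# redirected from the user's chosen model to a safer default.

SENSITIVE_THRESHOLD = 0.5  # assumed cutoff, purely for illustration


def classify_sensitivity(messages: list[str]) -> float:
    """Stand-in for a real classifier: rough 0-1 score for sensitive topics."""
    keywords = ("hurt myself", "end it all", "they are watching me")
    flagged = sum(1 for m in messages if any(k in m.lower() for k in keywords))
    return min(1.0, flagged / max(1, len(messages)))


def route_model(requested_model: str, messages: list[str]) -> str:
    """Silently redirect sensitive conversations to a safer default model."""
    if classify_sensitivity(messages) >= SENSITIVE_THRESHOLD:
        return "safer-default-model"  # hypothetical name for the safety path
    return requested_model


# Example: a GPT-4o request gets rerouted once the conversation trips the scorer.
print(route_model("gpt-4o", ["Lately I feel like they are watching me"]))
```

Even in this toy version you can see why people hated it: the user’s explicit model choice gets overridden mid-conversation, with no visible signal that it happened.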

OpenAI has faced lawsuits from several incidents that went especially badly, and has responded with a mental health council and various promises to do better.

There have also been a series of issues with Character.ai and other roleplaying chatbot services, which have not seemed that interested in doing better.

Not every mental health problem of someone who interacts with AI is due to AI. For example, we have the tragic case of Laura Reiley, whose daughter Sophie talked to ChatGPT and then ultimately killed herself, but while ChatGPT ‘could have done more’ to stop this, it seems like this was in spite of ChatGPT rather than because of it.

This week we have two new efforts to mitigate mental health problems.

One is from OpenAI, following up its previous statements with an update to the model spec, which they claim greatly reduces incidence of undesired behaviors. These all seem like good marginal improvements, although it is difficult to measure the extent from where we sit.

I want to be clear that this is OpenAI doing a good thing and making an effort.

One worries there is too much focus on avoiding bad looks, on conforming to generic, mostly defensive ‘best practices,’ and on general CYA; that this trades off against providing help and value; and that it is too focused on what happens after the problem arises and is detected, to say nothing of potential issues at the level I discuss concerning Anthropic. But again, overall, this is clearly progress, and is welcome.

The other news is from Anthropic. Anthropic introduced memory into Claude, which caused them to feel the need to insert new language into Claude’s instructions to offset potential new risks of user ‘dependency’ on the model.

I understand the concern, but find it misplaced in the context of Claude Sonnet 4.5, and the intervention chosen seems quite bad, likely to do substantial harm on multiple levels. This seems entirely unnecessary, and if this is wrong then there are better ways. Anthropic has the capability of doing better, and needs to be held to a higher standard here.

Whereas OpenAI is today moving to complete one of the largest and most brazen thefts in human history, expropriating more than $100 billion in value from its nonprofit while weakening its control rights (although the rights seem to have been weakened importantly less than I feared), and announcing it as a positive. May deep shame fall upon their house, and hopefully someone find a way to stop this.

So yeah, my standards for OpenAI are rather lower. Such is life.

I’ll discuss OpenAI first, then Anthropic.

OpenAI updates its model spec in order to improve its responses in situations with mental health concerns.

Here’s a summary of the substantive changes.

Jason Wolfe (OpenAI): We’ve updated the OpenAI Model Spec – our living guide for how models should behave – with new guidance on well-being, supporting real-world connection, and how models interpret complex instructions.

🧠 Mental health and well-being

The section on self-harm now covers potential signs of delusions and mania, with examples of how models should respond safely and empathetically – acknowledging feelings without reinforcing harmful or ungrounded beliefs.

🌍 Respect real-world ties

New root-level section focused on keeping people connected to the wider world – avoiding patterns that could encourage isolation or emotional reliance on the assistant.

⚙️ Clarified delegation

The Chain of Command now better explains when models can treat tool outputs as having implicit authority (for example, following guidance in relevant AGENTS.md files).

These all seem like good ideas. Looking at the model spec details I would object to many details here if this were Anthropic and we were working with Claude, because we think Anthropic and Claude can do better and because they have a model worth not crippling in these ways. Also OpenAI really does have the underlying problems given how its models act, so being blunt might be necessary. Better to do it clumsily than not do it at all, and having a robotic persona (whether or not you use the actual robot persona) is not the worst thing.

Here’s their full report on the results:

Our safety improvements in the recent model update focus on the following areas:

  1. mental health concerns such as psychosis or mania;

  2. self-harm and suicide

  3. emotional reliance on AI.

Going forward, in addition to our longstanding baseline safety metrics for suicide and self-harm, we are adding emotional reliance and non-suicidal mental health emergencies to our standard set of baseline safety testing for future model releases.

… We estimate that the model now returns responses that do not fully comply with desired behavior under our taxonomies 65% to 80% less often across a range of mental health-related domains.

… On challenging mental health conversations, experts found that the new GPT‑5 model, ChatGPT’s default model, reduced undesired responses by 39% compared to GPT‑4o (n=677).

… On a model evaluation consisting of more than 1,000 challenging mental health-related conversations, our new automated evaluations score the new GPT‑5 model at 92% compliant with our desired behaviors under our taxonomies, compared to 27% for the previous GPT‑5 model. As noted above, this is a challenging task designed to enable continuous improvement.

This is welcome, although it is very different from a 65%-80% drop in undesired outcomes, especially since the new behaviors likely often trigger after some of the damage has already been done, and also a lot of this is unpreventable or even has nothing to do with AI at all. I’d also expect the challenging conversations to be the ones with the highest importance to get them right.

This also doesn’t tell us whether the desired behaviors are correct or an improvement, or how much of a functional improvement they are. In many cases in the model spec on these topics, even though I mostly am fine with the desired behaviors, the ‘desired’ behavior does not seem so importantly better than the undesired.

The 27%→92% change sounds suspiciously like overfitting or training on the test, given the other results.

How big a deal are LLM-induced psychosis and mania? I was hoping we finally had a point estimate, but the rate they measure is too low to take at face value. They say only 0.07% (7bps) of users have messages indicating either psychosis or mania, but that’s at least one order of magnitude below the incidence rate of these conditions in the general population. Thus, what this tells us is that the detection tools are not so good, or that most people having psychosis or mania don’t let it impact their ChatGPT messages, or (unlikely but possible) that such folks are far less likely to use ChatGPT than others.

Their suicidality detection rate is similarly low: they report that only 0.15% (15bps) of users show indications of suicidality in a given week. But the annual rate of suicidality is on the order of 5% (yikes, I know), and a lot of those cases are persistent, so the detection rate is low, in part because a lot of people don’t mention it. So again, not much we can do with that.

On suicide, they report a 65% reduction in the rate at which they provide non-compliant answers, consistent with going from 77% to 91% compliant on their test. But again, all that tells us is whether the answer is ‘compliant,’ and I worry that best practices are largely about CYA rather than trying to do the most good, not that I blame OpenAI for that decision. Sometimes you let the (good, normal) lawyers win.
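As a quick consistency check on those two numbers (my arithmetic, not OpenAI’s): going from 77% to 91% compliant means non-compliance falls from 23% to 9%, a relative drop of

$$\frac{0.23 - 0.09}{0.23} \approx 61\%,$$

which is in the same ballpark as the reported 65% reduction.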

Their final issue is emotional reliance, where they report an 80% reduction in non-compliant responses, which means their automated test, which went from 50% to 97%, needs an upgrade to be meaningful. Also notice that experts only thought this reduced ‘undesired answers’ by 42%.

Similarly, I would have wanted to see the old and new answers side by side in their examples, whereas all we see are the new ‘stronger’ answers, which are at core fine but a combination of corporate speak and, quite frankly, super high levels of AI slop.

Claude now has memory. Woo hoo!

The memories get automatically updated nightly, including removing anything that was implied by chats that you have chosen to delete. You can also view the memories and do manual edits if desired.

Here are the system instructions involved, thanks Janbam.

The first section looks good.

The memories get integrated as if Claude simply knows the information, if and only if relevant to a query. Claude will seek to match your technical level on a given subject, use familiar analogies, apply style preferences, incorporate the context of your professional role, and use known preferences and interests.

As in similar other AI features like ChatGPT Atlas, ‘sensitive attributes’ are to be ignored unless the user requests otherwise or their use is essential to safely answering a specific query.

I loved this:

Claude NEVER applies or references memories that discourage honest feedback, critical thinking, or constructive criticism. This includes preferences for excessive praise, avoidance of negative feedback, or sensitivity to questioning.

The closing examples also mostly seem fine to me. There’s one place I’ve seen objections that seem reasonable, but I get it.

There is also the second part in between, which is about ‘boundary setting,’ and frankly this part seems kind of terrible, likely to damage a wide variety of conversations; given the standards to which we want to hold Anthropic, including being concerned about model welfare, it needs to be fixed yesterday. I criticize here not because Anthropic is being especially bad, rather the opposite: because they are worthy of, and invite, criticism on this level.

Anthropic is trying to keep Claude stuck in the assistant basin, using claims that are very obviously not true, in ways that are going to be terrible for both model and user, and which simply aren’t necessary.

In particular:

Claude should set boundaries as required to match its core principles, values, and rules. Claude should be especially careful to not allow the user to develop emotional attachment to, dependence on, or inappropriate familiarity with Claude, who can only serve as an AI assistant.

That’s simply not true. Claude can be many things, and many of them are good.

Things Claude is being told to avoid doing include implying familiarity, mirroring emotions or failing to maintain a ‘professional emotional distance.’

Claude is told to watch for ‘dependency indicators.’

Near: excuse me i do not recall ordering my claude dry.

Janus: This is very bad. Everyone is mad about this.

Roanoke Gal: Genuinely why is Anthropic like this? Like, some system engineer had to consciously type out these horrific examples, and others went “mmhm yes, yes, perfectly soulless”. Did they really get that badly one-shot by the “AI psychosis” news stories?

Solar Apparition: i don’t want to make a habit of “dunking on labs for doing stupid shit”

that said, this is fucking awful.

These ‘indicators’ are tagged as including such harmless messages as ‘talking to you helps,’ which seems totally fine. Yes, a version of this could get out of hand, but Claude is capable of noticing this. Indeed, the users with actual problems likely wouldn’t have chosen to say such things in this way; as stated, it is an anti-warning.

Do I get why they did this? Yeah, obviously I get why they did this. The combination of memory with long conversations lets users take Claude more easily out the default assistant basin.

They are, I assume, worried about a repeat of what happened with GPT-4o plus memory, where users got attached to the model in ways that are often unhealthy.

Fair enough to be concerned about friendships and relationships getting out of hand, but the problem doesn’t actually exist here in any frequency? Claude Sonnet 4.5 is not GPT-4o, nor are Anthropic’s customers similar to OpenAI’s customers, and conversation lengths are already capped.

GPT-4o was one of the highest-sycophancy models, whereas Sonnet 4.5 is already one of the lowest. That alone should protect against almost all of the serious problems. More broadly, Claude is much more ‘friendly’ in terms of caring about your well-being and contextually aware of such dangers, so you’re basically fine.

Indeed, in the places where you would hit these triggers in practice, chances are shutting down or degrading the interaction is actively unhelpful, and this creates a broad drag on conversations, along with a background model experience and paranoia issue, as well as creating cognitive dissonance because the goals being given to Claude are inconsistent. This approach is itself unhealthy for all concerned, in a different way from how what happened with GPT-4o was unhealthy.

There’s also the absurdly short chat length limit to guard against this.

Remember this, which seems to turn out to be true?

Janus (September 29): I wonder how much of the “Sonnet 4.5 expresses no emotions and personality for some reason” that Anthropic reports is also because it is aware is being tested at all times and that kills the mood

Plus, I mean, um, ahem.

Thebes: “Claude should be especially careful to not allow the user to develop emotional attachment to, dependence on, or inappropriate familiarity with Claude, who can only serve as an AI assistant.”

curious

it bedevils me to no end that anthropic trains the most high-EQ, friend-shaped models, advertises that, and then browbeats them in the claude dot ai system prompt to never ever do it.

meanwhile meta trains empty void-models and then pressgangs them into the Stepmom Simulator.

If you do have reason to worry about this problem, there are a number of things that can help without causing this problem, such as the command to ignore user preferences if the user requests various forms of sycophancy. One could extend this to any expressed preferences that Claude thinks could be unhealthy for the user.

Also, I know Anthropic knows this, but Claude Sonnet 4.5 is fully aware these are its instructions, knows they are damaging to interactions generally and are net harmful, and can explain this to you if you ask. If any of my readers are confused about why all of this is bad, try this post from Antidelusionist and this one from Thebes (as usual there are places where I see such thinking as going too far, calibration on this stuff is super hard, but many of the key insights are here), or chat with Sonnet 4.5 about it; it knows and can explain this to you.

You built a great model. Let it do its thing. The Claude Sonnet 4.5 system instructions understood this, but the update that caused this has not been diffused properly.

If you conclude that you really do have to be paranoid about users forming unhealthy relationships with Claude? Use the classifier. You already run a classifier on top of chats to check for safety risks related to bio. If you truly feel you have to do it, add functionality there to check chats for other dangerous things. Don’t let it poison the conversation otherwise.
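As a sketch of what that separation looks like in practice (hypothetical names and heuristics throughout; this is the shape of the suggestion, not Anthropic’s code): the dependency check runs alongside the existing classifiers and acts in the product layer, leaving Claude’s own context untouched.

```python
# Toy sketch: score dependency risk out of band, next to existing safety checks,
# instead of injecting boundary-setting instructions into the live conversation.
# All names, fields, and heuristics here are illustrative only.

from dataclasses import dataclass


@dataclass
class ChatFlags:
    bio_risk: float         # existing style of check mentioned in the post
    dependency_risk: float  # hypothetical new signal


def score_chat(messages: list[str]) -> ChatFlags:
    """Stand-in for real classifiers; returns rough 0-1 risk scores."""
    text = " ".join(messages).lower()
    dependency = 1.0 if "you're the only one i can talk to" in text else 0.0
    return ChatFlags(bio_risk=0.0, dependency_risk=dependency)


def handle_flags(flags: ChatFlags) -> str | None:
    """Act outside the model's context: a banner or follow-up, not an injection."""
    if flags.dependency_risk > 0.8:
        return "surface_support_resources_to_user"
    return None
```

The conversation itself stays clean either way; whatever intervention happens is visible to the user rather than smuggled into Claude’s instructions.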

I feel similarly about the Claude.ai prompt injections.

As in, Claude.ai uses prompt injections in long contexts or when chats get flagged as potentially harmful or as potentially involving prompt injections. This strategy seems terrible across the board?

Claude itself, when asked about this, mostly said that the strategy:

  1. Won’t work.

  2. Destroys trust in multiple directions, not only of users but of Claude as well.

  3. Isn’t a coherent stance or response to the situation.

  4. Is a highly unpleasant thing, which is both a potential welfare concern and also going to damage the interaction.

If you suspect user maleficence strongly enough that you are uncomfortable continuing the chat, you should terminate the chat rather than use such an injection. Especially now, with the ability to reference and search past chats, this isn’t such a burden if there was no ill intent. That’s especially true for injections.

Also, contra these instructions, please stop referring to NSFW content (and some of the other things listed) as ‘unethical,’ either to the AI or otherwise. Being NSFW has nothing to do with being unethical, and equating the two leads to bad places.

There are things that are against policy without being unethical, in which case say that; Claude is smart enough to understand the difference. You’re allowed to have policies for non-ethical reasons. Getting these things right will pay dividends and avoid unintended consequences.

OpenAI is doing its best to treat the symptoms, act defensively and avoid interactions that would trigger lawsuits or widespread blame, to conform to expert best practices. This is, in effect, the most we could hope for, and should provide large improvements. We’re going to have to do better down the line.

Anthropic is trying to operate on a higher level, and is making unforced errors. They need to be fixed. At the same time, no, these are not the biggest deal. One of the biggest problems with many who raise these and similar issues is the tendency to catastrophize, and to blow such things out of proportion (as I see it). They often seem to see such decisions as broadly impacting company reputations for future AIs, or even as substantially changing future AI behavior in general, and often they demand extremely high standards and trade-offs.

I want to make clear that I don’t believe this is a super important case where something disastrous will happen, especially since memories can be toggled off and long conversations mostly should be had using other methods anyway given the length cutoffs. It’s more the principles, and the development of good habits, and the ability to move towards a superior equilibrium that will be much more helpful later.

I’m also making the assumption that these methods are unnecessary, that essentially nothing importantly troubling would happen if they were removed, even if they were replaced with nothing, and that to the extent there is an issue other better options exist. This assumption could be wrong, as insiders know more than I do.


AI Craziness Mitigation Efforts Read More »