
DeepMind’s latest: An AI for handling mathematical proofs


AlphaProof can handle math challenges but needs a bit of help right now.

Computers are extremely good with numbers, but they haven’t gotten many human mathematicians fired. Until recently, they could barely hold their own in high school-level math competitions.

But now Google’s DeepMind team has built AlphaProof, an AI system that matched silver medalists’ performance at the 2024 International Mathematical Olympiad, scoring just one point short of gold at the most prestigious high school math competition in the world. And that’s kind of a big deal.

True understanding

The reason computers fared poorly in math competitions is that, while they far surpass humanity’s ability to perform calculations, they are not really that good at the logic and reasoning that advanced math demands. Put differently, they are good at performing calculations really quickly, but they usually suck at understanding why they’re doing them. While something like addition seems simple, humans can build semi-formal proofs based on definitions of addition, or go fully formal with Peano arithmetic, which defines the properties of natural numbers and operations like addition through axioms.

To perform a proof, humans have to understand the very structure of mathematics. The way mathematicians build proofs, how many steps they need to arrive at the conclusion, and how cleverly they design those steps are a testament to their brilliance, ingenuity, and mathematical elegance. “You know, Bertrand Russell published a 500-page book to prove that one plus one equals two,” says Thomas Hubert, a DeepMind researcher and lead author of the AlphaProof study.

DeepMind’s team wanted to develop an AI that understood math at this level. The work started with solving the usual AI problem: the lack of training data.

Math problems translator

Large language models that power AI systems like ChatGPT learn from billions upon billions of pages of text. Because there are texts on mathematics in their training databases—all the handbooks and works of famous mathematicians—they show some level of success in proving mathematical statements. But they are limited by how they operate: They rely on huge neural nets to predict the next word or token in sequences generated in response to user prompts. Their reasoning is statistical by design, which means they simply return answers that “sound” right.

DeepMind didn’t need the AI to “sound” right—that wasn’t going to cut it in high-level mathematics. They needed their AI to “be” right, to guarantee absolute certainty. That called for an entirely new, more formalized training environment. To provide that, the team used a software package called Lean.

Lean is a computer program that helps mathematicians write precise definitions and proofs. It relies on a precise, formal programming language, also called Lean, into which mathematical statements can be translated. Once the translated, or formalized, statement is loaded into the program, Lean can check whether it is correct and respond with messages like “this is correct,” “something is missing,” or “you used a fact that is not proved yet.”
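For a flavor of what a formalized statement looks like, here is a tiny illustrative example written in Lean 4 syntax (a sketch only; exact syntax and lemma names vary between Lean versions and libraries):

```lean
-- The statement Russell famously spent hundreds of pages grounding:
-- `rfl` asks the checker to verify both sides compute to the same value.
theorem one_plus_one : 1 + 1 = 2 := rfl

-- A statement proved by appealing to a lemma Lean's library already
-- provides: addition of natural numbers is commutative.
theorem my_add_comm (m n : Nat) : m + n = n + m := Nat.add_comm m n
```

If a step doesn't check out, Lean rejects the proof rather than letting a plausible-sounding argument through, which is exactly the guarantee DeepMind needed.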

The problem was that most mathematical statements and proofs found online are written in natural language, like “let X be the set of natural numbers that…”—the number of statements written in Lean was rather limited. “The major difficulty of working with formal languages is that there’s very little data,” Hubert says. To get around this, the researchers trained a Gemini large language model to translate mathematical statements from natural language into Lean. The model worked like an automatic formalizer and produced about 80 million formalized mathematical statements.

It wasn’t perfect, but the team managed to use that to their advantage. “There are many ways you can capitalize on approximate translations,” Hubert claims.

Learning to think

The idea DeepMind had for AlphaProof was to reuse the architecture of its chess-, Go-, and shogi-playing AlphaZero AI system. Building proofs in Lean, and mathematics in general, was supposed to be just another game to master. “We were trying to learn this game through trial and error,” Hubert says. Imperfectly formalized problems offered a great opportunity for making errors. In its learning phase, AlphaProof simply proved and disproved the problems in its database. If something was translated poorly, figuring out that something wasn’t right was a useful form of exercise.

Just like AlphaZero, AlphaProof in most cases used two main components. The first was a huge neural net with a few billion parameters that learned to work in the Lean environment through trial and error. It was rewarded for each proven or disproven statement and penalized for each reasoning step it took, which was a way of incentivizing short, elegant proofs.

It was also trained to use a second component, which was a tree search algorithm. This explored all possible actions that could be taken to push the proof forward at each step. Because the number of possible actions in mathematics can be near infinite, the job of the neural net was to look at the available branches in the search tree and commit computational budget only to the most promising ones.
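The division of labor between the two components can be sketched as a guided best-first search. In the toy sketch below, a simple scoring function stands in for the policy network (an assumption for illustration, not DeepMind's actual code), ranking the available actions at each state so that only the top-ranked branches receive any search budget:

```python
import heapq

def guided_search(start, goal, actions, score, beam=2, max_steps=12):
    """Best-first search that expands only the `beam` most promising
    actions per state -- a toy analogue of the net pruning the tree."""
    frontier = [(0, start, [])]  # (priority, state, path of action names)
    seen = {start}
    while frontier:
        _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        if len(path) >= max_steps:
            continue
        # Rank actions by the learned-policy stand-in; keep the best `beam`.
        ranked = sorted(actions, key=lambda a: score(a(state), goal),
                        reverse=True)[:beam]
        for act in ranked:
            nxt = act(state)
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier,
                               (-score(nxt, goal), nxt, path + [act.__name__]))
    return None  # no "proof" found within budget

# Toy "proof" game: transform 1 into 10 using two rewrite steps.
def double(x): return 2 * x
def inc(x): return x + 1

# Stand-in policy: states closer to the goal score higher.
def closeness(state, goal): return -abs(goal - state)

print(guided_search(1, 10, [double, inc], closeness))
```

In AlphaProof the states are Lean proof states and the actions are proof steps, but the principle is the same: the network's scores decide which branches are worth expanding at all.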

After a few weeks of training, the system could score well on most math competition benchmarks based on problems sourced from past high school-level competitions, but it still struggled with the most difficult of them. To tackle these, the team added a third component that hadn’t been in AlphaZero. Or anywhere else.

Spark of humanity

The third component, called Test-Time Reinforcement Learning (TTRL), roughly emulated the way mathematicians approach the most difficult problems. The learning part relied on the same combination of neural nets with search tree algorithms. The difference came in what it learned from. Instead of relying on a broad database of auto-formalized problems, AlphaProof working in the TTRL mode started its work by generating an entirely new training dataset based on the problem it was dealing with.

The process involved creating countless variations of the original statement, some simplified a little bit more, some more general, and some only loosely connected to it. The system then attempted to prove or disprove them. It was roughly what most humans do when they’re facing a particularly hard puzzle, the AI equivalent of saying, “I don’t get it, so let’s try an easier version of this first to get some practice.” This allowed AlphaProof to learn on the fly, and it worked amazingly well.

At the 2024 International Mathematical Olympiad, there were 42 points to score across six problems worth seven points each. To win gold, participants had to get 29 points or more, and 58 of the 609 participants did. Silver medals went to those who earned between 22 and 28 points (there were 123 silver medalists). The problems varied in difficulty, with the sixth, acting as a “final boss,” the most difficult of them all. Only six participants managed to solve it. AlphaProof was the seventh.

But AlphaProof wasn’t an end-all, be-all mathematical genius. Its silver had its price—quite literally.

Optimizing ingenuity

The first problem with AlphaProof’s performance was that it didn’t work alone. To begin with, humans had to make the problems compatible with Lean before the software even got to work. And among the six Olympiad problems, the fourth was about geometry, which the AI was not optimized for. To deal with it, AlphaProof had to call a friend, AlphaGeometry 2, a geometry-specialized AI that ripped through the task in a few minutes without breaking a sweat. On its own, AlphaProof scored 21 points, not 28, so technically it would have won bronze, not silver. Except it wouldn’t.

Human participants in the Olympiad had to solve their six problems in two sessions, each four and a half hours long. AlphaProof, on the other hand, wrestled with them for several days using multiple tensor processing units at full throttle. The most time- and energy-consuming component was TTRL, which battled for three days with each of the three problems it managed to solve. If AlphaProof had been held to the same standard as human participants, it would simply have run out of time. And if it hadn’t been born at a tech giant worth hundreds of billions of dollars, it would have run out of money, too.

In the paper, the team admits the computational requirements to run AlphaProof are most likely cost-prohibitive for most research groups and aspiring mathematicians. Computing power in AI applications is often measured in TPU-days, meaning a tensor processing unit working flat-out for a full day. AlphaProof needed hundreds of TPU-days per problem.
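To put the TPU-day unit in concrete terms, the conversion to wall-clock time is just a division. The figures below are hypothetical, since the paper is only characterized here as reporting "hundreds" of TPU-days per problem:

```python
# Hypothetical figures for illustration only.
tpu_days_per_problem = 300   # "hundreds of TPU-days per problem"
tpus_running = 100           # many units working in parallel

# One TPU-day = one tensor processing unit running flat-out for a day,
# so real elapsed time is total TPU-days divided by units in parallel.
wall_clock_days = tpu_days_per_problem / tpus_running
print(wall_clock_days)  # 3.0
```

Even with heavy parallelism, the bill in hardware-hours stays the same, which is why the cost is prohibitive for groups without a hyperscaler's budget.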

On top of that, the International Mathematical Olympiad is a high school-level competition, and the problems, while admittedly difficult, were based on things mathematicians already know. Research-level math requires inventing entirely new concepts instead of just working with existing ones.

But DeepMind thinks it can overcome these hurdles and optimize AlphaProof to be less resource-hungry. “We don’t want to stop at math competitions. We want to build an AI system that could really contribute to research-level mathematics,” Hubert says. His goal is to make AlphaProof available to the broader research community. “We’re also releasing a kind of an AlphaProof tool,” he added. “It would be a small trusted testers program to see if this would be useful to mathematicians.”

Nature, 2025.  DOI: 10.1038/s41586-025-09833-y


Jacek Krywko is a freelance science and technology writer who covers space exploration, artificial intelligence research, computer science, and all sorts of engineering wizardry.


How Louvre thieves exploited human psychology to avoid suspicion—and what it reveals about AI

On a sunny morning on October 19, 2025, four men allegedly walked into the world’s most-visited museum and left, minutes later, with crown jewels worth 88 million euros ($101 million). The theft from Paris’ Louvre Museum—one of the world’s most surveilled cultural institutions—took just under eight minutes.

Visitors kept browsing. Security didn’t react (until alarms were triggered). The men disappeared into the city’s traffic before anyone realized what had happened.

Investigators later revealed that the thieves wore hi-vis vests, disguising themselves as construction workers. They arrived with a furniture lift, a common sight in Paris’s narrow streets, and used it to reach a balcony overlooking the Seine. Dressed as workers, they looked as if they belonged.

This strategy worked because we don’t see the world objectively. We see it through categories—through what we expect to see. The thieves understood the social categories that we perceive as “normal” and exploited them to avoid suspicion. Many artificial intelligence (AI) systems work in the same way and are vulnerable to the same kinds of mistakes as a result.

The sociologist Erving Goffman would describe what happened at the Louvre using his concept of the presentation of self: people “perform” social roles by adopting the cues others expect. Here, the performance of normality became the perfect camouflage.

The sociology of sight

Humans carry out mental categorization all the time to make sense of people and places. When something fits the category of “ordinary,” it slips from notice.

AI systems used for tasks such as facial recognition and detecting suspicious activity in a public area operate in a similar way. For humans, categorization is cultural. For AI, it is mathematical.

But both systems rely on learned patterns rather than objective reality. Because AI learns from data about who looks “normal” and who looks “suspicious,” it absorbs the categories embedded in its training data. And this makes it susceptible to bias.

The Louvre robbers weren’t seen as dangerous because they fit a trusted category. In AI, the same process can have the opposite effect: people who don’t fit the statistical norm become more visible and over-scrutinized.

It can mean a facial recognition system disproportionately flags certain racial or gendered groups as potential threats while letting others pass unnoticed.

A sociological lens helps us see that these aren’t separate issues. AI doesn’t invent its categories; it learns ours. When a computer vision system is trained on security footage where “normal” is defined by particular bodies, clothing, or behavior, it reproduces those assumptions.
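A minimal toy sketch (with invented features and data, standing in for no real vision system) shows how a classifier reproduces its training categories: anything matching the learned “normal” profile passes, whatever its intent:

```python
# Toy nearest-centroid classifier. Features are made up for illustration:
# (wears_hi_vis, carries_equipment).
normal_examples  = [(1, 1), (1, 0), (1, 1)]   # workers in hi-vis vests
suspect_examples = [(0, 1), (0, 1)]           # anyone without the vest

def centroid(points):
    """Average each feature across the class's training examples."""
    return tuple(sum(c) / len(points) for c in zip(*points))

def classify(x):
    """Label by whichever class centroid is closer -- the learned category."""
    def dist(a, b):
        return sum((i - j) ** 2 for i, j in zip(a, b))
    c_norm = centroid(normal_examples)
    c_susp = centroid(suspect_examples)
    return "normal" if dist(x, c_norm) <= dist(x, c_susp) else "suspicious"

# A thief in a hi-vis vest fits the learned "normal" category...
print(classify((1, 1)))   # -> normal
# ...while an ordinary visitor without one gets flagged.
print(classify((0, 0)))   # -> suspicious
```

The model never sees intent, only cues, so whoever controls the cues controls the classification.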

Just as the museum’s guards looked past the thieves because they appeared to belong, AI can look past certain patterns while overreacting to others.

Categorization, whether human or algorithmic, is a double-edged sword. It helps us process information quickly, but it also encodes our cultural assumptions. Both people and machines rely on pattern recognition, which is an efficient but imperfect strategy.

A sociological view of AI treats algorithms as mirrors: They reflect back our social categories and hierarchies. In the Louvre case, the mirror is turned toward us. The robbers succeeded not because they were invisible, but because they were seen through the lens of normality. In AI terms, they passed the classification test.

From museum halls to machine learning

This link between perception and categorization reveals something important about our increasingly algorithmic world. Whether it’s a guard deciding who looks suspicious or an AI deciding who looks like a “shoplifter,” the underlying process is the same: assigning people to categories based on cues that feel objective but are culturally learned.

When an AI system is described as “biased,” this often means that it reflects those social categories too faithfully. The Louvre heist reminds us that these categories don’t just shape our attitudes, they shape what gets noticed at all.

After the theft, France’s culture minister promised new cameras and tighter security. But no matter how advanced those systems become, they will still rely on categorization. Someone, or something, must decide what counts as “suspicious behavior.” If that decision rests on assumptions, the same blind spots will persist.

The Louvre robbery will be remembered as one of Europe’s most spectacular museum thefts. The thieves succeeded because they mastered the sociology of appearance: They understood the categories of normality and used them as tools.

And in doing so, they showed how both people and machines can mistake conformity for safety. Their success in broad daylight wasn’t only a triumph of planning. It was a triumph of categorical thinking, the same logic that underlies both human perception and artificial intelligence.

The lesson is clear: Before we teach machines to see better, we must first learn to question how we see.

Vincent Charles, Reader in AI for Business and Management Science, Queen’s University Belfast, and Tatiana Gherman, Associate Professor of AI for Business and Strategy, University of Northampton.  This article is republished from The Conversation under a Creative Commons license. Read the original article.


UCLA faculty gets big win in suit against Trump’s university attacks


Government can’t use funding threats to override the First Amendment.

While UCLA has been most prominently targeted by the Trump Administration, the ruling protects the entire UC system. Credit: Myung J. Chun

On Friday, a US District Court issued a preliminary injunction blocking the United States government from halting federal funding at UCLA or any other school in the University of California system. The ruling came in response to a suit filed by groups representing the faculty at these schools challenging the Trump administration’s attempts to force UCLA into a deal that would substantially revise instruction and policy.

The court’s decision lays out how the Trump administration’s attacks on universities follow a standard plan: use accusations of antisemitism to justify an immediate cut to funding, then use the loss of money to compel an agreement that would result in revisions to university instruction and management. The court found this plan deficient on multiple grounds, from violating the legal procedures for cutting funding to being an illegal attempt to suppress the First Amendment rights of faculty.

The result is a reprieve for the entire University of California system, as well as a clear pathway for any universities to fight back against the Trump administration’s attacks on research and education.

First Amendment violations

The judge overseeing this case, Rita Lin, issued separate documents describing the reasoning behind her decision and the sanctions she has placed on the Trump administration. In the first, she lays out the argument that the threats facing the UC system, and most notably UCLA, are part of a scripted campaign deployed against many other universities, one that proceeds through several steps. The Trump administration’s Task Force to Combat Anti-Semitism is central to this effort, which starts with the opening of a civil rights investigation against a university that was the site of anti-Israel protests during the conflict in Gaza.

“Rooting out antisemitism is undisputedly a laudable and important goal,” Judge Lin wrote. But the investigations in many cases take place after those universities have already taken corrective steps, which the Trump administration seemingly never considers. Instead, while the investigations are still ongoing, agencies throughout the federal government cancel funding for research and education meant for that university and announce that there will be no future funding without an agreement.

The final step is a proposed settlement that would include large payments (over $1.2 billion in UCLA’s case) and a set of conditions that alter university governance and instruction. These conditions often have little to no connection with antisemitism.

While all of this was ostensibly meant to combat antisemitism, the plaintiffs in this case presented a huge range of quotes from administration officials, including the head of the Task Force to Combat Anti-Semitism, saying the goal was to suppress certain ideas on campus. “The unrebutted record in this case shows that Defendants have used the threat of investigations and economic sanctions to… coerce the UC to stamp out faculty, staff, and student ‘woke,’ ‘left,’ ‘anti-American,’ ‘anti-Western,’ and ‘Marxist’ speech,” Lin said.

And even before any sort of agreement was reached, there was extensive testimony that people on campus changed their teaching and research to avoid further attention from the administration. “Plaintiffs’ members express fear that researching, teaching, and speaking on disfavored topics will trigger further retaliatory funding cancellations against the UC,” Lin wrote, “and that they will be blamed for the retaliation. They also describe fears that the UC will retaliate against them to avoid further funding cuts or in order to comply with the proposed settlement agreement.”

That’s a problem, given that teaching and research topics are forms of speech, and therefore protected by the First Amendment. “These are classic, predictable First Amendment harms, and exactly what Defendants publicly said that they intended,” Lin concluded.

Beyond speech

But the First Amendment isn’t the only issue here. The Civil Rights Act, most notably Title VI, lays out a procedure for cutting federal funding, including warnings and hearings before any funds are shut off. Cutting off funds is also limited to cases where there’s an indication that voluntary compliance won’t work. Any funding cut would need to target the specific programs involved and the money allocated to them. There is nothing in Title VI that enables the sort of financial payments that the government has been demanding (and, in some cases, receiving) from schools.

It’s pretty obvious that none of these procedures are being followed here. And as Lin noted in her ruling, “Defendants conceded at oral argument that, of the billions of dollars of federal university funding suspended across numerous agencies in recent months, not a single agency has followed the procedures required by Title VI and IX.”

She found that the government decided it wasn’t required to follow the Civil Rights Act procedures. (Reading through the decision, it becomes hard to tell where the government offered any defense of its actions at all.)

The decision to ignore all existing procedures, in turn, causes additional problems, including violations of the Tenth Amendment, which limits the actions that the federal government can take. And it runs afoul of the Administrative Procedure Act, which prohibits the government from taking actions that are “arbitrary and capricious.”

All of this provided Lin with extensive opportunities to determine that the plaintiffs, largely organizations that represent the faculty at University of California schools, are likely to prevail in their suit and thus deserve a preliminary injunction blocking the federal government’s actions. But first, she had to deal with a recent Supreme Court precedent holding that cases involving federal money belong in a different court system. She did so by arguing that this case is largely about the First Amendment and federal procedures rather than any sort of contract for federal money; money is being used as a lever here, so the ruling must restore it to address the free speech issues.

That issue will undoubtedly be picked up on appeal as it makes its way through the courts.

Complete relief

Lin identified a coercive program that is being deployed against many universities and is already suppressing speech throughout the University of California system, including on campuses that haven’t been targeted yet. She is issuing a ruling that targets the program broadly.

“Plaintiffs have shown that Defendants are coercing the [University of California] as a whole, through the Task Force Policy and Funding Cancellation, to stamp out their members’ disfavored speech,” Lin concluded. “Therefore, to afford Plaintiffs complete relief, the entirety of the coercive practice must be enjoined, not just the suspensions that impact Plaintiffs’ members.”

Her ruling indicates that if the federal government decides it wants to cut any grants to any school in the UC system, it has to go through the entire procedure set out in the Civil Rights Act. The government is also prohibited from demanding money from any of these schools as a fine or payment, and it can’t threaten future funding to the schools. The current hold on grants to the schools must also be lifted.

In short, the entire UC system should be protected from any of the ways that the government has been trying to use accusations of antisemitism to suppress ideas that it disfavors. And since those primarily involve federal funding, that has to be restored, and any future threats to it must be blocked.

While this case is likely to face a complicated appeals process, Lin’s ruling makes it extremely clear that all of these cases are exactly what they seemed. Just as members of the administration stated in public multiple times, they decided to target some ideas they disfavored and simply made up a process that would let them do so.

While it worked against a number of prominent universities, its legal vulnerabilities have been there from the start.


John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.


The evolution of rationality: How chimps process conflicting evidence

In the first step, the chimps got the auditory evidence, the same rattling sound coming from the first container. Then, they received indirect visual evidence: a trail of peanuts leading to the second container. At this point, the chimpanzees picked the first container, presumably because they viewed the auditory evidence as stronger. But then the team would remove a rock from the first container. The piece of rock suggested that it was not food that was making the rattling sound. “At this point, a rational agent should conclude, ‘The evidence I followed is now defeated and I should go for the other option,’” Engelmann told Ars. “And that’s exactly what the chimpanzees did.”

The team had 20 chimpanzees participating in all five experiments, and they followed the evidence significantly above chance level—in about 80 percent of the cases. “At the individual level, about 18 out of 20 chimpanzees followed this expected pattern,” Engelmann claims.
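To see why 80 percent correct counts as “significantly above chance,” one can compute an exact binomial tail probability. The sketch below assumes, purely for illustration, 50 two-choice trials; the study’s actual trial counts vary by experiment:

```python
from math import comb

def binom_tail(n, k, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): the chance of scoring at least
    k out of n trials by guessing at random."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# 80% of 50 hypothetical two-choice trials = 40 correct.
p_value = binom_tail(50, 40)
print(f"P(>= 40/50 by chance) = {p_value:.1e}")  # far below 0.05
```

Guessing would land near 50 percent; sustaining 80 percent over many trials is astronomically unlikely by luck alone, which is what “above chance level” captures.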

He views this study as one of the first steps to learn how rationality evolved and when the first sparks of rational thought appeared in nature. “We’re doing a lot of research to answer exactly this question,” Engelmann says.

The team thinks rationality is not an on/off switch; instead, different animals have different levels of rationality. “The first two experiments demonstrate a rudimentary form of rationality,” Engelmann says. “But experiments four and five are quite difficult and show a more advanced form of reflective rationality I expect only chimps and maybe bonobos to have.”

In his view, though, humans are still at least one level above the chimps. “Many people say reflective rationality is the final stage, but I think you can go even further. What humans have is something I would call social rationality,” Engelmann claims. “We can discuss and comment on each other’s thinking and in that process make each other even more rational.”

Sometimes, at least in humans, social interactions can also increase our irrationality instead. But chimps don’t seem to have this problem. Engelmann’s team is currently running a study focused on whether the choices chimps make are influenced by the choices of their fellow chimps. “The chimps only followed the other chimp’s decision when the other chimp had better evidence,” Engelmann says. “In this sense, chimps seem to be more rational than humans.”

Science, 2025. DOI: 10.1126/science.aeb7565


Wyoming dinosaur mummies give us a new view of duck-billed species


Exquisitely preserved fossils come from a single site in Wyoming.

The scaly skin of a crest over the back of the juvenile duck-billed dinosaur Edmontosaurus annectens. Credit: Tyler Keillor/Fossil Lab

Edmontosaurus annectens, a large herbivorous duck-billed dinosaur that lived toward the end of the Cretaceous period, was discovered back in 1908 in east-central Wyoming by C.H. Sternberg, a fossil collector. The skeleton, later housed at the American Museum of Natural History in New York and nicknamed the “AMNH mummy,” was covered by scaly skin imprinted in the surrounding sediment, which gave us the first approximate idea of what the animal looked like.

More than a century later, a team of paleontologists led by Paul C. Sereno, a professor of organismal biology at the University of Chicago, returned to the exact spot where Sternberg dug up the first Edmontosaurus specimen. The researchers found two more Edmontosaurus mummies with all their fleshy external anatomy imprinted in a sub-millimeter layer of clay. For the first time, we have an accurate image of what Edmontosaurus really looked like, down to the tiniest details, like the size of its scales and the arrangement of spikes on its tail. And we were in for at least a few surprises.

Evolving images

Our view of Edmontosaurus changed over time, even before Sereno’s study. The initial drawing of Edmontosaurus was made in 1909 by Charles R. Knight, a famous paleoartist, who based his visualization on the first specimen found by Sternberg. “He was accurate in some ways, but he made a mistake in that he drew the crest extending throughout the entire length of the body,” Sereno says. The mummy Knight based his drawing on had no tail, so understandably, the artist used his imagination to fill in the gaps and made the Edmontosaurus look a little bit like a dragon.

An update to Knight’s image came in 1984 thanks to Jack Horner, one of the most influential American paleontologists, who found a section of Edmontosaurus tail that had spikes instead of a crest. “The specimen was not prepared very accurately, so he thought the spikes were rectangular and didn’t touch each other,” Sereno explains. “In his reconstruction, he extended the spikes from the tail all the way to the head—which was wrong.” Over time, we ended up with many different, competing visions of Edmontosaurus. “But I think now we finally nailed down the way it truly looked,” Sereno claims.

To nail it down, Sereno’s team retraced the route to where Sternberg found the first Edmontosaurus mummy. This was not easy, because the team had to rely on Sternberg’s notes, which often referred to towns and villages that were no longer on the map. But based on interviews with Wyoming farmers, Sereno managed to reach the “mummy zone,” an area less than 10 kilometers in diameter, surprisingly abundant in Cretaceous fossils.

“To find dinosaurs, you need to understand geology,” Sereno says. And in the “mummy zone,” geological processes created something really special.

Dinosaur templating

The fossils are found in part of the Lance Formation, a geological formation that originated in the last three or so million years of the Cretaceous period, just before the dinosaurs’ extinction. It extends through North Dakota, South Dakota, Wyoming, Montana, and even to parts of Canada. “The formation is roughly 200 meters thick. But when you approach the mummy zone—surprise! The formation suddenly goes up to a thousand meters thick,” Sereno says. “The sedimentation rate in there was very high for some reason.”

Sereno thinks the most likely reason behind the high sedimentation rate was frequent and regular flooding of the area by a nearby river. These floods often drowned the unfortunate dinosaurs that roamed there and covered their bodies with mud and clay that congealed against a biofilm which formed at the surface of decaying carcasses. “It’s called clay templating, where the clay sticks to the outside of the skin and preserves a very thin layer, a mask, showing how the animal looked like,” Sereno says.

Clay templating is a process well-known by scientists studying deep-sea invertebrate organisms because that’s the only way they can be preserved. “It’s just no one ever thought it could happen to a large dinosaur buried in a river,” Sereno says. But it’s the best explanation for the Wyoming mummy zone, where Sereno’s team managed to retrieve two more Edmontosaurus skeletons surrounded by clay masks under 1 millimeter thick. These revealed the animal’s appearance with amazing, life-like accuracy.

As a result, the Edmontosaurus image got updated one more time. And some of the updates were rather striking.

Delicate elephants

Sereno’s team analyzed the newly discovered Edmontosaurus mummies with a barrage of modern imaging techniques like CT scans, X-rays, photogrammetry, and more. “We created a detailed model of the skin and wrapped it around the skeleton—some of these technologies were not even available 10 years ago,” Sereno says. The result was an updated Edmontosaurus image that includes changes to the crest, the spikes, and the appearance of its skin. Perhaps most surprisingly, it adds hooves to its legs.

It turned out both Knight and Horner were partially right about the look of Edmontosaurus’ back. The fleshy crest, as depicted by Knight, indeed started at the top of the head and extended rearward along the spine. The difference was that there was a point where this crest changed into a row of spikes, as depicted in the Horner version. The spikes were similar to the ones found on modern chameleons, where each spike corresponds one-to-one with the vertebrae underneath it.

“Another thing that was stunning in Edmontosaurus was the small size of its scales,” Sereno says. Most of the scales were just 1 to 4 millimeters across. They grew slightly larger toward the bottom of the tail, but even there they did not exceed 1 centimeter. “You can find such scales on a lizard, and we’re talking about an animal the size of an elephant,” Sereno adds. The skin covered with these super-tiny scales was also incredibly thin, which the team deduced from the wrinkles they found in their imagery.

And then came the hooves. “In a hoof, the nail goes around the toe and wraps, wedge-shaped, around its bottom,” Sereno explains. The Edmontosaurus had singular, central hooves on its forelegs with a “frog,” a triangular, rubbery structure on the underside. “They looked very much like equine hooves, so apparently these were not invented by mammals,” Sereno says. “Dinosaurs had them.” The hind legs that supported most of the animal’s weight, on the other hand, had three wedge-shaped hooves wrapped around three digits and a fleshy heel toward the back—a structure found in modern-day rhinos.

“There are so many amazing ‘firsts’ preserved in these duck-billed mummies,” Sereno says. “The earliest hooves documented in a land vertebrate, the first confirmed hooved reptile, and the first hooved four-legged animal with different forelimb and hindlimb posture.” But Edmontosaurus, while first in many respects, was not the last species Sereno’s team found in the mummy zone.

Looking for wild things

“When I was walking through the grass in the mummy zone for the first time, on the first hill I found a T. rex in a concretion. Another mummy we found was a Triceratops,” Sereno says. Both of these mummies are currently being examined and will be covered in upcoming papers from Sereno’s team. And both are unique in their own way.

The T. rex mummy was preserved in a surprisingly life-like pose, which Sereno thinks indicates the predator might have been buried alive. Edmontosaurus mummies, on the other hand, were positioned in a death pose, which meant the animals most likely died up to a week before the mud covered their carcasses. This, in principle, should make the T. rex clay mask even more true-to-life, since there should be no need to account for desiccation and decay when reconstructing the animal’s image.

Sereno, though, seems to be even more excited about the Triceratops mummy. “We already found Triceratops scales were 10 times larger than the largest scales on the Edmontosaurus, and its skin had no wrinkles, so it was significantly thicker. And we’re talking about animals of similar size living in the same area at the same time,” Sereno says. To him, this could indicate that the physiology of the Triceratops and Edmontosaurus was radically different.

“We are in the age of discovery. There are so many things to come. It’s just the beginning,” Sereno says. “Anyway, the next two mummies we want to cover are the Triceratops and the T. rex. And I can already tell you what we have with the Triceratops is wild,” he adds.

Science, 2025. DOI: 10.1126/science.adw3536


Jacek Krywko is a freelance science and technology writer who covers space exploration, artificial intelligence research, computer science, and all sorts of engineering wizardry.

Wyoming dinosaur mummies give us a new view of duck-billed species


Scientist pleaded guilty to smuggling Fusarium graminearum into US. But what is it?

Even with Fusarium graminearum, which has appeared on every continent except Antarctica, there is the potential to introduce new genetic material that exists in other countries but not the US, which could have harmful consequences for crops.

How do you manage Fusarium graminearum infections?

Fusarium graminearum infections generally occur during the plant’s flowering stage or when there is more frequent rainfall and periods of high humidity during early stages of grain production.

How Fusarium graminearum risk progressed in 2025. Yellow is low risk, orange is medium risk, and red is high risk. Fusarium Risk Tool/Penn State

Wheat in the southern US is vulnerable to infection during the spring. As the season advances, the risk from scab progresses north through the US and into Canada as the grain crops mature across the region, with continued periods of conducive weather throughout the summer.

Between seasons, Fusarium graminearum survives on barley, wheat, and corn plant residues that remain in the field after harvest. It reproduces by producing microscopic spores that can then travel long distances on wind currents, spreading the fungus across large geographic areas each season.

In wheat and barley, farmers can suppress the damage by spraying a fungicide onto developing wheat heads when they’re most susceptible to infection. Applying fungicide can reduce scab and its severity, improve grain weight, and reduce mycotoxin contamination.

However, integrated approaches to managing plant diseases are generally best: planting barley or wheat varieties that are resistant to scab, applying a carefully timed fungicide, rotating crops, and tilling the soil after harvest to reduce the residue where Fusarium graminearum can survive the winter.

Even though fungicide applications may be beneficial, fungicides offer only some protection and can’t cure scab. If the environmental conditions are extremely conducive for scab, with ample moisture and humidity during flowering, the disease will still occur, albeit at reduced levels.


Plant pathologists are making progress on early warning systems for farmers. A team from Kansas State University, Ohio State University, and Pennsylvania State University has been developing a computer model to predict the risk of scab. Their wheat disease predictive model uses historic and current environmental data from weather stations throughout the US, along with current conditions, to develop a forecast.

In areas that are most at risk, plant pathologists and commodity specialists encourage wheat growers to apply a fungicide during periods when the fungus is likely to grow to reduce the chances of damage to crops and the spread of mycotoxin.

Tom W. Allen, associate research professor of Plant Pathology, Mississippi State University. This article is republished from The Conversation under a Creative Commons license. Read the original article.



Blue Origin’s New Glenn rocket came back home after taking aim at Mars


“Never before in history has a booster this large nailed the landing on the second try.”

Blue Origin’s 320-foot-tall (98-meter) New Glenn rocket lifts off from Cape Canaveral Space Force Station, Florida. Credit: Blue Origin

The rocket company founded a quarter-century ago by billionaire Jeff Bezos made history Thursday with the pinpoint landing of an 18-story-tall rocket on a floating platform in the Atlantic Ocean.

The on-target touchdown came nine minutes after the New Glenn rocket, built and operated by Bezos’ company Blue Origin, lifted off from Cape Canaveral Space Force Station, Florida, at 3:55 pm EST (20:55 UTC). The launch was delayed from Sunday, first due to poor weather at the launch site in Florida, then by a solar storm that sent hazardous radiation toward Earth earlier this week.

“We achieved full mission success today, and I am so proud of the team,” said Dave Limp, CEO of Blue Origin. “It turns out Never Tell Me The Odds (Blue Origin’s nickname for the first stage) had perfect odds—never before in history has a booster this large nailed the landing on the second try. This is just the beginning as we rapidly scale our flight cadence and continue delivering for our customers.”

The two-stage launcher set off for space carrying two NASA science probes on a two-year journey to Mars, marking the first time any operational satellites flew on Blue Origin’s new rocket, named for the late NASA astronaut John Glenn. The New Glenn hit its marks on the climb into space, firing seven BE-4 main engines for nearly three minutes on a smooth ascent through blue skies over Florida’s Space Coast.

Seven BE-4 engines power New Glenn downrange from Florida’s Space Coast. Credit: Blue Origin

The engines consumed super-cold liquified natural gas and liquid oxygen, producing more than 3.8 million pounds of thrust at full power. The BE-4s shut down, and the first stage booster released the rocket’s second stage, with dual hydrogen-fueled BE-3U engines, to continue the mission into orbit.

The booster soared to an altitude of 79 miles (127 kilometers), then began a controlled plunge back into the atmosphere, targeting a landing on Blue Origin’s offshore recovery vessel named Jacklyn. Moments later, three of the booster’s engines reignited to slow its descent in the upper atmosphere. Then, moments before reaching the Atlantic, the rocket again lit three engines and extended its landing gear, sinking through low-level clouds before settling onto the football field-size deck of Blue Origin’s recovery platform 375 miles (600 kilometers) east of Cape Canaveral.

A pivotal moment

The moment of touchdown appeared electric at several Blue Origin facilities around the country, which piped live views of cheering employees into the company’s webcast of the flight. This was the first time any company besides SpaceX had propulsively landed an orbital-class rocket booster, coming nearly 10 years after SpaceX recovered its first Falcon 9 booster intact in December 2015.

Blue Origin’s New Glenn landing also came almost exactly a decade after the company landed its smaller suborbital New Shepard rocket for the first time in West Texas. Just like Thursday’s New Glenn landing, Blue Origin successfully recovered the New Shepard on its second-ever attempt.

Blue Origin’s heavy-lifter launched successfully for the first time in January. But technical problems prevented the booster from restarting its engines on descent, and the first stage crashed at sea. Engineers made “propellant management and engine bleed control improvements” to resolve the problems, and the fixes appeared to work Thursday.

The rocket recovery is a remarkable achievement for Blue Origin, which has long lagged behind the dominant SpaceX in the commercial launch business. SpaceX has now logged 532 landings with its Falcon booster fleet. With just a single recovery in the books, Blue Origin now sits second in the rankings for propulsive landings of orbital-class boosters. Bezos’ company has amassed 34 landings of the suborbital New Shepard, which lacks the size and doesn’t reach the altitude and speed of the New Glenn booster.

Blue Origin landed a New Shepard returning from space for the first time in November 2015, a few weeks before SpaceX first recovered a Falcon 9 booster. Bezos threw shade on SpaceX with a post on Twitter, now called X, after the first Falcon 9 landing: “Welcome to the club!”

Jeff Bezos, Blue Origin’s founder and owner, wrote this message on Twitter following SpaceX’s first Falcon 9 landing on December 21, 2015. Credit: X/Jeff Bezos

Finally, after Thursday, Blue Origin officials can say they are part of the same reusable rocket club as SpaceX. Within a few days, Blue Origin’s recovery vessel is expected to return to Port Canaveral, Florida, where ground crews will offload the New Glenn booster and move it to a hangar for inspections and refurbishment.

“Today was a tremendous achievement for the New Glenn team, opening a new era for Blue Origin and the industry as we look to launch, land, repeat, again and again,” said Jordan Charles, the company’s vice president for the New Glenn program, in a statement. “We’ve made significant progress on manufacturing at rate and building ahead of need. Our primary focus remains on increasing our cadence and working through our manifest.”

Blue Origin plans to reuse the same booster next year for the first launch of the company’s Blue Moon Mark 1 lunar cargo lander. This mission is currently penciled in to be next on Blue Origin’s New Glenn launch schedule. Eventually, the company plans to have a fleet of reusable boosters, like SpaceX has with the Falcon 9, that can each be flown up to 25 times.

New Glenn is a core element in Blue Origin’s architecture for NASA’s Artemis lunar program. The rocket will eventually launch human-rated landers that will carry astronauts to and from the lunar surface.

The US Space Force will also examine the results of Thursday’s launch to assess New Glenn’s readiness to begin launching military satellites. The military selected Blue Origin last year to join SpaceX and United Launch Alliance as a third launch provider for the Defense Department.

Blue Origin’s New Glenn booster, 23 feet (7 meters) in diameter, on the deck of the company’s landing platform in the Atlantic Ocean.

Slow train to Mars

The mission wasn’t over with the buoyant landing in the Atlantic. New Glenn’s second stage fired its engines twice to propel itself on a course toward deep space, setting up for deployment of NASA’s two ESCAPADE satellites a little more than a half-hour after liftoff.

The identical satellites were released from their mounts on top of the rocket to begin their nearly two-year journey to Mars, where they will enter orbit to survey how the solar wind interacts with the rarefied uppermost layers of the red planet’s atmosphere. Scientists believe radiation from the Sun gradually stripped away Mars’ atmosphere, driving runaway climate change that transitioned the planet from a warm, habitable world to the global inhospitable desert seen today.

“I’m both elated and relieved to see NASA’s ESCAPADE spacecraft healthy post-launch and looking forward to the next chapter of their journey to help us understand Mars’ dynamic space weather environment,” said Rob Lillis, the mission’s principal investigator from the University of California, Berkeley.

Scientists want to understand the environment at the top of the Martian atmosphere to learn more about what drove this change. With two instrumented spacecraft, ESCAPADE will gather data from different locations around Mars, providing a series of multipoint snapshots of solar wind and atmospheric conditions. Another NASA spacecraft, named MAVEN, has collected similar data since arriving in orbit around Mars in 2014, but it is only a single observation post.

ESCAPADE, short for Escape and Plasma Acceleration and Dynamics Explorers, was developed and launched on a budget of about $80 million, a bargain compared to all of NASA’s recent Mars missions. The spacecraft were built by Rocket Lab, and the project is managed on behalf of NASA by the University of California, Berkeley.

The two spacecraft for NASA’s ESCAPADE mission at Rocket Lab’s factory in Long Beach, California. Credit: Rocket Lab

NASA paid Blue Origin about $20 million for the launch of ESCAPADE, significantly less than it would have cost to launch it on any other dedicated rocket. The space agency accepted the risk of launching on the relatively unproven New Glenn rocket, which hasn’t yet been certified by NASA or the Space Force for the government’s marquee space missions.

The mission was supposed to launch last year, when Earth and Mars were in the right positions to enable a direct trip between the planets. But Blue Origin delayed the launch, forcing a yearlong wait until the company’s second New Glenn was ready to fly. Now, the ESCAPADE satellites, each about a half-ton in mass fully fueled, will loiter in a unique orbit more than a million miles from Earth until next November, when they will set off for the red planet. ESCAPADE will arrive at Mars in September 2027 and begin its science mission in 2028.

Rocket Lab ground controllers established communication with the ESCAPADE satellites late Thursday night.

“The ESCAPADE mission is part of our strategy to understand Mars’ past and present so we can send the first astronauts there safely,” said Nicky Fox, associate administrator of NASA’s Science Mission Directorate. “Understanding Martian space weather is a top priority for future missions because it helps us protect systems, robots, and most importantly, humans, in extreme environments.”


Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.



Three astronauts are stuck on China’s space station without a safe ride home

This view shows a Shenzhou spacecraft departing the Tiangong space station in 2023. Credit: China Manned Space Agency

Swapping spacecraft in low-Earth orbit

With their original spacecraft deemed unsafe, Chen and his crewmates instead rode back to Earth on the newer Shenzhou 21 craft that launched and arrived at the Tiangong station October 31. The three astronauts who launched on Shenzhou 21—Zhang Lu, Wu Fei, and Zhang Hongzhang—remain aboard the nearly 100-metric ton space station with only the damaged Shenzhou 20 craft available to bring them home.

China’s line of Shenzhou spaceships not only provides transportation to and from low-Earth orbit; the vehicles also serve as lifeboats to evacuate astronauts from the Chinese space station in the event of an in-flight emergency, such as a major failure or a medical crisis. They serve the same role as the Russian Soyuz and SpaceX Crew Dragon vehicles flying to and from the International Space Station.

Another Shenzhou spacecraft, Shenzhou 22, “will be launched at a later date,” the China Manned Space Agency said in a statement. Shenzhou 20 will remain in orbit to “continue relevant experiments.” The Tiangong lab is designed to support crews of six only for short periods; longer stays are limited to three astronauts.

Officials have not disclosed when Shenzhou 22 might launch, but Chinese officials typically have a Long March rocket and Shenzhou spacecraft on standby for rapid launch if required. Instead of astronauts, Shenzhou 22 will ferry fresh food and equipment to sustain the three-man crew on the Tiangong station.

China’s state-run Xinhua news agency called Friday’s homecoming “the first successful implementation of an alternative return procedure in the country’s space station program history.”

The shuffling return schedules and damaged spacecraft at the Tiangong station offer a reminder of the risks of space junk, especially tiny debris fragments that evade detection by tracking telescopes and radars. A minuscule piece of space debris traveling at several miles per second can pack a punch. Crews at the Tiangong outpost ventured outside the station multiple times in the last few years to install space debris shielding to protect the outpost.

Astronaut Tim Peake took this photo of a cracked window on the International Space Station in 2016. The 7-millimeter (quarter-inch) divot on the quadruple-pane window was gouged out by an impact of space debris no larger than a few thousandths of a millimeter across. The damage did not pose a risk to the station. Credit: ESA/NASA

Shortly after landing on Friday, ground teams assisted the Shenzhou astronauts out of their landing module. All three appeared to be in good health and buoyant spirits after completing the longest-duration crew mission for China’s space program.

“Space exploration has never been easy for humankind,” said Chen Dong, the mission commander, according to Chinese state media.

“This mission was a true test, and we are proud to have completed it successfully,” Chen said shortly after landing. “China’s space program has withstood the test, with all teams delivering outstanding performances … This experience has left us a profound impression that astronauts’ safety is really prioritized.”



World’s oldest RNA extracted from Ice Age woolly mammoth

A young woolly mammoth now known as Yuka was frozen in the Siberian permafrost for about 40,000 years before it was discovered by local tusk hunters in 2010. The hunters soon handed it over to scientists, who were excited to see its exquisite level of preservation, with skin, muscle tissue, and even reddish hair intact. Later research showed that, while full cloning was impossible, Yuka’s DNA was in such good condition that some cell nuclei could even begin limited activity when placed inside mouse eggs.

Now, a team has successfully sequenced Yuka’s RNA—a feat many researchers once thought impossible. Researchers at Stockholm University carefully ground up bits of muscle and other tissue from Yuka and nine other woolly mammoths, then used special chemical treatments to pull out any remaining RNA fragments, which are normally thought to be much too fragile to survive even a few hours after an organism has died. Scientists go to great lengths to extract RNA even from fresh samples, and most previous attempts with very old specimens have either failed or been contaminated.

A different view

The team used RNA-handling methods adapted for ancient, fragmented molecules. Their scientific séance allowed them to explore information that had never been accessible before, including which genes were active when Yuka died. In the creature’s final panicked moments, its muscles were tensing and its cells were signaling distress—perhaps unsurprising since Yuka is thought to have died as a result of a cave lion attack.

It’s an exquisite level of detail, and one that scientists can’t get from just analyzing DNA. “With RNA, you can access the actual biology of the cell or tissue happening in real time within the last moments of life of the organism,” said Emilio Mármol, a researcher who led the study. “In simple terms, studying DNA alone can give you lots of information about the whole evolutionary history and ancestry of the organism under study. Obtaining this fragile and mostly forgotten layer of the cell biology in old tissues/specimens, you can get for the first time a full picture of the whole pipeline of life (from DNA to proteins, with RNA as an intermediate messenger).”



Dogs came in a wide range of sizes and shapes long before modern breeds

“The concept of ‘breed’ is very recent and does not apply to the archaeological record,” Evin said. People have, of course, been breeding dogs for particular traits for as long as we’ve had dogs, and tiny lap dogs existed even in ancient Rome. However, it’s unlikely that a Neolithic herder would have described his dog as being a distinct “breed” from his neighbor’s hunting partner, even if they looked quite different. Which, apparently, they did.


Dogs had about half of their modern diversity (at least in skull shapes and sizes) by the Neolithic. Credit: Kiona Smith

Bones only tell part of the story

“We know from genetic models that domestication should have started during the late Pleistocene,” Evin told Ars. A 2021 study suggested that domestic dogs have been a separate species from wolves for more than 23,000 years. But it took a while for differences to build up.

Evin and her colleagues had access to 17 canine skulls that ranged from 12,700 to 50,000 years old—prior to the end of the ice age—and they all looked enough like modern wolves that, as Evin put it, “for now, we have no evidence to suggest that any of the wolf-like skulls did not belong to wolves or looked different from them.” In other words, if you’re just looking at the skull, it’s hard to tell the earliest dogs from wild wolves.

We have no way to know, of course, what the living dog might have looked like. It’s worth mentioning that Evin and her colleagues found a modern Saint Bernard’s skull that, according to their statistical analysis, looked more wolf-like than dog-like. But even if it’s not offering you a brandy keg, there’s no mistaking a live Saint Bernard, with its droopy jowls and floppy ears, for a wolf.

“Skull shape tells us a lot about function and evolutionary history, but it represents only one aspect of the animal’s appearance. This means that two dogs with very similar skulls could have looked quite different in life,” Evin told Ars. “It’s an important reminder that the archaeological record captures just part of the biological and cultural story.”

And with only bones—and sparse ones, at that—to go on, we may be missing some of the early chapters of dogs’ biological and cultural story. Domestication tends to select the friendliest animals to produce the next generation, and apparently that comes with a particular set of evolutionary side effects, whether you’re studying wolves, foxes, cattle, or pigs. Spots, floppy ears, and curved tails all seem to be part of the genetic package that comes with inter-species friendliness. But none of those traits is visible in the skull.



Tiny chips hitch a ride on immune cells to sites of inflammation


Tiny chips can be powered by infrared light if they’re near the brain’s surface.

An immune cell chemically linked to a CMOS chip. Credit: Yadav et al.

Standard brain implants use electrodes that penetrate the gray matter to stimulate and record the activity of neurons. These typically need to be put in place via a surgical procedure. To go around that need, a team of researchers led by Deblina Sarkar, an electrical engineer and MIT assistant professor, developed microscopic electronic devices hybridized with living cells. Those cells can be injected into the circulatory system with a standard syringe and will travel the bloodstream before implanting themselves in target brain areas.

“In the first two years of working on this technology at MIT, we had 35 grant proposals rejected in a row,” Sarkar says. “Comments we got from the reviewers were that our idea was very impactful, but it was impossible.” She acknowledges that the proposal sounded like something you can find in science fiction novels. But after more than six years of research, she and her colleagues have pulled it off.

Nanobot problems

In 2022, when Sarkar and her colleagues gathered initial data and got some promising results with their cell-electronics hybrids, the team proposed the project for the National Institutes of Health Director’s New Innovator Award. For the first time, after 35 rejections, it made it through peer review. “We got the highest impact score ever,” Sarkar says.

The reason for that score was that her technology solved three extremely difficult problems. The first, obviously, was making functional electronic devices smaller than cells that can circulate in our blood.

“Previous explorations, which had not seen a lot of success, relied on putting magnetic particles inside the bloodstream and then guiding them with magnetic fields,” Sarkar explains. “But there is a difference between electronics and particles.” Electronics made using CMOS technology (which we use for making computer processors) can generate electrical power from incoming light in the same way as photovoltaics, as well as perform computations necessary for more intelligent applications like sensing. Particles, on the other hand, can only be used to stimulate cells to an extent.

If they ever reach those cells, of course, which was the second problem. “Controlling the devices with magnetic fields means you need to go into a machine the size of an MRI,” Sarkar says. Once the subject is in the machine, an operator looks at where the devices are and tries to move them to where they need to be using nothing but magnetic fields. Sarkar said that it’s tough to do anything other than move the particles in straight lines, which is a poor match for our very complex vasculature.

The solution her team found was fusing the electronics with monocytes, immune cells that can home in on inflammation in our bodies. The idea was that the monocytes would carry the electronics through the bloodstream using the cells’ chemical homing mechanism. This also solved the third problem: crossing the blood-brain barrier that protects the brain from pathogens and toxins. Electronics alone could not get through it; monocytes could.

The challenge was making all these ideas work.

Clicking together

Sarkar’s team built electronic devices made of biocompatible polymer and metallic layers fabricated on silicon wafers using a standard CMOS process. “We made the devices this small with lithography, the technique used in making transistors for chips in our computers,” Sarkar explains. They were roughly 200 nanometers thick and 10 microns in diameter—that kept them subcellular, since a monocyte cell usually measures between 12 and 18 microns. The devices were activated and powered by infrared light at a wavelength that could penetrate several centimeters into the brain.

Once the devices were manufactured and taken off the wafer, the next thing to figure out was attaching them to monocytes.

To do this, the team covered the surfaces of the electronic devices with dibenzocyclooctyne, a highly reactive molecule that readily links to other chemicals, especially nitrogen compounds called azides. Then Sarkar and her colleagues chemically modified monocytes to display azides on their surfaces. This way, the electronics and cells could quickly snap together, almost like Lego blocks (this approach, called click chemistry, won the 2022 Nobel Prize in Chemistry).

The resulting solution of cell-electronics hybrids was designed to be biocompatible and could be injected into the circulatory system. This is why Sarkar called her concept “circulatronics.”

Of course, Sarkar’s “circulatronic” hybrids fall a bit short of sci-fi fantasies, in that they aren’t exactly literal nanobots. But they may be the closest thing we’ve created so far.

Artificial neurons

To test these hybrids in live mice, the researchers prepared a fluorescent version to make them easier to track. Mice were anesthetized first, and the team artificially created inflammation at a specific location in their brains, around the ventrolateral thalamic nucleus. Then the hybrids were injected into the veins of the mice. After roughly 72 hours, the time scientists expected would be needed for the monocytes to reach the inflammation, Sarkar and her colleagues started running tests.

It turned out that most of the injected hybrids reached their destination in one piece—the electronics mostly remained attached to the monocytes. The team’s measurements suggest that around 14,000 hybrids managed to successfully implant themselves near the neurons in the target area of the brain. Then, in response to infrared irradiation, they caused significant neuronal activation, comparable to traditional electrodes implanted via surgery.

The real strength of the hybrids, Sarkar thinks, is the way they can be tuned to specific diseases. “We chose monocytes for this experiment because inflammation spots in the brain are usually the target in many neurodegenerative diseases,” Sarkar says. Depending on the application, though, the hybrids’ performance can be adjusted by manipulating their electronic and cellular components. “We have already tested using mesenchymal stem cells for Alzheimer’s, or T cells and other neural stem cells for tumors,” Sarkar explains.

She went on to say that her technology may one day help with placing implants in brain regions that currently cannot be safely reached through surgery. “There is a brain cancer called glioblastoma that forms diffused tumor sites. Another example is DIPG [a form of glioma], which is a terminal brain cancer in children that develops in a region where surgery is impossible,” she adds.

But in the more distant future, the hybrids could find applications beyond targeting diseases. Most studies that have relied on data from brain implants were limited to participants who suffered from severe brain disorders. The implants were put in their brains for therapeutic reasons, and participating in research projects was something they simply agreed to do on the side.

Because the electronics in Sarkar’s hybrids can be designed to fully degrade after a set time, the team thinks this could potentially enable them to gather brain implant data from healthy people—the implants would do their job for the duration of the study and be gone once it’s done. Unless we want them to stay, that is.

“The ease of application can make the implants feasible in brain-computer interfaces designed for healthy people,” Sarkar argues. “Also, the electrodes can be made to work as artificial neurons. In principle, we could enhance ourselves—increase our neuronal density.”

First, though, the team wants to put the hybrids through a testing campaign in larger animals and then get them FDA-approved for clinical trials. Through Cahira Technologies, an MIT spinoff company founded to bring the “circulatronics” technology to market, Sarkar wants to make this happen within the next three years.

Nature Biotechnology, 2025. DOI: 10.1038/s41587-025-02809-3


Jacek Krywko is a freelance science and technology writer who covers space exploration, artificial intelligence research, computer science, and all sorts of engineering wizardry.



What if the aliens come and we just can’t communicate?


Ars chats with particle physicist Daniel Whiteson about his new book Do Aliens Speak Physics?

Science fiction has long speculated about the possibility of first contact with an alien species from a distant world and how we might be able to communicate with them. But what if we simply don’t have enough common ground for that to even be possible? An alien species is bound to be biologically very different, and their language will be shaped by their home environment, broader culture, and even how they perceive the universe. They might not even share the same math and physics. These and other fascinating questions are the focus of an entertaining new book, Do Aliens Speak Physics? And Other Questions About Science and the Nature of Reality.

Co-author Daniel Whiteson is a particle physicist at the University of California, Irvine, who has worked on the ATLAS collaboration at CERN’s Large Hadron Collider. He’s also a gifted science communicator who previously co-authored two books with cartoonist Jorge Cham of PhD Comics fame: 2018’s We Have No Idea and 2021’s Frequently Asked Questions About the Universe. (The pair also co-hosted a podcast from 2018 to 2024, Daniel and Jorge Explain the Universe.) This time around, cartoonist Andy Warner provided the illustrations, and Whiteson and Warner charmingly dedicate their book to “all the alien scientists we have yet to meet.”

Whiteson has long been interested in the philosophy of physics. “I’m not the kind of physicist who’s like, ‘whatever, let’s just measure stuff,’” he told Ars. “The thing that always excited me about physics was this implicit promise that we were doing something universal, that we were learning things that were true on other planets. But the more I learned, the more concerned I became that this might have been oversold. None [of our concepts] are fundamental, and we don’t understand why anything emerges. Can we separate the human lens from the thing we’re looking at? We don’t know in the end how much that lens is distorting what we see or defining what we’re looking at. So that was the fundamental question I always wanted to explore.”

Whiteson initially pitched his book idea to his 14-year-old son, who inherited his father’s interest in science. But his son thought Whiteson’s planned discussion of how physics might not be universal was, well, boring. “When you ask for notes, you’ve got to listen to them,” said Whiteson. “So I came back and said, ‘Well, what about a book about when aliens arrive? Will their science be similar to ours? Can we collaborate together?’” That pitch won the teen’s enthusiastic approval: same ideas, but couched in the context of alien contact to make them more concrete.

As for Warner’s involvement as illustrator, “I cold-emailed him and said, ‘Want to write a book about aliens? You get to draw lots of weird aliens,’” said Whiteson. Who could resist that offer?

Ars caught up with Whiteson to learn more.

cartoon exploring possible outcomes of first alien contact

Credit: Andy Warner

Ars Technica: You open each chapter with fictional hypothetical scenarios. Why?

Daniel Whiteson: I sent an early version of the book to my friend, [biologist] Matt Giorgianni, who appears in cartoon form in the book. He said, “The book is great, but it’s abstract. Why don’t you write out a hypothetical concrete scenario for each chapter to show us what it would be like and how we would stumble.”

All great ideas seem obvious once you hear them. I’ve always been a huge science fiction fan. It’s thoughtful and creative and exploratory about the way the universe could be and might be. So I jumped at the opportunity to write a little bit of science fiction. Each one was challenging because you have to come up with a specific example that illustrates the concepts in that chapter, the issues that you might run into, but also be believable and interesting. We went through a lot of drafts. But I had a lot of fun writing them.

Ars Technica: The Voyager Golden Record is perhaps the best known example of humans attempting to communicate with an alien species, spearheaded by the late Carl Sagan, among others. But what are the odds that, despite our best efforts, any aliens will ever be able to decipher our “message in a bottle”?

Daniel Whiteson: I did an informal experiment where I printed out a picture of the Pioneer plaque and showed it to a bunch of grad students who were young enough to not have seen it before. This is Sagan’s audience: biological humans, same brain, same culture, physics grad students—none of them had any idea what any of that was supposed to refer to. NASA gave him two weeks to come up with that design. I don’t know that I would’ve done any better. It’s easy to criticize.

Those folks, they were doing their best. They were trying to step away from our culture. They didn’t use English, they didn’t even use mathematical symbols. They understood that those things are arbitrary, and they were reaching for something they hoped was going to be universal. But in the end, nothing can be universal because language is always symbols, and the choice of those symbols is arbitrary and cultural. It’s impossible to choose a symbol that can only be interpreted in one way.

Fundamentally, the book is trying to poke at our assumptions. It’s so inspiring to me that the history of physics is littered with times when we have had to abandon an assumption that we clung to desperately, until we were shown otherwise with enough data. So we’ve got to be really open-minded about whether these assumptions hold true, whether it’s required to do science, to be technological, or whether there is even a single explanation for reality. We could be very well surprised by what we discover.

Ars Technica: It’s often assumed that math and physics are the closest thing we have to a universal language. You challenge that assumption, probing such questions as “What does it even mean to ‘count’?”

Daniel Whiteson: At an initial glance, you’re like, well, of course mathematics is required, and of course numbers are universal. But then you dig into it and you start to realize there are fuzzy issues here. So many of the assumptions that underlie our interpretation of what we learned about physics are that way. I had this experience that’s probably very common among physics undergrads in quantum mechanics, learning about those calculations where you see nine decimal places in the theory and nine decimal places in the experiment, and you go, “Whoa, this isn’t just some calculational tool. This is how the universe decides what happens to a particle.”

I literally had that moment. I’m not a religious person, but it was almost spiritual. For many years, I believed that deeply, and I thought it was obvious. But to research this book, I read Science Without Numbers by Hartry Field. I was lucky—here at Irvine, we happen to have an amazing logic and philosophy of science department, and those folks really helped me digest [his ideas] and understand how you can pull yourself away from things like having a number line. It turns out you don’t need numbers; you can just think about relationships. It was really eye-opening, both how essential mathematics seems to human science and how obvious it is that we’re making a bunch of assumptions that we don’t know how to justify.

There are dotted lines humans have drawn because they make sense to us. You don’t have to go to plasma swirls and atmospheres to imagine a scenario where aliens might not have the same differentiation between their identities, me and you, here’s where I end, and here’s where you begin. That’s complicated for aliens that are plasma swirls, but also it’s not even very well-defined for us. How do I define the edge of my body? Is it where my skin ends? What about the dead skin, the hair, my personal space? There’s no obvious definition. We just have a cultural sense for “this is me, this is not me.” In the end, that’s a philosophical choice.

Cartoon of an alien emerging from a spaceship and a human saying

Credit: Andy Warner

Ars Technica: You raise another interesting question in the book: Would aliens even need physics theory or a deeper understanding of how the universe works? Perhaps they could invent, say, warp drive through trial and error. You suggest that our theory is more like a narrative framework. It’s the story we tell, and that is very much prone to bias.

Daniel Whiteson: Absolutely. And not just bias, but emotion and curiosity. We put energy into certain things because we think they’re important. Physicists spend our lives on this because we think, among the many things we could spend our time on, this is an important question. That’s an emotional choice. That’s a personal subjective thing.

Other people find my work dead boring and would hate to have my life, and I would feel the same way about theirs. And that’s awesome. I’m glad that not everybody wants to be a particle physicist or a biologist or an economist. We have a diversity of curiosity, which we all benefit from. People have an intuitive feel for certain things. There’s something in their minds that naturally understands how the system works and reflects it, and they probably can’t explain it. They might not be able to design a better car, but they can drive the heck out of it.

This is maybe the biggest stumbling block for people who are just starting to think about this for the first time. “Obviously aliens are scientific.” Well, how do we know? What do we mean by scientific? That concept has evolved over time, and is it really required? I felt that was a big-picture thing that people could wrap their minds around but also a shock to the system. They were already a little bit off-kilter and might realize that some things they assumed must be true may not have to be.

Ars Technica: You cite the 2016 film Arrival as an example of first contact and the challenge of figuring out how to communicate with an alien species. They had to learn each other’s cultural context before they had any chance of figuring out what their respective symbols meant. 

Daniel Whiteson:  I think that is really crucial. Again, how you choose to represent your ideas is, in a sense, arbitrary. So if you’re on the other side of that, and you have to go from symbols to ideas, you have to know something about how they made those choices in order to reverse-engineer a message, in order to figure out what it means. How do you know if you’ve done it correctly? Say we get a message from aliens and we spend years of supercomputer time cranking on it. Something we rely on for decoding human messages is that you can tell when you’ve done it right because you have something that makes sense.

cartoon about the

Credit: Andy Warner

How do we know when we’ve done that for an alien text? How do you know the difference between nonsense and things that don’t yet make sense to you? It’s essentially impossible. I spoke to some philosophers of language who convinced me that if we get a message from aliens, it might be literally impossible to decode it without their help, without their context. There aren’t enough examples of messages from aliens. We have the “Wow!” signal—who knows what that means? Maybe it’s a great example of getting a message and having no idea how to decode it.

As in many places in the book, I turned to human history because it’s the one example we have. I was shocked to discover how many human languages we haven’t decoded. Not to mention, we haven’t decoded whale. We know whales are talking to each other. Maybe they’re talking to us and we can’t decode it. The lesson, again and again, is culture.

With the Egyptians, the only reason we were able to decode hieroglyphics is because we had the Rosetta Stone, and even then it took 20 years. How does it take 20 years when you have examples and you know what it’s supposed to say? And the answer is that we made cultural assumptions. We assumed pictograms reflected the ideas in the pictures. If there’s a bird in it, it’s about birds. And we were just wrong. That may be why we’ve been struggling with Etruscan and other languages we’ve never decoded.

Even when we do have a lot of culture in common, it’s very, very tricky. So the idea that we would get a message from space and have no cultural clues at all and somehow be able to decode it—I think that only works in the scenario where aliens have been listening to us for a long time and they’re essentially writing to us in something like our language. I’m a big fan of SETI. They host these conferences where they listen to philosophers and linguists and anthropologists. I don’t mean to say that they’ve thought too narrowly, but I think it’s not widely enough appreciated how difficult it is to decode another language with no culture in common.

Ars Technica: You devote a chapter to the possibility that aliens might be able to, say, “taste” electrons. That drives home your point that physiologically, they will probably be different, biologically they will be different, and their experiences are likely to be different. So their notion of culture will also be different.

Daniel Whiteson: Something I really wanted to get across was that perception determines your sense of intuition, which defines in many ways the answers you find acceptable. I read Ed Yong’s amazing book, An Immense World, about animal perception, the diversity of animal experience or animal interaction with the environment. What is it like to be an octopus with a distributed mind? What is it like to be a bird that senses magnetic fields? If you’re an alien and you have very different perceptions, you could also have a different intuitive language in which to understand the universe. What is a photon? Is it a particle? Is it a wave?  Maybe we just struggle with the concept because our intuitive language is limited.

For an alien that can see photons in superposition, this is boring. They could be bored by things we’re fascinated by, or we could explain our theories to them and they could be unsatisfied because to them, it doesn’t translate into their intuitive language. So even though we’ve transcended our biological limitations in many ways, we can detect gravitational waves and infrared photons, we’re still, I think, shaped by those original biological senses that frame our experience, the models we build. What it’s like to be a human in the world could be very, very different from what it’s like to be an alien in the world.

Ars Technica:  Your very last line reads, “Our theories may reveal the patterns of our thoughts as much as the patterns of nature.” Why did you choose to end your book on that note?

Daniel Whiteson: That’s a message to myself. When I started this journey, I thought physics is universal. That’s what makes it beautiful. That implies that if the aliens show up and they do physics in a very different way, it would be a crushing disappointment. Not only is what we’ve learned just another human science, but also it means maybe we can’t benefit from their work or we can’t have that fantasy of a galactic scientific conference. But that actually might be the best-case scenario.

I mean, the reason that you want to meet aliens is the reason you go traveling. You want to expand your horizons and learn new things. Imagine how boring it would be if you traveled around the world to some new country and they just had all the same food. It might be comforting in some way, but also boring. It’s so much more interesting to find out that they have spicy fish soup for breakfast. That’s the moment when you learn not just about what to have for breakfast but about yourself and your boundaries.

So if aliens show up and they’re so weirdly different in a way we can’t possibly imagine—that would be deeply revealing. It’ll give us a chance to separate the reality from the human lens. It’s not just questions of the universe. We are interesting, and we should learn about our own biases. I anticipate that when the aliens do come, their culture and their minds and their science will be so alien, it’ll be a real challenge to make any kind of connection. But if we do, I think we’ll learn a lot, not just about the universe but about ourselves.


Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.
