Science

The neurons that let us see what isn’t there

Earlier work had hinted at such cells, but Shin and colleagues show systematically that they’re not rare oddballs—they’re a well-defined, functionally important subpopulation. “What we didn’t know is that these neurons drive local pattern completion within primary visual cortex,” says Shin. “We showed that those cells are causally involved in this pattern completion process that we speculate is likely involved in the perceptual process of illusory contours,” adds Adesnik.

Behavioral tests still to come

That doesn’t mean the mice “saw” the illusory contours when the neurons were artificially activated. “We didn’t actually measure behavior in this study,” says Adesnik. “It was about the neural representation.” All we can say at this point is that the IC-encoders could induce neural activity patterns that matched what imaging shows during normal perception of illusory contours.

“It’s possible that the mice weren’t seeing them,” admits Shin, “because the technique has involved a relatively small number of neurons, [owing to] technical limitations. But in the future, one could expand the number of neurons and also introduce behavioral tests.”

That’s the next frontier, Adesnik says: “What we would do is photo-stimulate these neurons and see if we can generate an animal’s behavioral response even without any stimulus on the screen.” Right now, optogenetics can only drive a small number of neurons, and IC-encoders are relatively rare and scattered. “For now, we have only stimulated a small number of these detectors, mainly because of technical limitations. IC-encoders are a rare population, probably distributed through the layers [of the visual system], but we could imagine an experiment where we recruit three, four, five, maybe even 10 times as many neurons,” he says. “In this case, I think we might be able to start getting behavioral responses. We’d definitely very much like to do this test.”

Nature Neuroscience, 2025. DOI: 10.1038/s41593-025-02055-5

Federica Sgorbissa is a science journalist; she writes about neuroscience and cognitive science for Italian and international outlets.

Here’s the real reason Endurance sank


The ship wasn’t designed to withstand the powerful ice compression forces—and Shackleton knew it.

The Endurance, frozen and keeled over in the ice of the Weddell Sea. Credit: BF/Frank Hurley

In 1915, intrepid British explorer Sir Ernest Shackleton and his crew were stranded for months in the Antarctic after their ship, Endurance, was trapped by pack ice, eventually sinking into the freezing depths of the Weddell Sea. Miraculously, the entire crew survived. The prevailing popular narrative surrounding the famous voyage features two key assumptions: that Endurance was the strongest polar ship of its time, and that the ship ultimately sank after ice tore away the rudder.

However, a fresh analysis reveals that Endurance would have sunk even with an intact rudder; it was crushed by the cumulative compressive forces of the Antarctic ice with no single cause for the sinking. Furthermore, the ship wasn’t designed to withstand those forces, and Shackleton was likely well aware of that fact, according to a new paper published in the journal Polar Record. Yet he chose to embark on the risky voyage anyway.

Author Jukka Tuhkuri of Aalto University is a polar explorer and one of the leading researchers on ice worldwide. He was among the scientists on the Endurance22 mission that discovered the Endurance shipwreck in 2022, documented in a 2024 National Geographic documentary. The ship was in pristine condition partly because of the lack of wood-eating microbes in those waters. In fact, the Endurance22 expedition’s exploration director, Mensun Bound, told The New York Times at the time that the shipwreck was the finest example he’s ever seen; Endurance was “in a brilliant state of preservation.”

As previously reported, Endurance set sail from Plymouth on August 6, 1914, with Shackleton joining his crew in Buenos Aires, Argentina. By the time they reached the Weddell Sea in January 1915, accumulating pack ice and strong gales slowed progress to a crawl. Endurance became completely icebound on January 24, and by mid-February, Shackleton ordered the boilers to be shut off so that the ship would drift with the ice until the weather warmed sufficiently for the pack to break up. It would be a long wait. For 10 months, the crew endured the freezing conditions. In August, ice floes pressed into the ship with such force that the ship’s decks buckled.

The ship’s structure nonetheless remained intact, but by October 25, Shackleton realized Endurance was doomed. He and his men opted to camp out on the ice some two miles (3.2 km) away, taking as many supplies as they could with them. Compacted ice and snow continued to fill the ship until a pressure wave hit on November 13, crushing the bow and splitting the main mast—all of which was captured on camera by crew photographer Frank Hurley. Another pressure wave hit in the late afternoon on November 21, lifting the ship’s stern. The ice floes parted just long enough for Endurance to finally sink into the ocean before closing again to erase any trace of the wreckage.

Once the wreck had been found, the team recorded as much as they could with high-resolution cameras and other instruments. Elizabeth Chai Vasarhelyi, who co-directed the National Geographic documentary, noted in particular the technical challenge of deploying a remote digital 4K camera with lighting at 9,800 feet underwater, as well as the first deployment of photogrammetric and laser technology at that depth. This resulted in a millimeter-scale digital reconstruction of the entire shipwreck to enable close study of the finer details.

Challenging the narrative

The ice and wave tank at Aalto University. Credit: Aalto University

It was shortly after the Endurance22 mission found the shipwreck that Tuhkuri realized that there had never been a thorough structural analysis conducted of the vessel to confirm the popular narrative. Was Endurance truly the strongest polar ship of that time, and was a broken rudder the actual cause of the sinking? He set about conducting his own investigation to find out, analyzing Shackleton’s diaries and personal correspondence, as well as the diaries and correspondence of several Endurance crew members.

Tuhkuri also conducted a naval architectural analysis of the vessel under the conditions of compressive ice, which had never been done before. He then compared those results with the underwater images of the Endurance shipwreck. He also looked at comparable wooden polar expedition ships and steel icebreakers built in the late 1800s and early 1900s.

Endurance was originally named Polaris; Shackleton renamed it when he purchased the ship in 1914 for his doomed expedition. Per Tuhkuri, the ship had a lower (tween) deck, a main deck, and a short bridge deck above them that stopped at the machine room in order to make space for the steam engine and boiler. There were no beams in the machine room area, nor any reinforcing diagonal beams, which weakened this significant part of the ship’s hull.

This is because Endurance was originally built for polar tourism and for hunting polar bears and walruses in the Arctic; at the ice edge, ships only needed sufficiently strong planking and frames to withstand the occasional collision from ice floes. However, “In pack ice conditions, where compression from the ice needs to be taken into account, deck beams become of key importance,” Tuhkuri wrote. “It is the deck beams that keep the two ship sides apart and maintain the shape of a ship. Without strong enough deck beams, a vessel gets crushed by compressive ice, more or less irrespective of the thickness of planking and frames.”

The Endurance was nonetheless sturdy enough to withstand five serious ice compression events before her final sinking. On April 4, 1915, one of the scientists on board reported hearing loud rumbling noises from a 3-meter-high ice ridge that formed near the ship, causing the ship to vibrate. Tuhkuri believes this was due to a “compressive failure process” as ice crushed against the hull. On July 14, a violent snowstorm hit, and crew members could hear the ice breaking beneath the ship. The ice ridges that formed over the next few days were sufficiently concerning that Shackleton instituted four-hour watches on deck and insisted on having everything packed in case they had to abandon ship.

Crushed by the ice

Idealized cross sections of early Antarctic ships. Endurance was type (a); Deutschland was type (b). Credit: J. Tuhkuri, 2025

On August 1, an ice floe fractured, and grinding noises were heard beneath the ship as the floe piled underneath it, lifting Endurance and causing her to heel first to starboard and then to port as several deck beams began to buckle. Similar compression events kept happening until there was a sudden escalation on September 30. The hull began vibrating hard enough to shake the whole rigging as even more ice crushed against the hull. Even the linoleum on the floors buckled; Harry McNish wrote in his diary that it looked like Endurance “was going to pieces.”

Yet another ice compression event occurred on October 17, pushing the vessel one meter into the air as the iron plates on the engine room’s floor buckled and slid over each other. Ship scientist Reginald James wrote that “for a time things were not good as the pressure was mostly along the region of the engine room where there are no beams of any strength,” while Captain Worsley described the engine room as “the weakest part of the ship.”

By the afternoon, Endurance was heeled almost 30 degrees to port, so much so that the keel was visible from the starboard side, per Tuhkuri, although the ice started to fracture in the evening, allowing the ship to shift upright again. The crew finally abandoned ship on October 27, after an even more severe compression event had struck a few days earlier. Endurance finally sank below the ice on November 21.

Tuhkuri’s analysis of the structural damage to Endurance revealed that the rudder and the stern post were indeed torn off, confirmed by crew correspondence and diaries and by the underwater images taken of the wreck. The keel was also ripped off, with McNish noting in his diary that the ship broke into two halves as a result. The underwater images are less clear on this point, but Tuhkuri writes that there is something “some distance forward from the rudder, on the port side” that “could be the end of a displaced part of the keel sticking up from under the ship.”

All the diaries mentioned the buckling and breaking of deck beams, and there was much structural damage to the ship’s sides; for instance, Worsley writes of “great spikes of ice… forcing their way through the ship’s sides.” There are no visible holes in the wreck’s sides in the underwater images, but Tuhkuri posits that the damage is likely buried in the mud on the sea bed, given that by late October, Endurance “was heavily listed and the bottom was exposed.”

Jukka Tuhkuri on the ice. Credit: Aalto University

Based on his analysis, Tuhkuri concluded that the rudder wasn’t the sole or primary reason for the ship’s sinking. “Endurance would have sunk even if it did not have a rudder at all,” Tuhkuri wrote; it was crushed by the ice, with no single reason for its eventual sinking. Shackleton himself described the process as ice floes “simply annihilating the ship.”

Perhaps the most surprising finding is that Shackleton knew of Endurance’s structural shortcomings even before undertaking the voyage. Per Tuhkuri, the devastating effects of compressive ice on ships were known to shipbuilders in the early 1900s. An early Swedish expedition was forced to abandon its ship Antarctic in February 1903 when it became trapped in the ice. Things progressed much as they later would with Endurance: the ice lifted Antarctic up so that the ship heeled over, with ice-crushed sides, buckling beams, broken planking, and a damaged rudder and stern post. The final sinking occurred when an advancing ice floe ripped off the keel.

Shackleton knew of Antarctic’s fate and had even been involved in the rescue operation. He also helped Wilhelm Filchner make final preparations for Filchner’s 1911–1913 polar expedition with a ship named Deutschland; he even advised his colleague to strengthen the ship’s hull by adding diagonal beams, the better to withstand the Weddell Sea ice. Filchner did so, and as a result, Deutschland survived eight months of being trapped in compressive ice until the ship was finally able to break free and sail home. (It took a torpedo attack in 1917 to sink the good ship Deutschland.)

The same shipyard that modified Deutschland had also just signed a contract to build Endurance (then called Polaris). So both Shackleton and the shipbuilders knew how destructive compressive ice could be and how to bolster a ship against it. Yet Endurance was not outfitted with diagonal beams to strengthen its hull. And knowing this, Shackleton bought Endurance anyway for his 1914–1915 voyage. In a 1914 letter to his wife, he even compared the strength of its construction unfavorably with that of the Nimrod, the ship he used for his 1907–1909 expedition. So Shackleton had to know he was taking a big risk.

“Even simple structural analysis shows that the ship was not designed for the compressive pack ice conditions that eventually sank it,” said Tuhkuri. “The danger of moving ice and compressive loads—and how to design a ship for such conditions—was well understood before the ship sailed south. So we really have to wonder why Shackleton chose a vessel that was not strengthened for compressive ice. We can speculate about financial pressures or time constraints, but the truth is, we may never know. At least we now have more concrete findings to flesh out the stories.”

Polar Record, 2025. DOI: 10.1017/S0032247425100090 (About DOIs).

Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.

Pentagon contract figures show ULA’s Vulcan rocket is getting more expensive

A SpaceX Falcon Heavy rocket with NASA’s Psyche spacecraft launches from NASA’s Kennedy Space Center in Florida on October 13, 2023. Credit: Chandan Khanna/AFP via Getty Images

The launch orders announced Friday comprise the second batch of NSSL Phase 3 missions the Space Force has awarded to SpaceX and ULA.

It’s important to remember that these prices aren’t what ULA or SpaceX would charge a commercial satellite customer. The US government pays a premium for access to space. The Space Force, the National Reconnaissance Office, and NASA don’t insure their launches as a commercial customer would. Instead, government agencies get more insight into their launch contractors, including inspections, flight data reviews, risk assessments, and security checks. Government missions also typically get priority on ULA and SpaceX’s launch schedules. All of this adds up to more money.

A heavy burden

Four of the five launches awarded to SpaceX Friday will use the company’s larger Falcon Heavy rocket, according to Lt. Col. Kristina Stewart at Space Systems Command. One will fly on SpaceX’s workhorse Falcon 9. This is the first time a majority of the Space Force’s annual launch orders has required the lift capability of a Falcon Heavy, with three Falcon 9 booster cores combining to heave larger payloads into space.

All versions of ULA’s Vulcan rocket use a single core booster, with varying numbers of strap-on solid-fueled rocket motors to provide extra thrust off the launch pad.

Here’s a breakdown of the seven new missions assigned to SpaceX and ULA:

USSF-149: Classified payload on a SpaceX Falcon 9 from Florida

USSF-63: Classified payload on a SpaceX Falcon Heavy from Florida

USSF-155: Classified payload on a SpaceX Falcon Heavy from Florida

USSF-205: WGS-12 communications satellite on a SpaceX Falcon Heavy from Florida

NROL-86: Classified payload on a SpaceX Falcon Heavy from Florida

USSF-88: GPS IIIF-4 navigation satellite on a ULA Vulcan VC2S (two solid rocket boosters) from Florida

NROL-88: Classified payload on a ULA Vulcan VC4S (four solid rocket boosters) from Florida

How different mushrooms learned the same psychedelic trick

Magic mushrooms have been used in traditional ceremonies and for recreational purposes for thousands of years. However, a new study has found that mushrooms evolved the ability to make the same psychoactive substance twice. The discovery has important implications for both our understanding of these mushrooms’ role in nature and their medical potential.

Magic mushrooms produce psilocybin, which your body converts into its active form, psilocin, when you ingest it. Psilocybin rose in popularity in the 1960s and was eventually classed as a Schedule I drug in the US in 1970, and as a Class A drug in the UK in 1971, the designations given to drugs that have a high potential for abuse and no accepted medical use. This put a stop to research on the medical use of psilocybin for decades.

But recent clinical trials have shown that psilocybin can reduce depression severity, suicidal thoughts, and chronic anxiety. Given its potential for medical treatments, there is renewed interest in understanding how psilocybin is made in nature and how we can produce it sustainably.

The new study, led by pharmaceutical microbiology researcher Dirk Hoffmeister, from Friedrich Schiller University Jena, discovered that mushrooms can make psilocybin in two different ways, using different types of enzymes. This also helped the researchers discover a new way to make psilocybin in a lab.

Based on the work led by Hoffmeister, enzymes from two types of unrelated mushrooms under study appear to have evolved independently from each other and take different routes to create the exact same compound.

This is a process known as convergent evolution, in which unrelated organisms independently evolve the same trait. One example is caffeine: different plants, including coffee, tea, cacao, and guaraná, have independently evolved the ability to produce the stimulant.

This is the first time that convergent evolution of psilocybin production has been observed within the fungal kingdom. Interestingly, the two mushrooms in question have very different lifestyles. Inocybe corydalina, also known as the greenflush fibrecap and the object of Hoffmeister’s study, grows in association with the roots of different kinds of trees. Psilocybe mushrooms, on the other hand, traditionally known as magic mushrooms, live on nutrients that they acquire by decomposing dead organic matter, such as decaying wood, grass, roots, or dung.

Blue Origin aims to land next New Glenn booster, then reuse it for Moon mission


“We fully intend to recover the New Glenn first stage on this next launch.”

New Glenn lifts off on its debut flight on January 16, 2025. Credit: Blue Origin

There’s a good bit riding on the second launch of Blue Origin’s New Glenn rocket.

Most directly, the fate of a NASA science mission to study Mars’ upper atmosphere hinges on a successful launch. The second flight of Blue Origin’s heavy-lifter will send two NASA-funded satellites toward the red planet to study the processes that drove Mars’ evolution from a warmer, wetter world to the cold, dry planet of today.

A successful launch would also nudge Blue Origin closer to winning certification from the Space Force to begin launching national security satellites.

But there’s more on the line. If Blue Origin plans to launch its first robotic Moon lander early next year—as currently envisioned—the company needs to recover the New Glenn rocket’s first stage booster. Crews will again dispatch Blue Origin’s landing platform into the Atlantic Ocean, just as they did for the first New Glenn flight in January.

The debut launch of New Glenn successfully reached orbit, a difficult feat for the inaugural flight of any rocket. But the booster fell into the Atlantic Ocean after three of the rocket’s engines failed to reignite to slow down for landing. Engineers identified seven changes to resolve the problem, focusing on what Blue Origin calls “propellant management and engine bleed control improvements.”

Relying on reuse

Pat Remias, Blue Origin’s vice president of space systems development, said Thursday that the company is confident in nailing the landing on the second flight of New Glenn. That launch, with NASA’s next set of Mars probes, is likely to occur no earlier than November from Cape Canaveral Space Force Station, Florida.

“We fully intend to recover the New Glenn first stage on this next launch,” Remias said in a presentation at the International Astronautical Congress in Sydney. “Fully intend to do it.”

Blue Origin, owned by billionaire Jeff Bezos, nicknamed the booster stage for the next flight “Never Tell Me The Odds.” It’s not quite fair to say the company’s leadership has gone all-in with their bet that the next launch will result in a successful booster landing. But the difference between a smooth touchdown and another crash landing will have a significant effect on Bezos’ Moon program.

That’s because the third New Glenn launch, penciled in for no earlier than January of next year, will reuse the same booster flown on the upcoming second flight. The payload on that launch will be Blue Origin’s first Blue Moon lander, aiming to become the largest spacecraft to reach the lunar surface. Ars has published a lengthy feature on the Blue Moon lander’s role in NASA’s effort to return astronauts to the Moon.

“We will use that first stage on the next New Glenn launch,” Remias said. “That is the intent. We’re pretty confident this time. We knew it was going to be a long shot [to land the booster] on the first launch.”

A long shot, indeed. It took SpaceX 20 launches of its Falcon 9 rocket over five years before pulling off the first landing of a booster. It was another 15 months before SpaceX launched a previously flown Falcon 9 booster for the first time.

With New Glenn, Blue’s engineers hope to drastically shorten the learning curve. Going into the second launch, the company’s managers anticipate refurbishing the first recovered New Glenn booster to launch again within 90 days. That would be a remarkable accomplishment.

Dave Limp, Blue Origin’s CEO, wrote earlier this year on social media that recovering the booster on the second New Glenn flight will “take a little bit of luck and a lot of excellent execution.”

On September 26, Blue Origin shared this photo of the second New Glenn booster on social media.

Blue Origin’s production of second stages for the New Glenn rocket has far outpaced manufacturing of booster stages. The second stage for the second flight was test-fired in April, and Blue completed a similar static-fire test for the third second stage in August. Meanwhile, according to a social media post written by Limp last week, the body of the second New Glenn booster is assembled, and installation of its seven BE-4 engines is “well underway” at the company’s rocket factory in Florida.

The lagging production of New Glenn boosters, known as GS1s (Glenn Stage 1s), is partly by design. Blue Origin’s strategy with New Glenn has been to build a small number of GS1s, each of which is more expensive and labor-intensive than SpaceX’s Falcon 9. This approach counts on routine recoveries and rapid refurbishment of boosters between missions.

However, this strategy comes with risks, as it puts the booster landings in the critical path for ramping up New Glenn’s launch rate. At one time, Blue aimed to launch eight New Glenn flights this year; it will probably end the year with two.

Laura Maginnis, Blue Origin’s vice president of New Glenn mission management, said last month that the company was building a fleet of “several boosters” and had eight upper stages in storage. That would bode well for a quick ramp-up in launch cadence next year.

However, Blue’s engineers haven’t had a chance to inspect or test a recovered New Glenn booster. Even if the next launch concludes with a successful landing, the rocket could come back to Earth with some surprises. SpaceX’s initial development of Falcon 9 and Starship was richer in hardware, with many boosters in production to decouple successful landings from forward progress.

Blue Moon

All of this means a lot is riding on an on-target landing of the New Glenn booster on the next flight. Separate from Blue Origin’s ambitions to fly many more New Glenn rockets next year, a good recovery would also mean an earlier demonstration of the company’s first lunar lander.

The lander set to launch on the third New Glenn mission is known as Blue Moon Mark 1, an unpiloted vehicle designed to robotically deliver up to 3 metric tons (about 6,600 pounds) of cargo to the lunar surface. The spacecraft will have a height of about 26 feet (8 meters), taller than the lunar lander used for NASA’s Apollo astronaut missions.

The first Blue Moon Mark 1 is funded from Blue Origin’s coffers. It is now fully assembled and will soon ship to NASA’s Johnson Space Center in Houston for vacuum chamber testing. Then, it will travel to Florida’s Space Coast for final launch preparations.

“We are building a series, not a singular lander, but multiple types and sizes and scales of landers to go to the Moon,” Remias said.

The second Mark 1 lander will carry NASA’s VIPER rover to prospect for water ice at the Moon’s south pole in late 2027. Around the same time, Blue will use a Mark 1 lander to deploy two small satellites to orbit the Moon, flying as low as a few miles above the surface to scout for resources like water, precious metals, rare earth elements, and helium-3 that could be extracted and exploited by future explorers.

A larger lander, Blue Moon Mark 2, is in an earlier stage of development. It will be human-rated to land astronauts on the Moon for NASA’s Artemis program.

Blue Origin’s Blue Moon MK1 lander, seen in the center, is taller than NASA’s Apollo lunar lander, currently the largest spacecraft to have landed on the Moon. Blue Moon MK2 is even larger, but all three landers are dwarfed in size by SpaceX’s Starship. Credit: Blue Origin

NASA’s other crew-rated lander will be derived from SpaceX’s Starship rocket. But Starship and Blue Moon Mark 2 are years away from being ready to accommodate a human crew, and both require orbital cryogenic refueling—something never before attempted in space—to transit out to the Moon.

This has led to a bit of a dilemma at NASA. China is also working on a lunar program, eyeing a crew landing on the Moon by 2030. Many experts say that, as of today, China is on pace to land astronauts on the Moon before the United States.

Of course, 12 US astronauts walked on the Moon in the Apollo program. But no one has gone back since 1972, and NASA and China are each planning to return to the Moon to stay.

One way to speed up a US landing on the Moon might be to use a modified version of Blue Origin’s Mark 1 lander, Ars reported Thursday.

If this is the path NASA takes, the stakes for the next New Glenn launch and landing will soar even higher.

Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.

A biological 0-day? Threat-screening tools may miss AI-designed proteins.


Ordering DNA for AI-designed toxins doesn’t always raise red flags.

Designing variations of the complex, three-dimensional structures of proteins has been made a lot easier by AI tools. Credit: Historical / Contributor

On Thursday, a team of researchers led by Microsoft announced that they had discovered, and possibly patched, what they’re terming a biological zero-day—an unrecognized security hole in a system that protects us from biological threats. The system at risk screens purchases of DNA sequences to determine when someone’s ordering DNA that encodes a toxin or dangerous virus. But, the researchers argue, it has become increasingly vulnerable to missing a new threat: AI-designed toxins.

How big of a threat is this? To understand, you have to know a bit more about both existing biosurveillance programs and the capabilities of AI-designed proteins.

Catching the bad ones

Biological threats come in a variety of forms. Some are pathogens, such as viruses and bacteria. Others are protein-based toxins, like the ricin that was sent to the White House in 2003. Still others are chemical toxins that are produced through enzymatic reactions, like the molecules associated with red tide. All of them get their start through the same fundamental biological process: DNA is transcribed into RNA, which is then used to make proteins.

For several decades now, starting the process has been as easy as ordering the needed DNA sequence online from any of a number of companies, which will synthesize a requested sequence and ship it out. Recognizing the potential threat here, governments and industry have worked together to add a screening step to every order: the DNA sequence is scanned for its ability to encode parts of proteins or viruses considered threats. Any positives are then flagged for human intervention to evaluate whether they or the people ordering them truly represent a danger.

Both the list of proteins and the sophistication of the scanning have been continually updated in response to research progress over the years. For example, initial screening was done based on similarity to target DNA sequences. But there are many DNA sequences that can encode the same protein, so the screening algorithms have been adjusted accordingly, recognizing all the DNA variants that pose an identical threat.
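
To make that degeneracy concrete: the genetic code maps multiple three-letter codons to the same amino acid, so two quite different DNA orders can encode an identical protein. Here is a toy Python sketch of the idea (not any vendor’s actual screening code; the sequences and mini codon table are invented for illustration):

```python
# Toy illustration only -- not any vendor's actual screening code.
# The sequences and the mini codon table below are invented.
CODON_TABLE = {
    "ATG": "M", "AAA": "K", "AAG": "K",
    "ACA": "T", "ACG": "T",
}

def translate(dna: str) -> str:
    """Translate a DNA string (length divisible by 3) into amino acids."""
    return "".join(CODON_TABLE[dna[i:i + 3]] for i in range(0, len(dna), 3))

order_a = "ATGAAAACA"  # hypothetical sequence already on a watch list
order_b = "ATGAAGACG"  # a different DNA string encoding the same peptide

print(order_a == order_b)                        # False: DNA-level match fails
print(translate(order_a) == translate(order_b))  # True: both encode "MKT"
```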

The new work can be thought of as an extension of that same problem. Not only can multiple DNA sequences encode the same protein; multiple proteins can perform the same function. Forming a toxin, for example, typically requires the protein to adopt the correct three-dimensional structure, which brings a handful of critical amino acids within the protein into close proximity. Outside of those critical amino acids, however, things can often be quite flexible. Some amino acids may not matter at all; other locations in the protein could work with any positively charged amino acid, or any hydrophobic one.

In the past, it could be extremely difficult (meaning time-consuming and expensive) to do the experiments that would tell you what sorts of changes a string of amino acids could tolerate while remaining functional. But the team behind the new analysis recognized that AI protein design tools have now gotten quite sophisticated and can predict when distantly related sequences can fold up into the same shape and catalyze the same reactions. The process is still error-prone, and you often have to test a dozen or more proposed proteins to get a working one, but it has produced some impressive successes.

So, the team developed a hypothesis to test: AI can take an existing toxin and design a protein with the same function that’s distantly related enough that the screening programs do not detect orders for the DNA that encodes it.

The zero-day treatment

The team started with a basic test: use AI tools to design variants of the toxin ricin, then test them against the software that is used to screen DNA orders. The results of the test suggested there was a risk of dangerous protein variants slipping past existing screening software, so the situation was treated like the equivalent of a zero-day vulnerability.

“Taking inspiration from established cybersecurity processes for addressing such situations, we contacted the relevant bodies regarding the potential vulnerability, including the International Gene Synthesis Consortium and trusted colleagues in the protein design community as well as leads in biosecurity at the US Office of Science and Technology Policy, US National Institute of Standards and Technologies, US Department of Homeland Security, and US Office of Pandemic Preparedness and Response,” the authors report. “Outside of those bodies, details were kept confidential until a more comprehensive study could be performed in pursuit of potential mitigations and for ‘patches’… to be developed and deployed.”

Details of that original test are being made available today as part of a much larger analysis that extends the approach to a large range of toxic proteins. Starting with 72 toxins, the researchers used three open source AI packages to generate a total of about 75,000 potential protein variants.

And this is where things get a little complicated. Many of the AI-designed protein variants are going to end up being non-functional, either subtly or catastrophically failing to fold up into the correct configuration to create an active toxin. The only way to know which ones work is to make the proteins and test them biologically; most AI protein design efforts will make actual proteins from dozens to hundreds of the most promising-looking potential designs to find a handful that are active. But doing that for 75,000 designs is completely unrealistic.

Instead, the researchers used two software-based tools to evaluate each of the 75,000 designs. One of these focuses on the similarity between the overall predicted physical structure of the proteins, and another looks at the predicted differences between the positions of individual amino acids. Either way, they’re a rough approximation of just how similar the proteins formed by two strings of amino acids should be. But they’re definitely not a clear indicator of whether those two proteins would be equally functional.
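
For a sense of what such scores measure, here is a minimal sketch of one classic structure-comparison metric, root-mean-square deviation (RMSD) over superimposed coordinates. This is an illustrative stand-in, assuming pre-aligned coordinates; the study’s actual tools are more sophisticated, and the arrays below are random placeholders rather than real protein structures:

```python
# Rough stand-in for structure-comparison scoring: RMSD between two
# superimposed (N, 3) coordinate arrays summarizes per-residue positional
# differences as a single number. Arrays here are random placeholders.
import numpy as np

def rmsd(a: np.ndarray, b: np.ndarray) -> float:
    """Root-mean-square deviation between matched, pre-aligned coordinates."""
    return float(np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1))))

rng = np.random.default_rng(0)
original = rng.normal(size=(100, 3))                           # "predicted structure"
close_variant = original + rng.normal(scale=0.2, size=(100, 3))  # similar fold
distant_variant = rng.normal(size=(100, 3))                    # dissimilar fold

print(rmsd(original, close_variant) < rmsd(original, distant_variant))  # True
```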

In any case, DNA sequences encoding all 75,000 designs were fed into the software that screens DNA orders for potential threats. One thing that was very clear is that there were huge variations in the ability of the four screening programs to flag these variant designs as threatening. Two of them seemed to do a pretty good job, one was mixed, and another let most of them through. Three of the software packages were updated in response to this performance, which significantly improved their ability to pick out variants.

There was also a clear trend in all four screening packages: The closer the variant was to the original structurally, the more likely the package (both before and after the patches) was to be able to flag it as a threat. In all cases, there was also a cluster of variant designs that were unlikely to fold into a similar structure, and these generally weren’t flagged as threats.

What does this mean?

Again, it’s important to emphasize that this evaluation is based on predicted structures; “unlikely” to fold into a similar structure to the original toxin doesn’t mean these proteins will be inactive as toxins. Functional proteins are probably going to be very rare among this group, but there may be a handful in there. That handful is also probably rare enough that you would have to order up and test far too many designs to find one that works, making this an impractical threat vector.

At the same time, there are also a handful of proteins that are very similar to the toxin structurally and not flagged by the software. For the three patched versions of the software, the ones that slip through the screening represent about 1 to 3 percent of the total in the “very similar” category. That’s not great, but it’s probably good enough that any group that tries to order up a toxin by this method would attract attention because they’d have to order over 50 just to have a good chance of finding one that slipped through, which would raise all sorts of red flags.
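
That “over 50” figure follows from basic probability: if each ordered design independently slips through with probability p, the chance that at least one of n orders evades screening is 1 − (1 − p)^n. A quick sketch, using p = 0.02 (the midpoint of the reported 1 to 3 percent range) as an illustrative assumption:

```python
# Chance that at least one of n ordered designs evades screening, assuming
# each slips through independently with probability p. The p = 0.02 value
# is illustrative (midpoint of the reported 1-3 percent range).
p = 0.02
for n in (10, 50, 100):
    print(n, round(1 - (1 - p) ** n, 2))
# Output: 10 0.18 / 50 0.64 / 100 0.87 -- hence ordering 50-plus designs
# for a good chance of a miss, a volume that would itself attract notice.
```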

One other notable result is that the designs that weren’t flagged were mostly variants of just a handful of toxin proteins. So this is less of a general problem with the screening software and might be more of a small set of focused problems. Of note, one of the proteins that produced a lot of unflagged variants isn’t toxic itself; instead, it’s a co-factor necessary for the actual toxin to do its thing. As such, some of the screening software packages didn’t even flag the original protein as dangerous, much less any of its variants. (For these reasons, the company that makes one of the better-performing software packages decided the threat here wasn’t significant enough to merit a security patch.)

So, on its own, this work doesn’t seem to have identified something that’s a major threat at the moment. But it’s probably useful, in that it’s a good thing to get the people who engineer the screening software to start thinking about emerging threats.

That’s because, as the people behind this work note, AI protein design is still in its early stages, and we’re likely to see considerable improvements. And there’s likely to be a limit to the sorts of things we can screen for. We’re already at the point where AI protein design tools can be used to create proteins that have entirely novel functions and do so without starting with variants of existing proteins. In other words, we can design proteins that are impossible to screen for based on similarity to known threats, because they don’t look at all like anything we know is dangerous.

Protein-based toxins would be very difficult to design, because they have to both cross the cell membrane and then do something dangerous once inside. While AI tools are probably unable to design something that sophisticated at the moment, I would be hesitant to rule out the prospects of them eventually reaching that sort of sophistication.

Science, 2025. DOI: 10.1126/science.adu8578  (About DOIs).

John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.

Scientists revive old Bulgarian recipe to make yogurt with ants

Fermenting milk to make yogurt, cheeses, or kefir is an ancient practice, and different cultures have their own traditional methods, often preserved in oral histories. The forests of Bulgaria and Turkey have an abundance of red wood ants, for instance, so a time-honored Bulgarian yogurt-making practice involves dropping a few live ants (or crushed-up ant eggs) into the milk to jump-start fermentation. Scientists have now figured out why the ants are so effective in making edible yogurt, according to a paper published in the journal iScience. The authors even collaborated with chefs to create modern recipes using ant yogurt.

“Today’s yogurts are typically made with just two bacterial strains,” said co-author Leonie Jahn from the Technical University of Denmark. “If you look at traditional yogurt, you have much bigger biodiversity, varying based on location, households, and season. That brings more flavors, textures, and personality.”

If you want to study traditional culinary methods, it helps to go where those traditions emerged, since the locals likely still retain memories and oral histories of said culinary methods—in this case, Nova Mahala, Bulgaria, where co-author Sevgi Mutlu Sirakova’s family still lives. To recreate the region’s ant yogurt, the team followed instructions from Sirakova’s uncle. They used fresh raw cow milk, warmed until scalding, “such that it could ‘bite your pinkie finger,’” per the authors. Four live red wood ants were then collected from a local colony and added to the milk.

The authors secured the milk with cheesecloth and wrapped the glass container in fabric for insulation before burying it inside the ant colony, covering the container completely with the mound material. “The nest itself is known to produce heat and thus act as an incubator for yogurt fermentation,” they wrote. They retrieved the container 26 hours later to taste it and check the pH, stirring it to observe the coagulation. The milk had definitely begun to thicken and sour, producing the early stage of yogurt. Tasters described it as “slightly tangy, herbaceous,” with notes of “grass-fed fat.”

Trump offers universities a choice: Comply for preferential funding

On Wednesday, The Wall Street Journal reported that the Trump administration had offered nine schools a deal: manage your universities in a way that aligns with administration priorities and get “substantial and meaningful federal grants,” along with other benefits. Failure to accept the bargain would result in a withdrawal of federal programs that would likely cripple most universities. The offer, sent to a mixture of state and private universities, would see the government dictate everything from hiring and admissions standards to grading and has provisions that appear intended to make conservative ideas more welcome on campus.

The document was sent to the University of Arizona, Brown University, Dartmouth College, Massachusetts Institute of Technology, the University of Pennsylvania, the University of Southern California, the University of Texas, Vanderbilt University, and the University of Virginia. However, independent reporting indicates that the administration will ultimately extend the deal to all colleges and universities.

Ars has obtained a copy of the proposed “Compact for Academic Excellence in Higher Education,” which makes the scope of the bargain clear in its introduction. “Institutions of higher education are free to develop models and values other than those below, if the institution elects to forego federal benefits,” it suggests, while mentioning that those benefits include access to fundamental needs, like student loans, federal contracts, research funding, tax benefits, and immigration visas for students and faculty.

It is difficult to imagine how it would be possible to run a major university without access to those programs, making this less a compact and more of an ultimatum.

Poorly thought through

The Compact itself would see universities agree to cede admissions standards to the federal government. The government, in this case, is demanding only the use of “objective” criteria such as GPA and standardized test scores as the basis of admissions decisions, and that schools publish those criteria on their websites. They would also have to publish anonymized data comparing how admitted and rejected students did relative to these criteria.

World-famous primatologist Jane Goodall dead at 91

A sculpture of Jane Goodall and David Greybeard outside the Field Museum of Natural History in Chicago Credit: Geary/CC0

David Greybeard’s behavior also challenged the long-held assumption that chimpanzees were vegetarians. Goodall found that chimps would hunt and eat smaller primates like colobus monkeys as well, sometimes sharing the carcass with other troop members. She also recorded evidence of strong bonds between mothers and infants, altruism, compassion, and aggression and violence. For instance, dominant females would sometimes kill the infants of rival females, and from 1974 to 1978, there was a violent conflict between two communities of chimpanzees that became known as the Gombe Chimpanzee War.

Almost human

One of the more colorful chimps Goodall studied was named Frodo, who grew up to be an alpha male with a temperament very unlike his literary namesake. “As an infant, Frodo proved mischievous, disrupting Jane Goodall’s efforts to record data on mother-infant relationships by grabbing at her notebooks and binoculars,” anthropologist Michael Wilson of the University of Minnesota in Saint Paul recalled on his blog when Frodo died from renal failure in 2013. “As he grew older, Frodo developed a habit of throwing rocks, charging at, hitting, and knocking over human researchers and tourists.” Frodo attacked Wilson twice on Wilson’s first trip to Gombe, even beating Goodall herself in 1989, although he eventually lost his alpha status and “mellowed considerably” in his later years, per Wilson.

Goodall became so renowned around the world that she even featured in one of Gary Larson’s Far Side cartoons, in which two chimps are shown grooming when one finds a blonde hair on the other. “Conducting a little more ‘research’ with that Jane Goodall tramp?” the caption read. The JGI was not amused, sending Larson a letter (without Goodall’s knowledge) calling the cartoon an “atrocity,” but their objections were not shared by Goodall herself, who thought the cartoon was very funny when she heard of it. Goodall even wrote a preface to The Far Side Gallery 5. Larson, for his part, visited Goodall’s research facility in Tanzania in 1988, where he experienced Frodo’s alpha aggressiveness firsthand.

A young Jane Goodall in the field. Credit: YouTube/Jane Goodall Institute

Goodall founded the JGI in 1977 and authored more than 27 books, most notably My Friends, the Wild Chimpanzees (1967), In the Shadow of Man (1971), and Through a Window (1990). There was some initial controversy around her 2014 book Seeds of Hope, co-written with Gail Hudson, when portions were found to have been plagiarized from online sources; the publisher postponed publication so that Goodall could revise the book and add 57 pages of endnotes. (She blamed her “chaotic note-taking” for the issue.) National Geographic released a full-length documentary last year about her life’s work, drawing from over 100 hours of previously unseen archival footage.

Megafauna was the meat of choice for South American hunters

And that makes perfect sense, because when you reduce hunters’ choices to simple math using what’s called the prey choice model (more on that below), these long-lost species offered bigger returns for the effort of hunting. In other words, giant sloths are extinct because they were delicious and made of meat.
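
Though the model gets fuller treatment later in the piece, its core arithmetic is simple enough to sketch: rank each prey type by its return rate, the calories gained per hour of handling (pursuit and processing), and the highest-ranked prey is always worth taking when encountered. A minimal Python illustration follows; all the numbers are invented for demonstration and are not data from the study:

```python
# Toy illustration of the prey choice model; all numbers are invented.
# Prey are ranked by return rate: calories gained per hour of "handling"
# (pursuit and processing). Top-ranked prey are worth taking on sight.
prey = {
    "giant ground sloth": {"kcal": 400_000, "handling_h": 20},
    "deer":               {"kcal":  60_000, "handling_h":  5},
    "armadillo":          {"kcal":   5_000, "handling_h":  2},
}

ranked = sorted(prey.items(),
                key=lambda kv: kv[1]["kcal"] / kv[1]["handling_h"],
                reverse=True)
for name, stats in ranked:
    rate = stats["kcal"] / stats["handling_h"]
    print(f"{name:18s} {rate:8.0f} kcal per handling hour")
# Megafauna top the ranking: one sloth repays its handling cost many times
# over, so the model predicts hunters target it whenever it's encountered.
```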

Yup, it’s humanity’s fault—again

As the last Ice Age drew to a close, the large animals that had once dominated the world’s chilly Pleistocene landscapes started to vanish. Mammoths, saber-toothed tigers, and giant armadillos died out altogether. Other species went locally extinct; rhinoceroses no longer stomped around southern Europe, and horses disappeared from the Americas until European colonists brought new species with them thousands of years later.

Scientists have been arguing about how much of that was humanity’s fault for quite a while.

Most of the blame goes to the world’s changing climate; habitats shifted as the world mostly got warmer and wetter. But, at least in some places, humans may have sped the process along, either by hunting the last of the Pleistocene megafauna to extinction or just by shaking up the rest of the ecosystem so much that it was all too ready to collapse, taking the biggest species down with it.

It looks, at first glance, like South America’s late Ice Age hunters are safely not guilty. For one thing, the megafauna didn’t start dying out until thousands of years after humans first set foot in the region. Archaeologists also haven’t found many sites that contain both traces of human activity and the bones of extinct horses, giant armadillos, or other megafauna. And at those few sites, megafauna bones made up only a small percentage of the contents of ancient scrap piles. Not enough evidence places us at the crime scene, in other words—or so it seems.

On the other hand, the Ice Age megafauna began dying out in South America around 13,000 years ago, roughly the same time that a type of projectile point called the fishtail appeared. That may not be a coincidence, argued one study. And late last year, another study showed that farther north, in what’s now the United States, Clovis people’s diets contained mammoth amounts of… well, mammoth.

SpaceX has a few tricks up its sleeve for the last Starship flight of the year

This particular booster, numbered Booster 15, launched in March and was caught by the launch tower at Starbase after returning from the edge of space. SpaceX said 24 of the 33 methane-fueled Raptor engines launching on the booster next month are “flight-proven.”

The Super Heavy booster flying next month previously launched and was recovered on Flight 8 in March. Credit: SpaceX

Similar to the last Starship flight, the Super Heavy booster will guide itself to a splashdown off the coast of South Texas instead of returning to Starbase.

“Its primary test objective will be demonstrating a unique landing burn engine configuration planned to be used on the next-generation Super Heavy,” SpaceX said.

The new booster landing sequence will initially use 13 of the rocket’s 33 engines, then downshift to five engines before running just the three center engines for the final portion of the burn. The booster previously went directly from 13 engines to three engines. Using five engines for part of the landing sequence provides “additional redundancy for spontaneous engine shutdowns,” according to SpaceX.

“The primary goal on the flight test is to measure the real-world vehicle dynamics as engines shut down while transitioning between the different phases,” SpaceX said.

Stepping stone to Version 3

After Flight 11, SpaceX will focus on the next-generation Starship design: Starship V3. This upgraded configuration will be the version that will actually fly to orbit, allowing SpaceX to begin deploying its new fleet of larger, more powerful Starlink Internet satellites.

Starship V3 will also be used to test orbital refueling, something never before attempted between two spacecraft with cryogenic propellants. Refueling in space is required to give Starship enough energy to propel itself out of Earth’s orbit to the Moon and Mars, destinations it must reach to fulfill the hopes of NASA and SpaceX founder Elon Musk.

The first flight of Starship V3 is likely to occur in early 2026, using a new launch pad undergoing final outfitting and testing a short distance away from SpaceX’s original launch pad at Starbase. Bill Gerstenmaier, SpaceX’s vice president of build and flight reliability, told a crowd at a space industry conference earlier this month that the company will likely attempt one more suborbital flight with Starship V3. If that goes well, Flight 13 could launch all the way to low-Earth orbit sometime later next year.

Is the “million-year-old” skull from China a Denisovan or something else?


Homo longi by any other name

Now that we know what Denisovans looked like, they’re turning up everywhere.

This digital reconstruction makes Yunxian 2 look less like a Homo erectus and more like a Denisovan (or Homo longi, according to the authors). Credit: Feng et al. 2025

A fossil skull from China that made headlines last week may or may not be a million years old, but it’s probably closely related to Denisovans.

The fossil skull, dubbed Yunxian 2, is one of three unearthed from a terrace alongside the Han River, in central China, in a layer of river sediment somewhere between 600,000 and 1 million years old. Archaeologists originally identified them as Homo erectus, but Hanjiang Normal University paleoanthropologist Xiaobo Feng and his colleagues’ recent digital reconstruction of Yunxian 2 suggests the skulls may actually have belonged to someone a lot more similar to us: a hominin group defined as a species called Homo longi or a Denisovan, depending on who’s doing the naming.

The recent paper adds fuel—and a new twist—to that debate. And the whole thing may hinge on a third skull from the same site, still waiting to be published.

Denisovan or Homo longi?

The Yunxian skull was cracked and broken after hundreds of thousands of years under the crushing weight of all that river mud, but the authors used CT scans to digitally put the pieces back together. (They got some clues from a few intact bits of Yunxian 1, which lay buried in the same layer of mud just 3 meters away.) In the end, Feng and his colleagues found themselves looking at a familiar face; Yunxian 2 bears a striking resemblance to a 146,000-year-old Denisovan skull.

That skull, from Harbin in northeast China, made headlines in 2021 when a team of paleoanthropologists claimed it was part of an entirely new species, which they dubbed Homo longi. According to that first study, Homo longi was a distinct hominin species, separate from us, Neanderthals, and even Denisovans. That immediately became a point of contention because of features the skull shared with some other suspected Denisovan fossils.

Earlier this year, a team of researchers, which included one of the 2021 study’s authors, took samples of ancient proteins preserved in the Harbin skull; of the 95 proteins they found, three of them matched proteins only encoded in Denisovan DNA. While the June 2025 study suggested that Homo longi was a Denisovan all along, the new paper draws a different conclusion: Homo longi is a species that happens to include the population we’ve been calling Denisovans. As study coauthor Xijun Ni, of the Chinese Academy of Sciences, puts it in an email to Ars Technica, “Given their similar age range, distribution areas, and available morphological data, it is likely that Denisovans belong to the Homo longi species. However, little is known about Denisovan morphology.”

Of course, that statement—that we know little about Denisovan morphology (the shapes and features of their bones)—only applies if you don’t accept the results of the June 2025 study mentioned above, which clocked the Harbin skull as a Denisovan and therefore told us what one looks like.

And Feng and his colleagues, in fact, don’t accept those results. Instead, they consider Harbin part of some other group of Homo longi, and they question the earlier study’s methods and results. “The peptide sequences from Harbin, Penghu, and other fossils are too short and provide conflicting information,” Ni tells Ars Technica. Feng and his colleagues also question the results of another study, which used mitochondrial DNA to identify Harbin as a Denisovan.

In other words, Feng and his colleagues are pretty invested in defining Homo longi as a species and Denisovans as just one sub-group of that species. But that’s hard to square with DNA data.

Alas, poor Yunxian 2, I knew him well

Yunxian 2 has a wide face with high, flat cheekbones, a wide nasal opening, and heavy brows. Its cranium is higher and rounder than Homo erectus (and the original reconstruction, done in the 1990s), but it’s still longer and lower than is normal for our species. Overall, it could have held about 1,143 cubic centimeters of brain, which is in the ballpark of modern people. But its shape may have left less room for the frontal lobe (the area where a lot of social skills, logic, motor skills, and executive function happen) than you’d expect in a Neanderthal or a Homo sapiens skull.

Feng and his colleagues measured the distances between 533 specific points on the skull: anatomical landmarks like muscle attachment points or the joints between certain bones. They compared those measurements to ones from 26 fossil hominin skulls and several dozen modern human skulls, using a computer program to calculate how similar each skull was to all of the others.
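To get a rough sense of how this kind of landmark analysis works, here is a minimal sketch in Python. To be clear, this is not the team’s actual pipeline, and the skull names and random coordinates are purely illustrative placeholders. The idea is that each skull becomes a set of 3D landmark coordinates, pairs of skulls are aligned with Procrustes superimposition (which removes differences in position, scale, and rotation), and the leftover "disparity" serves as a shape distance.

```python
# A minimal sketch of landmark-based shape comparison (not the authors'
# actual pipeline). Skull names and random coordinates are placeholders;
# real data would be 3D landmark coordinates digitized from CT scans.
import numpy as np
from scipy.spatial import procrustes

rng = np.random.default_rng(0)
N_LANDMARKS = 533  # the study measured 533 anatomical landmarks per skull

skulls = {
    "Yunxian 2": rng.normal(size=(N_LANDMARKS, 3)),
    "Harbin": rng.normal(size=(N_LANDMARKS, 3)),
    "Dali": rng.normal(size=(N_LANDMARKS, 3)),
}

names = list(skulls)
dist = np.zeros((len(names), len(names)))
for i, a in enumerate(names):
    for j in range(i + 1, len(names)):
        # Procrustes superimposition removes position, scale, and rotation,
        # so the remaining "disparity" reflects only differences in shape.
        _, _, disparity = procrustes(skulls[a], skulls[names[j]])
        dist[i, j] = dist[j, i] = disparity

print(dist)  # smaller values = more similar skull shapes
```

From a distance matrix like this, clustering or ordination methods can sort skulls into "lookalike" groups, which is essentially what places Yunxian 2 alongside Harbin in the next paragraph.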

Yunxian 2 fits neatly into a lookalike group with the Harbin skull, along with two other skulls that paleoanthropologists have flagged as belonging to either Denisovans or Homo longi. Those two skulls are a 200,000- to 260,000-year-old skull found in Dali County in northwestern China and a 260,000-year-old skull from Jinniushi (sometimes spelled Jinniushan) Cave in China.

Those morphological comparisons suggest some things about how the individuals who once inhabited these skulls might have been related to each other, but that’s also where things get dicey.

An older reconstruction of the Yunxian 2 skull gives it a flatter look. Credit: government of Wuhan

Digging into the details

Most of what we know about how we’re related to our closest extinct hominin relatives (Neanderthals and Denisovans) comes from comparing our DNA to theirs and tracking how small changes in the genetic code build up over time. Based on DNA, our species last shared a common ancestor with Neanderthals and Denisovans sometime around 750,000 years ago in Africa. One branch of the family tree led to us; the other branch split again around 600,000 years ago, leading to Neanderthals and Denisovans (or Homo longi, if you prefer).

In other words, DNA tells us that Neanderthals and Denisovans are more closely related to each other than either is to us. (Unless you’re looking at mitochondrial DNA, which suggests that we’re more closely related to Neanderthals than to Denisovans; it’s complicated, and there’s a lot we still don’t understand.)

“Ancient mtDNA and genomic data show different phylogenetic relationships among Denisovans, Neanderthals and Homo sapiens,” says Ni. So depending on which set of data you use and where your hominin tree starts, it’s possible to get different answers about who is most closely related to whom. The fact that all of these groups interbred with one another can explain some of this complexity, but it also makes building family trees challenging.

It is very clear, however, that Feng and his colleagues’ picture of the relationships between us and our late hominin cousins, based on similarities among fossil skulls in their study, looks very different from what the genomes tell us. In their model, we’re more closely related to Denisovans, and the Neanderthals are off on their own branch of the family tree. Feng and his colleagues also say those splits happened much earlier, with Neanderthals branching off on their own around 1.38 million years ago; we last shared a common ancestor with Homo longi around 1 million years ago.

That’s a big difference from DNA results, especially when it comes to timing. And the timing is likely to be the biggest controversy here. In a recent commentary on Feng and his colleagues’ study, University of Wisconsin paleoanthropologist John Hawks argues that you can’t just leave genetic evidence out of the picture.

“What this research should have done is to put the anatomical comparisons into context with the previous results from DNA, especially the genomes that enable us to understand the relationships of Denisovan, Neanderthal, and modern human groups,” Hawks writes.

(It’s worth a side note that most news stories describe Yunxian 2 as being a million years old, and so do Feng and his colleagues. But electron spin resonance dating of fossil animal bones from the same sediment layer suggests the skull could be as young as 600,000 years old or as old as 1.1 million. That still needs to be narrowed down to everyone’s satisfaction.)

What’s in a name?

Of course, DNA also tells us that even after all this branching and migrating, the three species were still similar enough to interbreed, which they did several times. Many groups of modern people still carry traces of Neanderthal and Denisovan DNA in their genomes, courtesy of those exchanges. And some ancient Neanderthal populations were carrying around even older chunks of human DNA in the same way. That arguably makes species definitions a little fuzzy at best—and maybe even irrelevant.

“I think all these groups, including Neanderthals, should be recognized within our own species, Homo sapiens,” writes Hawks, who contends that the differences among these hominin groups “were the kind that evolve among the populations of a single species over time, not starkly different groups that tread the landscape in mutually unrecognizeable ways.”

But humans love to classify things (a trait we may have shared with Neanderthals and Denisovans), so those species distinctions are likely to persist even if the lines between them aren’t so solid. As long as that’s the case, names and classifications will be fodder for often heated debate. And Feng’s team is staking out a position that’s very different from Hawks’. “‘Denisovan’ is a label for genetic samples taken from the Denisova Cave. It should not be used everywhere. Homo longi is a formally named species,” says Ni.

Technically, Denisovans don’t have a formal species name, a Latinized moniker like Homo erectus that comes with a clear(ish) spot on the family tree. Homo longi could supply that formal name, but only if scientists can agree that Denisovans and Homo longi really are the same species.

An archaeologist comes face to face with the Yunxian 3 skull. Credit: government of Wuhan

The third Yunxian skull

Paleoanthropologists unearthed a third skull from the Yunxian site in 2022. It bears a strong resemblance to the other two from the area (and is apparently in better shape than either of them), and it dates to about the same timeframe. A 2022 press release describes it as “the most complete Homo erectus skull found in Eurasia so far,” but if Feng and his colleagues are right, it may actually be a remarkably complete Homo longi (and/or Denisovan) skull. And it could hold the answers to many of the questions anthropologists like Feng and Hawks are currently debating.

“It remains pretty obvious that Yunxian 3 is going to be central to testing the relationships of this sample [of fossil hominins in Feng and colleagues’ paper],” writes Hawks.

The problem is that Yunxian 3 is still being cleaned and prepared. Preparing a fossil is a painstaking, time-consuming process that involves very carefully excavating it from the rocky matrix it’s embedded in, using everything from air-chisels to paintbrushes. And until that’s done and a scientific report on the skull is published, other paleoanthropologists don’t have access to any information about its features—which would be super useful for figuring out how to define whatever group we eventually decide it belongs to.

For the foreseeable future, the relationships between us and our extinct cousins (or at least our ideas about those relationships) will keep changing as we get more data. Eventually, we may have enough data from enough fossils and ancient DNA samples to form a clearer picture of our past. But in the meantime, if you’re drawing a hominin family tree, use a pencil.

Science, 2025. DOI: 10.1126/science.ado9202 (About DOIs).

Kiona is a freelance science journalist and resident archaeology nerd at Ars Technica.
