Science


Wyoming dinosaur mummies give us a new view of duck-billed species


Exquisitely preserved fossils come from a single site in Wyoming.

The scaly skin of a crest over the back of the juvenile duck-billed dinosaur Edmontosaurus annectens. Credit: Tyler Keillor/Fossil Lab

Edmontosaurus annectens, a large herbivorous duck-billed dinosaur that lived toward the end of the Cretaceous period, was discovered in 1908 in east-central Wyoming by the fossil collector C.H. Sternberg. The skeleton, later housed at the American Museum of Natural History in New York and nicknamed the “AMNH mummy,” was surrounded by impressions of scaly skin preserved in the sediment, which gave us the first approximate idea of what the animal looked like.

More than a century later, a team of paleontologists led by Paul C. Sereno, a professor of organismal biology at the University of Chicago, returned to the exact spot where Sternberg dug up the first Edmontosaurus specimen. The researchers found two more Edmontosaurus mummies with all of their fleshy external anatomy imprinted in a sub-millimeter layer of clay. For the first time, we have an accurate image of what Edmontosaurus really looked like, down to the tiniest details, like the size of its scales and the arrangement of spikes on its tail. And we were in for at least a few surprises.

Evolving images

Our view of Edmontosaurus changed over time, even before Sereno’s study. The initial drawing of Edmontosaurus was made in 1909 by Charles R. Knight, a famous paleoartist, who based his visualization on the first specimen found by Sternberg. “He was accurate in some ways, but he made a mistake in that he drew the crest extending throughout the entire length of the body,” Sereno says. The mummy Knight based his drawing on had no tail, so understandably, the artist used his imagination to fill in the gaps and made the Edmontosaurus look a little bit like a dragon.

An update to Knight’s image came in 1984 thanks to Jack Horner, one of the most influential American paleontologists, who found a section of Edmontosaurus tail that had spikes instead of a crest. “The specimen was not prepared very accurately, so he thought the spikes were rectangular and didn’t touch each other,” Sereno explains. “In his reconstruction, he extended the spikes from the tail all the way to the head—which was wrong.” Over time, we ended up with many different, competing visions of Edmontosaurus. “But I think now we finally nailed down the way it truly looked,” Sereno claims.

To nail it down, Sereno’s team retraced the route to where Sternberg found the first Edmontosaurus mummy. This was not easy, because the team had to rely on Sternberg’s notes, which often referred to towns and villages that were no longer on the map. But based on interviews with Wyoming farmers, Sereno managed to reach the “mummy zone,” an area less than 10 kilometers in diameter, surprisingly abundant in Cretaceous fossils.

“To find dinosaurs, you need to understand geology,” Sereno says. And in the “mummy zone,” geological processes created something really special.

Dinosaur templating

The fossils are found in part of the Lance Formation, a geological formation that originated in the last three or so million years of the Cretaceous period, just before the dinosaurs’ extinction. It extends through North Dakota, South Dakota, Wyoming, Montana, and even to parts of Canada. “The formation is roughly 200 meters thick. But when you approach the mummy zone—surprise! The formation suddenly goes up to a thousand meters thick,” Sereno says. “The sedimentation rate in there was very high for some reason.”

Sereno thinks the most likely reason behind the high sedimentation rate was frequent and regular flooding of the area by a nearby river. These floods often drowned the unfortunate dinosaurs that roamed there and covered their bodies with mud and clay, which congealed against a biofilm that formed on the surface of the decaying carcasses. “It’s called clay templating, where the clay sticks to the outside of the skin and preserves a very thin layer, a mask, showing how the animal looked like,” Sereno says.

Clay templating is a process well known to scientists studying deep-sea invertebrates because that’s the only way those organisms can be preserved. “It’s just no one ever thought it could happen to a large dinosaur buried in a river,” Sereno says. But it’s the best explanation for the Wyoming mummy zone, where Sereno’s team managed to retrieve two more Edmontosaurus skeletons surrounded by clay masks less than 1 millimeter thick. These revealed the animal’s appearance with amazing, lifelike accuracy.

As a result, the Edmontosaurus image got updated one more time. And some of the updates were rather striking.

Delicate elephants

Sereno’s team analyzed the newly discovered Edmontosaurus mummies with a barrage of modern imaging techniques like CT scans, X-rays, photogrammetry, and more. “We created a detailed model of the skin and wrapped it around the skeleton—some of these technologies were not even available 10 years ago,” Sereno says. The result was an updated Edmontosaurus image that includes changes to the crest, the spikes, and the appearance of its skin. Perhaps most surprisingly, it adds hooves to its legs.

It turned out both Knight and Horner were partially right about the look of Edmontosaurus’ back. The fleshy crest, as depicted by Knight, indeed started at the top of the head and extended rearward along the spine. The difference was that there was a point where this crest changed into a row of spikes, as depicted in the Horner version. The spikes were similar to those found on modern chameleons, with each spike corresponding to a single vertebra underneath it.

“Another thing that was stunning in Edmontosaurus was the small size of its scales,” Sereno says. Most of the scales were just 1 to 4 millimeters across. They grew slightly larger toward the bottom of the tail, but even there they did not exceed 1 centimeter. “You can find such scales on a lizard, and we’re talking about an animal the size of an elephant,” Sereno adds. The skin covered with these super-tiny scales was also incredibly thin, which the team deduced from the wrinkles they found in their imagery.

And then came the hooves. “In a hoof, the nail goes around the toe and wraps, wedge-shaped, around its bottom,” Sereno explains. The Edmontosaurus had single, central hooves on its forelegs, each with a “frog,” a triangular, rubbery structure on the underside. “They looked very much like equine hooves, so apparently these were not invented by mammals,” Sereno says. “Dinosaurs had them.” The hind legs, which supported most of the animal’s weight, instead had three wedge-shaped hooves wrapped around three digits and a fleshy heel toward the back—a structure found in modern-day rhinos.

“There are so many amazing ‘firsts’ preserved in these duck-billed mummies,” Sereno says. “The earliest documented hooves in a land vertebrate, the first confirmed hooved reptile, and the first hooved four-legged animal with different forelimb and hindlimb posture.” But Edmontosaurus, while first in many respects, was not the only species Sereno’s team found in the mummy zone.

Looking for wild things

“When I was walking through the grass in the mummy zone for the first time, on the first hill I found a T. rex in a concretion. Another mummy we found was a Triceratops,” Sereno says. Both of these mummies are currently being examined and will be covered in upcoming papers from Sereno’s team. And both are unique in their own way.

The T. rex mummy was preserved in a surprisingly life-like pose, which Sereno thinks indicates the predator might have been buried alive. Edmontosaurus mummies, on the other hand, were positioned in a death pose, which meant the animals most likely died up to a week before the mud covered their carcasses. This, in principle, should make the T. rex clay mask even more true-to-life, since there should be no need to account for desiccation and decay when reconstructing the animal’s image.

Sereno, though, seems to be even more excited about the Triceratops mummy. “We already found that Triceratops scales were 10 times larger than the largest scales on the Edmontosaurus, and its skin had no wrinkles, so it was significantly thicker. And we’re talking about animals of similar size living in the same area at the same time,” Sereno says. To him, this could indicate that the physiology of the Triceratops and Edmontosaurus was radically different.

“We are in the age of discovery. There are so many things to come. It’s just the beginning,” Sereno says. “Anyway, the next two mummies we want to cover are the Triceratops and the T. rex. And I can already tell you what we have with the Triceratops is wild,” he adds.

Science, 2025. DOI: 10.1126/science.adw3536


Jacek Krywko is a freelance science and technology writer who covers space exploration, artificial intelligence research, computer science, and all sorts of engineering wizardry.


Scientist pleaded guilty to smuggling Fusarium graminearum into US. But what is it?

Even with Fusarium graminearum, which is found on every continent except Antarctica, there is the potential to introduce new genetic material into the environment: variants that exist in other countries but not in the US, and that could have harmful consequences for crops.

How do you manage Fusarium graminearum infections?

Fusarium graminearum infections generally occur during the plant’s flowering stage or when there is more frequent rainfall and periods of high humidity during early stages of grain production.

How Fusarium graminearum risk progressed in 2025. Yellow is low risk, orange is medium risk, and red is high risk. Credit: Fusarium Risk Tool/Penn State

Wheat in the southern US is vulnerable to infection during the spring. As the season advances, the risk from scab progresses north through the US and into Canada as the grain crops mature across the region, with continued periods of conducive weather throughout the summer.

Between seasons, Fusarium graminearum survives on barley, wheat, and corn plant residues that remain in the field after harvest. It reproduces by producing microscopic spores that can then travel long distances on wind currents, spreading the fungus across large geographic areas each season.

In wheat and barley, farmers can suppress the damage by spraying a fungicide onto the developing grain heads when they’re most susceptible to infection. Applying fungicide can reduce the incidence and severity of scab, improve grain weight, and reduce mycotoxin contamination.

However, integrated approaches to manage plant diseases are generally ideal, including planting barley or wheat varieties that are resistant to scab and also using a carefully timed fungicide application, rotating crops, and tilling the soil after harvest to reduce residue where Fusarium graminearum can survive the winter.

Even though fungicide applications may be beneficial, fungicides offer only some protection and can’t cure scab. If the environmental conditions are extremely conducive for scab, with ample moisture and humidity during flowering, the disease will still occur, albeit at reduced levels.

Fusarium Head Blight with NDSU’s Andrew Friskop.

Plant pathologists are making progress on early warning systems for farmers. A team from Kansas State University, Ohio State University, and Pennsylvania State University has been developing a computer model to predict the risk of scab. Their wheat disease predictive model combines historical environmental data from weather stations throughout the US with current conditions to develop a forecast.
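To make the idea of a weather-driven forecast concrete, here is a minimal sketch of how such a risk index could work in principle. It is not the team’s actual model; the favorable-weather thresholds, time window, and risk cutoffs below are hypothetical, chosen only to illustrate how temperature and humidity readings might be turned into a low/medium/high rating like the one shown on the risk map above.

```python
# Illustrative sketch only: a toy weather-based scab-risk index.
# The real Fusarium Risk Tool has its own inputs, model, and thresholds;
# every cutoff used here is a hypothetical placeholder.

def scab_risk(hourly_temps_c, hourly_rh_pct):
    """Rate scab risk from recent hourly temperature (C) and relative humidity (%)."""
    if len(hourly_temps_c) != len(hourly_rh_pct):
        raise ValueError("temperature and humidity series must be the same length")

    # Count hours that are both warm and humid, the conditions the article
    # describes as favorable for infection around flowering.
    favorable = sum(
        1 for t, rh in zip(hourly_temps_c, hourly_rh_pct)
        if 15 <= t <= 30 and rh >= 90
    )
    fraction = favorable / len(hourly_temps_c)

    if fraction >= 0.5:
        return "high"
    if fraction >= 0.25:
        return "medium"
    return "low"


# Example with 48 hours of made-up readings around flowering.
temps = [18 + (i % 12) * 0.5 for i in range(48)]
humidity = [86 + (i % 10) for i in range(48)]
print(scab_risk(temps, humidity))  # -> "high" for this fabricated warm, wet stretch
```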

In areas that are most at risk, plant pathologists and commodity specialists encourage wheat growers to apply a fungicide during periods when the fungus is likely to grow to reduce the chances of damage to crops and the spread of mycotoxin.

Tom W. Allen, associate research professor of Plant Pathology, Mississippi State University. This article is republished from The Conversation under a Creative Commons license. Read the original article.


Blue Origin’s New Glenn rocket came back home after taking aim at Mars


“Never before in history has a booster this large nailed the landing on the second try.”

Blue Origin’s 320-foot-tall (98-meter) New Glenn rocket lifts off from Cape Canaveral Space Force Station, Florida. Credit: Blue Origin

The rocket company founded a quarter-century ago by billionaire Jeff Bezos made history Thursday with the pinpoint landing of an 18-story-tall rocket on a floating platform in the Atlantic Ocean.

The on-target touchdown came nine minutes after the New Glenn rocket, built and operated by Bezos’ company Blue Origin, lifted off from Cape Canaveral Space Force Station, Florida, at 3:55 pm EST (20:55 UTC). The launch was delayed from Sunday, first by poor weather at the launch site in Florida, then by a solar storm that sent hazardous radiation toward Earth earlier this week.

“We achieved full mission success today, and I am so proud of the team,” said Dave Limp, CEO of Blue Origin. “It turns out Never Tell Me The Odds (Blue Origin’s nickname for the first stage) had perfect odds—never before in history has a booster this large nailed the landing on the second try. This is just the beginning as we rapidly scale our flight cadence and continue delivering for our customers.”

The two-stage launcher set off for space carrying two NASA science probes on a two-year journey to Mars, marking the first time any operational satellites flew on Blue Origin’s new rocket, named for the late NASA astronaut John Glenn. The New Glenn hit its marks on the climb into space, firing seven BE-4 main engines for nearly three minutes on a smooth ascent through blue skies over Florida’s Space Coast.

Seven BE-4 engines power New Glenn downrange from Florida’s Space Coast. Credit: Blue Origin

The engines consumed super-cold liquefied natural gas and liquid oxygen, producing more than 3.8 million pounds of thrust at full power. The BE-4s shut down, and the first stage booster released the rocket’s second stage, with its dual hydrogen-fueled BE-3U engines, to continue the mission into orbit.

The booster soared to an altitude of 79 miles (127 kilometers), then began a controlled plunge back into the atmosphere, targeting a landing on Blue Origin’s offshore recovery vessel named Jacklyn. Moments later, three of the booster’s engines reignited to slow its descent in the upper atmosphere. Then, moments before reaching the Atlantic, the rocket again lit three engines and extended its landing gear, sinking through low-level clouds before settling onto the football field-size deck of Blue Origin’s recovery platform 375 miles (600 kilometers) east of Cape Canaveral.

A pivotal moment

The moment of touchdown appeared electric at several Blue Origin facilities around the country, with live views of cheering employees piped into the company’s webcast of the flight. This was the first time any company besides SpaceX had propulsively landed an orbital-class rocket booster, and it came nearly 10 years after SpaceX recovered its first Falcon 9 booster intact in December 2015.

Blue Origin’s New Glenn landing also came almost exactly a decade after the company landed its smaller suborbital New Shepard rocket for the first time in West Texas. Just like Thursday’s New Glenn landing, Blue Origin successfully recovered the New Shepard on its second-ever attempt.

Blue Origin’s heavy-lifter launched successfully for the first time in January. But technical problems prevented the booster from restarting its engines on descent, and the first stage crashed at sea. Engineers made “propellant management and engine bleed control improvements” to resolve the problems, and the fixes appeared to work Thursday.

The rocket recovery is a remarkable achievement for Blue Origin, which has long lagged behind the dominant SpaceX in the commercial launch business. SpaceX has now logged 532 landings with its Falcon booster fleet. With just a single recovery in the books, Blue Origin now sits second in the rankings for propulsive landings of orbital-class boosters. Bezos’ company has also amassed 34 landings of the suborbital New Shepard, which is far smaller and doesn’t reach the altitude or speed of the New Glenn booster.

Blue Origin landed a New Shepard returning from space for the first time in November 2015, a few weeks before SpaceX first recovered a Falcon 9 booster. Bezos threw shade on SpaceX with a post on Twitter, now called X, after the first Falcon 9 landing: “Welcome to the club!”

Jeff Bezos, Blue Origin’s founder and owner, wrote this message on Twitter following SpaceX’s first Falcon 9 landing on December 21, 2015. Credit: X/Jeff Bezos

Finally, after Thursday, Blue Origin officials can say they are part of the same reusable rocket club as SpaceX. Within a few days, Blue Origin’s recovery vessel is expected to return to Port Canaveral, Florida, where ground crews will offload the New Glenn booster and move it to a hangar for inspections and refurbishment.

“Today was a tremendous achievement for the New Glenn team, opening a new era for Blue Origin and the industry as we look to launch, land, repeat, again and again,” said Jordan Charles, the company’s vice president for the New Glenn program, in a statement. “We’ve made significant progress on manufacturing at rate and building ahead of need. Our primary focus remains focused on increasing our cadence and working through our manifest.”

Blue Origin plans to reuse the same booster next year for the first launch of the company’s Blue Moon Mark 1 lunar cargo lander. This mission is currently penciled in to be next on Blue Origin’s New Glenn launch schedule. Eventually, the company plans to have a fleet of reusable boosters, like SpaceX has with the Falcon 9, that can each be flown up to 25 times.

New Glenn is a core element in Blue Origin’s architecture for NASA’s Artemis lunar program. The rocket will eventually launch human-rated lunar landers that will give astronauts rides to and from the lunar surface.

The US Space Force will also examine the results of Thursday’s launch to assess New Glenn’s readiness to begin launching military satellites. The military selected Blue Origin last year to join SpaceX and United Launch Alliance as a third launch provider for the Defense Department.

Blue Origin’s New Glenn booster, 23 feet (7 meters) in diameter, on the deck of the company’s landing platform in the Atlantic Ocean.

Slow train to Mars

The mission wasn’t over with the booster’s landing in the Atlantic. New Glenn’s second stage fired its engines twice to propel itself on a course toward deep space, setting up for the deployment of NASA’s two ESCAPADE satellites a little more than a half-hour after liftoff.

The identical satellites were released from their mounts on top of the rocket to begin their nearly two-year journey to Mars, where they will enter orbit to survey how the solar wind interacts with the rarefied uppermost layers of the red planet’s atmosphere. Scientists believe radiation from the Sun gradually stripped away Mars’ atmosphere, driving runaway climate change that transitioned the planet from a warm, habitable world to the global inhospitable desert seen today.

“I’m both elated and relieved to see NASA’s ESCAPADE spacecraft healthy post-launch and looking forward to the next chapter of their journey to help us understand Mars’ dynamic space weather environment,” said Rob Lillis, the mission’s principal investigator from the University of California, Berkeley.

Scientists want to understand the environment at the top of the Martian atmosphere to learn more about what drove this change. With two instrumented spacecraft, ESCAPADE will gather data from different locations around Mars, providing a series of multipoint snapshots of solar wind and atmospheric conditions. Another NASA spacecraft, named MAVEN, has collected similar data since arriving in orbit around Mars in 2014, but it is only a single observation post.

ESCAPADE, short for Escape and Plasma Acceleration and Dynamics Explorers, was developed and launched on a budget of about $80 million, a bargain compared to all of NASA’s recent Mars missions. The spacecraft were built by Rocket Lab, and the project is managed on behalf of NASA by the University of California, Berkeley.

The two spacecraft for NASA’s ESCAPADE mission at Rocket Lab’s factory in Long Beach, California. Credit: Rocket Lab

NASA paid Blue Origin about $20 million for the launch of ESCAPADE, significantly less than it would have cost to launch it on any other dedicated rocket. The space agency accepted the risk of launching on the relatively unproven New Glenn rocket, which hasn’t yet been certified by NASA or the Space Force for the government’s marquee space missions.

The mission was supposed to launch last year, when Earth and Mars were in the right positions to enable a direct trip between the planets. But Blue Origin delayed the launch, forcing a yearlong wait until the company’s second New Glenn was ready to fly. Now, the ESCAPADE satellites, each about a half-ton in mass fully fueled, will loiter in a unique orbit more than a million miles from Earth until next November, when they will set off for the red planet. ESCAPADE will arrive at Mars in September 2027 and begin its science mission in 2028.

Rocket Lab ground controllers established communication with the ESCAPADE satellites late Thursday night.

“The ESCAPADE mission is part of our strategy to understand Mars’ past and present so we can send the first astronauts there safely,” said Nicky Fox, associate administrator of NASA’s Science Mission Directorate. “Understanding Martian space weather is a top priority for future missions because it helps us protect systems, robots, and most importantly, humans, in extreme environments.”


Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.


Three astronauts are stuck on China’s space station without a safe ride home

This view shows a Shenzhou spacecraft departing the Tiangong space station in 2023. Credit: China Manned Space Agency

Swapping spacecraft in low-Earth orbit

With their original spacecraft deemed unsafe, Chen and his crewmates instead rode back to Earth on the newer Shenzhou 21 craft that launched and arrived at the Tiangong station October 31. The three astronauts who launched on Shenzhou 21—Zhang Lu, Wu Fei, and Zhang Hongzhang—remain aboard the nearly 100-metric ton space station with only the damaged Shenzhou 20 craft available to bring them home.

China’s Shenzhou spaceships not only provide transportation to and from low-Earth orbit, they also serve as lifeboats to evacuate astronauts from the Chinese space station in the event of an in-flight emergency, such as a major failure or a medical crisis. They serve the same role as the Russian Soyuz and SpaceX Crew Dragon vehicles flying to and from the International Space Station.

Another Shenzhou spacecraft, Shenzhou 22, “will be launched at a later date,” the China Manned Space Agency said in a statement. Shenzhou 20 will remain in orbit to “continue relevant experiments.” The Tiangong lab is designed to support crews of six for only short periods, with longer stays limited to three astronauts.

Officials have not disclosed when Shenzhou 22 might launch, but China typically keeps a Long March rocket and a Shenzhou spacecraft on standby for a rapid launch if required. Instead of astronauts, Shenzhou 22 will ferry fresh food and equipment to sustain the three-man crew on the Tiangong station.

China’s state-run Xinhua news agency called Friday’s homecoming “the first successful implementation of an alternative return procedure in the country’s space station program history.”

The shuffling return schedules and damaged spacecraft at the Tiangong station offer a reminder of the risks of space junk, especially tiny debris fragments that evade detection by tracking telescopes and radars. A minuscule piece of space debris traveling at several miles per second can pack a punch. Crews at the Tiangong outpost ventured outside the station multiple times in the last few years to install space debris shielding to protect the outpost.

Astronaut Tim Peake took this photo of a cracked window on the International Space Station in 2016. The 7-millimeter (quarter-inch) divot on the quadruple-pane window was gouged out by an impact of space debris no larger than a few thousandths of a millimeter across. The damage did not pose a risk to the station. Credit: ESA/NASA

Shortly after landing on Friday, ground teams assisted the Shenzhou astronauts out of their landing module. All three appeared to be in good health and buoyant spirits after completing the longest-duration crew mission for China’s space program.

“Space exploration has never been easy for humankind,” said Chen Dong, the mission commander, according to Chinese state media.

“This mission was a true test, and we are proud to have completed it successfully,” Chen said shortly after landing. “China’s space program has withstood the test, with all teams delivering outstanding performances … This experience has left us a profound impression that astronauts’ safety is really prioritized.”


World’s oldest RNA extracted from Ice Age woolly mammoth

A young woolly mammoth now known as Yuka was frozen in the Siberian permafrost for about 40,000 years before it was discovered by local tusk hunters in 2010. The hunters soon handed it over to scientists, who were excited to see its exquisite level of preservation, with skin, muscle tissue, and even reddish hair intact. Later research showed that, while full cloning was impossible, Yuka’s DNA was in such good condition that some cell nuclei could even begin limited activity when placed inside mouse eggs.

Now, a team has successfully sequenced Yuka’s RNA—a feat many researchers once thought impossible. Researchers at Stockholm University carefully ground up bits of muscle and other tissue from Yuka and nine other woolly mammoths, then used special chemical treatments to pull out any remaining RNA fragments, which are normally thought to be much too fragile to survive even a few hours after an organism has died. Scientists go to great lengths to extract RNA even from fresh samples, and most previous attempts with very old specimens have either failed or been contaminated.

A different view

The team used RNA-handling methods adapted for ancient, fragmented molecules. Their scientific séance allowed them to explore information that had never been accessible before, including which genes were active when Yuka died. In the creature’s final panicked moments, its muscles were tensing and its cells were signaling distress—perhaps unsurprising since Yuka is thought to have died as a result of a cave lion attack.

It’s an exquisite level of detail, and one that scientists can’t get from just analyzing DNA. “With RNA, you can access the actual biology of the cell or tissue happening in real time within the last moments of life of the organism,” said Emilio Mármol, a researcher who led the study. “In simple terms, studying DNA alone can give you lots of information about the whole evolutionary history and ancestry of the organism under study. Obtaining this fragile and mostly forgotten layer of the cell biology in old tissues/specimens, you can get for the first time a full picture of the whole pipeline of life (from DNA to proteins, with RNA as an intermediate messenger).”


Dogs came in a wide range of sizes and shapes long before modern breeds

“The concept of ‘breed’ is very recent and does not apply to the archaeological record,” Evin said. People have, of course, been breeding dogs for particular traits for as long as we’ve had dogs, and tiny lap dogs existed even in ancient Rome. However, it’s unlikely that a Neolithic herder would have described his dog as being a distinct “breed” from his neighbor’s hunting partner, even if they looked quite different. Which, apparently, they did.


Dogs had about half of their modern diversity (at least in skull shapes and sizes) by the Neolithic. Credit: Kiona Smith

Bones only tell part of the story

“We know from genetic models that domestication should have started during the late Pleistocene,” Evin told Ars. A 2021 study suggested that domestic dogs have been a separate species from wolves for more than 23,000 years. But it took a while for differences to build up.

Evin and her colleagues had access to 17 canine skulls that ranged from 12,700 to 50,000 years old—prior to the end of the ice age—and they all looked enough like modern wolves that, as Evin put it, “for now, we have no evidence to suggest that any of the wolf-like skulls did not belong to wolves or looked different from them.” In other words, if you’re just looking at the skull, it’s hard to tell the earliest dogs from wild wolves.

We have no way to know, of course, what the living dog might have looked like. It’s worth mentioning that Evin and her colleagues found a modern Saint Bernard’s skull that, according to their statistical analysis, looked more wolf-like than dog-like. But even if it’s not offering you a brandy keg, there’s no mistaking a live Saint Bernard, with its droopy jowls and floppy ears, for a wolf.

“Skull shape tells us a lot about function and evolutionary history, but it represents only one aspect of the animal’s appearance. This means that two dogs with very similar skulls could have looked quite different in life,” Evin told Ars. “It’s an important reminder that the archaeological record captures just part of the biological and cultural story.”

And with only bones—and sparse ones, at that—to go on, we may be missing some of the early chapters of dogs’ biological and cultural story. Domestication tends to select the friendliest animals to produce the next generation, and apparently that comes with a particular set of evolutionary side effects, whether you’re studying wolves, foxes, cattle, or pigs. Spots, floppy ears, and curved tails all seem to be part of the genetic package that comes with inter-species friendliness. But none of those traits is visible in the skull.


Tiny chips hitch a ride on immune cells to sites of inflammation


Tiny chips can be powered by infrared light if they’re near the brain’s surface.

An immune cell chemically linked to a CMOS chip. Credit: Yadav et al.

Standard brain implants use electrodes that penetrate the gray matter to stimulate and record the activity of neurons, and they typically need to be put in place via a surgical procedure. To get around that need, a team of researchers led by Deblina Sarkar, an electrical engineer and MIT assistant professor, developed microscopic electronic devices hybridized with living cells. Those cells can be injected into the circulatory system with a standard syringe and will travel through the bloodstream before implanting themselves in target brain areas.

“In the first two years of working on this technology at MIT, we’ve got 35 grant proposals rejected in a row,” Sarkar says. “Comments we got from the reviewers were that our idea was very impactful, but it was impossible.” She acknowledges that the proposal sounded like something you can find in science fiction novels. But after more than six years of research, she and her colleagues have pulled it off.

Nanobot problems

In 2022, when Sarkar and her colleagues gathered initial data and got some promising results with their cell-electronics hybrids, the team proposed the project for the National Institutes of Health Director’s New Innovator Award. For the first time, after 35 rejections, it made it through peer review. “We got the highest impact score ever,” Sarkar says.

The reason for that score was that her technology solved three extremely difficult problems. The first, obviously, was making functional electronic devices smaller than cells that can circulate in our blood.

“Previous explorations, which had not seen a lot of success, relied on putting magnetic particles inside the bloodstream and then guiding them with magnetic fields,” Sarkar explains. “But there is a difference between electronics and particles.” Electronics made using CMOS technology (which we use for making computer processors) can generate electrical power from incoming light in the same way as photovoltaics, as well as perform computations necessary for more intelligent applications like sensing. Particles, on the other hand, can only be used to stimulate cells to an extent.

If they ever reach those cells, of course, which was the second problem. “Controlling the devices with magnetic fields means you need to go into a machine the size of an MRI,” Sarkar says. Once the subject is in the machine, an operator looks at where the devices are and tries to move them to where they need to be using nothing but magnetic fields. Sarkar said that it’s tough to do anything other than move the particles in straight lines, which is a poor match for our very complex vasculature.

The solution her team found was fusing the electronics with monocytes, immune cells that can home in on inflammation in our bodies. The idea was that the monocytes would carry the electronics through the bloodstream using the cells’ chemical homing mechanism. This also solved the third problem: crossing the blood-brain barrier that protects the brain from pathogens and toxins. Electronics alone could not get through it; monocytes could.

The challenge was making all these ideas work.

Clicking together

Sarkar’s team built electronic devices made of biocompatible polymer and metallic layers fabricated on silicon wafers using a standard CMOS process. “We made the devices this small with lithography, the technique used in making transistors for chips in our computers,” Sarkar explains. They were roughly 200 nanometers thick and 10 microns in diameter—that kept them subcellular, since a monocyte cell usually measures between 12 and 18 microns. The devices were activated and powered by infrared light at a wavelength that could penetrate several centimeters into the brain.

Once the devices were manufactured and taken off the wafer, the next thing to figure out was attaching them to monocytes.

To do this, the team covered the surfaces of the electronic devices with dibenzocyclooctyne, a highly reactive molecule that readily links to other chemicals, especially nitrogen compounds called azides. Then Sarkar and her colleagues chemically modified monocytes to place azides on their surfaces. This way, the electronics and cells could quickly snap together, almost like Lego blocks (this approach, called click chemistry, was recognized with the 2022 Nobel Prize in Chemistry).

The resulting solution of cell-electronics hybrids was designed to be biocompatible and could be injected into the circulatory system. This is why Sarkar called her concept “circulatronics.”

Of course, Sarkar’s “circulatronic” hybrids fall a bit short of sci-fi fantasies, in that they aren’t exactly literal nanobots. But they may be the closest thing we’ve created so far.

Artificial neurons

To test these hybrids in live mice, the researchers prepared a fluorescent version to make them easier to track. Mice were anesthetized first, and the team artificially created inflammation at a specific location in their brains, around the ventrolateral thalamic nucleus. Then the hybrids were injected into the veins of the mice. After roughly 72 hours, the time scientists expected would be needed for the monocytes to reach the inflammation, Sarkar and her colleagues started running tests.

It turned out that most of the injected hybrids reached their destination in one piece—the electronics mostly remained attached to the monocytes. The team’s measurements suggest that around 14,000 hybrids managed to successfully implant themselves near the neurons in the target area of the brain. Then, in response to infrared irradiation, they caused significant neuronal activation, comparable to traditional electrodes implanted via surgery.

The real strength of the hybrids, Sarkar thinks, is the way they can be tuned to specific diseases. “We chose monocytes for this experiment because inflammation spots in the brain are usually the target in many neurodegenerative diseases,” Sarkar says. Depending on the application, though, the hybrids’ performance can be adjusted by manipulating their electronic and cellular components. “We have already tested using mesenchymal stem cells for the Alzheimer’s, or T cells and other neural stem cells for tumors,” Sarkar explains.

She went on to say that her technology one day may help with placing the implants in brain regions that today cannot be safely reached through surgery. “There is a brain cancer called glioblastoma that forms diffused tumor sites. Another example is DIPG [a form of glioma], which is a terminal brain cancer in children that develops in a region where surgery is impossible,” she adds.

But in the more distant future, the hybrids could find applications beyond targeting diseases. Most of the studies that have relied on data from brain implants were limited to participants who suffered from severe brain disorders. The implants were put in their brains for therapeutic reasons, and participating in research projects was something they simply agreed to do on the side.

Because the electronics in Sarkar’s hybrids can be designed to fully degrade after a set time, the team thinks this could potentially enable them to gather brain implant data from healthy people—the implants would do their job for the duration of the study and be gone once it’s done. Unless we want them to stay, that is.

“The ease of application can make the implants feasible in brain-computer interfaces designed for healthy people,” Sarkar argues. “Also, the electrodes can be made to work as artificial neurons. In principle, we could enhance ourselves—increase our neuronal density.”

First, though, the team wants to put the hybrids through a testing campaign on larger animals and then get them FDA-approved for clinical trials. Through Cahira Technologies, an MIT spinoff company founded to take the “circulatronics” technology to the market, Sarkar wants to make this happen within the next three years.

Nature Biotechnology, 2025. DOI: 10.1038/s41587-025-02809-3


Jacek Krywko is a freelance science and technology writer who covers space exploration, artificial intelligence research, computer science, and all sorts of engineering wizardry.


What if the aliens come and we just can’t communicate?


Ars chats with particle physicist Daniel Whiteson about his new book Do Aliens Speak Physics?

Science fiction has long speculated about the possibility of first contact with an alien species from a distant world and how we might be able to communicate with them. But what if we simply don’t have enough common ground for that to even be possible? An alien species is bound to be biologically very different, and their language will be shaped by their home environment, broader culture, and even how they perceive the universe. They might not even share the same math and physics. These and other fascinating questions are the focus of an entertaining new book, Do Aliens Speak Physics? And Other Questions About Science and the Nature of Reality.

Co-author Daniel Whiteson is a particle physicist at the University of California, Irvine, who has worked on the ATLAS collaboration at CERN’s Large Hadron Collider. He’s also a gifted science communicator who previously co-authored two books with cartoonist Jorge Cham of PhD Comics fame: 2018’s We Have No Idea and 2021’s Frequently Asked Questions About the Universe. (The pair also co-hosted a podcast from 2018 to 2024, Daniel and Jorge Explain the Universe.) This time around, cartoonist Andy Warner provided the illustrations, and Whiteson and Warner charmingly dedicate their book to “all the alien scientists we have yet to meet.”

Whiteson has long been interested in the philosophy of physics. “I’m not the kind of physicist who’s like, ‘whatever, let’s just measure stuff,’” he told Ars. “The thing that always excited me about physics was this implicit promise that we were doing something universal, that we were learning things that were true on other planets. But the more I learned, the more concerned I became that this might have been oversold. None are fundamental, and we don’t understand why anything emerges. Can we separate the human lens from the thing we’re looking at? We don’t know in the end how much that lens is distorting what we see or defining what we’re looking at. So that was the fundamental question I always wanted to explore.”

Whiteson initially pitched his book idea to his 14-year-old son, who inherited his father’s interest in science. But his son thought Whiteson’s planned discussion of how physics might not be universal was, well, boring. “When you ask for notes, you’ve got to listen to them,” said Whiteson. “So I came back and said, ‘Well, what about a book about when aliens arrive? Will their science be similar to ours? Can we collaborate together?’” That pitch won the teen’s enthusiastic approval: same ideas, but couched in the context of alien contact to make them more concrete.

As for Warner’s involvement as illustrator, “I cold-emailed him and said, ‘Want to write a book about aliens? You get to draw lots of weird aliens,’” said Whiteson. Who could resist that offer?

Ars caught up with Whiteson to learn more.

Cartoon exploring possible outcomes of first alien contact. Credit: Andy Warner

Ars Technica: You open each chapter with fictional hypothetical scenarios. Why?

Daniel Whiteson: I sent an early version of the book to my friend, [biologist] Matt Giorgianni, who appears in cartoon form in the book. He said, “The book is great, but it’s abstract. Why don’t you write out a hypothetical concrete scenario for each chapter to show us what it would be like and how we would stumble.”

All great ideas seem obvious once you hear them. I’ve always been a huge science fiction fan. It’s thoughtful and creative and exploratory about the way the universe could be and might be. So I jumped at the opportunity to write a little bit of science fiction. Each one was challenging because you have to come up with a specific example that illustrates the concepts in that chapter, the issues that you might run into, but also be believable and interesting. We went through a lot of drafts. But I had a lot of fun writing them.

Ars Technica: The Voyager Golden Record is perhaps the best known example of humans attempting to communicate with an alien species, spearheaded by the late Carl Sagan, among others. But what are the odds that, despite our best efforts, any aliens will ever be able to decipher our “message in a bottle”?

Daniel Whiteson: I did an informal experiment where I printed out a picture of the Pioneer plaque and showed it to a bunch of grad students who were young enough to not have seen it before. This is Sagan’s audience: biological humans, same brain, same culture, physics grad students—none of them had any idea what any of that was supposed to refer to. NASA gave him two weeks to come up with that design. I don’t know that I would’ve done any better. It’s easy to criticize.

Those folks, they were doing their best. They were trying to step away from our culture. They didn’t use English, they didn’t even use mathematical symbols. They understood that those things are arbitrary, and they were reaching for something they hoped was going to be universal. But in the end, nothing can be universal because language is always symbols, and the choice of those symbols is arbitrary and cultural. It’s impossible to choose a symbol that can only be interpreted in one way.

Fundamentally, the book is trying to poke at our assumptions. It’s so inspiring to me that the history of physics is littered with times when we have had to abandon an assumption that we clung to desperately, until we were shown otherwise with enough data. So we’ve got to be really open-minded about whether these assumptions hold true, whether it’s required to do science, to be technological, or whether there is even a single explanation for reality. We could be very well surprised by what we discover.

Ars Technica: It’s often assumed that math and physics are the closest thing we have to a universal language. You challenge that assumption, probing such questions as “what does it even mean to ‘count’”? 

Daniel Whiteson: At an initial glance, you’re like, well, of course mathematics is required, and of course numbers are universal. But then you dig into it and you start to realize there are fuzzy issues here. So many of the assumptions that underlie our interpretation of what we learned about physics are that way. I had this experience that’s probably very common among physics undergrads in quantum mechanics, learning about those calculations where you see nine decimal places in the theory and nine decimal places in the experiment, and you go, “Whoa, this isn’t just some calculational tool. This is how the universe decides what happens to a particle.”

I literally had that moment. I’m not a religious person, but it was almost spiritual. For many years, I believed that deeply, and I thought it was obvious. But to research this book, I read Science Without Numbers by Hartry Field. I was lucky—here at Irvine, we happen to have an amazing logic and philosophy of science department, and those folks really helped me digest [his ideas] and understand how you can pull yourself away from things like having a number line. It turns out you don’t need numbers; you can just think about relationships. It was really eye-opening, both how essential mathematics seems to human science and how obvious it is that we’re making a bunch of assumptions that we don’t know how to justify.

There are dotted lines humans have drawn because they make sense to us. You don’t have to go to plasma swirls and atmospheres to imagine a scenario where aliens might not have the same differentiation between their identities, me and you, here’s where I end, and here’s where you begin. That’s complicated for aliens that are plasma swirls, but also it’s not even very well-defined for us. How do I define the edge of my body? Is it where my skin ends? What about the dead skin, the hair, my personal space? There’s no obvious definition. We just have a cultural sense for “this is me, this is not me.” In the end, that’s a philosophical choice.

Cartoon of an alien emerging from a spaceship. Credit: Andy Warner

Ars Technica: You raise another interesting question in the book: Would aliens even need physics theory or a deeper understanding of how the universe works? Perhaps they could invent, say, warp drive through trial and error. You suggest that our theory is more like a narrative framework. It’s the story we tell, and that is very much prone to bias.

Daniel Whiteson: Absolutely. And not just bias, but emotion and curiosity. We put energy into certain things because we think they’re important. Physicists spend our lives on this because we think, among the many things we could spend our time on, this is an important question. That’s an emotional choice. That’s a personal subjective thing.

Other people find my work dead boring and would hate to have my life, and I would feel the same way about theirs. And that’s awesome. I’m glad that not everybody wants to be a particle physicist or a biologist or an economist. We have a diversity of curiosity, which we all benefit from. People have an intuitive feel for certain things. There’s something in their minds that naturally understands how the system works and reflects it, and they probably can’t explain it. They might not be able to design a better car, but they can drive the heck out of it.

This is maybe the biggest stumbling block for people who are just starting to think about this for the first time. “Obviously aliens are scientific.” Well, how do we know? What do we mean by scientific? That concept has evolved over time, and is it really required? I felt that that was a big-picture thing that people could wrap their minds around but also a shock to the system. They were already a little bit off-kilter and might realize that some things they assumed must be true maybe not have to be.

Ars Technica: You cite the 2016 film Arrival as an example of first contact and the challenge of figuring out how to communicate with an alien species. They had to learn each other’s cultural context before they had any chance of figuring out what their respective symbols meant. 

Daniel Whiteson:  I think that is really crucial. Again, how you choose to represent your ideas is, in a sense, arbitrary. So if you’re on the other side of that, and you have to go from symbols to ideas, you have to know something about how they made those choices in order to reverse-engineer a message, in order to figure out what it means. How do you know if you’ve done it correctly? Say we get a message from aliens and we spend years of supercomputer time cranking on it. Something we rely on for decoding human messages is that you can tell when you’ve done it right because you have something that makes sense.

Credit: Andy Warner

How do we know when we’ve done that for an alien text? How do you know the difference between nonsense and things that don’t yet make sense to you? It’s essentially impossible. I spoke to some philosophers of language who convinced me that if we get a message from aliens, it might be literally impossible to decode it without their help, without their context. There aren’t enough examples of messages from aliens. We have the “Wow!” signal—who knows what that means? Maybe it’s a great example of getting a message and having no idea how to decode it.

As in many places in the book, I turned to human history because it’s the one example we have. I was shocked to discover how many human languages we haven’t decoded. Not to mention, we haven’t decoded whale. We know whales are talking to each other. Maybe they’re talking to us and we can’t decode it. The lesson, again and again, is culture.

With the Egyptians, the only reason we were able to decode hieroglyphics is because we had the Rosetta Stone, but that still took 20 years there. How does it take 20 years when you have examples and you know what it’s supposed to say? And the answer is that we made cultural assumptions. We assumed pictograms reflected the ideas in the pictures. If there’s a bird in it, it’s about birds. And we were just wrong. That’s maybe why we’ve been struggling with Etruscan and other languages we’ve never decoded.

Even when we do have a lot of culture in common, it’s very, very tricky. So the idea that we would get a message from space and have no cultural clues at all and somehow be able to decode it—I think that only works in the scenario where aliens have been listening to us for a long time and they’re essentially writing to us in something like our language. I’m a big fan of SETI. They host these conferences where they listen to philosophers and linguists and anthropologists. I don’t mean to say that they’ve thought too narrowly, but I think it’s not widely enough appreciated how difficult it is to decode another language with no culture in common.

Ars Technica: You devote a chapter to the possibility that aliens might be able to, say, “taste” electrons. That drives home your point that physiologically, they will probably be different, biologically they will be different, and their experiences are likely to be different. So their notion of culture will also be different.

Daniel Whiteson: Something I really wanted to get across was that perception determines your sense of intuition, which defines in many ways the answers you find acceptable. I read Ed Yong’s amazing book, An Immense World, about animal perception, the diversity of animal experience or animal interaction with the environment. What is it like to be an octopus with a distributed mind? What is it like to be a bird that senses magnetic fields? If you’re an alien and you have very different perceptions, you could also have a different intuitive language in which to understand the universe. What is a photon? Is it a particle? Is it a wave?  Maybe we just struggle with the concept because our intuitive language is limited.

For an alien that can see photons in superposition, this is boring. They could be bored by things we’re fascinated by, or we could explain our theories to them and they could be unsatisfied because to them, it doesn’t translate into their intuitive language. So even though we’ve transcended our biological limitations in many ways, we can detect gravitational waves and infrared photons, we’re still, I think, shaped by those original biological senses that frame our experience, the models we build. What it’s like to be a human in the world could be very, very different from what it’s like to be an alien in the world.

Ars Technica:  Your very last line reads, “Our theories may reveal the patterns of our thoughts as much as the patterns of nature.” Why did you choose to end your book on that note?

Daniel Whiteson: That’s a message to myself. When I started this journey, I thought physics is universal. That’s what makes it beautiful. That implies that if the aliens show up and they do physics in a very different way, it would be a crushing disappointment. Not only is what we’ve learned just another human science, but also it means maybe we can’t benefit from their work or we can’t have that fantasy of a galactic scientific conference. But that actually might be the best-case scenario.

I mean, the reason that you want to meet aliens is the reason you go traveling. You want to expand your horizons and learn new things. Imagine how boring it would be if you traveled around the world to some new country and they just had all the same food. It might be comforting in some way, but also boring. It’s so much more interesting to find out that they have spicy fish soup for breakfast. That’s the moment when you learn not just about what to have for breakfast but about yourself and your boundaries.

So if aliens show up and they’re so weirdly different in a way we can’t possibly imagine—that would be deeply revealing. It’ll give us a chance to separate the reality from the human lens. It’s not just questions of the universe. We are interesting, and we should learn about our own biases. I anticipate that when the aliens do come, their culture and their minds and their science will be so alien, it’ll be a real challenge to make any kind of connection. But if we do, I think we’ll learn a lot, not just about the universe but about ourselves.


Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.


Tracking the winds that have turned Mars into a planet of dust

Where does all this dust come from? It’s thought to be the result of wind-driven erosion. Because the Martian atmosphere is so thin, fine dust particles are difficult for the wind to lift directly, but larger particles can become airborne if winds are turbulent enough, and they sweep smaller dust motes up with them. Perseverance and previous Mars rovers have mostly witnessed wind vortices associated with either dust devils or convection, during which warm air rises.

CaSSIS and HRSC data showed that most dust devils occur in the northern hemisphere of Mars, mainly in the Amazonis and Elysium Planitiae, with Amazonis Planitia being a hotspot. They can be kicked up by winds on both rough and smooth terrain, but they tend to spread farther in the southern hemisphere, with some traveling across nearly that entire half of the planet. Seasonal occurrence of dust devils is highest during the southern summer, while they are almost nonexistent during the late northern fall.

Martian dust devils tend to peak between mid-morning and midafternoon, though they can occur from early morning through late afternoon. They also migrate toward the Martian north pole in the northern summer and toward the south pole during the southern summer. Southern dust devils tend to move faster than those in the northern hemisphere. Movement determined by winds can be as fast as 44 meters per second (about 98 mph), which is much faster than dust devils move on Earth.

Weathering the storm

Dust devils have also been found to accelerate extremely rapidly on the red planet. These fierce storms are associated with winds that travel along with them but do not form a vortex, known as nonvortical winds. It only takes a few seconds for these winds to accelerate to velocities high enough that they’re able to lift dust particles from the ground and transfer them to the atmosphere. It is not only dust devils that do this—the team found that even nonvortical winds lift large amounts of dust particles on their own, more than was previously thought, and create a dusty haze in the atmosphere.

Tracking the winds that have turned Mars into a planet of dust Read More »

with-another-record-broken,-the-world’s-busiest-spaceport-keeps-getting-busier

With another record broken, the world’s busiest spaceport keeps getting busier


It’s not just the number of rocket launches, but how much stuff they’re carrying into orbit.

With 29 Starlink satellites onboard, a Falcon 9 rocket streaks through the night sky over Cape Canaveral Space Force Station, Florida, on Monday night. Credit: Stephen Clark/Ars Technica

CAPE CANAVERAL, Florida—Another Falcon 9 rocket fired off its launch pad here on Monday night, taking with it another 29 Starlink Internet satellites to orbit.

This was the 94th orbital launch from Florida’s Space Coast so far in 2025, breaking the previous record for the most orbital launches in a calendar year from the world’s busiest spaceport. Monday night’s launch came two days after a Chinese Long March 11 rocket lifted off from an oceangoing platform on the opposite side of the world, marking humanity’s 255th mission to reach orbit this year, a new annual record for global launch activity.

As of Wednesday, a handful of additional missions have pushed the global figure this year to 259, putting the world on pace for around 300 orbital launches by the end of 2025. That would be more than double the global tally of 135 orbital launches in 2021.
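As a rough sanity check on that pace, here’s a minimal back-of-the-envelope sketch; the assumption that the 259-launch count dates to roughly November 12, about 316 days into the year, is ours, not from the launch data:

```python
# Back-of-the-envelope launch pace for 2025.
# Assumption: the 259-launch figure dates to ~November 12 (day 316 of the year).
launches_so_far = 259
days_elapsed = 316
days_in_year = 365

projected = launches_so_far / days_elapsed * days_in_year
print(round(projected))            # ~299 launches, i.e. "around 300"
print(round(projected / 135, 2))   # ~2.2x the 135 launches of 2021
```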

Routine vs. complacency

Waiting in the darkness a few miles away from the launch pad, I glanced around at my surroundings before watching SpaceX’s Falcon 9 thunder into the sky. There were no throngs of space enthusiasts anxiously waiting for the rocket to light up the night. No line of photographers snapping photos. Just this reporter and two chipper retirees enjoying what a decade ago would have attracted far more attention.

Go to your local airport and you’ll probably find more people posted up at a plane-spotting park at the end of the runway. Still, a rocket launch is something special. On the same night that I watched the 94th launch of the year depart from Cape Canaveral, Orlando International Airport saw the same number of airplane departures in just three hours.

The crowds still turn out for more meaningful launches, such as a test flight of SpaceX’s Starship megarocket in Texas or Blue Origin’s attempt to launch its second New Glenn heavy-lifter here Sunday. But those are not the norm. Generations of aerospace engineers were taught never to treat spaceflight as routine, for fear that complacency leads to failure and, in some cases, death.

Compared to air travel, the mantra remains valid. Rockets are unforgiving, with engines operating under extreme pressures, at high thrust, and unable to suck in oxygen from the atmosphere as a reactant for combustion. There are fewer redundancies in a rocket than in an airplane.

The Falcon 9’s established failure rate is less than 1 percent, well short of any safety standard for commercial air travel but good enough to make it the most successful orbital-class rocket in history. Given the Falcon 9’s track record, SpaceX seems to have found a way to overcome the temptation for complacency.

A Chinese Long March 11 rocket carrying three Shiyan 32 test satellites lifts off from waters off the coast of Haiyang in eastern China’s Shandong province on Saturday. Credit: Guo Jinqi/Xinhua via Getty Images

Following the trend

Rocket launches haven’t always trended upward. Launch numbers were steady for most of the 2010s, following a downward trend in the 2000s, with as few as 52 orbital launches in 2005, the lowest number since the nascent era of spaceflight in 1961. There were just seven launches from here in Florida that year.

The numbers have picked up dramatically in the last five years as SpaceX has mastered reusable rocketry.

It’s important to look at not just the number of launches but also how much stuff rockets are actually putting into orbit. More than half of this year’s launches were performed using SpaceX’s Falcon 9 rocket, and the majority of those deployed Starlink satellites for SpaceX’s global Internet network. Each spacecraft is relatively small in size and weight, but SpaceX stacks up to 29 of them on a single Falcon 9 to max out the rocket’s carrying capacity.

All this mass adds up to make SpaceX’s dominance of the launch industry appear even more absolute. According to analyses by BryceTech, an engineering and space industry consulting firm, SpaceX has launched 86 percent of all the world’s payload mass over the 18 months from the beginning of 2024 through June 30 of this year.

That’s roughly 2.98 million kilograms of the approximately 3.46 million kilograms (3,281 of 3,819 tons) of satellite hardware and cargo that all the world’s rockets placed into orbit during that timeframe.
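Those figures are internally consistent; here’s a quick check of the arithmetic (the short-ton conversion lands within a few tons of the article’s rounded totals):

```python
# Checking the BryceTech upmass share and unit conversion quoted above.
spacex_kg = 2.98e6
world_kg = 3.46e6

print(f"{spacex_kg / world_kg:.0%}")          # 86%

KG_PER_SHORT_TON = 907.185                    # 2,000 lb
print(round(spacex_kg / KG_PER_SHORT_TON))    # ~3,285 short tons
print(round(world_kg / KG_PER_SHORT_TON))     # ~3,814 short tons
```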

The charts below were created by Ars Technica using publicly available launch numbers and payload mass estimates from BryceTech. The first illustrates the rising launch cadence at Cape Canaveral Space Force Station and NASA’s Kennedy Space Center, located next to one another in Florida. Launches from other US-licensed spaceports, primarily Vandenberg Space Force Base, California, and Rocket Lab’s base at Māhia Peninsula in New Zealand, are also on the rise.

These numbers represent rockets that reached low-Earth orbit. We didn’t include test flights of SpaceX’s Starship rocket in the chart because all of its launches to date have intentionally flown on suborbital trajectories.

In the second chart, we break down the payload upmass to orbit from SpaceX, other US companies, China, Russia, and other international launch providers.

Launch rates are on a clear upward trend, while SpaceX has launched 86 percent of the world’s total payload mass to orbit since the beginning of 2024. Credit: Stephen Clark/Ars Technica/BryceTech

Will it continue?

It’s a good bet that payload upmass will continue to rise in the coming years, with heavy cargo heading to orbit to further expand SpaceX’s Starlink communications network and build out new megaconstellations from Amazon, China, and others. The US military’s Golden Dome missile defense shield will also have a ravenous appetite for rockets to get it into space.

SpaceX’s Starship megarocket could begin flying to low-Earth orbit next year, and if it does, SpaceX’s preeminence in delivering mass to orbit will remain assured. Starship’s first real payloads will likely be SpaceX’s next-generation Starlink satellites. These larger, heavier, more capable spacecraft will launch 60 at a time on Starship, further stretching SpaceX’s lead in the upmass war.

But Starship’s arrival will come at the expense of the workhorse Falcon 9, which lacks the capacity to haul the next-gen Starlinks to orbit. “This year and next year I anticipate will be the highest Falcon launch rates that we will see,” said Stephanie Bednarek, SpaceX’s vice president of commercial sales, at an industry conference in July.

SpaceX is on pace for between 165 and 170 Falcon 9 launches this year, with 144 flights already in the books for 2025. Last year’s total for Falcon 9 and Falcon Heavy was 134 missions. SpaceX has not announced how many Falcon 9 and Falcon Heavy launches it plans for next year.

Starship is designed to be fully and rapidly reusable, eventually enabling multiple flights per day. But that’s still a long way off, and it’s unknown how many years it might take for Starship to surpass the Falcon 9’s proven launch tempo.

A Starship rocket and Super Heavy booster lift off from Starbase, Texas. Credit: SpaceX

In any case, with Starship’s heavy-lifting capacity and upgraded next-gen satellites, SpaceX could match an entire year’s worth of new Starlink capacity with just two fully loaded Starship flights. Starship will be able to deliver 60 times more Starlink capacity to orbit than a cluster of satellites riding on a Falcon 9.
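Here’s a minimal sketch of how that comparison works out. The 60x per-flight capacity ratio comes from the paragraph above; the assumption that Falcon 9 flies on the order of 100 Starlink-dedicated missions a year is ours, for illustration only:

```python
# Hedged illustration of the "two Starship flights" comparison.
capacity_ratio = 60                # Starship vs. Falcon 9 Starlink capacity, per flight
falcon9_starlink_flights = 100     # assumed annual Starlink missions on Falcon 9

starship_flights = 2
falcon9_equivalents = starship_flights * capacity_ratio
print(falcon9_equivalents)                              # 120 Falcon 9-equivalents
print(falcon9_equivalents >= falcon9_starlink_flights)  # True: covers a year's worth
```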

There’s no reason to believe SpaceX will be satisfied with simply keeping pace with today’s Starlink growth rate. There are emerging market opportunities in connecting satellites with smartphones, space-based computer processing and data storage, and military applications.

Other companies have medium-to-heavy rockets that are either new to the market or soon to debut. These include Blue Origin’s New Glenn, now set to make its second test flight in the coming days, with a reusable booster designed to facilitate a rapid-fire launch cadence.

Despite all of the newcomers, most satellite operators see a shortage of launch capacity on the commercial market. “The industry is likely to remain supply-constrained through the balance of the decade,” wrote Caleb Henry, director of research at the industry analysis firm Quilty Space. “That could pose a problem for some of the many large constellations on the horizon.”

United Launch Alliance’s Vulcan rocket, Rocket Lab’s Neutron, Stoke Space’s Nova, Relativity Space’s Terran R, and Firefly Aerospace and Northrop Grumman’s Eclipse are among the other rockets vying for a bite at the launch apple.

“Whether or not the market can support six medium to heavy lift launch providers from the US alone (plus Starship) is an open question, but for the remainder of the decade launch demand is likely to remain high, presenting an opportunity for one or more new players to establish themselves in the pecking order,” Henry wrote in a post on Quilty’s website.

China’s space program will need more rockets, too. That nation’s two megaconstellations, known as Guowang and Qianfan, will comprise thousands of satellites, requiring a significant uptick in Chinese launch activity.

Taking all of this into account, the demand curve for access to space is sure to continue its upward trajectory. How companies meet this demand, and with how many discrete departures from Earth, isn’t quite as clear.

Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.

With another record broken, the world’s busiest spaceport keeps getting busier Read More »

an-explosion-92-million-miles-away-just-grounded-jeff-bezos’-new-glenn-rocket

An explosion 92 million miles away just grounded Jeff Bezos’ New Glenn rocket

A series of eruptions from the Sun, known as coronal mass ejections, sparked dazzling auroral light shows Tuesday night. The eruptions sent blasts of material, including charged particles carrying a strong localized magnetic field, toward Earth at more than 1 million mph, or more than 500 kilometers per second.

A solar ultraviolet imager on one of NOAA’s GOES weather satellites captured this view of a coronal mass ejection from the Sun early Tuesday. Credit: NOAA

Satellites detected the most recent strong coronal mass ejection, accompanied by a bright solar flare, early Tuesday. It was expected to arrive at Earth on Wednesday.

“We’ve already had two of three anticipated coronal mass ejections arrive here at Earth,” said Shawn Dahl, a forecaster at NOAA’s Space Weather Prediction Center in Boulder, Colorado. The first two waves “packed quite a punch,” Dahl said, and were “profoundly stronger than we anticipated.”

The storm sparked northern lights that were visible as far south as Texas, Florida, and Mexico on Tuesday night. Another round of northern lights might be visible Wednesday night.

The storm arriving Wednesday was the “most energetic” of all the recent coronal mass ejections, Dahl said. It’s also traveling at higher speed, fast enough to cover the 92 million-mile gulf between the Sun and the Earth in less than two days. Forecasters predict a G4 level, or severe, geomagnetic storm Wednesday into Thursday, with a slight chance of a rarer extreme G5 storm, something that has only happened once in the last two decades.
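That transit time implies a much higher speed than the earlier ejections; a quick calculation, assuming a full 48-hour crossing:

```python
# Rough speed of a CME covering the Sun-Earth distance in (just under) two days.
distance_miles = 92_000_000
transit_hours = 48                  # "less than two days"

speed_mph = distance_miles / transit_hours
speed_km_s = speed_mph * 1.609344 / 3600

print(f"{speed_mph:,.0f} mph")      # ~1.9 million mph
print(f"{speed_km_s:.0f} km/s")     # ~860 km/s
```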

The Aurora Borealis lights up the night sky over Monroe, Wisconsin, on November 11, 2025, during one of the strongest solar storms in decades. Credit: Ross Harried/NurPhoto via Getty Images

The sudden arrival of a rush of charged particles from the Sun can create disturbances in Earth’s magnetic field, affecting power grids, degrading GPS navigation signals, and disrupting radio communications. A G4 geomagnetic storm can trigger “possible widespread voltage control problems” in terrestrial electrical networks, according to NOAA, along with potential surface charging problems on satellites flying above the protective layers of the atmosphere.

It’s not easy to predict the precise impacts of a geomagnetic storm until it arrives on Earth’s doorstep. Several satellites positioned a million miles from Earth in the direction of the Sun carry sensors to detect the speed of the solar wind, its charge, and the direction of its magnetic field. This information helps forecasters know what to expect.

“These types of storms can be very variable,” Dahl said.

An explosion 92 million miles away just grounded Jeff Bezos’ New Glenn rocket Read More »

quantum-computing-tech-keeps-edging-forward

Quantum computing tech keeps edging forward


More superposition, less supposition

IBM follows through on its June promises, plus more trapped ion news.

IBM has moved to large-scale manufacturing of its Quantum Loon chips. Credit: IBM

The end of the year is usually a busy time in the quantum computing arena, as companies often try to announce that they’ve reached major milestones before the year wraps up. This year has been no exception. And while not all of these announcements involve interesting new architectures like the one we looked at recently, they’re a good way to mark progress in the field, and they often involve the sort of smaller, incremental steps needed to push the field forward.

What follows is a quick look at a handful of announcements from the past few weeks that struck us as potentially interesting.

IBM follows through

IBM is one of the companies announcing a brand-new architecture this year. That’s not a surprise, given that the company promised to do so back in June; this week, it confirmed that it has built the two processors it promised. These include one called Loon, which is focused on the architecture that IBM will use to host error-corrected logical qubits. Loon represents two major changes for the company: a shift to nearest-neighbor connections and the addition of long-distance connections.

IBM had previously used what it termed the “heavy hex” architecture, in which alternating qubits were connected to either two or three of their neighbors, forming a set of overlapping hexagonal structures. In Loon, the company is using a square grid, with each qubit connected to its four closest neighbors. This higher density of connections can enable more efficient use of the qubits during computations. But qubits in Loon also have long-distance connections to other parts of the chip, which will be needed for the specific type of error correction that IBM has committed to. Those connections are there to allow users to test out a critical future feature.
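To make the connectivity change concrete, here’s a toy sketch of a square-lattice coupling map with a few added long-range links. The grid size and the specific long-range pairs are invented for illustration; this isn’t IBM’s actual layout:

```python
# Toy coupling map: nearest-neighbor links on a square grid, plus a few
# illustrative long-range connections of the kind Loon adds.

def square_grid_couplings(rows, cols):
    """Return index pairs for a rows x cols nearest-neighbor lattice."""
    idx = lambda r, c: r * cols + c
    pairs = []
    for r in range(rows):
        for c in range(cols):
            if c + 1 < cols:
                pairs.append((idx(r, c), idx(r, c + 1)))   # horizontal link
            if r + 1 < rows:
                pairs.append((idx(r, c), idx(r + 1, c)))   # vertical link
    return pairs

couplings = square_grid_couplings(4, 4)
print(len(couplings))             # 24 nearest-neighbor links on a 4x4 grid

long_range = [(0, 15), (3, 12)]   # hypothetical long-distance connections
print(couplings + long_range)
```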

The second processor, Nighthawk, is focused on the now. It also has the nearest-neighbor connections and a square grid structure, but it lacks the long-distance connections. Instead, the focus with Nighthawk is to get error rates down so that researchers can start testing algorithms for quantum advantage—computations where quantum computers have a clear edge over classical algorithms.

In addition, the company is launching a GitHub repository that will allow the community to deposit code and performance data for both classical and quantum algorithms, enabling rigorous evaluations of relative performance. Right now, the entries are broken down into three categories of algorithms that IBM expects are most likely to demonstrate a verifiable quantum advantage.

This isn’t the only follow-up to IBM’s June announcement, which also saw the company describe the algorithm it would use to identify errors in its logical qubits and the corrections needed to fix them. In late October, the company said it had confirmed that the algorithm could work in real time when run on an FPGA made in collaboration with AMD.

Record lows

A few years back, we reported on a company called Oxford Ionics, which had just announced that it achieved a record-low error rate in some qubit operations using trapped ions. Most trapped-ion quantum computers move qubits by manipulating electromagnetic fields, but they perform computational operations using lasers. Oxford Ionics figured out how to perform operations using electromagnetic fields as well, meaning more of its processing benefited from our ability to precisely manufacture circuitry (lasers were still needed for tasks like producing a readout of the qubits). And as we noted, it could perform these computational operations extremely effectively.

But Oxford Ionics never made a major announcement that would give us a good excuse to describe its technology in more detail. The company was ultimately acquired by IonQ, a competitor in the trapped-ion space.

Now, IonQ is building on what it gained from Oxford Ionics, announcing a new record for two-qubit gates: greater than 99.99 percent fidelity, which corresponds to an error rate below 0.01 percent. That could be critical for the company, as a low error rate for hardware qubits means fewer of them are needed to get good performance from error-corrected qubits.
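Why lower physical error rates translate into fewer physical qubits is easiest to see with a generic error-correction scaling rule. The sketch below uses the common surface-code heuristic as a stand-in; it is not IonQ’s actual scheme, and the threshold and target values are illustrative:

```python
# Generic illustration (not IonQ's scheme): with the rough surface-code rule
# logical_error ~ 0.1 * (p / p_th)^((d+1)/2), a smaller physical error rate p
# allows a smaller code distance d, and so fewer physical qubits per logical qubit.
def distance_needed(p, p_th=1e-2, target=1e-9):
    d = 3
    while 0.1 * (p / p_th) ** ((d + 1) / 2) > target:
        d += 2                      # surface-code distances are odd
    return d

for p in (1e-3, 1e-4):              # 0.1% vs. 0.01% physical error rate
    d = distance_needed(p)
    print(p, d, 2 * d * d - 1)      # distance, ~physical qubits per logical qubit
```

Under those assumptions, dropping the physical error rate from 0.1 percent to 0.01 percent cuts the code distance from 15 to 7, shrinking the per-logical-qubit overhead by more than a factor of four.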

But the details of the two-qubit gates are perhaps more interesting than the error rate. Two-qubit gates require bringing the two qubits into close proximity, which often means moving them. That motion pumps a bit of energy into the system, raising the ions’ temperature and leaving them slightly more prone to errors. As a result, any movement of the ions is generally followed by cooling, in which lasers are used to bleed energy back out of the qubits.

This process, which involves two distinct cooling steps, is slow. So slow that as much as two-thirds of the time spent in operations involves the hardware waiting around while recently moved ions are cooled back down. The new IonQ announcement includes a description of a method for performing two-qubit gates that doesn’t require the ions to be fully cooled. This allows one of the two cooling steps to be skipped entirely. In fact, coupled with earlier work involving one-qubit gates, it raises the possibility that the entire machine could operate with its ions at a still very cold but slightly elevated temperature, avoiding all need for one of the two cooling steps.

That would shorten operation times and let researchers do more before the limit of a quantum system’s coherence is reached.
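Roughly how much time that saves depends on how the cooling budget splits between the two steps. A minimal sketch, assuming the two steps take comparable time (our simplification, not a reported figure):

```python
# Hedged estimate of the speedup from skipping one of two cooling steps.
cooling_fraction = 2 / 3            # share of operation time spent cooling (from the article)
other_fraction = 1 - cooling_fraction

time_after = other_fraction + cooling_fraction / 2   # one cooling step skipped
print(round(time_after, 2))         # ~0.67 of the original runtime
print(round(1 / time_after, 2))     # ~1.5x faster
```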

State of the art?

The last announcement comes from another trapped-ion company, Quantum Art. A couple of weeks back, it announced a collaboration with Nvidia that resulted in a more efficient compiler for operations on its hardware. On its own, this isn’t especially interesting. But it’s emblematic of a trend that’s worth noting, and it gives us an excuse to look at Quantum Art’s technology, which takes a distinct approach to boosting the efficiency of trapped-ion computation.

First, the trend: Nvidia’s interest in quantum computing. The company isn’t interested in the quantum aspects (at least not publicly); instead, it sees an opportunity to get further entrenched in high-performance computing. There are three areas where the computational capacity of GPUs can play a role here. One is small-scale modeling of quantum processors so that users can do initial testing of algorithms without committing to pay for access to the real thing. Another is what Quantum Art is announcing: using GPUs as part of a compiler chain to do all the computations needed to find more efficient ways of executing an algorithm on specific quantum hardware.

Finally, there’s a potential role in error correction. Error correction involves some indirect measurements of a handful of hardware qubits to determine the most likely state that a larger collection (called a logical qubit) is in. This requires modeling a quantum system in real time, which is quite difficult—hence the computational demands that Nvidia hopes to meet. Regardless of the precise role, there has been a steady flow of announcements much like Quantum Art’s: a partnership with Nvidia that will keep the company’s hardware involved if the quantum technology takes off.

In Quantum Art’s case, that technology is a bit unusual. The trapped-ion companies we’ve covered so far are all taking different routes to the same place: moving one or two ions into a location where operations can be performed and then executing one- or two-qubit gates. Quantum Art’s approach is to perform gates with much larger collections of ions. At the compiler level, it would be akin to figuring out which qubits need a specific operation performed, clustering them together, and doing it all at once. Obviously, there are potential efficiency gains here.

The challenge would normally be moving so many qubits around to create these clusters. But Quantum Art uses lasers to “pin” ions in a row so that each pinned ion isolates the ions on its right from those on its left. Each cluster can then be operated on separately. In between operations, the pins can be moved to new locations, creating different clusters for the next set of operations. (Quantum Art calls each cluster of ions a “core” and presents this as multicore quantum computing.)
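As a toy illustration of the pinning idea, here’s how a single chain might be split into independent clusters by choosing pin positions; the chain length and pin locations are invented, and the real hardware details are certainly more involved:

```python
# Toy sketch of "multicore" partitioning: pinned ions split a chain into
# clusters ("cores") that can be operated on separately, then repinned.
def cores_from_pins(num_ions, pinned):
    """Split ion indices 0..num_ions-1 into clusters separated by pinned ions."""
    cores, current = [], []
    for ion in range(num_ions):
        if ion in pinned:
            if current:
                cores.append(current)
            current = []
        else:
            current.append(ion)
    if current:
        cores.append(current)
    return cores

print(cores_from_pins(10, pinned={3, 7}))     # [[0, 1, 2], [4, 5, 6], [8, 9]]
print(cores_from_pins(10, pinned={2, 5, 8}))  # repinning yields new clusters
```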

At the moment, Quantum Art trails some of its competitors in terms of qubit count and in performing interesting demonstrations, and it’s not pledging to scale quite as fast. But the company’s founders are convinced that the complexity of doing so many individual operations and moving so many ions around will catch up with those competitors, while the added efficiency of multi-qubit gates will allow Quantum Art to scale better.

This is just a small sampling of all the announcements from this fall, but it should give you a sense of how rapidly the field is progressing—from technology demonstrations to identifying cases where quantum hardware has a real edge and exploring ways to sustain progress beyond those first successes.

John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.

Quantum computing tech keeps edging forward Read More »