Science

Do animals fall for optical illusions? It’s complicated.

A tale of two species

View from above of the apparatuses used for ring doves (A) and guppies (B). Credit: M. Santaca et al., 2025

The authors tested 38 ring doves and 19 guppies (of the “snakeskin cobra green” ornamental strain) for their experiments. The doves were placed in a testing cage with the bottom covered with an anti-slip wooden panel and a branch serving as a perch and starting point; the feeding station was at the opposite end of the cage. The guppies were tested in single tanks with a gravel bottom.

In both cases, the test subjects were presented with visual stimuli in the form of two white plastic cards. Sizes differed for the doves and the guppies, but each card showed an array of six black circles with a bit of food serving as the center “circle”: red millet seeds for the doves and commercial flake food for the guppies. The circles were smaller on one of the cards and larger on the other. The subjects were free to choose food from one of the cards, and the card with the unchosen food was removed promptly. If no choice was made after 15 minutes, the trial was null and the team tried again after a 15-minute interval.

The authors found that the guppies were indeed highly susceptible to the Ebbinghaus illusion, choosing food surrounded by smaller circles much more frequently, suggesting they perceived it as larger and hence more desirable. The results for ring doves were more mixed, however: some of the doves seemed to be susceptible while others were not, suggesting that their perceptual strategies are more local, detail-oriented, and less influenced by their surrounding context.

“The doves’ mixed responses suggest that individual experience or innate bias can strongly shape how an animal interprets illusions,” the authors concluded. “Just like in humans, where some people are strongly fooled by illusions and others hardly at all, animal perception is not uniform.”

The authors acknowledge that their study has limitations. For instance, guppies and ring doves diverged hundreds of millions of years ago and hence are phylogenetically distant, so the perceptual differences between them could be due not just to ecological pressures, but also to evolutionary traits gained or lost via natural selection. Future experiments involving more closely related species with different sensory environments would better isolate the role of ecological factors in animal perception.

Frontiers in Psychology, 2025. DOI: 10.3389/fpsyg.2025.1653695 (About DOIs).

Dead Ends is a fun, macabre medical history for kids


Flukes, flops, and failures

Ars chats with co-authors Lindsey Fitzharris and Adrian Teal about their delightful new children’s book.

In 1890, a German scientist named Robert Koch thought he’d invented a cure for tuberculosis, a substance derived from the infecting bacterium itself that he dubbed Tuberculin. His substance didn’t actually cure anyone, but it was eventually widely used as a diagnostic skin test. Koch’s successful failure is just one of the many colorful cases featured in Dead Ends! Flukes, Flops, and Failures that Sparked Medical Marvels, a new nonfiction illustrated children’s book by science historian Lindsey Fitzharris and her husband, cartoonist Adrian Teal.

A noted science communicator with a fondness for the medically macabre, Fitzharris published a biography of surgical pioneer Joseph Lister, The Butchering Art, in 2017—a great, if occasionally grisly, read. She followed up with 2022’s The Facemaker: A Visionary Surgeon’s Battle to Mend the Disfigured Soldiers of World War I, about a WWI surgeon named Harold Gillies who rebuilt the faces of injured soldiers.

And in 2020, she hosted a documentary for the Smithsonian Channel, The Curious Life and Death Of…, exploring famous deaths, ranging from drug lord Pablo Escobar to magician Harry Houdini. Fitzharris performed virtual autopsies, experimented with blood samples, interviewed witnesses, and conducted real-time demonstrations in hopes of gleaning fresh insights. For his part, Teal is a well-known caricaturist and illustrator, best known for his work on the British TV series Spitting Image. His work has also appeared in The Guardian and the Sunday Telegraph, among other outlets.

The couple decided to collaborate on children’s books as a way to combine their respective skills. Granted, “[The market for] children’s nonfiction is very difficult,” Fitzharris told Ars. “It doesn’t sell that well in general. It’s very difficult to get publishers on board with it. It’s such a shame because I really feel that there’s a hunger for it, especially when I see the kids picking up these books and loving it. There’s also just a need for it with the decline in literacy rates. We need to get people more engaged with these topics in ways that go beyond a 30-second clip on TikTok.”

Their first foray into the market was 2023’s Plague-Busters! Medicine’s Battles with History’s Deadliest Diseases, exploring “the ickiest illnesses that have infected humans and affected civilizations through the ages”—as well as the medical breakthroughs that came about to combat those diseases. Dead Ends is something of a sequel, focusing this time on historical diagnoses, experiments, and treatments that were useless at best, frequently harmful, yet eventually led to unexpected medical breakthroughs.

Failure is an option

The book opens with the story of Robert Liston, a 19th-century Scottish surgeon known as “the fastest knife in the West End,” because he could amputate a leg in less than three minutes. That kind of speed was desirable in a period before the advent of anesthesia, but sometimes Liston’s rapid-fire approach to surgery backfired. One story (possibly apocryphal) holds that Liston accidentally cut off the finger of his assistant in the operating theater as he was switching blades, then accidentally cut the coat of a spectator, who died of fright. The patient and assistant also died, so that operation is now often jokingly described as the only one with a 300 percent mortality rate, per Fitzharris.

Liston is the ideal poster child for the book’s theme of celebrating the role of failure in scientific progress. “I’ve always felt that failure is something we don’t talk about enough in the history of science and medicine,” said Fitzharris. “For everything that’s succeeded there’s hundreds, if not thousands, of things that’s failed. I think it’s a great concept for children. If you think that you’ve made mistakes, look at these great minds from the past. They’ve made some real whoppers. You are in good company. And failure is essential to succeeding, especially in science and medicine.”

“During the COVID pandemic, a lot of people were uncomfortable with the fact that some of the advice would change, but to me that was a comfort because that’s what you want to see scientists and doctors doing,” she continued. “They’re learning more about the virus, they’re changing their advice. They’re adapting. I think that this book is a good reminder of what the scientific process involves.”

The details of Liston’s most infamous case might be horrifying, but as Teal observes, “Comedy equals tragedy plus time.” One of the reasons so many of his patients died was because this was before the broad acceptance of germ theory and Joseph Lister’s pioneering work on antiseptic surgery. Swashbuckling surgeons like Liston prided themselves on operating in coats stiffened with blood—the sign of a busy and hence successful surgeon. Frederick Treves once observed that in the operating room, “cleanliness was out of place. It was considered to be finicking and affected. An executioner might as well manicure his nails before chopping off a head.”

“There’s always a lot of initial resistance to new ideas, even in science and medicine,” said Teal. “A lot of what we talk about is paradigm shifts and the difficulty of achieving [such a shift] when people are entrenched in their thinking. Galen was a hugely influential Roman doctor and got a lot of stuff right, but also got a lot of stuff wrong. People were clinging onto that stuff for centuries. You have misunderstanding compounded by misunderstanding, century after century, until somebody finally comes along and says, ‘Hang on a minute, this is all wrong.’”

You know… for kids

Writing for children proved to be a very different experience for Fitzharris after two adult-skewed science history books. “I initially thought children’s writing would be easy,” she confessed. “But it’s challenging to take these high-level concepts and complex stories about past medical movements and distill them for children in an entertaining and fun way.” She credits Teal—a self-described “man-child”—for taking her drafts and making them more child-friendly.

Teal’s clever, slightly macabre illustrations also helped keep the book accessible to its target audience, appealing to children’s more ghoulish side. “There’s a lot of gruesome stuff in this book,” Teal said. “Obviously it’s for kids, so you don’t want to go over the top, but equally, you don’t want to shy away from those details. I always say kids love it because kids are horrible, in the best possible way. I think adults sometimes worry too much about kids’ sensibilities. You can be a lot more gruesome than you think you can.”

The pair did omit some darker subject matter, such as the history of frontal lobotomies, notably the work of a neurologist named Walter Freeman, who operated an actual “lobotomobile.” For the authors, it was all about striking the right balance. “How much do you give to the kids to keep them engaged and interested, but not for it to be scary?” said Fitzharris. “We don’t want to turn people off from science and medicine. We want to celebrate the greatness of what we’ve achieved scientifically and medically. But we also don’t want to cover up the bad bits because that is part of the process, and it needs to be acknowledged.”

Sometimes Teal felt it just wasn’t necessary to illustrate certain gruesome details in the text—such as their discussion of the infamous case of Phineas Gage. Gage was a railroad construction foreman. In 1848, he was overseeing a rock blasting team when an explosion drove a three-foot tamping iron through his skull. “There’s a horrible moment when [Gage] leans forward and part of his brain drops out,” said Teal. “I’m not going to draw that, and I don’t need to, because it’s explicit in the text. If we’ve done a good enough job of writing something, that will put a mental picture in someone’s head.”

Miraculously, Gage survived, although there were extreme changes in his behavior and personality, and his injuries eventually caused epileptic seizures, one of which killed him in 1860. Gage became the index case for personality changes due to frontal lobe damage, and 50 years after his death, the case inspired neurologist David Ferrier to create brain maps based on his research into whether certain areas of the brain controlled specific cognitive functions.

“Sometimes it takes a beat before we get there,” said Fitzharris. “Science builds upon ideas, and it can take time. In the age of looking for instantaneous solutions, I think it’s important to remember that research needs to allow itself to do what it needs to do. It shouldn’t just be guided by an end goal. Some of the best discoveries that were made had no end goal in mind. And if you read Dead Ends, you’re going to be very happy that you live in 2025. Medically speaking, this is the best time. That’s really what Dead Ends is about. It’s a celebration of how far we’ve come.”

Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.

NASA’s next Moonship reaches last stop before launch pad

The Orion spacecraft, which will fly four people around the Moon, arrived inside the cavernous Vehicle Assembly Building at NASA’s Kennedy Space Center in Florida late Thursday night, ready to be stacked on top of its rocket for launch early next year.

The late-night transfer covered about 6 miles (10 kilometers) from one facility to another at the Florida spaceport. NASA and its contractors are continuing preparations for the Artemis II mission after the White House approved the program as an exception to work through the ongoing government shutdown, which began on October 1.

The sustained work could set up Artemis II for a launch opportunity as soon as February 5 of next year. Astronauts Reid Wiseman, Victor Glover, Christina Koch, and Jeremy Hansen will be the first humans to fly on the Orion spacecraft, a vehicle that has been in development for nearly two decades. The Artemis II crew will make history on their 10-day flight by becoming the first people to travel to the vicinity of the Moon since 1972.

Where things stand

The Orion spacecraft, developed by Lockheed Martin, has made several stops at Kennedy over the last few months since leaving its factory in May.

First, the capsule moved to a fueling facility, where technicians filled it with hydrazine and nitrogen tetroxide propellants, which will feed Orion’s main engine and maneuvering thrusters on the flight to the Moon and back. In the same facility, teams loaded high-pressure helium and ammonia coolant into Orion’s propulsion and thermal control systems.

The next stop was a nearby building where the Launch Abort System was installed on the Orion spacecraft. The tower-like abort system would pull the capsule away from its rocket in the event of a launch failure. Orion stands roughly 67 feet (20 meters) tall with its service module, crew module, and abort tower integrated together.

Teams at Kennedy also installed four ogive panels to serve as an aerodynamic shield over the Orion crew capsule during the first few minutes of launch.

The Orion spacecraft, with its Launch Abort System and ogive panels installed, is seen last month inside the Launch Abort System Facility at Kennedy Space Center, Florida. Credit: NASA/Frank Michaux

It was then time to move Orion to the Vehicle Assembly Building (VAB), where a separate team has worked all year to stack the elements of NASA’s Space Launch System rocket. In the coming days, cranes will lift the spacecraft, weighing 78,000 pounds (35 metric tons), dozens of stories above the VAB’s center aisle, then up and over the transom into the building’s northeast high bay to be lowered atop the SLS heavy-lift rocket.

Lead poisoning has been a feature of our evolution


A recent study found lead in teeth from 2 million-year-old hominin fossils.

Our hominid ancestors faced a Pleistocene world full of dangers—and apparently one of those dangers was lead poisoning.

Lead exposure sounds like a modern problem, at least if you define “modern” the way a paleoanthropologist might: a time that started a few thousand years ago with ancient Roman silver smelting and lead pipes. According to a recent study, however, lead is a much more ancient nemesis, one that predates not just the Romans but the existence of our genus Homo. Paleoanthropologist Renaud Joannes-Boyau of Australia’s Southern Cross University and his colleagues found evidence of exposure to dangerous amounts of lead in the teeth of fossil apes and hominins dating back almost 2 million years. And somewhat controversially, they suggest that the toxic element’s pervasiveness may have helped shape our evolutionary history.

The skull of an early hominid. Credit: Einsamer Schütze / Wikimedia

The Romans didn’t invent lead poisoning

Joannes-Boyau and his colleagues took tiny samples of preserved enamel and dentin from the teeth of 51 fossils. In most of those teeth, the paleoanthropologists found evidence that these apes and hominins had been exposed to lead—sometimes in dangerous quantities—fairly often during their early years.

Tooth enamel forms in thin layers, a little like tree rings, during the first six or so years of a person’s life. The teeth in your mouth right now (and of which you are now uncomfortably aware; you’re welcome) are a chemical and physical record of your childhood health—including, perhaps, whether you liked to snack on lead paint chips. Bands of lead-tainted tooth enamel suggest that a person had a lot of lead in their bloodstream during the year that layer of enamel was forming (in this case, “a lot” means an amount measurable in parts per million).

In 71 percent of the hominin teeth that Joannes-Boyau and his colleagues sampled, dark bands of lead in the tooth enamel showed “clear signs of episodic lead exposure” during the crucial early childhood years. Those included teeth from 100,000-year-old members of our own species found in China and 250,000-year-old French Neanderthals. They also included much earlier hominins who lived between 1 and 2 million years ago in South Africa: early members of our genus Homo, along with our relatives Australopithecus africanus and Paranthropus robustus. Lead exposure, it turns out, is a very ancient problem.

Living in a dangerous world

This study isn’t the first evidence that ancient hominins dealt with lead in their environments. Two Neanderthals living 250,000 years ago in France experienced lead exposure as young children, according to a 2018 study. At the time, they were the oldest known examples of lead exposure (and they’re included in Joannes-Boyau and his colleagues’ recent study).

Until a few thousand years ago, no one was smelting silver, plumbing bathhouses, or releasing lead fumes in car exhaust. So how were our hominin ancestors exposed to the toxic element? Another study, published in 2015, showed that the Spanish caves occupied by other groups of Neanderthals contained enough heavy metals, including lead, to “meet the present-day standards of ‘contaminated soil.’”

Today, we mostly think of lead in terms of human-made pollution, so it’s easy to forget that it’s also found naturally in bedrock and soil. If that weren’t the case, archaeologists couldn’t use lead isotope ratios to tell where certain artifacts were made. And some places—and some types of rock—have higher lead concentrations than others. Several common minerals contain lead compounds, including galena or lead sulfide. And the kind of lead exposure documented in Joannes-Boyau and his colleagues’ study would have happened at an age when little hominins were very prone to putting rocks, cave dirt, and other random objects in their mouths.

Some of the fossils from the Queque cave system in China, which included a 1.8 million-year-old extinct gorilla-like ape called Gigantopithecus blacki, had lead levels higher than 50 parts per million, which Joannes-Boyau and his colleagues describe as “a substantial level of lead that could have triggered some developmental, health, and perhaps social impairments.”

Even for ancient hominins who weren’t living in caves full of lead-rich minerals, there were other routes of exposure: wildfires and volcanic eruptions can release lead particles into the air, and erosion or flooding can sweep buried lead-rich rock or sediment into water sources. If you’re an Australopithecine living upstream of a lead-rich mica outcropping, for example, erosion might sprinkle poison into your drinking water—or the drinking water of the gazelle you eat or the root system of the bush you get those tasty berries from…

Our world is full of poisons. Modern humans may have made a habit of digging them up and pumping them into the air, but they’ve always been lying in wait for the unwary.

Cubic crystals of the lead-sulfide mineral galena.

Digging into the details

Joannes-Boyau and his colleagues sampled the teeth of several hominin species from South Africa, all unearthed from cave systems just a few kilometers apart. All of them walked the area known as the Cradle of Humankind within a few hundred thousand years of each other (at most), and they would have shared a very similar environment. But they also would have had very different diets and ways of life, and that’s reflected in their wildly different exposures to lead.

A. africanus had the highest exposure levels, while P. robustus had signs of infrequent, very slight exposures (with Homo somewhere in between the two). Joannes-Boyau and his colleagues chalk the difference up to the species’ different diets and ecological niches.

“The different patterns of lead exposure could suggest that P. robustus lead bands were the result of acute exposure (e.g., wild forest fire),” Joannes-Boyau and his colleagues wrote, “while for the other two species, known to have a more varied diet, lead bands may be due to more frequent, seasonal, and higher lead concentration through bioaccumulation processes in the food chain.”

Did lead exposure affect our evolution?

Given their evidence that humans and their ancestors have regularly been exposed to lead, the team looked into whether this might have influenced human evolution. In doing so, they focused on a gene called NOVA1, which has been linked to both brain development and the response to lead exposure. The results were quite a bit short of decisive; you can think of things as remaining within the realm of a provocative hypothesis.

The NOVA1 gene encodes a protein that influences the processing of messenger RNAs, allowing it to control the production of closely related variants of a single gene. It’s notable for a number of reasons. One is its role in brain development; mice without a working copy of NOVA1 die shortly after birth due to defects in muscle control. Its activity is also altered following exposure to lead.

But perhaps its most interesting feature is that modern humans have a version of the gene that differs by a single amino acid from the version found in all other primates, including our closest relatives, the Denisovans and Neanderthals. This raises the prospect that the difference is significant from an evolutionary perspective. Altering the mouse version so that it is identical to the one found in modern humans does alter the vocal behavior of these mice.

But work with human stem cells has produced mixed results. One group, led by one of the researchers involved in this work, suggested that stem cells carrying the ancestral form of the protein behaved differently from those carrying the modern human version. But others have been unable to replicate those results.

Regardless of that bit of confusion, the researchers used the same system, culturing stem cells with the modern human and ancestral versions of the protein. These clusters of cells (called organoids) were grown in media containing two different concentrations of lead, and changes in gene activity and protein production were examined. The researchers found changes, but the significance isn’t entirely clear. There were differences between the cells with the two versions of the gene, even without any lead present. Adding lead could produce additional changes, but some of those were partially reversed if more lead was added. And none of those changes were clearly related either to a response to lead or the developmental defects it can produce.

The relevance of these changes isn’t obvious, either, as stem cell cultures tend to reflect early neural development while the lead exposure found in the fossilized remains is due to exposure during the first few years of life.

So there isn’t any clear evidence that the variant found in modern humans protects individuals who are exposed to lead, much less that it was selected by evolution for that function. And given the widespread exposure seen in this work, it seems like all of our relatives—including some we know modern humans interbred with—would also have benefited from this variant if it was protective.

Science Advances, 2025. DOI: 10.1126/sciadv.adr1524 (About DOIs).

Kiona is a freelance science journalist and resident archaeology nerd at Ars Technica.

SpaceX has plans to launch Falcon Heavy from California—if anyone wants it to

There’s more to the changes at Vandenberg than launching additional rockets. The authorization gives SpaceX the green light to redevelop Space Launch Complex 6 (SLC-6) to support Falcon 9 and Falcon Heavy missions. SpaceX plans to demolish unneeded structures at SLC-6 (pronounced “Slick 6”) and construct two new landing pads for Falcon boosters on a bluff overlooking the Pacific just south of the pad.

SpaceX currently operates from a single pad at Vandenberg—Space Launch Complex 4-East (SLC-4E)—a few miles north of the SLC-6 location. The SLC-4E location is not configured to launch the Falcon Heavy, an uprated rocket with three Falcon 9 boosters bolted together.

SLC-6, cocooned by hills on three sides and flanked by the ocean to the west, is no stranger to big rockets. It was first developed for the Air Force’s Manned Orbiting Laboratory program in the 1960s, when the military wanted to put a mini-space station into orbit for astronauts to spy on the Soviet Union. Crews readied the complex to launch military astronauts on top of Titan rockets, but the Pentagon canceled the program in 1969 before anything actually launched from SLC-6.

NASA and the Air Force then modified SLC-6 to launch space shuttles. The space shuttle Enterprise was stacked vertically at SLC-6 for fit checks in 1985, but the Air Force abandoned the Vandenberg-based shuttle program after the Challenger accident in 1986. The launch facility sat mostly dormant for nearly two decades until Boeing, and then United Launch Alliance, took over SLC-6 and began launching Delta IV rockets there in 2006.

The space shuttle Enterprise stands vertically at Space Launch Complex-6 at Vandenberg. NASA used the shuttle for fit checks at the pad, but it never launched from California. Credit: NASA

ULA launched its last Delta IV Heavy rocket from California in 2022, leaving the future of SLC-6 in question. ULA’s new rocket, the Vulcan, will launch from a different pad at Vandenberg. Space Force officials selected SpaceX in 2023 to take over the pad and prepare it to launch the Falcon Heavy, which has the lift capacity to carry the military’s most massive satellites into orbit.

No big rush

Progress at SLC-6 has been slow. It took nearly a year to prepare the Environmental Impact Statement. In reality, there’s no big rush to bring SLC-6 online. SpaceX has no Falcon Heavy missions from Vandenberg in its contract backlog, but the company is part of the Pentagon’s stable of launch providers. To qualify as a member of the club, SpaceX must have the capability to launch the Space Force’s heaviest missions from the military’s spaceports at Vandenberg and Cape Canaveral, Florida.

Antarctica is starting to look a lot like Greenland—and that isn’t good


Global warming is awakening sleeping giants of ice at the South Pole.

A view of the Shoesmith Glacier on Horseshoe Island on Feb. 21. Credit: Sebnem Coskun/Anadolu via Getty Images

As recently as the 1990s, when the Greenland Ice Sheet and the rest of the Arctic region were measurably thawing under the climatic blowtorch of human-caused global warming, most of Antarctica’s vast ice cap still seemed securely frozen.

But not anymore. Physics is physics. As the planet heats up, more ice will melt at both poles, and recent research shows that Antarctica’s ice caps, glaciers, and floating ice shelves, as well as its sea ice, are just as vulnerable to warming as the Arctic.

Both satellite data and field observations in Antarctica reveal alarming signs of a Greenland-like meltdown, with increased surface melting of the ice fields, faster-moving glaciers, and dwindling sea ice. Some scientists are sounding the alarm, warning that the rapid “Greenlandification” of Antarctica will have serious consequences, including an accelerated rise in sea levels and significant shifts in rainfall and drought patterns.

The Antarctic ice sheet covers about 5.4 million square miles, an area larger than Europe. On average, it is more than 1 mile thick and holds 61 percent of all the fresh water on Earth, enough to raise the global average sea level by about 190 feet if it all melts. The smaller, western portion of the ice sheet is especially vulnerable, with enough ice to raise sea level more than 10 feet.

Thirty years ago, undergraduate students were told that the Antarctic ice sheets were going to be stable and that they weren’t going to melt much, said Ruth Mottram, an ice researcher with the Danish Meteorological Institute and lead author of a new paper in Nature Geoscience that examined the accelerating ice melt and other similarities between changes in northern and southern polar regions.

“We thought it was just going to take ages for any kind of climate impacts to be seen in Antarctica. And that’s really not true,” said Mottram, adding that some of the earliest warnings came from scientists who saw collapsing ice shelves, retreating glaciers, and increased surface melting in satellite data.

One of the early warning signs was the rapid collapse of an ice shelf along the narrow Antarctic Peninsula, which extends northward toward the tip of South America, said Helen Amanda Fricker, a geophysics professor with the Scripps Institution of Oceanography Polar Center at the University of California, San Diego.

Stranded remnants of sea ice along the Antarctic Peninsula are a reminder that much of the ice on the frozen continent around the South Pole is just as vulnerable to global warming as Arctic ice, where a long-term meltdown is well underway. Credit: Bob Berwyn/Inside Climate News

After a string of record-warm summers riddled the floating, Rhode Island-sized Larsen B ice shelf with cracks and meltwater ponds, it crumbled almost overnight. The thick, ancient ice dam was gone, and the seven major outlet glaciers behind it accelerated toward the ocean, raising sea levels as their ice melted.

“The Larsen B ice shelf collapse in 2002 was a staggering event in our community,” said Fricker, who was not an author of the new paper. “We just couldn’t believe the pace at which it happened, within six weeks. Basically, the ice shelves are there and then, boom, boom, boom, a series of melt streams and melt ponds. And then the whole thing collapsed, smattered into smithereens.”

Glaciologists never thought that events would happen that quickly in Antarctica, she said.

Same physics, same changes

Fricker said glaciologists thought of changes in Antarctica on millennial timescales, but the ice shelf collapse showed that extreme warming can lead to much more rapid change.

Current research focuses on the edges of Antarctica, where floating ice shelves and relatively narrow outlet glaciers slow the flow of the ice cap toward the sea. She described the Antarctic Ice Sheet as a giant ice reservoir contained by a series of dams.

“If humans had built those containment structures,” she said, “we would think that they weren’t very adequate. We are relying on those dams to hold back all of that ice, but the dams are weakening all around Antarctica and releasing more ice into the ocean.”


A comparison of the average concentration of Antarctic sea ice.

Credit: NASA Earth Observatory


The amount of ice that’s entered the ocean has increased fourfold since the 1990s, she said, and “We’re on the cusp of it becoming a really big number… because at some point, there’s no stopping it anymore.”

The Antarctic Ice Sheet is often divided into three sectors: the East Antarctic Ice Sheet, the largest and thickest; the West Antarctic Ice Sheet; and the Antarctic Peninsula, which is deemed the most vulnerable to thawing and melting.

Mottram, the new paper’s lead author, said a 2022 heatwave that penetrated to the coldest interior part of the East Antarctic Ice Sheet may be another sign that the continent is not as isolated from the rest of the global climate system as once thought. The extraordinary 2022 heatwave was driven by an atmospheric river, or a concentrated stream of moisture-laden air. Ongoing research “shows that there’s been an increase in the number of atmospheric rivers and an increase in their intensity,” she said.

Antarctica is also encircled by a powerful circumpolar ocean current that has prevented the Southern Ocean from warming as quickly as other ocean regions. But recent climate models and observations show the buffer is breaking down and that relatively warmer waters are starting to reach the base of the ice shelves, she said.

New maps detailing winds in the region show that “swirls of air from higher latitudes are dragging in all the time, so it’s not nearly as isolated as we were always told when we were students,” she said.

Ice researcher Eric Rignot, an Earth system science professor at the University of California, Irvine, who did not contribute to the new paper, said via email that recent research on Antarctica’s floating ice shelves emphasizes the importance of how the oceans and ice interact, a process that wasn’t studied very closely in early Greenland research. And Greenland shows what will happen to Antarctic glaciers in a warmer climate with more surface melt and more intense ice-ocean interactions, he added.

“We learn from both but stating that one is becoming the other is an oversimplification,” he said. “There is no new physics in Greenland that does not apply to Antarctica and vice versa.”

Rignot said the analogy between the two regions also partly breaks down because Greenland is warming up at two to three times the global average, “which has triggered a slowing of the jet stream,” with bigger wobbles and “weird weather patterns” in the Northern Hemisphere.

Antarctica is warming slightly less than the global average rate, according to a 2025 study, and the Southern Hemisphere jet stream is strengthening and tightening toward the South Pole, “behaving completely opposite,” he said.

Mottram said her new paper aims to help people understand that Antarctica is not as remote or isolated as often portrayed, and that what happens there will affect the rest of the global climate system.

“It’s not just this place far away that nobody goes to and nobody understands,” she said. “We actually understand quite a lot of what’s going on there. And so I also hope that it drives more urgency to decarbonize, because it’s very clear that the only way we’re going to get out of this problem is bringing our greenhouse gases down as much as possible, as soon as possible.”

This story originally appeared on Inside Climate News.


Antarctica is starting to look a lot like Greenland—and that isn’t good


Rice weevil on a grain of rice wins 2025 Nikon Small World contest

A stunning image of a rice weevil on a single grain of rice has won the 2025 Nikon Small World photomicrography contest, yielding valuable insight into the structure and behavior of—and providing a fresh perspective on—this well-known agricultural pest. The image was taken by Zhang You of Yunnan, China. Another of You’s photographs placed 15th in this year’s contest.

“It pays to dive deep into entomology: understanding insects’ behaviors and mastering lighting,” You said in a statement. “A standout work blends artistry with scientific rigor, capturing the very essence, energy, and spirit of these creatures.”

There was an element of luck in creating his winning image, too. “I had observed rice weevils in grains before, but never one with its wings spread,” You said. “This one was naturally preserved on a windowsill, perhaps in a final attempt to escape. Its tiny size makes manually preparing spread-wing specimens extremely difficult, so encountering it was both serendipitous and inspiring.”

Nikon’s annual contest was founded in 1974 “to showcase the beauty and complexity of things seen through the light microscope.” Photomicrography involves attaching a camera to a microscope (either an optical microscope or an electron microscope) so that the user can take photographs of objects at very high resolutions. British physiologist Richard Hill Norris was one of the first to use it for his studies of blood cells in 1850, and the method has increasingly been highlighted as art since the 1970s. There have been many groundbreaking technological advances in the ensuing decades, particularly with the advent of digital imaging methods.



Starship’s elementary era ends today with mega-rocket’s 11th test flight

Future flights of Starship will end with returns to Starbase, where the launch tower will try to catch the vehicle coming home from space, similar to the way SpaceX has shown it can recover the Super Heavy booster. A catch attempt with Starship is still at least a couple of flights away.

In preparation for future returns to Starbase, the ship on Flight 11 will perform a “dynamic banking maneuver” and test subsonic guidance algorithms prior to its final engine burn to brake for splashdown. If all goes according to plan, the flight will end with a controlled water landing in the Indian Ocean approximately 66 minutes after liftoff.

Turning point

Monday’s test flight will be the last Starship launch of the year as SpaceX readies a new generation of the rocket, called Version 3, for its debut sometime in early 2026. The new version of the rocket will fly with upgraded Raptor engines and larger propellant tanks and have the capability for refueling in low-Earth orbit.

Starship Version 3 will also inaugurate SpaceX’s second launch pad at Starbase, which has several improvements over the existing site, including a flame trench to redirect engine exhaust away from the pad. The flame trench is a common feature of many launch pads, but all of the Starship flights so far have used an elevated launch mount, or stool, over a water-cooled flame deflector.

The current launch complex is expected to be modified to accommodate future Starship V3s, giving the company two pads to support a higher flight rate.

NASA is counting on a higher flight rate for Starship next year to move closer to fulfilling SpaceX’s contract to provide a human-rated lander to the agency’s Artemis lunar program. SpaceX has contracts worth more than $4 billion to develop a derivative of Starship to land NASA astronauts on the Moon.

But much of SpaceX’s progress toward a lunar landing hinges on launching numerous Starships—perhaps a dozen or more—in a matter of a few weeks or months. SpaceX is activating the second launch pad in Texas and building several launch towers and a new factory in Florida to make this possible.

Apart from recovering and reusing Starship itself, the program’s most pressing near-term hurdle is the demonstration of in-orbit refueling, a prerequisite for any future Starship voyages to the Moon or Mars. This first refueling test could happen next year but will require Starship V3 to have a smoother introduction than Starship V2, which is retiring after Flight 11 with, at best, a 40 percent success rate.



Rocket Report: Bezos’ firm will package satellites for launch; Starship on deck


The long, winding road for Franklin Chang-Diaz’s plasma rocket engine takes another turn.

Blue Origin’s second New Glenn booster left its factory this week for a road trip to the company’s launch pad a few miles away. Credit: Blue Origin

Welcome to Edition 8.14 of the Rocket Report! We’re now more than a week into a federal government shutdown, but there’s been little effect on the space industry. Military space operations are continuing unabated, and NASA continues preparations at Kennedy Space Center, Florida, for the launch of the Artemis II mission around the Moon early next year. The International Space Station is still flying with a crew of seven in low-Earth orbit, and NASA’s fleet of spacecraft exploring the cosmos remains active. What’s more, so much of what the nation does in space is now done by commercial companies largely (but not completely) immune from the pitfalls of politics. But the effect of the shutdown on troops and federal employees shouldn’t be overlooked. They will soon miss their first paychecks unless political leaders reach an agreement to end the stalemate.

As always, we welcome reader submissions. If you don’t want to miss an issue, please subscribe using the box below (the form will not appear on AMP-enabled versions of the site). Each report will include information on small-, medium-, and heavy-lift rockets, as well as a quick look ahead at the next three launches on the calendar.

Danger from dead rockets. A new listing of the 50 most concerning pieces of space debris in low-Earth orbit is dominated by relics more than a quarter-century old, primarily dead rockets left to hurtle through space at the end of their missions, Ars reports. “The things left before 2000 are still the majority of the problem,” said Darren McKnight, lead author of a paper presented October 3 at the International Astronautical Congress in Sydney. “Seventy-six percent of the objects in the top 50 were deposited last century, and 88 percent of the objects are rocket bodies. That’s important to note, especially with some disturbing trends right now.”

Littering in LEO … The disturbing trends mainly revolve around China’s actions in low-Earth orbit. “The bad news is, since January 1, 2024, we’ve had 26 rocket bodies abandoned in low-Earth orbit that will stay in orbit for more than 25 years,” McKnight told Ars. China is responsible for leaving behind 21 of those 26 rockets. Overall, Russia and the Soviet Union lead the pack with 34 objects listed in McKnight’s Top 50, followed by China with 10, the United States with three, Europe with two, and Japan with one. Russia’s SL-16 and SL-8 rockets are the worst offenders, combining to take 30 of the Top 50 slots. An impact with even a modestly sized object at orbital velocity would create countless pieces of debris, potentially triggering a cascading series of additional collisions clogging LEO with more and more space junk, a scenario called the Kessler Syndrome.


New Shepard flies again. Blue Origin, Jeff Bezos’ space company, launched its sixth crewed New Shepard flight of the year Wednesday as the company works to increase the vehicle’s flight rate, Space News reports. This was the 36th flight of Blue Origin’s suborbital New Shepard rocket. The passengers were Jeff Elgin, Danna Karagussova, Clint Kelly III, Will Lewis, Aaron Newman, and Vitalii Ostrovsky. Blue Origin said it has now flown 86 humans (80 individuals) into space. The New Shepard booster returned to a pinpoint propulsive landing, and the capsule parachuted into the desert a few miles from the launch site near Van Horn, Texas.

Two-month turnaround … This flight continued Blue Origin’s trend of launching New Shepard about once per month. The company has two capsules and two boosters in its active inventory, and each vehicle has flown about once every two months this year. Blue Origin currently has command of the space tourism and suborbital research market as its main competitor in this sector, Virgin Galactic, remains grounded while it builds a next-generation rocket plane. (submitted by EllPeaTea)

NASA still interested in former astronaut’s rocket engine. NASA has awarded the Ad Astra Rocket Company a $4 million, two-year contract for continued development of the company’s Variable Specific Impulse Magnetoplasma Rocket (VASIMR) concept, Aviation Week & Space Technology reports. Ad Astra, founded by former NASA astronaut Franklin Chang-Diaz, claims the engine could propel human explorers to Mars in as little as 45 days, using a nuclear power source rather than solar power. The new contract will enable federal funding to support development of the engine’s radio frequency, superconducting magnet, and structural exoskeleton subsystems.

Slow going … Houston-based Ad Astra said in a press release that it sees the high-power plasma engine as “nearing flight readiness.” We’ve heard this before. The VASIMR engine has been in development for decades now, beset by a lack of stable funding and the technical hurdles inherent in designing and testing such demanding technology. For example, Ad Astra once planned a critical 100-hour, 100-kilowatt ground test of the VASIMR engine in 2018. The test still hasn’t happened. Engineers discovered a core component of the engine tended to overheat as power levels approached 100 kilowatts, forcing a redesign that set the program back by at least several years. Now, Ad Astra says it is ready to build and test a pair of 150-kilowatt engines, one of which is intended to fly in space at the end of the decade.

Gilmour eyes return to flight next year. Australian rocket and satellite startup Gilmour Space Technologies is looking to return to the launch pad next year after the first attempt at an orbital flight failed over the summer, Aviation Week & Space Technology reports. “We are well capitalized. We are going to be launching again next year,” Adam Gilmour, the company’s CEO, said October 3 at the International Astronautical Congress in Sydney.

What happened? … Gilmour didn’t provide many details about the cause of the launch failure in July, other than to say it appeared to be something the company didn’t test for ahead of the flight. The Eris rocket flew for just 14 seconds before losing control and crashing a short distance from the launch pad in the Australian state of Queensland. If there’s any silver lining, Gilmour said the failure didn’t damage the launch pad, and the rocket’s use of a novel hybrid propulsion system limited the destructive power of the blast when it struck the ground.

Stoke Space’s impressive funding haul. Stoke Space announced a significant capital raise on Wednesday: $510 million in Series D funding. The new financing doubles the total capital raised by Stoke Space, founded in 2020, to $990 million, Ars reports. The infusion of money will provide the company with “the runway to complete development” of the Nova rocket and demonstrate its capability through its first flights, said Andy Lapsa, the company’s co-founder and chief executive, in a news release announcing the new funding.

A futuristic design … Stoke is working toward a 2026 launch of the medium-lift Nova rocket. The rocket’s innovative design is intended to be fully reusable from the payload fairing on down, with a regeneratively cooled heat shield on the vehicle’s second stage. In fully reusable mode, Nova will have a payload capacity of 3 metric tons to low-Earth orbit, and up to 7 tons in fully expendable mode. Stoke is building a launch pad for the Nova rocket at Cape Canaveral Space Force Station, Florida.

SpaceX took an unusual break from launching. SpaceX launched its first Falcon 9 rocket from Florida in 12 days during the predawn hours of Tuesday morning, Spaceflight Now reports. The gap was driven by a run of persistent, daily storms in Central Florida and over the Atlantic Ocean, including hurricanes that prevented deployment of SpaceX’s drone ships to support booster landings. The break ended with the launch of 28 more Starlink broadband satellites. SpaceX launched three Starlink missions in the interim from Vandenberg Space Force Base, California.

Weather still an issue … Weather conditions on Florida’s Space Coast are often volatile, particularly in the evenings during summer and early autumn. SpaceX’s next launch from Florida was supposed to take off Thursday evening, but officials pushed it back to no earlier than Saturday due to a poor weather forecast over the next two days. Weather still gets a vote in determining whether a rocket lifts off or doesn’t, despite SpaceX’s advancements in launch efficiency and the Space Force’s improved weather monitoring capabilities at Cape Canaveral.

ArianeGroup chief departs for train maker. Current ArianeGroup CEO Martin Sion has been named the new head of French train maker Alstom. He will officially take up the role in April 2026, European Spaceflight reports. Sion assumed the role as ArianeGroup’s chief executive in 2023, replacing the former CEO who left the company after delays in the debut of its main product: the Ariane 6 rocket. Sion’s appointment was announced by Alstom, but ArianeGroup has not made any official statement on the matter.

Under pressure … The change in ArianeGroup’s leadership comes as the company ramps up production and increases the launch cadence of the Ariane 6 rocket, which has now flown three times, with a fourth launch due next month. ArianeGroup’s subsidiary, Arianespace, seeks to increase the Ariane 6’s launch cadence to 10 missions per year by 2029. ArianeGroup and its suppliers will need to drastically improve factory throughput to reach this goal.

New Glenn emerges from factory. Blue Origin rolled the first stage of its massive New Glenn rocket out of its hangar on Wednesday morning in Florida, kicking off the final phase of the campaign to launch the heavy-lift vehicle for the second time, Ars reports. In sharing video of the rollout to Launch Complex-36 online Wednesday, the company did not provide a launch target for the mission, which will send two small Mars-bound payloads into orbit: ESCAPADE, a pair of identical spacecraft that will study the solar wind at Mars. However, sources told Ars that on the current timeline, Blue Origin is targeting a launch window of November 9 to November 11. This assumes pre-launch activities, including a static-fire test of the first stage, go well.

Recovery or bust? Blue Origin has a lot riding on this booster, named “Never Tell Me The Odds,” which it will seek to recover and reuse. Despite the name of the booster, the company is quietly confident that it will successfully land the first stage on a drone ship named Jacklyn. Internally, engineers at Blue Origin believe there is about a 75 percent chance of success. The first booster malfunctioned before landing on the inaugural New Glenn test flight in January. Company officials are betting big on recovering the booster this time, with plans to reuse it early next year to launch Blue’s first lunar lander to the Moon.

SpaceX gets bulk of this year’s military launch orders. Around this time each year, the US Space Force convenes a Mission Assignment Board to dole out contracts to launch the nation’s most critical national security satellites. The military announced this year’s launch orders Friday, and SpaceX was the big winner, Ars reports. Space Systems Command, the unit responsible for awarding military launch contracts, selected SpaceX to launch five of the seven missions up for assignment this year. United Launch Alliance (ULA), a 50-50 joint venture between Boeing and Lockheed Martin, won contracts for the other two. These missions for the Space Force and the National Reconnaissance Office are still at least a couple of years away from flying.

Vulcan getting more expensive … A closer examination of this year’s National Security Space Launch contracts reveals some interesting things. The Space Force is paying SpaceX $714 million for the five launches awarded Friday, for an average of roughly $143 million per mission. ULA will receive $428 million for two missions, or $214 million for each launch. That’s about 50 percent more expensive than SpaceX’s price per mission. This is in line with the prices the Space Force paid SpaceX and ULA for last year’s contracts. However, look back a little further and you’ll find ULA’s prices for military launches have, for some reason, increased significantly over the last few years. In late 2023, the Space Force awarded a $1.3 billion deal to ULA for a batch of 11 launches at an average cost per mission of $119 million. A few months earlier, Space Systems Command assigned six launches to ULA for $672 million, or $112 million per mission.

Starship Flight 11 nears launch. SpaceX rolled the Super Heavy booster for the next test flight of the company’s Starship mega-rocket out to the launch pad in Texas this week. The booster stage, with 33 methane-fueled engines, will power the Starship into the upper atmosphere during the first few minutes of flight. This booster is flight-proven, having previously launched and landed on a test flight in March.

Next steps … With the Super Heavy booster installed on the pad, the next step for SpaceX will be the rollout of the Starship upper stage. That is expected to happen in the coming days. Ground crews will raise Starship atop the Super Heavy booster to fully stack the rocket to its total height of more than 400 feet (120 meters). If everything goes well, SpaceX is targeting liftoff of the 11th full-scale test flight of Starship and Super Heavy as soon as Monday evening. (submitted by EllPeaTea)

Blue Origin takes on a new line of business. Blue Origin won a US Space Force competition to build a new payload processing facility at Cape Canaveral Space Force Station, Florida, Spaceflight Now reports. Under the terms of the $78.2 million contract, Blue Origin will build a new facility capable of handling payloads for up to 16 missions per year. The Space Force expects to use about half of that capacity, with the rest available to NASA or Blue Origin’s commercial customers. This contract award follows a $77.5 million agreement the Space Force signed with Astrotech earlier this year to expand the footprint of its payload processing facility at Vandenberg Space Force Base, California.

Important stuff … Ground infrastructure often doesn’t get the same level of attention as rockets, but the Space Force has identified bottlenecks in payload processing as potential constraints on ramping up launch cadences at the government’s spaceports in Florida and California. Currently, there are only a handful of payload processing facilities in the Cape Canaveral area, and most of them are only open to a single user, such as SpaceX, Amazon, the National Reconnaissance Office, or NASA. So, what exactly is payload processing? The Space Force said Blue Origin’s new facility will include space for “several pre-launch preparatory activities” that include charging batteries, fueling satellites, loading other gaseous and fluid commodities, and encapsulation. To accomplish those tasks, Blue Origin will create “a clean, secure, specialized high-bay facility capable of handling flight hardware, toxic fuels, and explosive materials.”

Next three launches

Oct. 11: Gravity 1 | Unknown Payload | Haiyang Spaceport, China Coastal Waters | 02:15 UTC

Oct. 12: Falcon 9 | Project Kuiper KF-03 | Cape Canaveral Space Force Station, Florida | 00:41 UTC

Oct. 13: Starship/Super Heavy | Flight 11 | Starbase, Texas | 23:15 UTC


Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.



“Like putting on glasses for the first time”—how AI improves earthquake detection


AI is “comically good” at detecting small earthquakes—here’s why that matters.

Credit: Aurich Lawson | Getty Images

On January 1, 2008, at 1:59 am in Calipatria, California, an earthquake happened. You haven’t heard of this earthquake; even if you had been living in Calipatria, you wouldn’t have felt anything. It was magnitude -0.53, about the same amount of shaking as a truck passing by. Still, this earthquake is notable, not because it was large but because it was small—and yet we know about it.

Over the past seven years, AI tools based on computer imaging have almost completely automated one of the fundamental tasks of seismology: detecting earthquakes. What used to be the task of human analysts—and later, simpler computer programs—can now be done automatically and quickly by machine-learning tools.

These machine-learning tools can detect smaller earthquakes than human analysts, especially in noisy environments like cities. Earthquakes give valuable information about the composition of the Earth and what hazards might occur in the future.

“In the best-case scenario, when you adopt these new techniques, even on the same old data, it’s kind of like putting on glasses for the first time, and you can see the leaves on the trees,” said Kyle Bradley, co-author of the Earthquake Insights newsletter.

I talked with several earthquake scientists, and they all agreed that machine-learning methods have replaced humans for the better in these specific tasks.

“It’s really remarkable,” Judith Hubbard, a Cornell University professor and Bradley’s co-author, told me.

Less certain is what comes next. Earthquake detection is a fundamental part of seismology, but there are many other data processing tasks that have yet to be disrupted. The biggest potential impacts, all the way to earthquake forecasting, haven’t materialized yet.

“It really was a revolution,” said Joe Byrnes, a professor at the University of Texas at Dallas. “But the revolution is ongoing.”

When an earthquake happens in one place, the shaking passes through the ground, similar to how sound waves pass through the air. In both cases, it’s possible to draw inferences about the materials the waves pass through.

Imagine tapping a wall to figure out if it’s hollow. Because a solid wall vibrates differently than a hollow wall, you can figure out the structure by sound.

With earthquakes, this same principle holds. Seismic waves pass through different materials (rock, oil, magma, etc.) differently, and scientists use these vibrations to image the Earth’s interior.

The main tool scientists traditionally use is the seismometer, which records the movement of the Earth in three directions: up–down, north–south, and east–west. If an earthquake happens, seismometers measure the shaking at that particular location.

An old-fashioned physical seismometer. Today, seismometers record data digitally. Credit: Yamaguchi先生 on Wikimedia CC BY-SA 3.0

Scientists then process raw seismometer information to identify earthquakes.

Earthquakes produce multiple types of shaking, which travel at different speeds. Two types, primary (P) waves and secondary (S) waves, are particularly important, and scientists like to identify the start of each of these phases.
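Because P waves outrun S waves, the gap between the two arrivals at a single station also encodes how far away a quake was. Here is a back-of-the-envelope sketch in Python; the wave speeds are typical crustal values chosen purely for illustration, not measurements from any particular network.

```python
# Rough epicentral distance from the S-P arrival gap.
# Velocities are typical crustal values, used here only for illustration.
VP_KM_S = 6.0   # P-wave speed
VS_KM_S = 3.5   # S-wave speed

def distance_from_sp_gap(sp_seconds: float) -> float:
    """Distance (km) implied by the delay between the P and S arrivals."""
    # d/VS - d/VP = sp_seconds  =>  d = sp_seconds * VP*VS / (VP - VS)
    return sp_seconds * VP_KM_S * VS_KM_S / (VP_KM_S - VS_KM_S)

print(round(distance_from_sp_gap(10.0), 1))  # a 10-second gap ~ 84 km away
```

Combine such distance estimates from several stations and you can triangulate an epicenter, which is one reason precise arrival times matter so much.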

Before good algorithms, earthquake cataloging had to happen by hand. Byrnes said that “traditionally, something like the lab at the United States Geological Survey would have an army of mostly undergraduate students or interns looking at seismograms.”

However, there are only so many earthquakes you can find and classify manually. Creating algorithms to effectively find and process earthquakes has long been a priority in the field—especially since the arrival of computers in the early 1950s.

“The field of seismology historically has always advanced as computing has advanced,” Bradley told me.

There’s a big challenge with traditional algorithms, though: They can’t easily find smaller quakes, especially in noisy environments.

Composite seismogram of common events. Note how each event has a slightly different shape. Credit: EarthScope Consortium CC BY 4.0

As we see in the seismogram above, many different events can cause seismic signals. If a method is too sensitive, it risks falsely detecting events as earthquakes. The problem is especially bad in cities, where the constant hum of traffic and buildings can drown out small earthquakes.

However, earthquakes have a characteristic “shape.” The magnitude 7.7 earthquake above looks quite different from the helicopter landing, for instance.

So one idea scientists had was to make templates from human-labeled datasets. If a new waveform correlates closely with an existing template, it’s almost certainly an earthquake.
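In code, that correlation test is short. The sketch below is a simplified, illustrative version of template matching using NumPy; real pipelines correlate thousands of templates against years of data from many stations, and the 0.8 threshold and synthetic sine-wave "quake" here are arbitrary choices for the demo.

```python
import numpy as np

def match_template(trace, template, threshold=0.8):
    """Slide a labeled template along a trace; flag windows whose
    normalized cross-correlation exceeds the detection threshold."""
    n = len(template)
    t = (template - template.mean()) / template.std()
    hits = []
    for i in range(len(trace) - n + 1):
        w = trace[i:i + n]
        if w.std() == 0:
            continue  # flat window, correlation undefined
        wn = (w - w.mean()) / w.std()
        cc = float(np.dot(wn, t) / n)  # correlation coefficient in [-1, 1]
        if cc >= threshold:
            hits.append((i, cc))
    return hits

# Toy example: a noisy trace containing a scaled copy of the template.
rng = np.random.default_rng(0)
template = np.sin(np.linspace(0, 4 * np.pi, 50))
trace = 0.1 * rng.standard_normal(300)
trace[120:170] += 0.5 * template  # a buried "earthquake"
print(match_template(trace, template)[0][0])  # detection near sample 120
```

The nested loop also hints at why template matching gets expensive: every template must be slid across every sample of every station's data.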

Template matching works very well if you have enough human-labeled examples. In 2019, Zach Ross’ lab at Caltech used template matching to find 10 times as many earthquakes in Southern California as had previously been known, including the earthquake at the start of this story. Almost all of the 1.6 million new quakes they found were very small, magnitude 1 and below.

If you don’t have an extensive pre-existing dataset of templates, however, you can’t easily apply template matching. That isn’t a problem in Southern California—which already had a basically complete record of earthquakes down to magnitude 1.7—but it’s a challenge elsewhere.

Also, template matching is computationally expensive. Creating a Southern California quake dataset using template matching took 200 Nvidia P100 GPUs running for days on end.

There had to be a better way.

AI detection models solve all of these problems:

  • They are faster than template matching.

  • Because AI detection models are very small (around 350,000 parameters, compared to billions in LLMs like GPT-4), they can be run on consumer CPUs.

  • AI models generalize well to regions not represented in the original dataset.

As an added bonus, AI models can give better information about when the different types of earthquake shaking arrive. Timing the arrivals of the two most important waves—P and S waves—is called phase picking. It allows scientists to draw inferences about the structure of the quake. AI models can do this alongside earthquake detection.

The basic task of earthquake detection (and phase picking) looks like this:

Cropped figure from “Earthquake Transformer—an attentive deep-learning model for simultaneous earthquake detection and phase picking.” Credit: Nature Communications

The first three rows represent different directions of vibration (east–west, north–south, and up–down respectively). Given these three dimensions of vibration, can we determine if an earthquake occurred, and if so, when it started?

We want to detect the initial P wave, which arrives directly from the site of the earthquake. But this can be tricky because echoes of the P wave may get reflected off other rock layers and arrive later, making the waveform more complicated.

Ideally, then, our model outputs three things at every time step in the sample:

  1. The probability that an earthquake is occurring at that moment.

  2. The probability that the first P wave arrives at that moment.

  3. The probability that the first S wave arrives at that moment.

We see all three outputs in the fourth row: the detection in green, the P wave arrival in blue, and the S wave arrival in red. (There are two earthquakes in this sample.)
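Turning those per-sample probability traces into a detection and a pair of picks can be as simple as thresholded peak-finding. A toy sketch with invented numbers (real models emit one probability per sample at roughly 100 Hz and use more careful peak logic):

```python
import numpy as np

# Toy per-sample outputs a detector might produce over a 10-sample window.
# The names and values here are made up for illustration.
detection_prob = np.array([0.1, 0.2, 0.9, 0.95, 0.9, 0.9, 0.8, 0.2, 0.1, 0.1])
p_prob = np.array([0.0, 0.1, 0.8, 0.2, 0.1, 0.0, 0.0, 0.0, 0.0, 0.0])
s_prob = np.array([0.0, 0.0, 0.1, 0.1, 0.7, 0.2, 0.0, 0.0, 0.0, 0.0])

def pick(prob, threshold=0.5):
    """Return the index of the peak probability if it clears the
    threshold, else None (no confident pick)."""
    peak = int(np.argmax(prob))
    return peak if prob[peak] >= threshold else None

p_arrival = pick(p_prob)   # sample 2
s_arrival = pick(s_prob)   # sample 4
detected = pick(detection_prob) is not None

print(detected, p_arrival, s_arrival)  # True 2 4
```

Because the S wave travels more slowly than the P wave, the gap between the two picks also tells scientists roughly how far away the quake was.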

To train an AI model, scientists take large amounts of labeled data, like what’s above, and do supervised training. I’ll describe one of the most used models: Earthquake Transformer, which was developed around 2020 by a Stanford University team led by S. Mostafa Mousavi, who later became a Harvard professor.

Like many earthquake detection models, Earthquake Transformer adapts ideas from image classification. Readers may be familiar with AlexNet, a famous image-recognition model that kicked off the deep-learning boom in 2012.

AlexNet used convolutions, a neural-network building block based on the idea that pixels that are physically close together are more likely to be related. The first convolutional layer of AlexNet broke an image down into small chunks—11 pixels on a side—and classified each chunk based on the presence of simple features like edges or gradients.

The next layer took the first layer’s classifications as input and checked for higher-level concepts such as textures or simple shapes.

Each convolutional layer analyzed a larger portion of the image and operated at a higher level of abstraction. By the final layers, the network was looking at the entire image and identifying objects like “mushroom” and “container ship.”

Images are two-dimensional, so AlexNet is based on two-dimensional convolutions. By contrast, seismograph data is one-dimensional, so Earthquake Transformer uses one-dimensional convolutions over the time dimension. The first layer analyzes vibration data in 0.1-second chunks, while later layers identify patterns over progressively longer time periods.

It’s difficult to say what exact patterns the earthquake model is picking out, but we can analogize this to a hypothetical audio transcription model using one-dimensional convolutions. That model might first identify consonants, then syllables, then words, then sentences over increasing time scales.
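To make the idea concrete, here is a minimal numpy sketch of a one-dimensional convolution over a toy three-channel waveform. The kernel is a hand-made "spike detector," not a filter learned by Earthquake Transformer:

```python
import numpy as np

# Three channels of a toy seismogram: east-west, north-south, up-down
waveform = np.zeros((3, 20))
waveform[:, 10] = 1.0  # a sudden spike on all three channels

# A hand-made kernel that responds to sharp changes; a trained model
# would learn filters like this from labeled data
kernel = np.array([-0.5, 1.0, -0.5])

def conv1d(channel, kernel):
    """Slide the kernel along one channel ("valid" convolution)."""
    n = len(channel) - len(kernel) + 1
    return np.array([np.dot(channel[i:i + len(kernel)], kernel) for i in range(n)])

# Apply the same kernel to each channel and find the strongest response
responses = np.array([conv1d(ch, kernel) for ch in waveform])
print(np.unravel_index(np.argmax(responses), responses.shape))  # (0, 9)
```

A real model stacks many such layers, each with many learned kernels, so that later layers respond to patterns spanning seconds rather than fractions of a second.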

Earthquake Transformer converts raw waveform data into a collection of high-level representations that indicate the likelihood of earthquakes and other seismologically significant events. This is followed by a series of deconvolution layers that pinpoint exactly when an earthquake—and its all-important P and S waves—occurred.

Earthquake Transformer also uses an attention layer in the middle of the network to mix information between different parts of the time series. The attention mechanism is most famous in large language models, where it helps pass information between words. It plays a similar role in seismographic detection. Earthquake seismograms have a general structure: P waves followed by S waves followed by other types of shaking. So if a segment looks like the start of a P wave, the attention mechanism helps the model check that it fits into a broader earthquake pattern.
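The mechanism itself is compact. Here is a minimal numpy sketch of scaled dot-product attention, with a made-up four-step time series standing in for the model's internal representations:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each time step (a row of Q) compares itself to every time step
    (rows of K) and takes a similarity-weighted average of their values (V)."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over time steps
    return weights @ V

# Four time steps with 2-dimensional features; steps 0 and 2 are similar,
# so step 0's largest attention weights fall on steps 0 and 2
x = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 0.1],
              [0.0, 0.9]])
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # (4, 2)
```

Each output row is a blend of the whole sequence, which is what lets a P-wave-like segment "consult" the rest of the waveform.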

All of the Earthquake Transformer’s components are standard designs from the neural network literature. Other successful detection models, like PhaseNet, are even simpler. PhaseNet uses only one-dimensional convolutions to pick the arrival times of earthquake waves. There are no attention layers.

Generally, there hasn’t been “much need to invent new architectures for seismology,” according to Byrnes. The techniques derived from image processing have been sufficient.

What made these generic architectures work so well then? Data. Lots of it.

Ars has previously reported on how the introduction of ImageNet, an image recognition benchmark, helped spark the deep learning boom. Large, publicly available earthquake datasets have played a similar role in seismology.

Earthquake Transformer was trained using the Stanford Earthquake Dataset (STEAD), which contains 1.2 million human-labeled segments of seismogram data from around the world. (The paper for STEAD explicitly mentions ImageNet as an inspiration.) Other models, like PhaseNet, were also trained on hundreds of thousands or millions of labeled segments.

All recorded earthquakes in the Stanford Earthquake Dataset. Credit: IEEE (CC BY 4.0)

The combination of the data and the architecture just works. The current models are “comically good” at identifying and classifying earthquakes, according to Byrnes. Typically, machine-learning methods find 10 or more times the quakes that were previously identified in an area. You can see this directly in an Italian earthquake catalog:

From Machine learning and earthquake forecasting—next steps by Beroza et al. Credit: Nature Communications (CC-BY 4.0)

AI tools won’t necessarily detect more earthquakes than template matching. But AI-based techniques are much less compute- and labor-intensive, making them more accessible to the average research project and easier to apply in regions around the world.

All in all, these machine-learning models are so good that they’ve almost completely supplanted traditional methods for detecting and phase-picking earthquakes, especially for smaller magnitudes.

The holy grail of earthquake science is earthquake prediction. For instance, scientists know that a large quake will happen near Seattle but have little ability to know whether it will happen tomorrow or in a hundred years. It would be helpful if we could predict earthquakes precisely enough to allow people in affected areas to evacuate.

You might think AI tools would help predict earthquakes, but that doesn’t seem to have happened yet.

The applications are more technical and less flashy, said Cornell’s Judith Hubbard.

Better AI models have given seismologists much more comprehensive earthquake catalogs, which have unlocked “a lot of different techniques,” Bradley said.

One of the coolest applications is in understanding and imaging volcanoes. Volcanic activity produces a large number of small earthquakes, whose locations help scientists understand the structure of the magma system. In a 2022 paper, John Wilding and co-authors used a large AI-generated earthquake catalog to create this incredible image of the structure of the Hawaiian volcanic system.

Each dot represents an individual earthquake. Credit: Wilding et al., The magmatic web beneath Hawai‘i.

They provided direct evidence of a previously hypothesized magma connection between the deep Pāhala sill complex and Mauna Loa’s shallow volcanic structure; you can see it in the image, marked by the arrow labeled “Pāhala-Mauna Loa seismicity band.” The authors were also able to resolve the Pāhala sill complex into discrete sheets of magma. This level of detail could potentially enable better real-time monitoring of earthquakes and more accurate eruption forecasting.

Another promising area is lowering the cost of dealing with huge datasets. Distributed Acoustic Sensing (DAS) is a powerful technique that uses fiber-optic cables to measure seismic activity across the entire length of the cable. A single DAS array can produce “hundreds of gigabytes of data” a day, according to Jiaxuan Li, a professor at the University of Houston. That much data can produce extremely high-resolution datasets—enough to pick out individual footsteps.

AI tools make it possible to very accurately time earthquakes in DAS data. Before the introduction of AI techniques for phase picking in DAS data, Li and some of his collaborators attempted to use traditional techniques. While these “work roughly,” they weren’t accurate enough for their downstream analysis. Without AI, much of his work would have been “much harder,” he told me.

Li is also optimistic that AI tools will be able to help him isolate “new types of signals” in the rich DAS data in the future.

Not all AI techniques have paid off

As in many other scientific fields, seismologists face some pressure to adopt AI methods, whether or not they are relevant to their research.

“The schools want you to put the word AI in front of everything,” Byrnes said. “It’s a little out of control.”

This can lead to papers that are technically sound but practically useless. Hubbard and Bradley told me that they’ve seen a lot of papers based on AI techniques that “reveal a fundamental misunderstanding of how earthquakes work.”

They pointed out that graduate students can feel pressure to specialize in AI methods at the cost of learning less about the fundamentals of the scientific field. They fear that if this type of AI-driven research becomes entrenched, older methods will get “out-competed by a kind of meaninglessness.”

While these are real issues, and ones Understanding AI has reported on before, I don’t think they detract from the success of AI earthquake detection. In the last five years, an AI-based workflow has almost completely replaced one of the fundamental tasks in seismology for the better.

That’s pretty cool.

Kai Williams is a reporter for Understanding AI, a Substack newsletter founded by Ars Technica alum Timothy B. Lee. His work is supported by a Tarbell Fellowship. Subscribe to Understanding AI to get more from Tim and Kai.

“Like putting on glasses for the first time”—how AI improves earthquake detection


Putin OKs plan to turn Russian spacecraft into flying billboards

These are tough times for Russia’s civilian space program. In the last few years, Russia has cut back on the number of Soyuz crew missions it is sending to the International Space Station, and a replacement for the nearly 60-year-old Soyuz spacecraft remains elusive.

While the United States and China are launching more space missions than ever before, Russia’s once-dominant launch cadence is on a downhill slide.

Russia’s access to global markets dried up after Russian President Vladimir Putin launched the country’s invasion of Ukraine in February 2022. The fallout from the invasion killed several key space partnerships between Russia and Europe. Russia’s capacity to do new things in space seems to be focused on military programs like anti-satellite weapons.

The Roscosmos State Corporation for Space Activities, Russia’s official space agency, may have a plan to offset the decline. Late last month, Putin approved changes to federal laws governing advertising and space activities to “allow for the placement of advertising on spacecraft,” Roscosmos posted on its official Telegram account.

We’ve seen this before

The Russian State Duma, dominated by Putin loyalists, previously approved the amendments.

“According to the amendments, Roscosmos has been granted the right, effective January 1, 2026, to place advertising on space objects owned by both the State Corporation itself and federally,” Roscosmos said. “The amendments will create a mechanism for attracting private investment in Russian space exploration and reduce the burden on the state budget.”

The law requires that advertising symbols not affect spacecraft safety. The Russian government said it will establish a fee structure for advertising on federally owned space objects.

Roscosmos didn’t say this, but advertisers eligible for the offer will presumably be limited to Russia and its allies. Any ads from the West would likely violate sanctions.

Rocket-makers have routinely applied decals, stickers, and special paint jobs to their vehicles. This is a particularly popular practice in Russia. Usually, these logos represent customers and suppliers. Sometimes they honor special occasions, like the 60th anniversary of the first human spaceflight mission by Soviet cosmonaut Yuri Gagarin and the 80th anniversary of the end of World War II.



Termite farmers fine-tune their weed control

Odontotermes obesus is one of the termite species that grow fungi, called Termitomyces, in their mounds. Workers collect dead leaves, wood, and grass and stack them in underground fungus gardens called combs. There, the fungi break down the tough plant fibers, making them accessible to the termites in an elaborate form of symbiotic agriculture.

Like any other agriculturalist, however, the termites face a challenge: weeds. “There have been numerous studies suggesting the termites must have some kind of fixed response—that they always do the same exact thing when they detect weed infestation,” says Rhitoban Raychoudhury, a professor of biological sciences at the Indian Institute of Science Education and Research, “but that was not the case.” In a new Science study, Raychoudhury’s team discovered that termites have pretty advanced, surprisingly human-like gardening practices.

Going blind

Termites do not look like particularly good gardeners at first glance. They are effectively blind, which is not that surprising considering they spend most of their lives in complete darkness, working the endless corridors of their mounds. But termites make up for their lack of sight with other senses. “They can detect the environment based on advanced olfactory reception and touch, and I think this is what they use to identify the weeds in their gardens,” Raychoudhury says. To learn how termites react once they detect a weed infestation, his team collected some Odontotermes obesus and challenged them with different gardening problems.

The experimental setup was quite simple. The team placed some autoclaved soil sourced from termite mounds into glass Petri dishes. On this soil, Raychoudhury and his colleagues placed two fungus combs in each dish. The first piece acted as a control and was a fresh, uninfected comb with Termitomyces. “Besides acting as a control, it was also there to make sure the termites have the food because it is very hard for them to survive outside their mounds,” Raychoudhury explains. The second piece was intentionally contaminated with Pseudoxylaria, a filamentous fungal weed that often takes over Termitomyces habitats in termite colonies.
