Science


If you want to satiate AI’s hunger for power, Google suggests going to space


Google engineers think they already have all the pieces needed to build a data center in orbit.

With Project Suncatcher, Google will test its Tensor Processing Units on satellites. Credit: Google

It was probably always a question of when, not if, Google would add its name to the list of companies intrigued by the potential of orbiting data centers.

Google announced Tuesday a new initiative, named Project Suncatcher, to examine the feasibility of bringing artificial intelligence to space. The idea is to deploy swarms of satellites in low-Earth orbit, each carrying Google’s AI accelerator chips designed for training, content generation, synthetic speech and vision, and predictive modeling. Google calls these chips Tensor Processing Units, or TPUs.

“Project Suncatcher is a moonshot exploring a new frontier: equipping solar-powered satellite constellations with TPUs and free-space optical links to one day scale machine learning compute in space,” Google wrote in a blog post.

“Like any moonshot, it’s going to require us to solve a lot of complex engineering challenges,” Google’s CEO, Sundar Pichai, wrote on X. Pichai noted that Google’s early tests show the company’s TPUs can withstand the intense radiation they will encounter in space. “However, significant challenges still remain like thermal management and on-orbit system reliability.”

The why and how

Ars reported on Google’s announcement on Tuesday, and Google published a research paper outlining the motivation for such a moonshot project. One of the authors, Travis Beals, spoke with Ars about Project Suncatcher and offered his thoughts on why it just might work.

“We’re just seeing so much demand from people for AI,” said Beals, senior director of Paradigms of Intelligence, a research team within Google. “So, we wanted to figure out a solution for compute that could work no matter how large demand might grow.”

Higher demand will lead to bigger data centers consuming colossal amounts of electricity. According to the MIT Technology Review, AI alone could consume as much electricity annually as 22 percent of all US households by 2028. Cooling is also a problem, often requiring access to vast water resources, raising important questions about environmental sustainability.

Google is looking to the sky to avoid potential bottlenecks. A satellite in space can access an infinite supply of renewable energy and an entire Universe to absorb heat.

“If you think about a data center on Earth, it’s taking power in and it’s emitting heat out,” Beals said. “For us, it’s the satellite that’s doing the same. The satellite is going to have solar panels … They’re going to feed that power to the TPUs to do whatever compute we need them to do, and then the waste heat from the TPUs will be distributed out over a radiator that will then radiate that heat out into space.”

Google envisions putting a legion of satellites into a special kind of orbit that rides along the day-night terminator, where sunlight meets darkness. This north-south, or polar, orbit would be synchronized with the Sun, allowing a satellite’s power-generating solar panels to remain continuously bathed in sunshine.

“It’s much brighter even than the midday Sun on Earth because it’s not filtered by Earth’s atmosphere,” Beals said.

This means a solar panel in space can produce up to eight times more power than the same collecting area on the ground, and you don’t need a lot of batteries to store electricity for nighttime. This may sound like the argument for space-based solar power, an idea first described by Isaac Asimov in his 1941 short story “Reason.” But instead of transmitting the electricity down to Earth for terrestrial use, orbiting data centers would tap into the power source in space.

“As with many things, the ideas originate in science fiction, but it’s had a number of challenges, and one big one is, how do you get the power down to Earth?” Beals said. “So, instead of trying to figure out that, we’re embarking on this moonshot to bring [machine learning] compute chips into space, put them on satellites that have the solar panels and the radiators for cooling, and then integrate it all together so you don’t actually have to be powered on Earth.”
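The eightfold figure is plausible on the back of an envelope (our illustrative numbers, not Google’s published math): above the atmosphere, sunlight delivers about 1,361 watts per square meter around the clock in a dawn-dusk orbit, while a ground-based panel sees roughly 1,000 watts per square meter at peak and averages a capacity factor near 20 percent once nights and weather are counted:

$$\frac{P_{\text{space}}}{P_{\text{ground}}} \approx \frac{1361}{1000 \times 0.2} \approx 6.8,$$

and additional terrestrial losses push the ratio toward eight.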

SpaceX is driving down launch costs, thanks to reusable rockets and a high volume of Starlink satellite launches. Credit: SpaceX

Google has a mixed record with its ambitious moonshot projects. One of the most prominent moonshot graduates is the self-driving car developer Waymo, which spun out to form a separate company in 2016 and is now operational. The Project Loon initiative to beam Internet signals from high-altitude balloons is one of the Google moonshots that didn’t make it.

Ars published two stories last week on the promise of space-based data centers. One of the startups in this field, named Starcloud, is partnering with Nvidia, the world’s largest tech company by market capitalization, to build a 5 gigawatt orbital data center with enormous solar and cooling panels approximately 4 kilometers (2.5 miles) in width and length. In response to that story, Elon Musk said SpaceX is pursuing the same business opportunity but didn’t provide any details. It’s worth noting that Google holds an estimated 7 percent stake in SpaceX.

Strength in numbers

Google’s proposed architecture differs from that of Starcloud and Nvidia in an important way. Instead of putting up just one or a few massive computing nodes, Google wants to launch a fleet of smaller satellites that talk to one another through laser data links. Essentially, a satellite swarm would function as a single data center, using light-speed interconnectivity to aggregate computing power hundreds of miles over our heads.

If that sounds implausible, take a moment to think about what companies are already doing in space today. SpaceX routinely launches more than 100 Starlink satellites per week, each of which uses laser inter-satellite links to bounce Internet signals around the globe. Amazon’s Kuiper satellite broadband network uses similar technology, and laser communications will underpin the US Space Force’s next-generation data-relay constellation.

Artist’s illustration of laser crosslinks in space. Credit: TESAT

Autonomously constructing a miles-long structure in orbit, as Nvidia and Starcloud foresee, would unlock unimagined opportunities. But the concept relies on tech that has never been tested in space, and there are plenty of engineers and investors who want to try. Starcloud announced an agreement last week with a new in-space assembly company, Rendezvous Robotics, to explore the use of modular, autonomous assembly to build Starcloud’s data centers.

Google’s research paper describes a future computing constellation of 81 satellites flying at an altitude of some 400 miles (650 kilometers), but Beals said the company could dial the total swarm size up to as many spacecraft as the market demands. This architecture could enable terawatt-class orbital data centers, according to Google.

“What we’re actually envisioning is, potentially, as you scale, you could have many clusters,” Beals said.

Whatever the number, the satellites will communicate with one another using optical inter-satellite links for high-speed, low-latency connectivity. The satellites will need to fly in tight formation, perhaps a few hundred feet apart, with a swarm diameter of a little more than a mile, or about 2 kilometers. Google says its physics-based model shows satellites can maintain stable formations at such close ranges using automation and “reasonable propulsion budgets.”
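Google hasn’t released its formation-flying model, but the textbook starting point for analyses like this is the Clohessy-Wiltshire equations, which linearize one satellite’s motion relative to a neighbor in a circular orbit. Here is a minimal sketch; the altitude matches the paper, but everything else (the 200-meter offset, the sample times) is illustrative:

```python
import numpy as np

MU = 3.986e14              # Earth's gravitational parameter, m^3/s^2
R = 6371e3 + 650e3         # circular reference orbit at ~650 km altitude, m
N_MM = np.sqrt(MU / R**3)  # mean motion of the reference orbit, rad/s

def cw_position(x0, y0, z0, vx0, vy0, vz0, t, n=N_MM):
    """Clohessy-Wiltshire solution: position of a 'deputy' satellite relative
    to a 'chief' on a circular orbit (x radial, y along-track, z cross-track)."""
    s, c = np.sin(n * t), np.cos(n * t)
    x = (4 - 3 * c) * x0 + (s / n) * vx0 + (2 / n) * (1 - c) * vy0
    y = 6 * (s - n * t) * x0 + y0 - (2 / n) * (1 - c) * vx0 + (4 * s / n - 3 * t) * vy0
    z = c * z0 + (s / n) * vz0
    return x, y, z

# A deputy parked 200 m ahead of the chief with zero relative velocity is an
# equilibrium of these linearized dynamics; in reality, drag and Earth's
# oblateness perturb it, which is what the propulsion budget has to fight.
for t in (0.0, 1500.0, 3000.0, 4500.0):  # seconds into the orbit
    print(t, [round(v, 2) for v in cw_position(0, 200.0, 0, 0, 0, 0, t)])
```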

“If you’re doing something that requires a ton of tight coordination between many TPUs—training, in particular—you want links that have as low latency as possible and as high bandwidth as possible,” Beals said. “With latency, you run into the speed of light, so you need to get things close together there to reduce latency. But bandwidth is also helped by bringing things close together.”
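The speed-of-light point is easy to quantify. Even across the full ~2-kilometer swarm, the one-way propagation delay is only about

$$t = \frac{d}{c} \approx \frac{2{,}000~\text{m}}{3\times 10^{8}~\text{m/s}} \approx 6.7~\mu\text{s},$$

on the order of the cable delays inside a single terrestrial data center hall.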

Some machine-learning applications could be done with the TPUs on just one modestly sized satellite, while others may require the processing power of multiple spacecraft linked together.

“You might be able to fit smaller jobs into a single satellite. This is an approach where, potentially, you can tackle a lot of inference workloads with a single satellite or a small number of them, but eventually, if you want to run larger jobs, you may need a larger cluster all networked together like this,” Beals said.

Google has worked on Project Suncatcher for more than a year, according to Beals. In ground testing, engineers exposed Google’s TPUs to a 67 MeV proton beam to simulate the total ionizing dose of radiation the chips would see over five years in orbit. Now, it’s time to demonstrate that Google’s AI chips, and everything else needed for Project Suncatcher, will actually work in the real environment.

Google is partnering with Planet, the Earth-imaging company, to develop a pair of small prototype satellites for launch in early 2027. Planet builds its own satellites, so Google has tapped it to manufacture each spacecraft, test them, and arrange for their launch. Google’s parent company, Alphabet, also has an equity stake in Planet.

“We have the TPUs and the associated hardware, the compute payload… and we’re bringing that to Planet,” Beals said. “For this prototype mission, we’re really asking them to help us do everything to get that ready to operate in space.”

Beals declined to say how much the demo slated for launch in 2027 will cost but said Google is paying Planet for its role in the mission. The goal of the demo mission is to show whether space-based computing is a viable enterprise.

“Does it really hold up in space the way we think it will, the way we’ve tested on Earth?” Beals said.

Engineers will test an inter-satellite laser link and verify Google’s AI chips can weather the rigors of spaceflight.

“We’re envisioning scaling by building lots of satellites and connecting them together with ultra-high bandwidth inter-satellite links,” Beals said. “That’s why we want to launch a pair of satellites, because then we can test the link between the satellites.”

Evolution of a free-fall (no thrust) constellation under Earth’s gravitational attraction, modeled to the level of detail required to obtain Sun-synchronous orbits, in a non-rotating coordinate system. Credit: Google

Getting all this data to users on the ground is another challenge. Optical data links could also route enormous amounts of data between the satellites in orbit and ground stations on Earth.

Aside from the technical feasibility, there have long been economic hurdles to fielding large satellite constellations. But SpaceX’s experience with its Starlink broadband network, now with more than 8,000 active satellites, is proof that times have changed.

Google believes the economic equation is about to change again when SpaceX’s Starship rocket comes online. The company’s learning curve analysis shows launch prices could fall to less than $200 per kilogram by around 2035, assuming Starship is flying about 180 times per year by then. This is far below SpaceX’s stated launch targets for Starship but comparable to SpaceX’s proven flight rate with its workhorse Falcon 9 rocket.
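Google’s learning-curve analysis isn’t published as code, but such analyses typically follow Wright’s law: unit cost falls by a fixed fraction with every doubling of cumulative output. A sketch with made-up inputs (these are not Google’s parameters):

```python
import math

def wrights_law(initial_cost, cumulative_units, learning_rate):
    """Wright's law: each doubling of cumulative output multiplies
    unit cost by `learning_rate` (e.g., 0.8 = 20% cheaper per doubling)."""
    return initial_cost * cumulative_units ** math.log2(learning_rate)

# Hypothetical: ~$1,500/kg for the first flight, 20% cheaper per doubling.
for flights in (1, 10, 100, 1000):
    print(f"{flights:>5} cumulative flights -> ${wrights_law(1500, flights, 0.8):,.0f}/kg")
```

With those illustrative numbers, cost per kilogram drops below $200 around the thousandth cumulative flight, which is the general flavor of reasoning behind sub-$200/kg projections.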

It’s possible there could be even more downward pressure on launch costs if SpaceX, Nvidia, and others join Google in the race for space-based computing. The demand curve for access to space may only be eclipsed by the world’s appetite for AI.

“The more people are doing interesting, exciting things in space, the more investment there is in launch, and in the long run, that could help drive down launch costs,” Beals said. “So, it’s actually great to see that investment in other parts of the space supply chain and value chain. There are a lot of different ways of doing this.”


Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.

If you want to satiate AI’s hunger for power, Google suggests going to space Read More »


Google’s new hurricane model was breathtakingly good this season

This early model comparison does not include the “gold standard” traditional, physics-based model produced by the European Centre for Medium-Range Weather Forecasts. However, the ECMWF model typically does not do better on hurricane track forecasts than the hurricane center’s official forecasts or consensus models, which weigh several different model outputs. So it is unlikely to be superior to Google DeepMind’s model.
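A consensus track forecast is, at its simplest, an average (sometimes weighted) of several models’ predicted storm positions. A toy illustration, with made-up model names, positions, and weights:

```python
# Toy consensus track forecast: blend several models' predicted positions.
# Model names, coordinates, and weights are purely illustrative.
def consensus_position(tracks, weights):
    """tracks: {model: (lat, lon)}; weights: {model: fraction}, summing to 1."""
    lat = sum(weights[m] * tracks[m][0] for m in tracks)
    lon = sum(weights[m] * tracks[m][1] for m in tracks)
    return lat, lon

tracks = {"ModelA": (25.1, -78.2), "ModelB": (25.4, -78.6), "ModelC": (24.9, -77.9)}
weights = {"ModelA": 0.4, "ModelB": 0.35, "ModelC": 0.25}
print(consensus_position(tracks, weights))  # -> (25.155, -78.265)
```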

This will change forecasting forever

It’s worth noting that DeepMind also did exceptionally well at intensity forecasting, which predicts fluctuations in the strength of a hurricane. So in its first season, it nailed both hurricane tracks and intensity.

As a forecaster who has relied on traditional physics-based models for a quarter of a century, I find it difficult to convey how gobsmacking these results are. Going forward, it is safe to say that we will rely heavily on Google and other AI weather models, which are relatively new and likely to improve in the coming years.

“The beauty of DeepMind and other similar data-driven, AI-based weather models is how much more quickly they produce a forecast compared to their traditional physics-based counterparts that require some of the most expensive and advanced supercomputers in the world,” noted Michael Lowry, a hurricane specialist and author of the Eye on the Tropics newsletter, about the model performance. “Beyond that, these ‘smart’ models with their neural network architectures have the ability to learn from their mistakes and correct on-the-fly.”

What about the North American model?

As for the GFS model, it is difficult to explain why it performed so poorly this season. In the past, it has been, at worst, worthy of consideration in making a forecast. But this year, other forecasters and I often disregarded it.

“It’s not immediately clear why the GFS performed so poorly this hurricane season,” Lowry wrote. “Some have speculated the lapse in data collection from DOGE-related government cuts this year could have been a contributing factor, but presumably such a factor would have affected other global physics-based models as well, not just the American GFS.”

With the US government in shutdown mode, we probably cannot expect many answers soon. But it seems clear that the massive upgrade of the model’s dynamical core, which began in 2019, has largely been a failure. If the GFS was a little bit behind some competitors a decade ago, it is now fading further and faster.

Google’s new hurricane model was breathtakingly good this season Read More »


Some stinkbugs’ legs carry a mobile fungal garden

Many insect species hear using tympanal organs, membranes roughly resembling our eardrums but located on various parts of their bodies, including their legs. Grasshoppers, mantises, and moths all have them, and for decades, we thought that female stinkbugs of the Dinidoridae family had them, too, although located a bit unusually on their hind rather than front legs.

Suspecting that they use their hind leg tympanal organs to listen to male courtship songs, a team of Japanese researchers took a closer look at the organs in Megymenum gracilicorne, a Dinidoridae stinkbug species native to Japan. They discovered that these “tympanal organs” were not what they seemed. They’re actually mobile fungal nurseries of a kind we’ve never seen before.

Portable gardens

Dinidoridae is a small stinkbug family that lives exclusively in Asia. The family has attracted some scientific attention, but not nearly as much as larger relatives like the Pentatomidae. Prior work looking specifically into organs growing on the hind legs of Dinidoridae females was thus somewhat limited. “Most research relied on taxonomic and morphological approaches. Some taxonomists did describe that female Dinidoridae stinkbugs have an enlarged part on the hind legs that looks like the tympanal organ you can find, for example, in crickets,” said Takema Fukatsu, an evolutionary biologist at the National Institute of Advanced Industrial Science and Technology in Tokyo.

Based on that appearance, these parts were classified as tympanal organs—the case was closed, and it stayed closed until Fukatsu’s team started examining them more closely. Insects that do have tympanal organs usually carry them on their front legs or abdominal segments, not their hind legs. The initial goal of Fukatsu’s study was to figure out what impact this unusual position has on Dinidoridae females’ ability to hear sounds.

Early on in the study, it turned out that whatever Dinidoridae females have on their hind legs, they are not tympanal organs. “We found no tympanal membrane and no sensory neurons, so the enlarged parts on the hind legs had nothing to do with hearing,” Fukatsu explained. Instead, the organ had thousands of small pores filled with benign filamentous fungi. The pores were connected to secretory cells that released substances that Fukatsu’s team hypothesized were nutrients enabling the fungi to grow.

Some stinkbugs’ legs carry a mobile fungal garden Read More »


Disruption to science will last longer than the US government shutdown

President Donald Trump alongside Office of Management and Budget Director Russell Vought.

Credit: Brendan Smialowski/AFP via Getty Images


However, the full impact of the shutdown and the Trump administration’s broader assaults on science, as measured in US international competitiveness, economic security, and electoral politics, could take years to materialize.

In parallel, the dramatic drop in international student enrollment, the financial squeeze facing research institutions, and research security measures to curb foreign interference spell an uncertain future for American higher education.

With neither the White House nor Congress showing signs of reaching a budget deal, Trump continues to test the limits of executive authority, reinterpreting the law—or simply ignoring it.

Earlier in October, Trump redirected unspent research funding to pay service members before they missed their Oct. 15 paycheck. Redirecting appropriated funds directly challenges the power vested in Congress—not the president—to control federal spending.

The White House’s promise to fire an additional 10,000 civil servants during the shutdown, its threat to withhold back pay from furloughed workers, and its push to end any programs with lapsed funding “not consistent with the President’s priorities” similarly move to broaden presidential power.

Here, the damage to science could snowball. If Trump and Vought chip enough authority away from Congress by making funding decisions or shuttering statutory agencies, the next three years will see an untold amount of impounded, rescinded, or repurposed research funds.


The government shutdown has emptied many laboratories staffed by federal scientists. Combined with other actions by the Trump administration, more scientists could continue to lose funding.

Credit: Monty Rakusen/DigitalVision via Getty Images


Science, democracy, and global competition

While technology has long served as a core pillar of national and economic security, science has only recently reemerged as a key driver of greater geopolitical and cultural change.

China’s extraordinary rise in science over the past three decades and its arrival as the United States’ chief technological competitor has upended conventional wisdom that innovation can thrive only in liberal democracies.

The White House’s efforts to centralize federal grantmaking, restrict free speech, erase public data, and expand surveillance mirror China’s successful playbook for building scientific capacity while suppressing dissent.

As the shape of the Trump administration’s vision for American science has come into focus, what remains unclear is whether, after the shutdown, it can outcompete China by following its lead.

Kenneth M. Evans is a Fellow in Science, Technology, and Innovation Policy at the Baker Institute for Public Policy, Rice University.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Disruption to science will last longer than the US government shutdown Read More »


Research roundup: 6 cool science stories we almost missed


Also: the science of regular vs. gluten-free spaghetti, catching high-speed snake bites in action, etc.

Karnak Temple, Luxor, Egypt. Credit: Ben Pennington

It’s a regrettable reality that there is never enough time to cover all the interesting scientific stories we come across each month. In the past, we’ve featured year-end roundups of cool science stories we (almost) missed. This year, we’re experimenting with a monthly collection. October’s list includes the microstructural differences between regular and gluten-free spaghetti, capturing striking snakes in action, the mystery behind the formation of Martian gullies, and—for all you word game enthusiasts—an intriguing computational proof of the highest possible scoring Boggle board.

Highest-scoring Boggle board

The highest-scoring Boggle board. Credit: Dan Vanderkam

Sometimes we get handy story tips from readers about quirkily interesting research projects. Sometimes those projects involve classic games like Boggle, in which players find as many words as they can from a 4×4 grid of 16 lettered cubic dice, within a given time limit. Software engineer Dan Vanderkam alerted us to a preprint he posted to the physics arXiv, detailing his quest to find the Boggle board configuration that yields the highest possible score. It’s pictured above, with a total score of 3,625 points, according to Vanderkam’s first-ever computational proof. There are more than 1,000 possible words, with “replastering” being the longest.

Vanderkam has documented his quest and its resolution (including the code he used) extensively on his blog, admitting to the Financial Times that, “As far as I can tell, I’m the only person who is actually interested in this problem.” That’s not entirely true: a 1982 attempt found a board yielding 2,195 points. Vanderkam’s board had long been suspected of being the highest-scoring; it was just very difficult to prove using standard heuristic search methods. Vanderkam’s solution involved grouping board configurations with similar patterns into classes and then finding upper bounds to discard clear losers, rather than trying to tally scores for each board individually—i.e., an old-school “branch and bound” technique.
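Vanderkam’s actual code is linked from his blog; purely to illustrate the inner computation that any such search wraps, here is a minimal board scorer. The word list is tiny and made up (a real solver uses a full dictionary, usually in a trie), and the rules are simplified (no “Qu” die), but the prefix-pruned depth-first search is the core idea:

```python
# Minimal Boggle board scorer: depth-first search from every cell, pruned
# by a prefix set. Vanderkam's branch-and-bound wraps a far faster version
# of this in upper bounds computed over whole classes of boards.
WORDS = {"tare", "tears", "stare", "rates", "aster", "resat"}
PREFIXES = {w[:i] for w in WORDS for i in range(1, len(w) + 1)}
SCORE = {3: 1, 4: 1, 5: 2, 6: 3, 7: 5}  # standard Boggle points; 8+ scores 11

def score_board(board):
    """board: 16-character string, read as four rows of four letters."""
    found = set()

    def dfs(i, j, prefix, used):
        prefix += board[4 * i + j]
        if prefix not in PREFIXES:
            return  # prune: no dictionary word starts this way
        if prefix in WORDS:
            found.add(prefix)
        used = used | {(i, j)}
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                ni, nj = i + di, j + dj
                if 0 <= ni < 4 and 0 <= nj < 4 and (ni, nj) not in used:
                    dfs(ni, nj, prefix, used)

    for i in range(4):
        for j in range(4):
            dfs(i, j, "", set())
    return sum(SCORE.get(len(w), 11) for w in found)

print(score_board("tsraeaeldnotrbei"))  # an arbitrary example board
```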

DOI: arXiv, 2025. 10.48550/arXiv.2507.02117  (About DOIs).

Origins of Egypt’s Karnak Temple

Core samples being extracted at Karnak Temple. Credit: Ben Pennington

Egypt’s Karnak Temple complex, located about 500 meters from the Nile River near Luxor, has long been of interest to archaeologists and millions of annual tourists alike. But its actual age has been a matter of much debate. The most comprehensive geological survey conducted to date is yielding fresh insights into the temple’s origins and evolution over time, according to a paper published in the journal Antiquity.

The authors analyzed sediment cores and thousands of ceramic fragments from within and around the site to map out how the surrounding landscape has changed. They concluded that early on, circa 2520 BCE, the site would have experienced regular flooding from the Nile; thus, the earliest permanent settlement at Karnak would have emerged between 2591 and 2152 BCE, in keeping with the earliest dated ceramic fragments. This would have been after river channels essentially created an island of higher ground that served as the foundation for constructing the temple. As those channels diverged over millennia, the available area for the temple expanded, and so did the complex.

This might be supported by Egyptian creation myths. “It’s tempting to suggest the Theban elites chose Karnak’s location for the dwelling place of a new form of the creator god, ‘Ra-Amun,’ as it fitted the cosmogonical scene of high ground emerging from surrounding water,” said co-author Ben Pennington, a geoarchaeologist at the University of Southampton. “Later texts of the Middle Kingdom (c.1980–1760 BC) develop this idea, with the ‘primeval mound’ rising from the ‘Waters of Chaos.’ During this period, the abating of the annual flood would have echoed this scene, with the mound on which Karnak was built appearing to ‘rise’ and grow from the receding floodwaters.”

DOI: Antiquity, 2025. 10.15184/aqy.2025.10185  (About DOIs).

Gullies on Mars

Mars dune with gullies in the Russell crater. On their way down, the ice blocks threw up levees. Credit: HiRISE/NASA/JPL/University of Arizona

Mars has many intriguing features, but one of the more puzzling is the sinuous gullies that form on some of its dunes. Scientists have proposed two hypotheses for how such gullies might form. The first is that they are the result of debris flows from an earlier time in the planet’s history when liquid water might have existed on the surface—evidence that the red planet might once have been habitable. The second is that the gullies form because of seasonal deposition and sublimation of CO2 ice on the surface in the present day. A paper published in the journal Geophysical Research Letters provides strong evidence in favor of the latter hypothesis.

Building on her earlier research on how sublimation of CO2 ice can drive debris flows on Mars, earth scientist Lonneke Roelofs of Utrecht University in the Netherlands collaborated with scientists at the Open University in Milton Keynes, UK, which boasts a facility for simulating conditions on Mars. She ran several experiments with different sediment types, creating dune slopes of different angles and dropping blocks of CO2 ice from the top of the slope. At just the right angle, the blocks did indeed start digging into the sandy slope and moving downwards to create a gully. Roelofs likened the effect to a burrowing mole or the sandworms in Dune.

Per Roelofs, on Mars, CO2 ice forms over the surface during the winter and starts to sublimate in the spring. The ice blocks are remnants found on the shaded side of dune tops, where they break off once the temperature gets high enough and slide down the slope. At the bottom, they keep sublimating until all the CO2 has evaporated, leaving behind a hollow of sand.

DOI: Geophysical Research Letters, 2025. 10.1029/2024GL112860  (About DOIs).

Snake bites in action

S.G.C. Cleuren et al., 2025

Snakes can strike out and bite into prey in as little as 60 milliseconds, and until quite recently, it just wasn’t technologically possible to capture those strikes in high definition. Researchers at Monash University in Australia decided to record 36 different species of snake to learn more about their unique biting styles, detailing their results in a paper published in the Journal of Experimental Biology. And oh yes, there is awesome video footage.

Alistair Evans and Silke Cleuren traveled to Venomworld in Paris, France, where snake venom is harvested for medical and pharmaceutical applications. For each snake species, they poked at said snake with a cylindrical piece of warm medical gel to mimic meaty muscle until the snake lunged and buried its fangs into the gel. Two cameras recorded the action at 1,000 frames per second, capturing more than 100 individual strikes in great detail.

Among their findings: Vipers moved the fastest when they struck, with the blunt-nosed viper accelerating at up to 710 m/s² and landing a bite within 22 milliseconds. All the vipers landed bites within 100 milliseconds of striking. By contrast, the rough-scaled death adder only reached speeds of 2.5 m/s. Vipers also sometimes pulled out and reinserted their fangs if they didn’t like the resulting angle; only then did they inject their venom. Elapids like the Cape coral cobra bit their prey repeatedly to inject their venom, while colubrids would tear gashes into their prey by sweeping their jaws from side to side, ensuring the maximum possible amount of venom was delivered.

DOI: Journal of Experimental Biology, 2025. 10.1242/jeb.250347  (About DOIs).

Spaghetti secrets

Spaghetti, like most pasta, is made of semolina flour, which is mixed with water to form a paste and then extruded to create a desired shape. The commercial products are then dried—an active area of research, since it’s easy for the strands to crack during the process. In fact, a surprisingly large number of scientific papers have sought to understand the various properties of spaghetti, from cooking it to eating it: the mechanics of slurping the pasta into one’s mouth, for instance, or spitting it out (aka the “reverse spaghetti problem”); how to tell when it’s perfectly al dente; and how to get dry spaghetti strands to break neatly in two, rather than three or more scattered pieces.

Pasta also has a fairly low glycemic index, and is thus a good option for those with heart disease or type 2 diabetes. With the rise in the number of people with a gluten intolerance, gluten-free spaghetti has emerged as an alternative. The downside is that gluten-free pasta is harder to cook correctly and decidedly subpar in taste and texture (mouthfeel) compared to regular pasta. The reason for the latter lies in the microstructure, according to a paper published in the journal Food Hydrocolloids.

The authors used small-angle x-ray scattering and small-angle neutron scattering to analyze the microstructure of both regular and gluten-free pasta—i.e., the gluten matrix and its artificial counterpart—cooked al dente with varying salt concentrations in the water. They found that because of its gluten matrix, regular pasta has better resistance to structural degradation, and that adding just the right amount of salt further reinforces that matrix—so it’s not just a matter of salting to taste. This could lead to a better alternative matrix for gluten-free pasta that holds its structure better and has a taste and mouthfeel closer to that of regular pasta.

DOI: Food Hydrocolloids, 2025. 10.1016/j.foodhyd.2025.111855  (About DOIs).

Can machine learning identify ancient artists?

Dr. Andrea Jalandoni studies finger flutings at a cave site in Australia. Credit: Andrea Jalandoni

Finger flutings are one of the oldest examples of prehistoric art, usually found carved into the walls of caves in southern Australia, New Guinea, and parts of Europe. They’re basically just marks made by human fingers drawn through the “moonmilk” (a soft mineral film) covering those walls. Very little is known about the people who left those flutings, and while some have tried to draw inferences based on biometric finger ratios or hand size measurements—notably whether given marks were made by men or women—such methods produce inconsistent results and are prone to human error and bias.

That’s why digital archaeologist Andrea Jalandoni of Griffith University decided to experiment with machine learning image recognition methods as a possible tool, detailing her findings in a paper published in the journal Scientific Reports. She recruited 96 adult volunteers to create their own finger flutings in two different settings: once in a virtual reality environment, and once on a substitute for the moonmilk clay that mimicked the look and feel of the real thing. Her team took images of those flutings and then used them to train two common image recognition models.

The results were decidedly mixed. The virtual reality images performed the worst, yielding highly unreliable attempts at classifying whether flutings were made by men or women. The images produced in actual clay produced better results, even reaching close to 84 percent accuracy in one model. But there were also signs the models were overfitting, i.e., memorizing patterns in the training data rather than more generalized patterns, so the approach needs more refinement before it is ready for actual deployment. As for why determining sex classifications matters, “This information has been used to decide who can access certain sites for cultural reasons,” Jalandoni explained.

DOI: Scientific Reports, 2025. 10.1038/s41598-025-18098-4  (About DOIs).


Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.

Research roundup: 6 cool science stories we almost missed Read More »


Wear marks suggest Neanderthals made ocher crayons

“The combination of shaping, wear, and resharpening indicates they were used to draw or mark on soft surfaces,” D’Errico told Ars in an email. “Although the material is too fragile to reveal the specific material on which they were used, such as hide, human skin, or stone, an experimental approach may, in the future, allow us at least to rule out their use on some materials.”

A 73,000-year-old drawing from Blombos Cave in South Africa looks like it was made with tools much like the ocher crayons from Crimea, which suggests that Neanderthals and Homo sapiens both invented crayons in their own little corners of the world at around the same time.


The surface of this flat piece of orange ocher was carved over 47,000 years ago, then worn smooth, perhaps by carrying in a bag. Credit: D’Errico et al. 2025

Sometimes you’re the crayon, sometimes you’re the canvas

A third item from Zaskalnaya V is a flat piece of orange ocher. One side is covered with a thin layer of hard, dark rock. But more than 47,000 years ago, someone carefully cut several deep lines, regularly spaced and almost parallel, into its surface. The area of stone between the lines has been worn and polished smooth, suggesting that someone carried it and handled it for years.

“The polish smoothing the engraved lines suggest that the piece was curated, perhaps transported in a bag,” D’Errico told Ars. Whoever carved the lines into the piece of ocher also appears to have been right-handed, based on the angle of the incisions’ walls.

The finds join a host of other evidence of Neanderthal artwork and jewelry, from 57,000-year-old finger marks on a cave wall in France to 114,000-year-old ocher-painted shells in Spain.

“Traditionally viewed as lacking the cognitive flexibility and symbolic capacity of humans, the Neanderthals of Crimea demonstrate the opposite: They engaged in cultural practices that were not merely adaptive but deeply meaningful,” wrote D’Errico and his colleagues. “Their sophisticated use of ocher is one facet of their complex cultural life.”


The tip of this red ocher crayon was broken off. Credit: D’Errico et al. 2025

Coloring in some details of Neanderthal culture

It’s hard to say whether the rest of the ocher from the Zaskalnaya sites and other nearby rock shelters meant anything to the Neanderthals beyond the purely pragmatic. However, it’s unlikely that humans (of any stripe) could spend 70,000 years working with vividly colored pigment without developing a sense of aesthetics, assigning some meaning to the colors, or maybe doing both.

Wear marks suggest Neanderthals made ocher crayons Read More »


Neural network finds an enzyme that can break down polyurethane

You’ll often hear plastic pollution referred to as a problem. But the reality is that it’s multiple problems. Depending on the properties we need, we form plastics out of different polymers, each of which is held together by a distinct type of chemical bond. So the method we use to break down one type of polymer may be incompatible with the chemistry of another.

That problem is why, even though we’ve had success finding enzymes that break down common plastics like polyesters and PET, they’re only partial solutions to plastic waste. However, researchers aren’t sitting back and basking in the triumph of partial solutions, and they’ve now got very sophisticated protein design tools to help them out.

That’s the story behind a completely new enzyme that researchers developed to break down polyurethane, the polymer commonly used to make foam cushioning, among other things. The new enzyme is compatible with an industrial-style recycling process that breaks the polymer down into its basic building blocks, which can be used to form fresh polyurethane.

Breaking down polyurethane


The basics of the chemical bonds that link polyurethanes. The rest of the polymer is represented by X’s here.

The new paper that describes the development of this enzyme lays out the scale of the problem: In 2024, we made 22 million metric tons of polyurethane. The urethane bond that defines these polymers involves a nitrogen bonded to a carbon that, in turn, is bonded to two oxygens, one of which links into the rest of the polymer. The rest of the polymer, linked by these bonds, can be fairly complex and often contains ringed structures related to benzene.

Digesting polyurethanes is challenging. Individual polymer chains are often extensively cross-linked, and the bulky structures can make it difficult for enzymes to get at the bonds they can digest. A chemical called diethylene glycol can partially break these molecules down, but only at elevated temperatures. And it leaves behind a complicated mess of chemicals that can’t be fed back into any useful reactions. Instead, it’s typically incinerated as hazardous waste.

Neural network finds an enzyme that can break down polyurethane Read More »


New study settles 40-year debate: Nanotyrannus is a new species

For four decades, a frequently acrimonious debate has raged in paleontological circles about the correct taxonomy for a handful of rare fossil specimens. One faction insisted the fossils were juvenile Tyrannosaurus rex; the other argued that they represented a new species dubbed Nanotyrannus lancensis. Now, paleontologists believe they have settled the debate once and for all due to a new analysis of a well-preserved fossil.

The verdict: It is indeed a new species, according to a new paper published in the journal Nature. The authors also reclassified another specimen as a second new species, distinct from N. lancensis. In short, Nanotyrannus is a valid taxon and contains two species.

“This fossil doesn’t just settle the debate,” said Lindsay Zanno, a paleontologist at North Carolina State University and head of paleontology at North Carolina Museum of Natural Sciences. “It flips decades of T. rex research on its head.” That’s because paleontologists have relied on such fossils to model the growth and behavior of T. rex. The new findings suggest that there could have been multiple tyrannosaur species and that paleontologists have been underestimating the diversity of dinosaurs from this period.

Our story begins in 1942, when the fossilized skull of a Nanotyrannus, nicknamed Chomper, was excavated in Montana by a Cleveland Museum of Natural History expedition. Originally, paleontologists thought it belonged to a Gorgosaurus, but a 1965 paper challenged that identification and argued that the skull belonged to a juvenile T. rex. It wasn’t until 1988 that scientists proposed that the skull was actually that of a new species, Nanotyrannus. It’s been a constant back-and-forth ever since.

As recently as 2020, a highly influential paper claimed that Nanotyrannus was definitively a juvenile T. rex. Yet a substantial number of paleontologists still believed it should be classified as a distinct species. A January 2024 paper, for instance, came down firmly on the Nanotyrannus side of the debate. Co-authors Nicholas Longrich of the University of Bath and Evan Saitta of the University of Chicago measured the growth rings in Nanotyrannus bones and concluded the animals were nearly fully grown.

Dueling dinosaurs


Lindsay Zanno of North Carolina State University, who also heads paleontology at the North Carolina Museum of Natural Sciences, with the “dueling dinosaurs” fossil. Credit: N.C. State University/CC BY-NC-ND

Furthermore, there was no evidence of hybrid fossils combining features of both Nanotyrannus and T. rex, which one would expect if the former were a juvenile version of the latter. Longrich and Saitta had also discovered a skull bone, archived in a San Francisco museum, that did belong to a juvenile T. rex, and they were able to do an anatomical comparison. They argued that Nanotyrannus had a lighter build, longer limbs, and larger arms than a T. rex and likely was smaller, faster, and more agile.

New study settles 40-year debate: Nanotyrannus is a new species Read More »


The chemistry behind that pricey cup of civet coffee

A sampling of scat

Kopi luwak is quite popular, with well-established markets in several South and East Asian countries. Its popularity has risen in Europe and the US as well, and India has recently become an emerging new market. Since there haven’t been similar studies of the chemical properties of kopi luwak from the Indian subcontinent, the authors of this latest study decided to fill that scientific gap. They focused on civet coffee produced in Kodagu, a region that accounts for nearly 36 percent of India’s total coffee production.

The authors collected 68 fresh civet scat samples from five different sites in Kodagu during peak fruit harvesting in January of this year. Collectors wore gloves to avoid contamination of the samples. For comparative analysis, they also harvested several bunches of ripened Robusta coffee berries. They washed the scat samples to remove the feces and also removed any palm seeds or other elements to ensure only Robusta beans remained.

For the manually harvested berries, the authors removed the pulp after a natural fermentation process and then sun-dried the beans for seven days. They then removed the hulls of both scat-derived and manually harvested berries and dried the beans in an oven for two hours. None of the bean samples were roasted, since roasting might significantly alter the acidity and chemical composition of the samples. For the chemical analysis, 10 distinct samples (five from each site where berries were collected) were ground into powder and subjected to various tests.

The civet beans had higher fat levels, particularly compounds known to influence aroma and flavor, such as caprylic acid and methyl esters, but lower levels of caffeine, protein, and acidity, which would reduce the bitterness. The lower acidity is likely due to the coffee berries being naturally fermented in the civets’ digestive tracts, and there is more to learn about the role the gut microbiome plays in all of this. There were also several volatile organic compounds, common to standard coffee, that were extremely low or absent entirely in the civet samples.

In short, the comparative analysis “further supports the notion that civet coffee is chemically different from conventionally produced coffee of similar types, mainly due to fermentation,” the authors concluded. They recommend further research using roasted samples, along with studying other coffee varieties, samples from a more diverse selection of farms, and the influence of certain ecological conditions, such as canopy cover and the presence of wild trees.

Scientific Reports, 2025. DOI: 10.1038/s41598-025-21545-x  (About DOIs).

The chemistry behind that pricey cup of civet coffee Read More »


Westinghouse is claiming a nuclear deal would see $80B of new reactors

On Tuesday, Westinghouse announced that it had reached an agreement with the Trump administration that would purportedly see $80 billion of new nuclear reactors built in the US. And the government indicated that it had finalized plans for a collaboration of GE Vernova and Hitachi to build additional reactors. Unfortunately, there are roughly zero details about the deal at the moment.

The agreements were apparently negotiated during President Trump’s trip to Japan. An announcement of those agreements indicates that “Japan and various Japanese companies” would invest “up to” $332 billion for energy infrastructure. This specifically mentioned Westinghouse, GE Vernova, and Hitachi. This promises the construction of both large AP1000 reactors and small modular nuclear reactors. The announcement then goes on to indicate that many other companies would also get a slice of that “up to $332 billion,” many for basic grid infrastructure.

So the total amount devoted to nuclear reactors is not specified in the announcement or anywhere else. As of publication time, the Department of Energy has no information on the deal; the Hitachi, GE Vernova, and Hitachi/GE Vernova collaboration websites are also silent on it.

Meanwhile, Westinghouse claims that it will be involved in the construction of “at least $80 billion of new reactors,” a mix of AP1000 and AP300 units (each named for the megawatt capacity of its reactor/generator combination). The company claims that doing so will “reinvigorate the nuclear power industrial base.”

Westinghouse is claiming a nuclear deal would see $80B of new reactors Read More »


Melissa strikes Jamaica, tied as most powerful Atlantic storm to come ashore

Hurricane Melissa made landfall in southwestern Jamaica, near New Hope, on Tuesday at 1 pm ET with staggeringly powerful sustained winds of 185 mph.

In the National Hurricane Center update noting the precise landfall time and location, specialist Larry Kelly characterized Melissa as an “extremely dangerous and life-threatening” hurricane. Melissa is bringing very heavy rainfall, damaging surge, and destructive winds to the small Caribbean island that is home to about 3 million people.

The effects on the island are sure to be catastrophic and prolonged.

A record-breaking hurricane by any measure

By any measure, Melissa is an extraordinary and catastrophic storm.

By strengthening overnight and then maintaining its incredible intensity of 185 mph, Melissa has tied the Labor Day Hurricane of 1935 as the most powerful hurricane to strike a landmass in the Atlantic Basin, which includes the United States, Mexico, Central America, and the Caribbean islands.

Melissa also tied the Labor Day storm, which struck the Florida Keys, as the most intense storm at landfall, measured by central pressure at 892 millibars.

Overall, Melissa is tied as the second-strongest hurricane, measured by winds, ever observed in the Atlantic basin, behind only Hurricane Allen and its 190 mph winds in 1980. Only Hurricanes Wilma (882 millibars) and Gilbert (888 millibars) have recorded lower pressures at sea.

Melissa strikes Jamaica, tied as most powerful Atlantic storm to come ashore Read More »


Why imperfection could be key to Turing patterns in nature

In essence, it’s a type of symmetry breaking. Any two processes that act as activator and inhibitor will produce periodic patterns and can be modeled using Turing’s reaction-diffusion equations. The challenge is moving from Turing’s admittedly simplified model to pinpointing the precise mechanisms serving in the activator and inhibitor roles.

This is especially challenging in biology. Per the authors of this latest paper, the classical approach to a Turing mechanism balances reaction and diffusion using a single length scale, but biological patterns often incorporate multiscale structures, grain-like textures, or certain inherent imperfections. And the resulting patterns are often much blurrier than those found in nature.
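To make the classical mechanism concrete, a generic Gray-Scott activator-inhibitor simulation (a standard textbook reaction-diffusion model, not the authors’ diffusiophoresis code, and with stock parameter values) grows Turing-like spots in a few dozen lines:

```python
import numpy as np

# Gray-Scott reaction-diffusion: U is the consumed substrate, V the
# autocatalytic "activator"; unequal diffusion rates yield spot patterns.
# Parameters are common textbook values, not taken from the paper.
N, Du, Dv, F, k, dt = 128, 0.16, 0.08, 0.035, 0.065, 1.0
U = np.ones((N, N))
V = np.zeros((N, N))
U[60:68, 60:68], V[60:68, 60:68] = 0.5, 0.25  # seed a local perturbation

def laplacian(Z):
    """5-point stencil with periodic boundaries."""
    return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
            np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)

for _ in range(5000):
    uvv = U * V * V
    U += dt * (Du * laplacian(U) - uvv + F * (1 - U))
    V += dt * (Dv * laplacian(V) + uvv - (F + k) * V)

print("V range after 5,000 steps:",
      round(float(V.min()), 3), round(float(V.max()), 3))
```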

Can you say “diffusiophoresis”?

Simulated hexagon and stripe patterns obtained by diffusiophoretic assembly of two types of cells on top of the chemical patterns. Credit: Siamak Mirfendereski and Ankur Gupta/CU Boulder

In 2023, CU Boulder biochemical engineers Ankur Gupta and Benjamin Alessio developed a new model that added diffusiophoresis into the mix. It’s a process by which colloids are transported along solute concentration gradients—the same process by which soap diffuses out of laundry in water, dragging particles of dirt out of the fabric. Gupta and Alessio successfully used their new model to simulate the distinctive hexagon pattern (alternating purple and black) on the ornate boxfish, native to Australia, achieving much sharper outlines than the model originally proposed by Turing.

The problem was that the simulations produced patterns that were too perfect: hexagons that were all the same size and shape and an identical distance apart. Animal patterns in nature, by contrast, are never perfectly uniform. So Gupta and his CU Boulder co-author on this latest paper, Siamak Mirfendereski, figured out how to tweak the model to get the pattern outputs they desired. All they had to do was define specific sizes for individual cells. For instance, larger cells create thicker outlines, and when they cluster, they produce broader patterns. And sometimes the cells jam up and break a stripe. Their revised simulations produced patterns and textures very similar to those found in nature.

“Imperfections are everywhere in nature,” said Gupta. “We proposed a simple idea that can explain how cells assemble to create these variations. We are drawing inspiration from the imperfect beauty of [a] natural system and hope to harness these imperfections for new kinds of functionality in the future.” Possible future applications include “smart” camouflage fabrics that can change color to better blend with the surrounding environment, or more effective targeted drug delivery systems.

Matter, 2025. DOI: 10.1016/j.matt.2025.102513 (About DOIs).

Why imperfection could be key to Turing patterns in nature Read More »