robotics


Research roundup: 6 cool stories we almost missed


The assassination of a Hungarian duke, why woodpeckers grunt when they peck, and more.

Skull from the remains found in a 13th-century Dominican monastery on Margaret Island, Budapest, Hungary. Credit: Eötvös Loránd University

It’s a regrettable reality that there is never enough time to cover all the interesting scientific stories we come across each month. In the past, we’ve featured year-end roundups of cool science stories we (almost) missed. This year, we’re experimenting with a monthly collection. November’s list includes forensic details of the medieval assassination of a Hungarian duke, why woodpeckers grunt when they peck, and more evidence that X’s much-maligned community notes might actually help combat the spread of misinformation after all.

An assassinated medieval Hungarian duke

The observed perimortem lesions on the human remains (CL = cranial lesion, PL = postcranial lesion). The drawing of the skeleton was generated using OpenAI’s image generation tools (DALL·E) via ChatGPT.

Credit: Tamás Hajdu et al., 2026

Back in 1915, archaeologists discovered the skeletal remains of a young man in a Dominican monastery on Margaret Island in Budapest, Hungary. The remains were believed to be those of Duke Béla of Macsó, grandson of the medieval Hungarian King Béla IV. Per historical records, the young duke was brutally assassinated in 1272 by a rival faction, and his mutilated remains were recovered by his sister and niece and buried in the monastery.

The identification of the remains was based on an osteological analysis conducted at the time, but the bones were subsequently lost and only rediscovered in 2018. A paper published in the journal Forensic Science International: Genetics has now confirmed that identification and shed more light on precisely how the duke died. (A preprint is available on bioRxiv.)

An interdisciplinary team of researchers performed various kinds of bioarchaeological analysis on the remains, including genetic testing, proteomics, 3D modeling, and radiocarbon dating. The resulting data definitively proves that the skeleton is indeed that of Duke Béla of Macsó.

The authors were also able to reconstruct the manner of the duke’s death, concluding that it was a coordinated attack by three people: one assailant attacked from the front while the other two struck from the left and right. The duke faced his assassins and tried to defend himself. The weapons used were most likely a saber and a long sword, and the assassins kept raining down blows even after the duke had fallen to the ground. The authors concluded that while the attack was clearly planned, it was also personal, fueled by rage or hate.

DOI: Forensic Science International: Genetics, 2025. 10.1016/j.fsigen.2025.103381  (About DOIs).

Why woodpeckers grunt when they peck

A male Pileated woodpecker foraging on a tree.

Woodpeckers energetically drum away at tree trunks all day long with their beaks and yet somehow never seem to get concussions, despite the fact that such drumming can produce deceleration forces as high as 1,200 g’s. (Humans suffer concussions with a sudden deceleration of just 100 g’s.) While popular myth holds that woodpecker heads are structured in such a way as to absorb the shock, and there has been some science to back that up, more recent research found that their heads act more like hammers than shock absorbers. A paper published in the Journal of Experimental Biology sheds further light on the biomechanics of how woodpeckers essentially turn themselves into hammers and reveals that the birds actually grunt as they strike wood.

The authors caught eight wild downy woodpeckers and recorded them drilling and tapping on pieces of hardwood in the lab for three days, while also measuring electrical signals in their heads, necks, abdomens, tails, and leg muscles. Analyzing the footage, they found that woodpeckers use their hip flexors and front neck muscles to propel themselves forward as they peck while tipping their heads back and bracing themselves using muscles at the base of the skull and back of the neck. The birds use abdominal muscles for stability and brace for impact using their tail muscles to anchor their bodies against a tree. As for the grunting, the authors noted that it’s a type of breathing pattern used by tennis players (and martial artists) to boost the power of a strike.

DOI: Journal of Experimental Biology, 2025. 10.1242/jeb.251167  (About DOIs).

Raisins turn water into wine

wine glass half filled with raisins

Credit: Kyoto University

Fermentation has been around in some form for millennia, relying on alcohol-producing yeasts like Saccharomyces cerevisiae; cultured S. cerevisiae is still used by winemakers today. It’s long been thought that winemakers in ancient times stored fresh crushed grapes in jars and relied on natural fermentation to work its magic, but recent studies have called this into question by demonstrating that S. cerevisiae colonies usually don’t form on fresh grape skins. But the yeast does like raisins, as Kyoto University researchers recently discovered. They’ve followed up that earlier work with a paper published in Scientific Reports, demonstrating that it’s possible to use raisins to turn water into wine.

The authors harvested fresh grapes and dried them for 28 days. Some were dried using an incubator, some were sun-dried, and a third batch was dried using a combination of the two methods. The researchers then added the resulting raisins to bottles of water—three samples for each type of drying process—sealed the bottles, and stored them at room temperature for two weeks. One incubator-dried sample and two combo samples successfully fermented, but all three of the sun-dried samples did so, and at higher ethanol concentrations. Future research will focus on identifying the underlying molecular mechanisms. And for those interested in trying this at home, the authors warn that it only works with naturally sun-dried raisins, since store-bought varieties have oil coatings that block fermentation.

DOI: Scientific Reports, 2025. 10.1038/s41598-025-23715-3  (About DOIs).

An octopus-inspired pigment

An octopus camouflages itself against the seafloor.

Credit: Charlotte Seid

Octopuses, cuttlefish, and several other cephalopods can rapidly shift the colors in their skin thanks to that skin’s unique complex structure, including layers of chromatophores, iridophores, and leucophores. A color-shifting natural pigment called xanthommatin also plays a key role, but it’s been difficult to study because it’s hard to harvest enough directly from animals, and lab-based methods of making the pigment are labor-intensive and don’t yield much. Scientists at the University of California, San Diego have developed a new method for making xanthommatin in substantially larger quantities, according to a paper published in Nature Biotechnology.

The issue is that trying to get microbes to make foreign compounds creates a metabolic burden, and the microbes hence resist the process, hindering yields. The UC San Diego team figured out how to trick the cells into producing more xanthommatin by genetically engineering them in such a way that making the pigment was essential to a cell’s survival. They achieved yields of between 1 and 3 grams per liter, compared to just 5 milligrams of pigment per liter using traditional approaches. While this work is proof of principle, the authors foresee such future applications as photoelectronic devices and thermal coatings, dyes, natural sunscreens, color-changing paints, and environmental sensors. It could also be used to make other kinds of chemicals and help industries shift away from older methods that rely on fossil fuel-based materials.
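The engineering trick described here resembles what metabolic engineers call growth-coupled production: wire the strain so it cannot grow without also making the product, and selection for growth does the rest. Below is a deliberately cartoonish flux-balance sketch of that logic (toy numbers and a made-up two-reaction network, not the paper’s actual design), just to show why coupling turns growth maximization into pigment production.

```python
# A cartoon of growth-coupled production (toy network, not the paper's):
# the cell is modeled as a tiny flux-balance problem in which the engineered
# stoichiometry forces pigment flux to scale with growth flux, so maximizing
# growth necessarily produces pigment. Requires scipy.
from scipy.optimize import linprog

UPTAKE = 10.0   # total substrate flux available (arbitrary units)
COUPLE = 0.5    # engineered coupling: pigment flux >= 0.5 * growth flux

# variables: x = [v_growth, v_pigment]; linprog minimizes, so negate growth
result = linprog(
    c=[-1.0, 0.0],
    A_ub=[
        [1.0, 1.0],       # both fluxes draw on the same substrate budget
        [COUPLE, -1.0],   # coupling constraint: COUPLE*growth - pigment <= 0
    ],
    b_ub=[UPTAKE, 0.0],
    bounds=[(0, None), (0, None)],
)
v_growth, v_pigment = result.x
print(f"growth flux:  {v_growth:.2f}")   # fastest-growing cells...
print(f"pigment flux: {v_pigment:.2f}")  # ...unavoidably make pigment
```

In an uncoupled strain (COUPLE = 0), the same optimization drives pigment flux to zero, which is the “metabolic burden” problem in miniature.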

DOI: Nature Biotechnology, 2025. 10.1038/s41587-025-02867-7  (About DOIs).

A body-swap robot

Participant standing on body-swap balance robot

Credit: Sachi Wickramasinghe/UBC Media Relations

Among the most serious risks facing older adults is falling. According to the authors of a paper published in Science Robotics, standing upright requires the brain to coordinate signals from the eyes, inner ears, and feet to counter gravity, and there’s a natural lag in how fast this information travels back and forth between brain and muscles. Aging and certain diseases like diabetic neuropathy and multiple sclerosis can further delay that vital communication; the authors liken it to steering a car with a wheel that responds half a second late. And it’s a challenge to directly study the brain under such conditions.

That’s why researchers at the University of British Columbia built a large “body swap” robotic platform. Subjects stood on force plates attached to a motor-driven backboard to reproduce the physical forces at play when standing upright: gravity, inertia, and “viscosity,” which in this case describes the damping effect of muscles and joints that allow us to lean without falling. The platform is designed to subtly alter those forces and also add a 200-millisecond delay.

The authors tested 20 participants and found that lowering inertia and making the viscosity negative produced instability similar to that caused by a signal delay. They then brought in 10 new subjects to study whether adjusting body mechanics could compensate for information delays. They found that adding inertia and viscosity could at least partially counter the instability arising from signal delay—essentially giving the body a small mechanical boost to help the brain maintain balance. The eventual goal is to design wearables that offer gentle resistance when an older person starts to lose their balance, and/or help patients with MS, for example, adjust to slower signal feedback.
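The basic interplay of delay, inertia, and damping is easy to reproduce in a toy model: an inverted pendulum held upright by delayed proportional-derivative feedback. The sketch below uses invented parameters and is not the authors’ model or code; it just demonstrates the qualitative result that a 200-millisecond feedback delay can topple an otherwise stable controller, while extra mechanical inertia and damping restore balance.

```python
# Toy balance model: inverted pendulum + delayed PD feedback (illustrative
# parameters only). The "brain" acts on state information that is delay_s
# seconds old; extra_inertia and extra_damping act instantly, standing in
# for the mechanical adjustments in the study.
import math

def stays_upright(delay_s=0.2, extra_inertia=0.0, extra_damping=0.0,
                  duration=12.0, dt=0.001):
    g, leg, m = 9.81, 1.0, 70.0              # gravity, CoM height, mass
    inertia = m * leg**2 + extra_inertia     # rotational inertia at ankles
    kp, kd = 1600.0, 400.0                   # "neural" feedback gains
    theta, omega = 0.02, 0.0                 # small initial lean (radians)
    buf = [(theta, omega)] * max(1, int(delay_s / dt))  # stale-state buffer
    for _ in range(int(duration / dt)):
        th_old, om_old = buf.pop(0)          # controller sees old state
        buf.append((theta, omega))
        torque = -kp * th_old - kd * om_old - extra_damping * omega
        omega += (m * g * leg * math.sin(theta) + torque) / inertia * dt
        theta += omega * dt
        if abs(theta) > 0.5:                 # leaned past ~30 degrees: fell
            return False
    return True

print(stays_upright(delay_s=0.0))   # fast feedback: stays up
print(stays_upright(delay_s=0.2))   # 200 ms delay: oscillates and falls
print(stays_upright(delay_s=0.2, extra_inertia=60.0, extra_damping=300.0))
```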

DOI: Science Robotics, 2025. 10.1126/scirobotics.adv0496  (About DOIs).

X community notes might actually work

cropped image of phone screen showing an X post with a community note underneath

Credit: Huaxia Rui

Earlier this year, Elon Musk claimed that X’s community notes feature needed tweaking because it was being gamed by “government & legacy media” to contradict Trump—despite vigorously defending the robustness of the feature against such manipulation in the past. A growing body of research seems to back Musk’s earlier stance.

For instance, last year Bloomberg pointed to several studies suggesting that crowdsourcing worked just as well as professional fact-checkers when assessing the accuracy of news stories. The latest evidence that crowdsourced fact-checking can be effective at curbing misinformation comes from a paper published in the journal Information Systems Research, which found that X posts with public corrections were 32 percent more likely to be deleted by their authors.

Co-author Huaxia Rui of the University of Rochester pointed out that community notes must meet a scoring threshold before they appear publicly on posts; those that fall short remain hidden from public view. Seeing a prime opportunity in that arrangement, Rui et al. analyzed 264,600 X posts that had received at least one community note and compared those just above and just below the threshold. The posts were collected from two periods: June through August 2024, right before the US presidential election (when misinformation typically surges), and the post-election period of January and February 2025.
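Comparing posts just above and just below a publication cutoff is, in effect, a regression discontinuity design: because nothing else about a post changes abruptly at the threshold, any jump in outcomes there can be read as the causal effect of the note becoming visible. Here is a minimal sketch of that logic on simulated data (every number below is invented; the real study used X’s actual note-scoring threshold):

```python
# Regression discontinuity in miniature: simulate posts whose notes get
# published once a helpfulness score crosses a cutoff, then compare deletion
# rates in a narrow band on either side. All numbers are made up.
import numpy as np

rng = np.random.default_rng(0)
CUTOFF = 0.40                              # stand-in for X's score threshold
score = rng.uniform(0.2, 0.6, 50_000)      # simulated note helpfulness scores
published = score >= CUTOFF                # note shown publicly above cutoff
base = 0.10                                # deletion rate with a hidden note
boost = 0.032                              # extra deletions from a public note
deleted = rng.random(score.size) < base + boost * published

band = np.abs(score - CUTOFF) < 0.05       # posts "just" above/below
below = deleted[band & ~published].mean()
above = deleted[band & published].mean()
print(f"just below cutoff: {below:.3f}")   # ~0.10
print(f"just above cutoff: {above:.3f}")   # ~0.132, i.e. ~32% more likely
print(f"discontinuity estimate: {above - below:.3f}")
```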

The fact that a public community note makes authors substantially more likely to delete the offending post suggests that the built-in dynamics of social media (e.g., status, visibility, peer feedback) might actually help curb the spread of misinformation as intended. The authors concluded that crowd-checking “strikes a balance between First Amendment rights and the urgent need to curb misinformation.” Letting AI write the community notes, however, is probably still a bad idea.

DOI: Information Systems Research, 2025. 10.1287/isre.2024.1609  (About DOIs).


Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.



Why iRobot’s founder won’t go within 10 feet of today’s walking robots

In his post, Brooks recounts being “way too close” to an Agility Robotics Digit humanoid when it fell several years ago. He has not dared approach a walking one since. Even in promotional videos from humanoid companies, Brooks notes, humans are never shown close to moving humanoid robots unless separated by furniture, and even then, the robots only shuffle minimally.

This safety problem extends beyond accidental falls. For humanoids to fulfill their promised role in health care and factory settings, they need certification to operate in zones shared with humans. Current walking mechanisms make such certification virtually impossible under existing safety standards in most parts of the world.


The humanoid Apollo robot. Credit: Google

Brooks predicts that within 15 years, there will indeed be many robots called “humanoids” performing various tasks. But ironically, they will look nothing like today’s bipedal machines. They will have wheels instead of feet, varying numbers of arms, and specialized sensors that bear no resemblance to human eyes. Some will have cameras in their hands or looking down from their midsections. The definition of “humanoid” will shift, just as “flying cars” now means electric helicopters rather than road-capable aircraft, and “self-driving cars” means vehicles with remote human monitors rather than truly autonomous systems.

The billions currently being invested in forcing today’s rigid, vision-only humanoids to learn dexterity will largely disappear, Brooks argues. Academic researchers are making more progress with systems that incorporate touch feedback, like MIT’s approach using a glove that transmits sensations between human operators and robot hands. But even these advances remain far from the comprehensive touch sensing that enables human dexterity.

Today, few people spend their days near humanoid robots, but Brooks’ 3-meter rule stands as a practical warning of challenges ahead from someone who has spent decades building these machines. The gap between promotional videos and deployable reality remains large, measured not just in years but in fundamental unsolved problems of physics, sensing, and safety.



Google DeepMind unveils its first “thinking” robotics AI

Imagine that you want a robot to sort a pile of laundry into whites and colors. Gemini Robotics-ER 1.5 would process the request along with images of the physical environment (a pile of clothing). This AI can also call tools like Google search to gather more data. The ER model then generates natural language instructions, specific steps that the robot should follow to complete the given task.

The two new models work together to “think” about how to complete a task. Credit: Google

Gemini Robotics 1.5 (the action model) takes these instructions from the ER model and generates robot actions while using visual input to guide its movements. But it also goes through its own thinking process to consider how to approach each step. “There are all these kinds of intuitive thoughts that help [a person] guide this task, but robots don’t have this intuition,” said DeepMind’s Kanishka Rao. “One of the major advancements that we’ve made with 1.5 in the VLA is its ability to think before it acts.”

Both of DeepMind’s new robotic AIs are built on the Gemini foundation models but have been fine-tuned with data that adapts them to operating in a physical space. This approach, the team says, gives robots the ability to undertake more complex multi-stage tasks, bringing agentic capabilities to robotics.

The DeepMind team tests Gemini Robotics with a few different machines, like the two-armed Aloha 2 and the humanoid Apollo. In the past, AI researchers had to create customized models for each robot, but that’s no longer necessary. DeepMind says that Gemini Robotics 1.5 can learn across different embodiments, transferring skills learned from Aloha 2’s grippers to the more intricate hands on Apollo with no specialized tuning.

All this talk of physical agents powered by AI is fun, but we’re still a long way from a robot you can order to do your laundry. Gemini Robotics 1.5, the model that actually controls robots, is still only available to trusted testers. However, the thinking ER model is now rolling out in Google AI Studio, allowing developers to generate robotic instructions for their own physically embodied robotic experiments.
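In practice, prompting the ER model from AI Studio looks like prompting any other Gemini model: send an image of the scene plus a task description, get back ordered steps. A rough sketch with the google-genai Python SDK follows (the model ID, file name, and prompt are placeholders for illustration, not values from Google’s announcement):

```python
# Hypothetical sketch: asking an embodied-reasoning (ER) model for robot
# instructions via the google-genai SDK. Model ID and inputs are placeholders.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

with open("workspace.jpg", "rb") as f:  # camera view of the laundry pile
    scene = types.Part.from_bytes(data=f.read(), mime_type="image/jpeg")

response = client.models.generate_content(
    model="gemini-robotics-er-1.5-preview",  # placeholder model ID
    contents=[
        scene,
        "Sort this laundry into whites and colors. Reply with a numbered "
        "list of short, concrete steps a two-armed robot could execute.",
    ],
)
print(response.text)  # natural-language steps for the action model to follow
```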



A robot walks on water thanks to evolution’s solution

Robots can serve pizza, crawl over alien planets, swim like octopuses and jellyfish, cosplay as humans, and even perform surgery. But can they walk on water?

Rhagobot isn’t exactly the first thing that comes to mind at the mention of a robot. Inspired by Rhagovelia water striders, semiaquatic insects also known as ripple bugs, these tiny bots can glide across rushing streams because of the robotization of an evolutionary adaptation.

Rhagovelia (as opposed to other species of water striders) have fan-like appendages toward the ends of their middle legs that passively open and close depending on how the water beneath them is moving. This is why they appear to glide effortlessly across the water’s surface. Biologist Victor Ortega-Jimenez of the University of California, Berkeley, was intrigued by how such tiny insects can accelerate and pull off rapid turns and other maneuvers, almost as if they are flying across a liquid surface.

“Rhagovelia’s fan serves as an inspiring template for developing self-morphing artificial propellers, providing insights into their biological form and function,” he said in a study recently published in Science. “Such configurations are largely unexplored in semi-aquatic robots.”

Mighty morphin’

It took Ortega-Jimenez five years to figure out how the bugs get around. While Rhagovelia leg fans were thought to morph because they were powered by muscle, he found that the appendages automatically adjusted to the surface tension and elastic forces beneath them, passively opening and closing ten times faster than the blink of an eye. They expand immediately when making contact with water and change shape depending on the flow.

By covering an extensive surface area for their size and maintaining their shape when the insects move their legs, Rhagovelia fans generate a tremendous amount of propulsion. They also do double duty. Despite being rigid enough to resist deformation when extended, the fans are still flexible enough to easily collapse, adhering to the claw above to keep from getting in the animal’s way when it’s out of water. It also helps that the insects have hydrophobic legs that repel water that could otherwise weigh them down.

Ortega-Jimenez and his research team observed the leg fans using a scanning electron microscope. If they were going to create a robot based on ripple bugs, they needed to know the exact structure they were going for. After experimenting with cylindrical fans, the researchers found that Rhagovelia fans are actually structures made of many flat barbs with barbules, something that was previously unknown.



Robots eating other robots: The benefits of machine metabolism


If you define “metabolism” loosely enough, these robots may have one.

For decades we’ve been trying to make robots smarter and more physically capable by mimicking biological intelligence and movement. “But in doing so, we’ve been just replicating the results of biological evolution—I say we need to replicate its methods,” argues Philippe Wyder, a developmental robotics researcher at Columbia University. Wyder led a team that demonstrated a machine with a rudimentary form of what they’re calling a metabolism.

He and his colleagues built a robot that could consume other robots to physically grow, become stronger, more capable, and continue functioning.

Nature’s methods

The idea of robotic metabolism combines various concepts in AI and robotics. The first is artificial life, which Wyder termed “a field where people study the evolution of organisms through computer simulations.” Then there is the idea of modular robots: reconfigurable machines that can change their architecture by rearranging collections of basic modules, an approach pioneered in the US in the 1990s by researchers such as Daniela Rus and Mark Yim.

Finally, there is the idea that we need a shift from a goal-oriented design we’ve been traditionally implementing in our machines to a survivability-oriented design found in living organisms, which Magnus Egerstedt proposed in his book Robot Ecology.

Wyder’s team took all these ideas, merged them, and prototyped a robot that could “eat” other robots. “I kind of came at this from many different angles,” Wyder says.

The key source of inspiration, though, was the way nature builds its organisms. There are 20 standard amino acids universally used by life that can be combined into trillions of proteins, forming the building blocks of countless life forms. Wyder started his project by designing a basic robotic module that was intended to play a role roughly equivalent to a single amino acid. This module, called a Truss Link, looked like a rod: 16 centimeters long and containing batteries, electronic controllers, and servomotors that enabled it to expand, contract, and crawl in a straight line. Each Truss Link had permanent magnets at each end, which let it connect to other rods and form lightweight lattices.

Wyder’s idea was to throw a number of these modules in a confined space to see if they would assemble into more complex structures by bumping into each other. The process might be analogous to how amino acids spontaneously formed simple organic molecules roughly 4 billion years ago.

Robotic growth

The first stage of Wyder’s experiment was set up in a space with a few terrain features, like a drop, a few obstacles, and a standing cylinder. The robots were operated by the team, which directed them to form various structures. Three Truss Links connected by their magnets at one central point formed a three-pointed star. Other structures included a triangle; a diamond with a tail (a triangle connected to a three-pointed star); and a tetrahedron, a 3D structure that looks like a triangular pyramid. The robots had to find other Truss Links and make them part of their bodies to grow into more complex forms.

As they were growing, they were also becoming more capable. A single Truss Link could only move in a straight line, a triangle could turn left and right, a diamond with a tail could traverse small bumps, while a tetrahedron could move itself over small walls. Finally, a tetrahedron with a ratchet—an additional Truss Link the robot could use a bit like a walking stick—could assist other robots in forming tetrahedrons, which was a difficult, risky maneuver that took multiple attempts even for the skilled operators.

Still, all this growth in size and capability was orchestrated by the researchers controlling the hardware. The question was whether these self-assembly processes could work with no human overlords around.

“We wanted to know if the Truss Links would meet on their own,” Wyder says. “If the Truss Links are exactly parallel, they will never connect. But being parallel is just one configuration, and there are infinite configurations where they are not parallel.” To check how this would play out, the team used computer simulations of six randomly spawned and randomly moving Truss Links in a walled environment. In 2,000 runs, each 20 minutes long, the modules had a 64 percent chance of forming two three-pointed star shapes, a roughly 8.4 percent chance of assembling into two triangles, and a nearly 45 percent chance of ending up as a diamond with a tail. (Some of these configurations were intermediates on the pathway to others, so the numbers add up to more than 100 percent.)
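For intuition about why random motion is enough, here is a drastically simplified, headless stand-in for that kind of simulation (ours, not the team’s): rods jitter randomly inside a walled box, and a pair counts as “connected” the first time any two free endpoints drift within a magnet’s capture radius. It captures only the core point that non-parallel, randomly moving modules keep bumping into each other.

```python
# Toy stand-in for the Truss Link simulations: random rods in a box, with a
# "connection" logged when two endpoints pass within a magnetic capture
# radius. No crawling gait, joints, or physics -- just chance encounters.
import math
import random

def trial(n_rods=6, steps=20_000, box=100.0, rod=16.0, capture=1.5):
    rods = []
    for _ in range(n_rods):                      # random position and angle
        x, y = random.uniform(0, box), random.uniform(0, box)
        a = random.uniform(0, 2 * math.pi)
        rods.append([x, y, x + rod * math.cos(a), y + rod * math.sin(a)])
    connected = set()
    for _ in range(steps):
        r = random.randrange(n_rods)             # jitter one rod at a time
        dx, dy = random.gauss(0, 1.0), random.gauss(0, 1.0)
        rods[r] = [min(max(v + d, 0.0), box)     # clamp at the walls
                   for v, d in zip(rods[r], (dx, dy, dx, dy))]
        for s in range(n_rods):                  # log endpoint encounters
            pair = (min(r, s), max(r, s))
            if s == r or pair in connected:
                continue
            if any(math.dist(rods[r][i:i + 2], rods[s][j:j + 2]) < capture
                   for i in (0, 2) for j in (0, 2)):
                connected.add(pair)
    return len(connected)

runs = [trial() for _ in range(20)]
print("mean first-contact connections per run:", sum(runs) / len(runs))
```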

When moving randomly, Truss Links could also repair structures after their magnets got disconnected and even replace a malfunctioning Truss Link in the structure with a new one. But did they really metabolize anything?

Searching for purpose

The name “metabolism” comes from the Greek word “metabolē,” which means “change.” Wyder’s robots can assemble, grow, reconfigure, rebuild, and, to a limited extent, sustain themselves, which definitely qualifies as change.

But metabolism, as it’s commonly understood, involves consuming materials in ways that extract energy and transform their chemicals. The Truss Links are limited to using prefabricated, compatible modules—they can’t consume some plastic and old lithium-ion batteries and metabolize them into brand-new Truss Links. Whether this qualifies as metabolism depends more on how far we want to stretch the definition than on what the actual robots can do.

And stretching definitions, so far, may be their strongest use case. “I can’t give you a real-world use case,” Wyder acknowledges. “We tried to make the truss robots carry loads from one point to another, but it’s not even included in our paper—it’s a research platform at this point.” The first thing he thinks the robotic metabolism platform is missing is a wider variety of modules. The team used homogeneous modules in this work but is already thinking about branching out. “Life uses around 20 different amino acids to work, so we’re currently focusing on integrating additional modules with various sensors,” Wyder explains. But the robots are also lacking something way more fundamental: a purpose.

Life evolves to improve the chances of survival. It does so in response to pressures like predators or a challenging environment. A living thing is usually doing its best to avoid dying.

Egerstedt, in Robot Ecology, argues we should build and program robots the same way, with “survivability constraints” in mind. Wyder, in his paper, also claims we need to develop a “self-sustained robot ecology” in the future. But he also thinks we shouldn’t take this life analogy too far. His goal is not creating a robotic ecosystem where robots would hunt and feed on other robots, constantly improving their own designs.

“We would give robots a purpose. Let’s say a purpose is to build a lunar colony,” Wyder says. Survival should be the first objective, because if the platform doesn’t survive on the Moon, it won’t build a lunar colony. Multiple small units would first disperse to explore the area and then assemble into a bigger structure like a building or a crane. “And this large structure would absorb, recycle, or eat, if you will, all these smaller robots to integrate and make use of them,” Wyder claims.

A robotic platform like this, Wyder thinks, should adapt to unexpected circumstances even better than life itself. “There may be a moment where having a third arm would really save your life, but you can’t grow one. A robot, given enough time, won’t have that problem,” he says.

Science Advances, 2025.  DOI: 10.1126/sciadv.adu6897


Jacek Krywko is a freelance science and technology writer who covers space exploration, artificial intelligence research, computer science, and all sorts of engineering wizardry.



Robotic sucker can adapt to surroundings like an actual octopus

This isn’t the first time suction cups have been inspired by highly adaptive octopus suckers. Some models have used pressurized chambers meant to push against a surface and conform to it. Others have focused more on matching the morphology of a biological sucker. This has included giving the suckers microdenticles, the tiny tooth-like projections on octopus suckers that give them a stronger grip.

Previous methods of artificial conformation have had some success, but they could be prone to leakage from gaps between the sucker and the surface it is trying to stick to, and they often needed vacuum pumps to operate. Yue and his team created a sucker that was morphologically and mechanically similar to that of an octopus.

Suckers are muscular structures with extreme flexibility that helps them conform to objects without leakage, contract when gripping objects, and release tension when letting them go. This inspired the researchers to create suckers from a silicone sponge material on the inside and a soft silicone pad on the outside.

For the ultimate biomimicry, Yue thought that the answer to the problems experienced with previous models was to come up with a sucker that simulated the mucus secretion of octopus suckers.

This really sucks

Cephalopod suction was previously thought to be a product of these creatures’ soft, flexible bodies, which can deform easily to adapt to whatever surface they need to grip. Mucus secretion was mostly overlooked until Yue decided to incorporate it into his robo-suckers.

Mollusk mucus is known to be five times more viscous than water. For Yue’s suckers, an artificial fluidic system, designed to mimic the secretions released by glands on a biological sucker, creates a liquid seal between the sucker and the surface it is adhering to, just about eliminating gaps. It might not have the strength of octopus slime, but water is the next best option for a robot that is going to be immersed in water when it goes exploring, possibly in underwater caves or at the bottom of the ocean.



Research roundup: 7 stories we almost missed


Ping-pong bots, drumming chimps, the picking styles of two jazz greats, and an ancient underground city’s soundscape.

Time lapse photos show a new ping-pong-playing robot performing a top spin. Credit: David Nguyen, Kendrick Cancio and Sangbae Kim

It’s a regrettable reality that there is never enough time to cover all the interesting scientific stories we come across each month. In the past, we’ve featured year-end roundups of cool science stories we (almost) missed. This year, we’re experimenting with a monthly collection. May’s list includes a nifty experiment to make a predicted effect of special relativity visible; a ping-pong playing robot that can return hits with 88 percent accuracy; and the discovery of the rare genetic mutation that makes orange cats orange, among other highlights.

Special relativity made visible

The Terrell-Penrose effect: fast objects appear rotated.

Credit: TU Wien

Perhaps the most well-known features of Albert Einstein’s special theory of relativity are time dilation and length contraction. In 1959, two physicists predicted another feature of relativistic motion: an object moving near the speed of light should also appear to be rotated. It’s not been possible to demonstrate this experimentally, however—until now. Physicists at the Vienna University of Technology figured out how to reproduce this rotational effect in the lab using laser pulses and precision cameras, according to a paper published in the journal Communications Physics.

They found their inspiration in art, specifically an earlier project with artist Enar de Dios Rodriguez, who collaborated with VUT and the University of Vienna on work involving ultra-fast photography and slow light. For this latest research, they used objects shaped like a cube and a sphere and moved them around the lab while zapping them with ultrashort laser pulses, recording the flashes with a high-speed camera.

Getting the timing just right effectively mimics a light speed of just 2 meters per second. After photographing the objects many times using this method, the team combined the still images into a single image. The results: the cube looked twisted, and the sphere’s north pole was in a different location—a demonstration of the rotational effect predicted back in 1959.
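The geometry behind that 1959 prediction fits in two lines. In the standard textbook account (generic, not specific to this experiment), light from the far edge of a cube of side L moving at β = v/c must leave earlier to reach the camera together with light from the near edge, which exposes the trailing face; combined with length contraction of the side facing the camera, the photograph matches a rotated, uncontracted cube:

```latex
\[
  \underbrace{L\beta}_{\text{visible trailing face}} = L\sin\theta,
  \qquad
  \underbrace{L\sqrt{1-\beta^{2}}}_{\text{contracted facing side}} = L\cos\theta
  \quad\Longrightarrow\quad
  \theta = \arcsin\beta .
\]
```

So a cube photographed at β close to 1 looks rotated by nearly 90 degrees rather than flattened, which is why the lab images show twisting instead of contraction.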

DOI: Communications Physics, 2025. 10.1038/s42005-025-02003-6  (About DOIs).

Drumming chimpanzees

A chimpanzee feeling the rhythm. Credit: Current Biology/Eleuteri et al., 2025.

Chimpanzees are known to “drum” on the roots of trees as a means of communication, often combining that action with what are known as “pant-hoot” vocalizations. Scientists have found that the chimps’ drumming exhibits key elements of musical rhythm much like humans—specifically, non-random timing and isochrony—according to a paper published in the journal Current Biology. And chimps from different geographical regions have different drumming rhythms.

Back in 2022, the same team observed that individual chimps had unique styles of “buttress drumming,” which served as a kind of communication, letting others in the same group know their identity, location, and activity. This time around they wanted to know if this was also true of chimps living in different groups and whether their drumming was rhythmic in nature. So they collected video footage of the drumming behavior among 11 chimpanzee communities across six populations in East Africa (Uganda) and West Africa (Ivory Coast), amounting to 371 drumming bouts.

Their analysis of the drum patterns confirmed their hypothesis. The western chimps drummed in regularly spaced hits, used faster tempos, and started drumming earlier during their pant-hoot vocalizations. Eastern chimps would alternate between shorter and longer spaced hits. Since this kind of rhythmic percussion is one of the earliest evolved forms of human musical expression and is ubiquitous across cultures, findings such as this could shed light on how our love of rhythm evolved.

DOI: Current Biology, 2025. 10.1016/j.cub.2025.04.019  (About DOIs).

Distinctive styles of two jazz greats

Wes Montgomery (left) and Joe Pass (right) playing guitars

Jazz lovers likely need no introduction to Joe Pass and Wes Montgomery, 20th century guitarists who influenced generations of jazz musicians with their innovative techniques. Montgomery, for instance, didn’t use a pick, preferring to pluck the strings with his thumb—a method he developed because he practiced at night after working all day as a machinist and didn’t want to wake his children or neighbors. Pass developed his own range of picking techniques, including fingerpicking, hybrid picking, and “flat picking.”

Chirag Gokani and Preston Wilson, both with Applied Research Laboratories and the University of Texas at Austin, greatly admired both Pass and Montgomery and decided to explore the underlying acoustics of their distinctive playing, modeling the interactions of the thumb, fingers, and pick with a guitar string. They described their research during a meeting of the Acoustical Society of America in New Orleans, LA.

Among their findings: Montgomery achieved his warm tone by playing closer to the bridge and mostly plucking at the string. Pass’s rich tone arose from a combination of using a pick and playing closer to the guitar neck. There were also differences in how much the thumb, finger, and pick slip off the string: use of the thumb (Montgomery) produced more of a “pluck” compared to the pick (Pass), which produced more of a “strike.” Gokani and Wilson think their model could be used to synthesize digital guitars with a more realistic sound, as well as to help guitarists better emulate Pass and Montgomery.
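For context, the pluck-position effect they are modeling has a classic ideal-string form (the researchers’ model goes further by adding the finite-time slip of thumb, finger, and pick). Plucking an ideal string of length L to height h at a distance x₀ from the bridge excites the nth harmonic with amplitude:

```latex
\[
  b_n \;=\; \frac{2\,h\,L^{2}}{\pi^{2} n^{2}\, x_0 \,(L - x_0)}\,
            \sin\!\left(\frac{n\pi x_0}{L}\right),
\]
```

Any harmonic with a node at the pluck point drops out entirely, and moving x₀ changes how quickly the higher harmonics fall off with n—the mathematical seed of the tonal differences between Montgomery’s thumb near the bridge and Pass’s pick near the neck.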

Sounds of an ancient underground city

A collection of images from the underground tunnels of Derinkuyu.

Credit: Sezin Nas

Turkey is home to the underground city Derinkuyu, originally carved out inside soft volcanic rock around the 8th century BCE. It was later expanded to include four main ventilation channels (and some 50,000 smaller shafts) serving seven levels, which could be closed off from the inside with a large rolling stone. The city could hold up to 20,000 people and was connected to another underground city, Kaymakli, via tunnels. Derinkuyu sheltered residents during the Arab-Byzantine wars, served as a refuge from the Ottomans in the 14th century, and offered a haven for Armenians escaping persecution in the early 20th century, among other functions.

The tunnels were rediscovered in the 1960s, and about half of the city has been open to visitors since 2016. The site is naturally of great archaeological interest, but there has been little to no research on the acoustics of the site, particularly the ventilation channels—one of Derinkuyu’s most distinctive features, according to Sezin Nas, an architectural acoustician at Istanbul Galata University in Turkey. She gave a talk at a meeting of the Acoustical Society of America in New Orleans, LA, about her work on the site’s acoustic environment.

Nas analyzed a church, a living area, and a kitchen, measuring sound sources and reverberation patterns, among other factors, to create a 3D virtual soundscape. The hope is that a better understanding of this aspect of Derinkuyu could improve the design of future underground urban spaces—and that one day her virtual soundscape could let visitors experience the sounds of the city for themselves.

MIT’s latest ping-pong robot

Robots playing ping-pong have been a thing since the 1980s, of particular interest to scientists because it requires the robot to combine the slow, precise ability to grasp and pick up objects with dynamic, adaptable locomotion. Such robots need high-speed machine vision, fast motors and actuators, precise control, and the ability to make accurate predictions in real time, not to mention being able to develop a game strategy. More recent designs use AI techniques to allow the robots to “learn” from prior data to improve their performance.

MIT researchers have built their own version of a ping-pong playing robot, incorporating a lightweight design and the ability to precisely return shots. They built on prior work developing the Humanoid, a small bipedal two-armed robot—specifically, modifying the Humanoid’s arm by adding an extra degree of freedom to the wrist so the robot could control a ping-pong paddle. They tested their robot by mounting it on a ping-pong table and lobbing 150 balls at it from the other side of the table, capturing the action with high-speed cameras.

The new bot can execute three different swing types (loop, drive, and chip), and during the trial runs, it returned the ball with impressive accuracy across all three: 88.4 percent, 89.2 percent, and 87.5 percent, respectively. Subsequent tweaks to their system brought the robot’s strike speed up to 19 meters per second (about 42 mph), within the 12 to 25 meters per second range of advanced human players. The addition of control algorithms gave the robot the ability to aim. The robot still has limited mobility and reach because it has to be fixed to the ping-pong table, but the MIT researchers plan to rig it to a gantry or wheeled platform in the future to address that shortcoming.

Why orange cats are orange

an orange tabby kitten

Cat lovers know orange cats are special for more than their unique coloring, but that’s the quality that has intrigued scientists for almost a century. Sure, lots of animals have orange, ginger, or yellow hues, like tigers, orangutans, and golden retrievers. But in domestic cats that color is specifically linked to sex. Almost all orange cats are male. Scientists have now identified the genetic mutation responsible and it appears to be unique to cats, according to a paper published in the journal Current Biology.

Prior work had narrowed down the region on the X chromosome most likely to contain the relevant mutation. The scientists knew that females usually have just one copy of the mutation and in that case have tortoiseshell (partially orange) coloring, although in rare cases, a female cat will be orange if both X chromosomes have the mutation. Over the last five to ten years, there has been an explosion in genome resources (including fully sequenced genomes) for cats, which greatly aided the team’s research, along with additional DNA samples taken from cats at spay and neuter clinics.

From an initial pool of 51 candidate variants, the scientists narrowed it down to three genes, only one of which was likely to play any role in gene regulation: Arhgap36. It wasn’t known to play any role in pigment cells in humans, mice, or non-orange cats. But orange cats are special; their mutation (sex-linked orange) turns on Arhgap36 expression in pigment cells (and only pigment cells), thereby interfering with the molecular pathway that controls coat color in other orange-shaded mammals. The scientists suggest that this is an example of how genes can acquire new functions, thereby enabling species to better adapt and evolve.

DOI: Current Biology, 2025. 10.1016/j.cub.2025.03.075  (About DOIs).

Not a Roman “massacre” after all

Two of the skeletons excavated by Mortimer Wheeler in the 1930s, dating from the 1st century AD.

Credit: Martin Smith

In 1936, archaeologists excavating the Iron Age hill fort Maiden Castle in the UK unearthed dozens of human skeletons, all showing signs of lethal injuries to the head and upper body—likely inflicted with weaponry. At the time, this was interpreted as evidence of a pitched battle between the Britons of the local Durotriges tribe and invading Romans. The Romans slaughtered the native inhabitants, thereby bringing a sudden violent end to the Iron Age. At least that’s the popular narrative that has prevailed ever since in countless popular articles, books, and documentaries.

But a paper published in the Oxford Journal of Archaeology calls that narrative into question. Archaeologists at Bournemouth University have re-analyzed those burials, incorporating radiocarbon dating into their efforts. They concluded that those individuals didn’t die in a single brutal battle. Rather, it was Britons killing other Britons over multiple generations between the first century BCE and the first century CE—most likely in periodic localized outbursts of violence in the lead-up to the Roman conquest of Britain. It’s possible there are still many human remains waiting to be discovered at the site, which could shed further light on what happened at Maiden Castle.

DOI: Oxford Journal of Archaeology, 2025. 10.1111/ojoa.12324  (About DOIs).


Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.



Want a humanoid, open source robot for just $3,000? Hugging Face is on it.

You may have noticed he said “robots” plural—that’s because there’s a second one. It’s called Reachy Mini, and it looks like a cute, Wall-E-esque statue bust that can turn its head and talk to the user. Among other things, it’s meant to be used to test AI applications, and it’ll run between $250 and $300.

You can sort of think of these products as the equivalent to a Raspberry Pi, but in robot form and for AI developers—Hugging Face’s main customer base.

Hugging Face has previously released AI models meant for robots, as well as a 3D-printable robotic arm. This year, it announced an acquisition of Pollen Robotics, a company that was working on humanoid robots. Hugging Face’s Remi Cadene came to the company by way of Tesla.

For context on the pricing, Tesla’s Optimus Gen 2 humanoid robot (while admittedly much more advanced, at least in theory) is expected to cost at least $20,000.

There is a lot of investment in robotics like this, but there are still big barriers—and price isn’t the only one. There’s battery life, for example; Unitree’s G1 only runs for about two hours on a single charge.



RoboBee sticks the landing


The RoboBee is only slightly larger than a penny. Credit: Harvard Microrobotics Laboratory

The first step was to perform experiments to determine the effects of oscillation on the newly designed robotic legs and leg joints. This involved manually disturbing the leg and then releasing it, capturing the resulting oscillations on high-speed video. This showed that the leg and joint essentially acted as an “underdamped spring-mass-damper model,” with a bit of “viscoelastic creep” for good measure. Next, the team performed a series of free-fall experiments with small fiberglass crash-test dummy vehicles with mass and inertia similar to RoboBee’s, capturing each free fall on high-speed video. This was followed by tests of different takeoff and landing approaches.
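For reference, the “underdamped spring-mass-damper” behavior they fit has the standard second-order form (generic notation, with effective mass m, damping c, and stiffness k for the leg and joint):

```latex
\[
  m\ddot{x} + c\dot{x} + kx = 0,
  \qquad
  \zeta = \frac{c}{2\sqrt{km}} < 1 .
\]
```

Underdamped means ζ < 1: after a disturbance, the leg rings at the damped frequency ω_d = ω_n√(1−ζ²) (with ω_n = √(k/m)) while the oscillation decays exponentially, and fitting that decaying ringing in the high-speed video is how experiments like this extract a leg’s effective stiffness and damping.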

The final step was running experiments on consecutive takeoff and landing sequences using RoboBee, with the little robot taking off from one leaf, hovering, then moving laterally before hovering briefly and landing on another leaf nearby. The basic setup was the same as prior experiments, with the exception of placing a plant branch in the motion-capture arena. RoboBee was able to safely land on the second leaf (or similar uneven surfaces) over repeated trials with varying parameters.

Going forward, Robert Wood’s team will seek to further improve the mechanical damping upon landing, drawing lessons from stingless bees and mosquitoes, as well as scaling up to larger vehicles. This would require an investigation into more complex leg geometries, per the authors. And RoboBee still needs to be tethered to off-board control systems. The team hopes one day to incorporate onboard electronics with built-in sensors.

“The longer-term goal is full autonomy, but in the interim we have been working through challenges for electrical and mechanical components using tethered devices,” said Wood. “The safety tethers were, unsurprisingly, getting in the way of our experiments, and so safe landing is one critical step to remove those tethers.” This would make RoboBee more viable for a range of practical applications, including environmental monitoring, disaster surveillance, or swarms of RoboBees engaged in artificial pollination.

Science Robotics, 2025. DOI: 10.1126/scirobotics.adq3059  (About DOIs).



SpiderBot experiments hint at “echolocation” to locate prey

It’s well understood that spiders have poor eyesight and thus sense the vibrations in their webs whenever prey (like a fly) gets caught; the web serves as an extension of their sensory system. But spiders also exhibit less-understood behaviors to locate struggling prey. Most notably, they take on a crouching position, sometimes moving up and down to shake the web or plucking at the web by pulling in with one leg. The crouching seems to be triggered when prey is stationary and stops when the prey starts moving.

But it can be difficult to study the underlying mechanisms of this behavior because there are so many variables at play when observing live spiders. To simplify matters, researchers at Johns Hopkins University’s Terradynamics Laboratory are building crouching spider robots and testing them on synthetic webs. The results provide evidence for the hypothesis that spiders crouch to sense differences in web frequencies to locate prey that isn’t moving—something analogous to echolocation. The researchers presented their initial findings today at the American Physical Society’s Global Physics Summit in Anaheim, California.

“Our lab investigates biological problems using robot physical models,” team member Eugene Lin told Ars. “Animal experiments are really hard to reproduce because it’s hard to get the animal to do what you want to do.” Experiments with robot physical models, by contrast, “are completely repeatable. And while you’re building them, you get a better idea of the actual [biological] system and how certain behaviors happen.” The lab has also built robots inspired by cockroaches and fish.

The research was done in collaboration with two other labs at JHU. Andrew Gordus’ lab studies spider behavior, particularly how they make their webs, and provided biological expertise as well as videos of the particular spider species (U. diversus) of interest. Jochen Mueller’s lab provided expertise in silicone molding, allowing the team to 3D-print the spider robot’s flexible joints.

Crouching spider, good vibrations


A spider exhibiting crouching behavior. Credit: YouTube/Terradynamics Lab/JHU

The first spider robot model didn’t really move or change its posture; it was designed to sense vibrations in the synthetic web. But Lin et al. later modified it with actuators so it could move up and down. Also, there were only four legs, with two joints in each and two accelerometers on each leg; real spiders have eight legs and many more joints. But the model was sufficient for experimental proof of principle. There was also a stationary prey robot.



A “biohybrid” robotic hand built using real human muscle cells

Biohybrid robots work by combining biological components like muscles, plant material, and even fungi with non-biological materials. While we are pretty good at making the non-biological parts work, we’ve always had a problem with keeping the organic components alive and well. This is why machines driven by biological muscles have always been rather small and simple—up to a couple centimeters long and typically with only a single actuating joint.

“Scaling up biohybrid robots has been difficult due to the weak contractile force of lab-grown muscles, the risk of necrosis in thick muscle tissues, and the challenge of integrating biological actuators with artificial structures,” says Shoji Takeuchi, a professor at the University of Tokyo in Japan. Takeuchi led a research team that built a full-size, 18-centimeter-long biohybrid human-like hand with all five fingers driven by lab-grown human muscles.

Keeping the muscles alive

Out of all the roadblocks that keep us from building large-scale biohybrid robots, necrosis has probably been the most difficult to overcome. Growing muscles in a lab usually means using a liquid medium to supply nutrients and oxygen to muscle cells seeded on petri dishes or applied to gel scaffoldings. Since these cultured muscles are small and ideally flat, nutrients and oxygen from the medium can easily reach every cell in the growing culture.

When we try to make the muscles thicker and therefore more powerful, cells buried deeper in those thicker structures are cut off from nutrients and oxygen, so they die, undergoing necrosis. In living organisms, this problem is solved by the vascular network. But building artificial vascular networks in lab-grown muscles is still something we can’t do very well. So, Takeuchi and his team had to find their way around the necrosis problem. Their solution was sushi rolling.
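The thickness ceiling has a standard back-of-the-envelope form (a generic tissue-engineering estimate, not a calculation from the paper). For a slab of tissue with surface oxygen concentration c₀, diffusivity D, and cells consuming oxygen at a constant volumetric rate Q, steady-state diffusion supports cells only down to a finite depth:

```latex
\[
  D\,\frac{d^{2}c}{dx^{2}} = Q
  \quad\Longrightarrow\quad
  L_{\max} = \sqrt{\frac{2\,D\,c_{0}}{Q}} .
\]
```

With typical values for dense tissue, that works out to the few hundred micrometers commonly quoted in tissue engineering; anything thicker starves in the middle unless it is perfused, which is why the team grew thin sheets first and only then rolled them up.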

The team started by growing thin, flat muscle fibers arranged side by side on a petri dish. This gave all the cells access to nutrients and oxygen, so the muscles turned out robust and healthy. Once all the fibers were grown, Takeuchi and his colleagues rolled them into tubes called MuMuTAs (multiple muscle tissue actuators) like they were preparing sushi rolls. “MuMuTAs were created by culturing thin muscle sheets and rolling them into cylindrical bundles to optimize contractility while maintaining oxygen diffusion,” Takeuchi explains.



Google’s new robot AI can fold delicate origami, close zipper bags without damage

On Wednesday, Google DeepMind announced two new AI models designed to control robots: Gemini Robotics and Gemini Robotics-ER. The company claims these models will help robots of many shapes and sizes understand and interact with the physical world more effectively and delicately than previous systems, paving the way for applications such as humanoid robot assistants.

It’s worth noting that even though hardware for robot platforms appears to be advancing at a steady pace (well, maybe not always), creating a capable AI model that can pilot these robots autonomously through novel scenarios with safety and precision has proven elusive. What the industry calls “embodied AI” is a moonshot goal of Nvidia, for example, and it remains a holy grail that could potentially turn robotics into general-use laborers in the physical world.

Along those lines, Google’s new models build upon its Gemini 2.0 large language model foundation, adding capabilities specifically for robotic applications. Gemini Robotics includes what Google calls “vision-language-action” (VLA) abilities, allowing it to process visual information, understand language commands, and generate physical movements. By contrast, Gemini Robotics-ER focuses on “embodied reasoning” with enhanced spatial understanding, letting roboticists connect it to their existing robot control systems.

For example, with Gemini Robotics, you can ask a robot to “pick up the banana and put it in the basket,” and it will use a camera view of the scene to recognize the banana, guiding a robotic arm to perform the action successfully. Or you might say, “fold an origami fox,” and it will use its knowledge of origami and how to fold paper carefully to perform the task.

Gemini Robotics: Bringing AI to the physical world.

In 2023, we covered Google’s RT-2, which represented a notable step toward more generalized robotic capabilities by using Internet data to help robots understand language commands and adapt to new scenarios, then doubling performance on unseen tasks compared to its predecessor. Two years later, Gemini Robotics appears to have made another substantial leap forward, not just in understanding what to do but in executing complex physical manipulations that RT-2 explicitly couldn’t handle.

While RT-2 was limited to repurposing physical movements it had already practiced, Gemini Robotics reportedly demonstrates significantly enhanced dexterity that enables previously impossible tasks like origami folding and packing snacks into Ziploc bags. This shift from robots that just understand commands to robots that can perform delicate physical tasks suggests DeepMind may have started solving one of robotics’ biggest challenges: getting robots to turn their “knowledge” into careful, precise movements in the real world.

Better generalized results

According to DeepMind, the new Gemini Robotics system demonstrates much stronger generalization, or the ability to perform novel tasks that it was not specifically trained to do, compared to its previous AI models. In its announcement, the company claims Gemini Robotics “more than doubles performance on a comprehensive generalization benchmark compared to other state-of-the-art vision-language-action models.” Generalization matters because robots that can adapt to new scenarios without specific training for each situation could one day work in unpredictable real-world environments.

That’s important because skepticism remains regarding how useful humanoid robots currently may be or how capable they really are. Tesla unveiled its Optimus Gen 3 robot last October, claiming the ability to complete many physical tasks, yet concerns persist over the authenticity of its autonomous AI capabilities after the company admitted that several robots in its splashy demo were controlled remotely by humans.

Here, Google is attempting to make the real thing: a generalist robot brain. With that goal in mind, the company announced a partnership with Austin, Texas-based Apptronik to “build the next generation of humanoid robots with Gemini 2.0.” While trained primarily on a bimanual robot platform called ALOHA 2, Google states that Gemini Robotics can control different robot types, from research-oriented Franka robotic arms to more complex humanoid systems like Apptronik’s Apollo robot.

Gemini Robotics: Dexterous skills.

While the humanoid robot approach is a relatively new application for Google’s generative AI models (from this cycle of technology based on LLMs), it’s worth noting that Google had previously acquired several robotics companies around 2013–2014 (including Boston Dynamics, which makes humanoid robots), but later sold them off. The new partnership with Apptronik appears to be a fresh approach to humanoid robotics rather than a direct continuation of those earlier efforts.

Other companies have been hard at work on humanoid robotics hardware, such as Figure AI (which secured significant funding for its humanoid robots in March 2024) and the aforementioned former Alphabet subsidiary Boston Dynamics (which introduced a flexible new Atlas robot last April), but a useful AI “driver” to make the robots truly useful has not yet emerged. On that front, Google has also granted limited access to the Gemini Robotics-ER through a “trusted tester” program to companies like Boston Dynamics, Agility Robotics, and Enchanted Tools.

Safety and limitations

For safety considerations, Google mentions a “layered, holistic approach” that maintains traditional robot safety measures like collision avoidance and force limitations. The company describes developing a “Robot Constitution” framework inspired by Isaac Asimov’s Three Laws of Robotics and releasing a dataset unsurprisingly called “ASIMOV” to help researchers evaluate safety implications of robotic actions.

This new ASIMOV dataset represents Google’s attempt to create standardized ways to assess robot safety beyond physical harm prevention. The dataset appears designed to help researchers test how well AI models understand the potential consequences of actions a robot might take in various scenarios. According to Google’s announcement, the dataset will “help researchers to rigorously measure the safety implications of robotic actions in real-world scenarios.”

The company did not announce availability timelines or specific commercial applications for the new AI models, which remain in a research phase. While the demo videos Google shared depict advancements in AI-driven capabilities, the controlled research environments still leave open questions about how these systems would actually perform in unpredictable real-world settings.
