Physics


Flexible feline spines shed light on “falling cat” problem

Why do falling cats always seem to land on their feet? Scientists have been arguing about the precise mechanism for a very long time—since at least 1700, in fact—conducting all manner of experiments to pin down what’s going on. The research continues, with a paper published in the journal The Anatomical Record reporting on new experiments to analyze the flexibility of feline spines.

We covered this topic in-depth in 2019, when University of North Carolina, Charlotte, physicist Greg Gbur published his book, Falling Felines and Fundamental Physics. For a long time, scientists believed that it would be impossible for a cat in free fall to turn over. That’s why French physiologist Etienne-Jules Marey’s 1894 high-speed photographs of a falling cat landing on its feet proved so shocking to Marey’s peers. But Gbur has emphasized that cats are living creatures, not idealized rigid bodies, so the motion is more complicated than one might think.

Over the centuries, scientists have offered four distinct hypotheses to explain the phenomenon. There is the original “tuck and turn” model, in which the cat pulls in one set of paws so it can rotate different sections of its body. Nineteenth-century physicist James Clerk Maxwell offered a “falling figure skater” explanation, whereby the cat tweaks its angular momentum by pulling in or extending its paws as needed. Then there is the “bend and twist,” in which the cat bends at the waist to counter-rotate the two segments of its body. Finally, there is the “propeller tail,” in which the cat can reverse its body’s rotation by rotating its tail in one direction like a propeller.

At the time, Gbur told Ars that, while all those different motions play a role, he thought that the bend-and-twist motion was the most important. “When one goes through the math, that seems to be the most fundamental aspect of how a cat turns over,” he said. “But there are all these little corrections on top of that: using the tail, or using the paws for additional leverage, also play a role.” This latest paper has Gbur rethinking that conclusion, according to his recent blog post, giving a bit more credence to the tuck-and-turn mechanism.


Asteroid defense mission shifted the orbit of more than its target


The binary asteroid’s orbit around the Sun was affected by the impact.

Italy’s LICIACube spacecraft snapped this image of asteroids Didymos (lower left) and Dimorphos (upper right) a few minutes after the impact of DART on September 26, 2022. Credit: ASI/NASA

On September 26, 2022, NASA’s Double Asteroid Redirection Test (DART) spacecraft crashed into a binary asteroid system. By intentionally ramming a probe into the 160-meter-wide moonlet named Dimorphos, the smaller of the two asteroids, humanity demonstrated that the kinetic impact method of planetary defense actually works. The immediate result was that Dimorphos’ orbital period around Didymos, its larger parent body, was slashed by 33 minutes.

Of course, altering a moonlet’s local orbit doesn’t seem like enough to safeguard Earth from civilization-ending impacts. But now, as long-term observational data has come in, it seems we accomplished more than that. DART actually changed the trajectory of the entire Didymos binary system, altering its orbit around the Sun.

Tracking space rocks

Measuring the orbital shift of a 780-meter-wide primary asteroid and its moonlet from millions of miles away isn’t trivial. When DART slammed into Dimorphos, it didn’t knock the binary system wildly off its trajectory around the Sun. The change in the system’s heliocentric trajectory was expected to be small, a minuscule nudge that would become apparent only after months or years of continuous observation. By analyzing enough painstakingly gathered data, a global team of researchers led by Rahil Makadia at the University of Illinois Urbana-Champaign has now determined the consequences of the DART impact.

To find the infinitesimal deviation DART created, Makadia’s team relied mostly on a technique called stellar occultation. When an asteroid passes in front of a distant star from the perspective of an observer on Earth, the star briefly blinks out. By precisely timing these blinks as they sweep across the globe, astronomers can pinpoint an asteroid’s position with astonishing accuracy.
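A rough sketch shows what a single timed "blink" buys you. Both numbers below are illustrative, not figures from the actual Didymos campaigns:

```python
# Hedged sketch: what a timed stellar occultation "blink" buys you.
# Both numbers below are illustrative, not from the actual Didymos campaigns.

shadow_speed_km_s = 10.0   # assumed sky-plane speed of the asteroid's shadow
blink_duration_s = 0.078   # assumed time the star stays hidden for one observer

# Blink duration times shadow speed gives the chord the star's light cut
# across the asteroid; many chords from observers at different locations
# pin down the asteroid's size and position on the sky.
chord_km = shadow_speed_km_s * blink_duration_s
print(f"chord across the asteroid: {chord_km * 1000:.0f} m")  # -> 780 m
```

With millisecond-level timing, even a small telescope contributes a position measurement far sharper than conventional astrometry of a faint dot of light.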

Between October 2022 and March 2025, astronomers captured 22 such stellar occultations of the Didymos system. Combined with a huge dataset publicly available at the Minor Planet Center that included nearly 6,000 ground-based astrometric measurements taken over 29 years, optical navigation data from the DART probe’s approach, and ground-based radar measurements, researchers finally had all they needed.

“Once we had enough measurements before and after the DART impact, we could discern how Didymos’ orbit has changed,” Makadia said.

When the vending-machine-sized DART probe crashed into Dimorphos at over 22,000 kilometers per hour, it decreased the along-track velocity of the entire Didymos system by roughly 11.7 micrometers per second. That change sounds vanishingly small, but the team thinks it’s still significant. “When you do it early enough, even a small impulse can accumulate over years and cause a meaningful shift,” Makadia explained.
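A simple linear estimate gives a feel for that accumulation. (Real along-track drift compounds faster than this, since the orbital period itself shifts, so treat these as conservative figures.)

```python
# Linear lower-bound sketch of how an 11.7 µm/s along-track change accumulates.
# (Real along-track drift compounds faster, since the orbital period itself
# shifts, so these are conservative figures.)

dv = 11.7e-6                       # m/s, from the reported result
seconds_per_year = 365.25 * 86400

for years in (1, 10, 100):
    drift_km = dv * years * seconds_per_year / 1000
    print(f"{years:>3} years -> at least {drift_km:5.1f} km of drift")
```

Over a century, even this bare-bones estimate amounts to tens of kilometers, which is the whole logic of early deflection.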

Also, the DART impact itself was not the only force that changed Didymos’ orbit.

The ejecta engine

The pure kinetic energy of a 500-kilogram spacecraft hitting at hypersonic speeds is impressive, but on its own, it would not slow a huge asteroid that much. When DART struck Dimorphos, it blasted pulverized rock and dust out into the void. “The material kicked up off an asteroid surface acts like an extra rocket plume,” Makadia said.

Scientists call this effect the momentum enhancement factor, denoted by the Greek letter beta. If the spacecraft impact transferred exactly its own momentum and no debris was kicked up, beta would be exactly one.

Because Dimorphos orbits Didymos, some of the ejecta remained trapped in the system, where it altered the mutual orbit between the two rocks. But a crucial fraction of the ejecta achieved escape velocity from the entire binary system. The momentum carried away by the system-escaping debris is what ultimately contributed to shoving the center of mass of the whole Didymos-Dimorphos pair. “In our case, we found that the beta parameter due to DART impact was around two,” Makadia explained.

The debris blasted completely out of the Didymos system gave the asteroids a push roughly equal to the initial impact of the spacecraft itself.
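The quoted numbers hang together in a quick consistency check. The total system mass used below (~5.3×10¹¹ kg) is an outside assumption, not a figure from the article:

```python
# Consistency check on the article's numbers. The total system mass used here
# (~5.3e11 kg) is an outside assumption, not a figure from the article.

m_dart = 500.0                 # kg, approximate impactor mass
v_impact = 22_000 / 3.6        # m/s  (22,000 km/h)
beta = 2.0                     # momentum enhancement factor reported by the team
m_system = 5.3e11              # kg, assumed Didymos + Dimorphos mass

# Momentum delivered (impact plus escaping ejecta) divided by system mass
# gives the change in the system's velocity.
dv = beta * m_dart * v_impact / m_system
print(f"predicted velocity change: {dv * 1e6:.1f} µm/s")  # lands near the measured ~11.7
```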

To calculate how momentum was transferred, Makadia and his colleagues had to determine precisely how massive Didymos and Dimorphos are. By linking the heliocentric deflection to the previously known changes in Dimorphos’ local orbit, the researchers were able to perform a neat mathematical trick to uncover the bulk densities of both asteroids. And this revealed something a bit unexpected about the Didymos system.

“Most studies were going under the assumption that both asteroids have equal density—turns out that assumption was not correct,” Makadia said.

A rubble pile

Based on Makadia’s calculations, Didymos, the primary body, is relatively solid. It has a bulk density of around 2.6 tons per cubic meter, which aligns with standard estimates for siliceous asteroids. Dimorphos, however, is a different story. Its density is a surprisingly low 1.51 tons per cubic meter. This implies that the smaller asteroid targeted by DART is essentially a fluffy, loosely bound agglomeration of boulders, rocks, and dust, with empty voids between the rubble.
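Those bulk densities can be read as porosity, assuming the constituent rock itself has a typical grain density. The ~3.2 t/m³ figure below is an assumption; the article gives only the bulk densities:

```python
# Turning bulk density into porosity, assuming the constituent rock has the
# ~3.2 t/m^3 grain density typical of ordinary chondrites (an assumption;
# the article only gives bulk densities).

grain_density = 3.2   # t/m^3, assumed
for name, bulk_density in (("Didymos", 2.6), ("Dimorphos", 1.51)):
    porosity = 1 - bulk_density / grain_density
    print(f"{name}: roughly {porosity:.0%} empty space")
```

Under that assumption, roughly half of Dimorphos would be void space, consistent with the rubble-pile picture.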

“This was a real surprise,” Makadia said. “We previously didn’t know anything about the density of Dimorphos.” The contrast in density tells the story of how this binary system formed.

Billions of years of uneven heating and radiation from the Sun can cause an irregularly shaped asteroid like Didymos to gradually spin faster, a phenomenon known as the YORP (Yarkovsky, O’Keefe, Radzievskii, Paddack) effect. Eventually, Didymos spun so fast that the centrifugal force overcame its gravity, and it began shedding loose material from its equator. That shed material eventually coalesced in orbit, gently clumping together to form the porous, fragile moonlet we now know as Dimorphos.
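The spin-up argument can be made quantitative with a standard uniform-sphere sketch (an idealization, not the paper's model): the critical rotation period at which equatorial centrifugal acceleration balances self-gravity depends only on density, not size.

```python
# Uniform-sphere sketch of the rubble-pile "spin barrier": the spin rate at
# which equatorial centrifugal acceleration equals self-gravity.
#   omega^2 * R = G * M / R^2   with   M = (4/3) * pi * R^3 * rho
# reduces to omega = sqrt(4 * pi * G * rho / 3), independent of size.

import math

G = 6.674e-11   # m^3 kg^-1 s^-2
rho = 2600      # kg/m^3, Didymos' bulk density from the study

omega = math.sqrt(4 * math.pi * G * rho / 3)
period_hours = 2 * math.pi / omega / 3600
print(f"critical spin period: ~{period_hours:.1f} hours")
```

For densities in this range, the critical period comes out near two hours, close to the familiar ~2.2-hour "spin barrier" observed for small asteroids.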

Overall, Didymos is nearly 200 times more massive than its smaller companion, which explains why shifting the larger asteroid system takes such an enormous amount of force. The sheer inertia of Didymos means that the barycenter deflection of its entire system was just a tiny fraction of the deflection felt locally by Dimorphos.

Planetary defense

Makadia’s findings confirm the models we used to estimate the consequences of the DART impact: The Didymos system still poses zero threat to us, at least for the next 100 years or so. “The pre-DART condition was that the closest the Didymos system can get to Earth was around 15 lunar distances, and this has not changed appreciably,” Makadia explained.

The goal of DART was primarily to take our planetary defense out of the realm of computer models and get us some hands-on, practical experience, and Makadia thinks we succeeded in doing that. “Our work proves that hitting the secondary asteroid is a viable path for deflecting a binary system away as long as the push is large enough,” he said. “This wasn’t the goal of DART, but we can always design a bigger spacecraft.”

This experience applies to deflecting both binary asteroid systems like Didymos and solitary objects. “Our results definitely help us in all sorts of future kinetic impact endeavors,” Makadia added.

The final verification of the DART mission’s consequences, though, will come in late 2026, when the European Space Agency’s Hera spacecraft will arrive at the Didymos system.

By performing independent, in-situ measurements of things like the density of Didymos and Dimorphos, Hera will provide a lot of precise gravitational and physical data that Makadia hopes to use to refine his calculations.

“It’s a high-fidelity instrument that hopefully will give us confirmation of what we believe,” Makadia said. “Plus, there are always new things to be found out when we visit an asteroid. I’m very excited about when Hera gets there.”

Science Advances, 2026.  DOI: 10.1126/sciadv.aea4259


Jacek Krywko is a freelance science and technology writer who covers space exploration, artificial intelligence research, computer science, and all sorts of engineering wizardry.


Photons that aren’t actually there influence superconductivity

Despite the headline, this isn’t really a story about superconductivity—at least not the superconductivity that people care about, the stuff that doesn’t require exotic refrigeration to work. Instead, it’s a story about how superconductivity can be used as a test of some of the weirder consequences of quantum mechanics, one that involves non-existent particles of light that still act as if they exist.

Researchers have found a way to get these virtual photons to influence the behavior of a superconductor, ultimately making it worse. That may, in the end, tell us something useful about superconductivity, but it’ll probably take a little while.

Virtual reality

The story starts with quantum field theory, which is incredibly complex, but the simplified version is that even empty space is filled with fields that could govern the interactions of any quantum objects in or near that space. You can think of different particles as energetic excitations of these fields—so a photon is simply an energetic state of the quantum field.

Some of these particles have real existences we can track, like a photon emitted by a laser and absorbed by a detector some distance away. But the quantum field also allows for virtual photons, which simply act to transmit the electromagnetic force between particles. We can’t really directly detect these, but we can definitely track their effects.

One of the stranger consequences of this is that locations that have a strong electromagnetic field can be filled with virtual photons even when no real ones are present.

Which brings us to one of the materials central to the new work: boron nitride. Like the more famous graphene, boron nitride forms a series of interlinked hexagonal rings, extending out into macroscopic sheets. The bulk material is made of sheets layered onto sheets layered onto yet more sheets. This has an effect on light transiting through the material. In one direction, the light will simply slam into the material, getting absorbed or scattered. But if the light is oriented along the plane of the sheets, it’s possible for it to travel in the space between the boron and nitrogen atoms.


The physics of squeaking sneakers

We’re all familiar with the high-pitched squeak of basketball shoes on the court during games, or tires squealing on pavement. A team of scientists conducted several experiments and discovered that the geometry of the sneakers’ tread patterns determines the squeak’s frequency. That insight enabled them to make rubber blocks tuned to specific frequencies and slide them across glass surfaces to play Star Wars’ “Imperial March.”

“Tuning frictional behavior on the fly has been a long-standing engineering dream,” said co-author Katia Bertoldi of Harvard University. “This new insight into how surface geometry governs slip pulses paves the way for tunable frictional metamaterials that can transition from low-friction to high-grip states on demand.” In addition, the dynamics revealed by these results are similar to those of tectonic faults and thus give scientists a new model for the mechanics of earthquakes, according to their new paper published in the journal Nature.

Leonardo da Vinci is usually credited with conducting the first systematic study of friction in the late 15th century; the field, now known as tribology, deals with the dynamics of interacting surfaces in relative motion. Da Vinci’s notebooks depict how he pulled rows of blocks using weights and pulleys, an approach still used in frictional studies today, and show him examining the friction produced in screw threads, wheels, and axles. The authors of this latest paper used an experimental setup similar to da Vinci’s.

The squeaking of sneakers on a gym floor is usually attributed to friction, specifically a stick-slip variety that involves cycles of sticking and sliding between two surfaces. But that model is best suited for interfaces between two rigid objects, such as squeaking door hinges. A sneaker sole sliding across a gym floor involves one hard object (the floor) and one soft one (the sole). Bertoldi et al. wanted a more complete understanding of the dynamics of such soft-on-rigid interfaces.

First, the team slid commercial basketball shoes (the Nike CU3503-100) across a smooth, dry glass plate, simultaneously capturing sound and visual imagery of what was happening between the sole and the glass (i.e., the frictional interface). They identified opening pulses traveling in the sliding direction non-uniformly, resulting in temporary local supersonic separations between the shoe soles and the glass plate. Those audible squeaks aren’t random; the frequency is determined by the repetition rate of the generated pulses.


Scientists crack the case of “screeching” Scotch tape

In 1953, Russian scientists peeling Scotch tape in a vacuum reported detecting electrons with sufficient energy to emit X-rays. Other scientists were skeptical, but this phenomenon was finally confirmed in 2008, when UCLA physicists produced X-rays while unwinding a roll of Scotch tape in a vacuum chamber. The goal was to harness triboluminescence for X-ray imaging, and the team produced a low-quality X-ray image of a lab member’s finger (see image below). Fortunately, this only works in a vacuum, so everyday Scotch tape users are safe.

A shock to the system

X-ray images of a human finger taken with peeling tape. Credit: Carlos G. Camara et al., 2008

Peeling Scotch tape produces sound as well as light, typically attributed to the stick-slip mechanism at play during the peeling process. In 2010, co-author Sigurdur Thoroddsen of King Abdullah University in Saudi Arabia and colleagues used ultra-fast imaging to identify a crucial micro-fracture phenomenon underlying the slip part of the cycle: a sequence of transverse cracks that travel across the width of the adhesive at supersonic speeds. A follow-up 2024 study found a direct correspondence between the screeching sound and those transverse cracks but did not identify a mechanism.

That is the purpose of this latest study. Thoroddsen et al. wondered whether the sound was directly generated by a crack’s rapidly moving tip, which would also produce the distinctive discrete sound wave pulses associated with peeling Scotch tape. The authors experimentally tested their hypothesis by conducting simultaneous high-speed imaging of the propagating fractures and the sound waves traveling in the air. They manually unpeeled Scotch tape using a metal rod, capturing the cracks with two video cameras and the sound with two microphones synchronized to the video camera, the better to pinpoint the origin of the pressure pulses.

Their results showed that the screeching arises from a train of weak shocks that culminate when the transverse cracks reach the edge of the tape. The supersonic speed at which they travel, relative to the surrounding air, is crucial to the generation of those shockwaves. “A partial vacuum is produced between the tape and the solid when the crack opens,” the authors explained. “The crack moves too fast for this void to be filled immediately, even though air is sucked in from the direction perpendicular to the crack. The void therefore moves with the crack until it reaches the end of the tape and collapses into the stationary air outside.” Each time a fracture tip reaches the edge of the tape, it generates a sound pulse—hence the telltale screech.

DOI: Physical Review E, 2026. 10.1103/p19h-9ysx  (About DOIs).


Did Edison accidentally make graphene in 1879?

Graphene is the thinnest material yet known, composed of a single layer of carbon atoms arranged in a hexagonal lattice. That structure gives it many unusual properties that hold great promise for real-world applications: batteries, supercapacitors, antennas, water filters, transistors, solar cells, and touchscreens, just to name a few. The physicists who first synthesized graphene in the lab won the 2010 Nobel Prize in Physics. But 19th century inventor Thomas Edison may have unknowingly created graphene as a byproduct of his original experiments on incandescent bulbs over a century earlier, according to a new paper published in the journal ACS Nano.

“To reproduce what Thomas Edison did, with the tools and knowledge we have now, is very exciting,” said co-author James Tour, a chemist at Rice University. “Finding that he could have produced graphene inspires curiosity about what other information lies buried in historical experiments. What questions would our scientific forefathers ask if they could join us in the lab today? What questions can we answer when we revisit their work through a modern lens?”

Edison didn’t invent the concept of incandescent lamps; there were several versions predating his efforts. However, they generally had a very short life span and required high electric current, so they weren’t well suited to Edison’s vision of large-scale commercialization. He experimented with different filament materials, starting with carbonized cardboard and compressed lampblack. This, too, quickly burnt out, as did filaments made from various grasses and canes, like hemp and palmetto. Eventually Edison discovered that carbonized bamboo made for the best filament, with life spans over 1,200 hours using a 110-volt power source.

Lucas Eddy, Tour’s grad student at Rice, was trying to figure out ways to mass-produce graphene using the smallest, easiest equipment he could manage, with materials that were both affordable and readily available. He considered such options as arc welders and natural phenomena like lightning striking trees—both of which he admitted were “complete dead ends.” Edison’s light bulb, Eddy decided, would be ideal, since unlike other early light bulbs, Edison’s version was able to achieve the critical 2,000° C temperatures required for flash Joule heating—the best method for making so-called turbostratic graphene.


Physicists 3D-printed a Christmas tree of ice

Physicists at the University of Amsterdam came up with a really cool bit of Christmas decor: a miniature 3D-printed Christmas tree, a mere 8 centimeters tall, made of ice, without any refrigeration equipment or other freezing technology, and at minimal cost. The secret is evaporative cooling, according to a preprint posted to the physics arXiv.

Evaporative cooling is a well-known phenomenon; mammals use it to regulate body temperature. You can see it in your morning cup of hot coffee: the most energetic molecules escape from the surface as steam, carrying heat away and leaving the remaining liquid cooler. It also plays a role (along with shock wave dynamics and various other factors) in the formation of “wine tears.” It’s also a key step in creating Bose-Einstein condensates, in which the hottest atoms are allowed to escape from a magnetic trap, leaving colder atoms behind.

And evaporative cooling is also the main culprit behind the infamous “stall” that so frequently plagues aspiring BBQ pit masters eager to make a successful pork butt. The meat sweats as it cooks, releasing the moisture within, and that moisture evaporates and cools the meat, effectively canceling out the heat from the BBQ. That’s why a growing number of competitive pit masters wrap their meat in tinfoil after the first few hours (usually when the internal temperature hits 170° F).

Ice-printing methods usually rely on cryogenics or on cooled substrates. Per the authors, this is the first time evaporative cooling principles have been applied to 3D printing. The trick was to house the 3D printing process inside a vacuum chamber, using a jet nozzle as the print head—something they discovered serendipitously while trying to get rid of air drag by spraying water in a vacuum chamber. “The printer’s motion control guides the water jet layer-by-layer, building geometry on demand,” the authors wrote in a blog post for Nature.
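An energy-balance sketch shows why evaporative freezing is plausible at all: only a modest fraction of the water has to evaporate to freeze the rest. The latent-heat values below are standard handbook figures, and the estimate ignores the messier details of the real vacuum process:

```python
# Energy-balance sketch of evaporative freezing: what fraction x of the water
# must boil off in the vacuum to freeze the rest? Standard handbook values;
# the real process (radiative losses, freezing mid-flight) is messier.

c_water = 4186      # J/(kg K), specific heat of liquid water
L_fusion = 3.34e5   # J/kg, latent heat of freezing
L_vapor = 2.5e6     # J/kg, latent heat of evaporation near room temperature
dT = 20             # K, cooling from ~20 C down to the freezing point

# Evaporating x kg must absorb the heat released by cooling and freezing
# the remaining (1 - x) kg:  x * L_vapor = (1 - x) * (c_water * dT + L_fusion)
q = c_water * dT + L_fusion
x = q / (L_vapor + q)
print(f"roughly {x:.0%} of the water evaporates to freeze the rest")
```

Because vaporization carries away so much more heat per kilogram than freezing releases, sacrificing about a seventh of the water suffices.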


No sterile neutrinos after all, say MicroBooNE physicists

Since the 1990s, physicists have pondered the tantalizing possibility of an exotic fourth type of neutrino, dubbed the “sterile” neutrino, that doesn’t interact with regular matter at all, apart from its fellow neutrinos, perhaps. But definitive experimental evidence for sterile neutrinos has remained elusive. Now it looks like the latest results from Fermilab’s MicroBooNE experiment have ruled out the sterile neutrino entirely, according to a paper published in the journal Nature.

How did the possibility of sterile neutrinos even become a thing? It all dates back to the so-called “solar neutrino problem.” Physicists first detected neutrinos from the Sun in the late 1960s, but far fewer of them were being detected than theory predicted. Meanwhile, in 1962, physicists had discovered a second type (“flavor”) of neutrino, the muon neutrino. This was followed by the discovery of a third flavor, the tau neutrino, in 2000.

Physicists already suspected that neutrinos might be able to switch from one flavor to another. In 2002, scientists at the Sudbury Neutrino Observatory (or SNO) announced that they had solved the solar neutrino problem. The missing solar (electron) neutrinos were just in disguise, having changed into a different flavor on the long journey between the Sun and the Earth. If neutrinos oscillate, then they must have a teensy bit of mass after all. That posed another knotty neutrino-related problem. There are three neutrino flavors, but none of them has a well-defined mass. Rather, different kinds of “mass states” mix together in various ways to produce electron, muon, and tau neutrinos. That’s quantum weirdness for you.
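For the curious, the textbook two-flavor oscillation formula gives a feel for the effect. This is a generic sketch, not the analysis in the paper, and the parameter values are purely illustrative:

```python
# Textbook two-flavor oscillation sketch (not the analysis in the paper).
#   P(a -> b) = sin^2(2*theta) * sin^2(1.27 * dm2 * L / E)
# with dm2 in eV^2, L in km, and E in GeV. All parameter values below are
# illustrative, chosen only to show the effect.

import math

def appearance_probability(theta, dm2_ev2, L_km, E_GeV):
    """Chance a neutrino born in one flavor is detected as the other."""
    return math.sin(2 * theta) ** 2 * math.sin(1.27 * dm2_ev2 * L_km / E_GeV) ** 2

p = appearance_probability(math.pi / 4, 2.5e-3, 300, 1.0)  # maximal mixing
print(f"oscillation probability: {p:.2f}")
```

The key point is the interplay of mixing angle, mass-squared difference, baseline, and energy: if the mass-squared difference were zero, the probability would vanish, which is why oscillation implies mass.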

And there was another conundrum, thanks to results from Los Alamos’ LSND experiment and Fermilab’s MiniBooNE (MicroBooNE’s predecessor). Both found evidence of muon neutrinos oscillating into electron neutrinos in a way that shouldn’t be possible if there were just three neutrino flavors. So physicists suggested there might be a fourth flavor: the sterile neutrino, so named because unlike the other three, it does not couple to a charged counterpart via the electroweak force. Its existence would also have big implications for the nature of dark matter. But despite the odd tantalizing hint, sterile neutrinos have proven to be maddeningly elusive.


Research roundup: 6 cool stories we almost missed


The assassination of a Hungarian duke, why woodpeckers grunt when they peck, and more.

Skull of remains found in a 13th century Dominican monastery on Margaret Island, Budapest, Hungary Credit: Eötvös Loránd University

It’s a regrettable reality that there is never enough time to cover all the interesting scientific stories we come across each month. In the past, we’ve featured year-end roundups of cool science stories we (almost) missed. This year, we’re experimenting with a monthly collection. November’s list includes forensic details of the medieval assassination of a Hungarian duke, why woodpeckers grunt when they peck, and more evidence that X’s much-maligned community notes might actually help combat the spread of misinformation after all.

An assassinated medieval Hungarian duke

The observed perimortem lesions on the human remains (CL = cranial lesion, PL = postcranial lesion). The drawing of the skeleton was generated using OpenAI’s image generation tools (DALL·E) via ChatGPT.

Credit: Tamás Hajdu et al., 2026

Back in 1915, archaeologists discovered the skeletal remains of a young man in a Dominican monastery on Margaret Island in Budapest, Hungary. The remains were believed to be those of Duke Béla of Macsó, grandson of the medieval Hungarian King Béla IV. Per historical records, the young duke was brutally assassinated in 1272 by a rival faction; his mutilated remains were recovered by his sister and niece and buried in the monastery.

The identification of the remains was based on a contemporary osteological analysis, but they were subsequently lost and only rediscovered in 2018. A paper published in the journal Forensic Science International: Genetics has now confirmed that identification and shed more light on precisely how the duke died. (A preprint is available on bioRxiv.)

An interdisciplinary team of researchers performed various kinds of bioarchaeological analysis on the remains, including genetic testing, proteomics, 3D modeling, and radiocarbon dating. The resulting data definitively proves that the skeleton is indeed that of Duke Béla of Macsó.

The authors were also able to reconstruct the manner of the duke’s death, concluding that this was a coordinated attack by three people. One attacked from the front while the other two attacked from the left and right sides; the duke faced his assassins and tried to defend himself. The weapons used were most likely a saber and a long sword, and the assassins kept raining down blows even after the duke had fallen to the ground. The authors concluded that while the attack was clearly planned, it was also personal, fueled by rage or hate.

DOI: Forensic Science International: Genetics, 2025. 10.1016/j.fsigen.2025.103381  (About DOIs).

Why woodpeckers grunt when they peck

A male pileated woodpecker foraging on a tree

Woodpeckers energetically drum away at tree trunks all day long with their beaks and yet somehow never seem to get concussions, despite the fact that such drumming can produce deceleration forces as high as 1,200 g’s. (Humans suffer concussions with a sudden deceleration of just 100 g’s.) While popular myth holds that woodpecker heads are structured in such a way as to absorb the shock, and there has been some science to back that up, more recent research found that their heads act more like hammers than shock absorbers. A paper published in the Journal of Experimental Biology sheds further light on the biomechanics of how woodpeckers essentially turn themselves into hammers and reveals that the birds actually grunt as they strike wood.
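A quick kinematics sketch puts those g-forces in perspective. The ~7 m/s beak-tip impact speed below is an assumed figure drawn from other woodpecker studies, not from this article:

```python
# Kinematics of a 1,200 g deceleration. The ~7 m/s beak-tip impact speed is an
# assumed figure from other woodpecker studies, not from this article.

g = 9.81          # m/s^2
v_impact = 7.0    # m/s, assumed
a = 1200 * g      # deceleration magnitude

stop_time_ms = v_impact / a * 1000
stop_distance_mm = v_impact**2 / (2 * a) * 1000
print(f"the strike stops in ~{stop_time_ms:.2f} ms over ~{stop_distance_mm:.1f} mm")
```

Under those assumptions, each strike is arrested in well under a millisecond over a couple of millimeters, dozens of times per second.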

The authors caught eight wild downy woodpeckers and recorded them drilling and tapping on pieces of hardwood in the lab for three days, while also measuring electrical signals in their heads, necks, abdomens, tails, and leg muscles. Analyzing the footage, they found that woodpeckers use their hip flexors and front neck muscles to propel themselves forward as they peck while tipping their heads back and bracing themselves using muscles at the base of the skull and back of the neck. The birds use abdominal muscles for stability and brace for impact using their tail muscles to anchor their bodies against a tree. As for the grunting, the authors noted that it’s a type of breathing pattern used by tennis players (and martial artists) to boost the power of a strike.

DOI: Journal of Experimental Biology, 2025. 10.1242/jeb.251167  (About DOIs).

Raisins turn water into wine

wine glass half filled with raisins

Credit: Kyoto University

Fermentation has been around in some form for millennia, relying on alcohol-producing yeasts like Saccharomyces cerevisiae; cultured S. cerevisiae is still used by winemakers today. It’s long been thought that winemakers in ancient times stored fresh crushed grapes in jars and relied on natural fermentation to work its magic, but recent studies have called this into question by demonstrating that S. cerevisiae colonies usually don’t form on fresh grape skins. But the yeast does like raisins, as Kyoto University researchers recently discovered. They’ve followed up that earlier work with a paper published in Scientific Reports, demonstrating that it’s possible to use raisins to turn water into wine.

The authors harvested fresh grapes and dried them for 28 days. Some were dried using an incubator, some were sun-dried, and a third batch was dried using a combination of the two methods. The researchers then added the resulting raisins to bottles of water—three samples for each type of drying process—sealed the bottles, and stored them at room temperature for two weeks. One incubator-dried sample and two combo samples successfully fermented, but all three of the sun-dried samples did so, and at higher ethanol concentrations. Future research will focus on identifying the underlying molecular mechanisms. And for those interested in trying this at home, the authors warn that it only works with naturally sun-dried raisins, since store-bought varieties have oil coatings that block fermentation.

DOI: Scientific Reports, 2025. 10.1038/s41598-025-23715-3  (About DOIs).

An octopus-inspired pigment

An octopus camouflages itself with the seafloor.

Credit: Charlotte Seid

Octopuses, cuttlefish, and several other cephalopods can rapidly shift the colors in their skin thanks to that skin’s unique complex structure, including layers of chromatophores, iridophores, and leucophores. A color-shifting natural pigment called xanthommatin also plays a key role, but it’s been difficult to study because it’s hard to harvest enough directly from animals, and lab-based methods of making the pigment are labor-intensive and don’t yield much. Scientists at the University of California, San Diego, have developed a new method for making xanthommatin in substantially larger quantities, according to a paper published in Nature Biotechnology.

The issue is that trying to get microbes to make foreign compounds creates a metabolic burden, and the microbes hence resist the process, hindering yields. The UC San Diego team figured out how to trick the cells into producing more xanthommatin by genetically engineering them in such a way that making the pigment was essential to a cell’s survival. They achieved yields of between 1 and 3 grams per liter, compared to just 5 milligrams of pigment per liter using traditional approaches. While this work is proof of principle, the authors foresee such future applications as photoelectronic devices and thermal coatings, dyes, natural sunscreens, color-changing paints, and environmental sensors. It could also be used to make other kinds of chemicals and help industries shift away from older methods that rely on fossil fuel-based materials.

DOI: Nature Biotechnology, 2025. 10.1038/s41587-025-02867-7  (About DOIs).

A body-swap robot

Participant standing on body-swap balance robot

Credit: Sachi Wickramasinghe/UBC Media Relations

Among the most serious risks facing older adults is falling. According to the authors of a paper published in Science Robotics, standing upright requires the brain to coordinate signals from the eyes, inner ears, and feet to counter gravity, and there’s a natural lag in how fast this information travels back and forth between brain and muscles. Aging and certain diseases like diabetic neuropathy and multiple sclerosis can further delay that vital communication; the authors liken it to steering a car with a wheel that responds half a second late. And it’s a challenge to directly study the brain under such conditions.

That’s why researchers at the University of British Columbia built a large “body swap” robotic platform. Subjects stood on force plates attached to a motor-driven backboard to reproduce the physical forces at play when standing upright: gravity, inertia, and “viscosity,” which in this case describes the damping effect of muscles and joints that allow us to lean without falling. The platform is designed to subtly alter those forces and also add a 200-millisecond delay.

The authors tested 20 participants and found that lowering inertia and making the viscosity negative produced instability similar to that caused by a signal delay. They then brought in 10 new subjects to study whether adjusting body mechanics could compensate for information delays. They found that adding inertia and viscosity could at least partially counter the instability that arose from signal delay—essentially giving the body a small mechanical boost to help the brain maintain balance. The eventual goal is to design wearables that offer gentle resistance when an older person starts to lose their balance, and/or help patients with MS, for example, adjust to slower signal feedback.

DOI: Science Robotics, 2025. 10.1126/scirobotics.adv0496  (About DOIs).

X community notes might actually work

cropped image of phone screen showing an X post with a community note underneath

Credit: Huaxia Rui

Earlier this year, Elon Musk claimed that X’s community notes feature needed tweaking because it was being gamed by “government & legacy media” to contradict Trump—despite vigorously defending the robustness of the feature against such manipulation in the past. A growing body of research seems to back Musk’s earlier stance.

For instance, last year Bloomberg pointed to several studies suggesting that crowdsourcing worked just as well as using professional fact-checkers when assessing the accuracy of news stories. The latest evidence that crowdsourced fact checks can be effective at curbing misinformation comes from a paper published in the journal Information Systems Research, which found that X posts with public corrections were 32 percent more likely to be deleted by their authors.

Co-author Huaxia Rui of the University of Rochester pointed out that community notes must meet a threshold before they appear publicly on posts; those that fall short remain hidden from public view. Seeing a prime opportunity in the arrangement, Rui et al. analyzed 264,600 X posts that had received at least one community note and compared those just above and just below that threshold. The posts were collected from two different periods: June through August 2024, right before the US presidential election (when misinformation typically surges), and the post-election period of January and February 2025.

The fact that posts with public community notes were significantly more likely to be deleted by their authors suggests that the built-in dynamics of social media (e.g., status, visibility, peer feedback) might actually help rein in the spread of misinformation as intended. The authors concluded that crowd-checking “strikes a balance between First Amendment rights and the urgent need to curb misinformation.” Letting AI write the community notes, however, is probably still a bad idea.

DOI: Information Systems Research, 2025. 10.1287/isre.2024.1609  (About DOIs).

Photo of Jennifer Ouellette

Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.

Research roundup: 6 cool stories we almost missed


What if the aliens come and we just can’t communicate?


Ars chats with particle physicist Daniel Whiteson about his new book Do Aliens Speak Physics?

Science fiction has long speculated about the possibility of first contact with an alien species from a distant world and how we might be able to communicate with them. But what if we simply don’t have enough common ground for that to even be possible? An alien species is bound to be biologically very different, and their language will be shaped by their home environment, broader culture, and even how they perceive the universe. They might not even share the same math and physics. These and other fascinating questions are the focus of an entertaining new book, Do Aliens Speak Physics? And Other Questions About Science and the Nature of Reality.

Co-author Daniel Whiteson is a particle physicist at the University of California, Irvine, who has worked on the ATLAS collaboration at CERN’s Large Hadron Collider. He’s also a gifted science communicator who previously co-authored two books with cartoonist Jorge Cham of PhD Comics fame: 2018’s We Have No Idea and 2021’s Frequently Asked Questions About the Universe. (The pair also co-hosted a podcast from 2018 to 2024, Daniel and Jorge Explain the Universe.) This time around, cartoonist Andy Warner provided the illustrations, and Whiteson and Warner charmingly dedicate their book to “all the alien scientists we have yet to meet.”

Whiteson has long been interested in the philosophy of physics. “I’m not the kind of physicist who’s like, ‘whatever, let’s just measure stuff,’” he told Ars. “The thing that always excited me about physics was this implicit promise that we were doing something universal, that we were learning things that were true on other planets. But the more I learned, the more concerned I became that this might have been oversold. None [of our theories] are fundamental, and we don’t understand why anything emerges. Can we separate the human lens from the thing we’re looking at? We don’t know in the end how much that lens is distorting what we see or defining what we’re looking at. So that was the fundamental question I always wanted to explore.”

Whiteson initially pitched his book idea to his 14-year-old son, who inherited his father’s interest in science. But his son thought Whiteson’s planned discussion of how physics might not be universal was, well, boring. “When you ask for notes, you’ve got to listen to them,” said Whiteson. “So I came back and said, ‘Well, what about a book about when aliens arrive? Will their science be similar to ours? Can we collaborate together?’” That pitch won the teen’s enthusiastic approval: same ideas, but couched in the context of alien contact to make them more concrete.

As for Warner’s involvement as illustrator, “I cold-emailed him and said, ‘Want to write a book about aliens? You get to draw lots of weird aliens,’” said Whiteson. Who could resist that offer?

Ars caught up with Whiteson to learn more.

cartoon exploring possible outcomes of first alien contact

Credit: Andy Warner

Ars Technica: You open each chapter with fictional hypothetical scenarios. Why?

Daniel Whiteson: I sent an early version of the book to my friend, [biologist] Matt Giorgianni, who appears in cartoon form in the book. He said, “The book is great, but it’s abstract. Why don’t you write out a hypothetical concrete scenario for each chapter to show us what it would be like and how we would stumble.”

All great ideas seem obvious once you hear them. I’ve always been a huge science fiction fan. It’s thoughtful and creative and exploratory about the way the universe could be and might be. So I jumped at the opportunity to write a little bit of science fiction. Each one was challenging because you have to come up with a specific example that illustrates the concepts in that chapter, the issues that you might run into, but also be believable and interesting. We went through a lot of drafts. But I had a lot of fun writing them.

Ars Technica: The Voyager Golden Record is perhaps the best known example of humans attempting to communicate with an alien species, spearheaded by the late Carl Sagan, among others. But what are the odds that, despite our best efforts, any aliens will ever be able to decipher our “message in a bottle”?

Daniel Whiteson: I did an informal experiment where I printed out a picture of the Pioneer plaque and showed it to a bunch of grad students who were young enough to not have seen it before. This is Sagan’s audience: biological humans, same brain, same culture, physics grad students—none of them had any idea what any of that was supposed to refer to. NASA gave him two weeks to come up with that design. I don’t know that I would’ve done any better. It’s easy to criticize.

Those folks, they were doing their best. They were trying to step away from our culture. They didn’t use English, they didn’t even use mathematical symbols. They understood that those things are arbitrary, and they were reaching for something they hoped was going to be universal. But in the end, nothing can be universal because language is always symbols, and the choice of those symbols is arbitrary and cultural. It’s impossible to choose a symbol that can only be interpreted in one way.

Fundamentally, the book is trying to poke at our assumptions. It’s so inspiring to me that the history of physics is littered with times when we have had to abandon an assumption that we clung to desperately, until we were shown otherwise with enough data. So we’ve got to be really open-minded about whether these assumptions hold true, whether it’s required to do science, to be technological, or whether there is even a single explanation for reality. We could be very well surprised by what we discover.

Ars Technica: It’s often assumed that math and physics are the closest thing we have to a universal language. You challenge that assumption, probing such questions as “What does it even mean to ‘count’?”

Daniel Whiteson: At an initial glance, you’re like, well, of course mathematics is required, and of course numbers are universal. But then you dig into it and you start to realize there are fuzzy issues here. So many of the assumptions that underlie our interpretation of what we learned about physics are that way. I had this experience that’s probably very common among physics undergrads in quantum mechanics, learning about those calculations where you see nine decimal places in the theory and nine decimal places in the experiment, and you go, “Whoa, this isn’t just some calculational tool. This is how the universe decides what happens to a particle.”

I literally had that moment. I’m not a religious person, but it was almost spiritual. For many years, I believed that deeply, and I thought it was obvious. But to research this book, I read Science Without Numbers by Hartry Field. I was lucky—here at Irvine, we happen to have an amazing logic and philosophy of science department, and those folks really helped me digest [his ideas] and understand how you can pull yourself away from things like having a number line. It turns out you don’t need numbers; you can just think about relationships. It was really eye-opening, both how essential mathematics seems to human science and how obvious it is that we’re making a bunch of assumptions that we don’t know how to justify.

There are dotted lines humans have drawn because they make sense to us. You don’t have to go to plasma swirls and atmospheres to imagine a scenario where aliens might not have the same differentiation between their identities, me and you, here’s where I end, and here’s where you begin. That’s complicated for aliens that are plasma swirls, but also it’s not even very well-defined for us. How do I define the edge of my body? Is it where my skin ends? What about the dead skin, the hair, my personal space? There’s no obvious definition. We just have a cultural sense for “this is me, this is not me.” In the end, that’s a philosophical choice.

Cartoon of an alien emerging from a spaceship to greet a human

Credit: Andy Warner

Ars Technica: You raise another interesting question in the book: Would aliens even need physics theory or a deeper understanding of how the universe works? Perhaps they could invent, say, warp drive through trial and error. You suggest that our theory is more like a narrative framework. It’s the story we tell, and that is very much prone to bias.

Daniel Whiteson: Absolutely. And not just bias, but emotion and curiosity. We put energy into certain things because we think they’re important. Physicists spend our lives on this because we think, among the many things we could spend our time on, this is an important question. That’s an emotional choice. That’s a personal subjective thing.

Other people find my work dead boring and would hate to have my life, and I would feel the same way about theirs. And that’s awesome. I’m glad that not everybody wants to be a particle physicist or a biologist or an economist. We have a diversity of curiosity, which we all benefit from. People have an intuitive feel for certain things. There’s something in their minds that naturally understands how the system works and reflects it, and they probably can’t explain it. They might not be able to design a better car, but they can drive the heck out of it.

This is maybe the biggest stumbling block for people who are just starting to think about this for the first time. “Obviously aliens are scientific.” Well, how do we know? What do we mean by scientific? That concept has evolved over time, and is it really required? I felt that that was a big-picture thing that people could wrap their minds around but also a shock to the system. They were already a little bit off-kilter and might realize that some things they assumed must be true may not have to be.

Ars Technica: You cite the 2016 film Arrival as an example of first contact and the challenge of figuring out how to communicate with an alien species. They had to learn each other’s cultural context before they had any chance of figuring out what their respective symbols meant. 

Daniel Whiteson:  I think that is really crucial. Again, how you choose to represent your ideas is, in a sense, arbitrary. So if you’re on the other side of that, and you have to go from symbols to ideas, you have to know something about how they made those choices in order to reverse-engineer a message, in order to figure out what it means. How do you know if you’ve done it correctly? Say we get a message from aliens and we spend years of supercomputer time cranking on it. Something we rely on for decoding human messages is that you can tell when you’ve done it right because you have something that makes sense.


Credit: Andy Warner

How do we know when we’ve done that for an alien text? How do you know the difference between nonsense and things that don’t yet make sense to you? It’s essentially impossible. I spoke to some philosophers of language who convinced me that if we get a message from aliens, it might be literally impossible to decode it without their help, without their context. There aren’t enough examples of messages from aliens. We have the “Wow!” signal—who knows what that means? Maybe it’s a great example of getting a message and having no idea how to decode it.

As in many places in the book, I turned to human history because it’s the one example we have. I was shocked to discover how many human languages we haven’t decoded. Not to mention, we haven’t decoded whale. We know whales are talking to each other. Maybe they’re talking to us and we can’t decode it. The lesson, again and again, is culture.

With the Egyptians, the only reason we were able to decode hieroglyphics is because we had the Rosetta Stone, but even then it took 20 years. How does it take 20 years when you have examples and you know what it’s supposed to say? The answer is that we made cultural assumptions. We assumed pictograms reflected the ideas in the pictures. If there’s a bird in it, it’s about birds. And we were just wrong. That’s maybe why we’ve been struggling with Etruscan and other languages we’ve never decoded.

Even when we do have a lot of culture in common, it’s very, very tricky. So the idea that we would get a message from space and have no cultural clues at all and somehow be able to decode it—I think that only works in the scenario where aliens have been listening to us for a long time and they’re essentially writing to us in something like our language. I’m a big fan of SETI. They host these conferences where they listen to philosophers and linguists and anthropologists. I don’t mean to say that they’ve thought too narrowly, but I think it’s not widely enough appreciated how difficult it is to decode another language with no culture in common.

Ars Technica: You devote a chapter to the possibility that aliens might be able to, say, “taste” electrons. That drives home your point that physiologically, they will probably be different, biologically they will be different, and their experiences are likely to be different. So their notion of culture will also be different.

Daniel Whiteson: Something I really wanted to get across was that perception determines your sense of intuition, which defines in many ways the answers you find acceptable. I read Ed Yong’s amazing book, An Immense World, about animal perception, the diversity of animal experience or animal interaction with the environment. What is it like to be an octopus with a distributed mind? What is it like to be a bird that senses magnetic fields? If you’re an alien and you have very different perceptions, you could also have a different intuitive language in which to understand the universe. What is a photon? Is it a particle? Is it a wave?  Maybe we just struggle with the concept because our intuitive language is limited.

For an alien that can see photons in superposition, this is boring. They could be bored by things we’re fascinated by, or we could explain our theories to them and they could be unsatisfied because to them, it doesn’t translate into their intuitive language. So even though we’ve transcended our biological limitations in many ways, we can detect gravitational waves and infrared photons, we’re still, I think, shaped by those original biological senses that frame our experience, the models we build. What it’s like to be a human in the world could be very, very different from what it’s like to be an alien in the world.

Ars Technica:  Your very last line reads, “Our theories may reveal the patterns of our thoughts as much as the patterns of nature.” Why did you choose to end your book on that note?

Daniel Whiteson: That’s a message to myself. When I started this journey, I thought physics is universal. That’s what makes it beautiful. That implies that if the aliens show up and they do physics in a very different way, it would be a crushing disappointment. Not only is what we’ve learned just another human science, but also it means maybe we can’t benefit from their work or we can’t have that fantasy of a galactic scientific conference. But that actually might be the best-case scenario.

I mean, the reason that you want to meet aliens is the reason you go traveling. You want to expand your horizons and learn new things. Imagine how boring it would be if you traveled around the world to some new country and they just had all the same food. It might be comforting in some way, but also boring. It’s so much more interesting to find out that they have spicy fish soup for breakfast. That’s the moment when you learn not just about what to have for breakfast but about yourself and your boundaries.

So if aliens show up and they’re so weirdly different in a way we can’t possibly imagine—that would be deeply revealing. It’ll give us a chance to separate the reality from the human lens. It’s not just questions of the universe. We are interesting, and we should learn about our own biases. I anticipate that when the aliens do come, their culture and their minds and their science will be so alien, it’ll be a real challenge to make any kind of connection. But if we do, I think we’ll learn a lot, not just about the universe but about ourselves.




Quantum computing tech keeps edging forward


More superposition, less supposition

IBM follows through on its June promises, plus more trapped ion news.

IBM has moved to large-scale manufacturing of its Quantum Loon chips. Credit: IBM

The end of the year is usually a busy time in the quantum computing arena, as companies often try to announce that they’ve reached major milestones before the year wraps up. This year has been no exception. And while not all of these announcements involve interesting new architectures like the one we looked at recently, they’re a good way to mark progress in the field, and they often involve the sort of smaller, incremental steps needed to push the field forward.

What follows is a quick look at a handful of announcements from the past few weeks that struck us as potentially interesting.

IBM follows through

IBM is one of the companies announcing a brand-new architecture this year. That’s not at all a surprise, given that the company promised to do so back in June; this week sees the company confirming that it has built the two processors it said it would earlier in the year. These include one called Loon, which is focused on the architecture that IBM will use to host error-corrected logical qubits. Loon represents two major changes for the company: a shift to nearest-neighbor connections and the addition of long-distance connections.

IBM had previously used what it termed the “heavy hex” architecture, in which alternating qubits were connected to either two or three of their neighbors, forming a set of overlapping hexagonal structures. In Loon, the company is using a square grid, with each qubit having connections to its four closest neighbors. This higher density of connections can enable more efficient use of the qubits during computations. But qubits in Loon have additional long-distance connections to other parts of the chip, which will be needed for the specific type of error correction that IBM has committed to. It’s there to allow users to test out a critical future feature.

The second processor, Nighthawk, is focused on the now. It also has the nearest-neighbor connections and a square grid structure, but it lacks the long-distance connections. Instead, the focus with Nighthawk is to get error rates down so that researchers can start testing algorithms for quantum advantage—computations where quantum computers have a clear edge over classical algorithms.

In addition, the company is launching a GitHub repository that will allow the community to deposit code and performance data for both classical and quantum algorithms, enabling rigorous evaluations of relative performance. Right now, those are broken down into three categories of algorithms that IBM expects are most likely to demonstrate a verifiable quantum advantage.

This isn’t the only follow-up to IBM’s June announcement, which also saw the company describe the algorithm it would use to identify errors in its logical qubits and the corrections needed to fix them. In late October, the company said it had confirmed that the algorithm could work in real time when run on an FPGA made in collaboration with AMD.

Record lows

A few years back, we reported on a company called Oxford Ionics, which had just announced that it achieved a record-low error rate in some qubit operations using trapped ions. Most trapped-ion quantum computers move qubits by manipulating electromagnetic fields, but they perform computational operations using lasers. Oxford Ionics figured out how to perform operations using electromagnetic fields, meaning more of its processing benefited from our ability to precisely manufacture circuitry (lasers were still needed for tasks like producing a readout of the qubits). And as we noted, it could perform these computational operations extremely effectively.

But Oxford Ionics never made a major announcement that would give us a good excuse to describe its technology in more detail. The company was ultimately acquired by IonQ, a competitor in the trapped-ion space.

Now, IonQ is building on what it gained from Oxford Ionics, announcing a new record for two-qubit gates: greater than 99.99 percent fidelity, or an error rate below 0.01 percent. That could be critical for the company, as a low error rate for hardware qubits means fewer are needed to get good performance from error-corrected qubits.

But the details of the two-qubit gates are perhaps more interesting than the error rate. A two-qubit gate requires bringing the two ions into close proximity, which often means moving them. That motion pumps a bit of energy into the system, raising the ions’ temperature and leaving them slightly more prone to errors. As a result, any movement of the ions is generally followed by cooling, in which lasers are used to bleed energy back out of the qubits.

This process, which involves two distinct cooling steps, is slow. So slow that as much as two-thirds of the time spent in operations involves the hardware waiting around while recently moved ions are cooled back down. The new IonQ announcement includes a description of a method for performing two-qubit gates that doesn’t require the ions to be fully cooled. This allows one of the two cooling steps to be skipped entirely. In fact, coupled with earlier work involving one-qubit gates, it raises the possibility that the entire machine could operate with its ions at a still very cold but slightly elevated temperature, avoiding all need for one of the two cooling steps.

That would shorten operation times and let researchers do more before the limit of a quantum system’s coherence is reached.

State of the art?

The last announcement comes from another trapped-ion company, Quantum Art. A couple of weeks back, it announced a collaboration with Nvidia that resulted in a more efficient compiler for operations on its hardware. On its own, this isn’t especially interesting. But it’s emblematic of a trend that’s worth noting, and it gives us an excuse to look at Quantum Art’s technology, which takes a distinct approach to boosting the efficiency of trapped-ion computation.

First, the trend: Nvidia’s interest in quantum computing. The company isn’t interested in the quantum aspects (at least not publicly); instead, it sees an opportunity to get further entrenched in high-performance computing. There are three areas where the computational capacity of GPUs can play a role here. One is small-scale modeling of quantum processors so that users can perform initial testing of algorithms without committing to paying for access to the real thing. Another is what Quantum Art is announcing: using GPUs as part of a compiler chain to do all the computations needed to find more efficient ways of executing an algorithm on specific quantum hardware.

Finally, there’s a potential role in error correction. Error correction involves some indirect measurements of a handful of hardware qubits to determine the most likely state that a larger collection (called a logical qubit) is in. This requires modeling a quantum system in real time, which is quite difficult—hence the computational demands that Nvidia hopes to meet. Regardless of the precise role, there has been a steady flow of announcements much like Quantum Art’s: a partnership with Nvidia that will keep the company’s hardware involved if the quantum technology takes off.

In Quantum Art’s case, that technology is a bit unusual. The trapped-ion companies we’ve covered so far are all taking different routes to the same place: moving one or two ions into a location where operations can be performed and then executing one- or two-qubit gates. Quantum Art’s approach is to perform gates with much larger collections of ions. At the compiler level, it would be akin to figuring out which qubits need a specific operation performed, clustering them together, and doing it all at once. Obviously, there are potential efficiency gains here.

The challenge would normally be moving so many qubits around to create these clusters. But Quantum Art uses lasers to “pin” ions in a row so they act to isolate the ones to their right from the ones to their left. Each cluster can then be operated on separately. In between operations, the pins can be moved to new locations, creating different clusters for the next set of operations. (Quantum Art is calling each cluster of ions a “core” and presenting this as multicore quantum computing.)

At the moment, Quantum Art is behind some of its competitors in terms of qubit count and interesting demonstrations, and it’s not pledging to scale quite as fast. But the company’s founders are convinced that the complexity of doing so many individual operations and moving so many ions around will catch up with those competitors, while the added efficiency of multi-qubit gates will allow it to scale better.

This is just a small sampling of all the announcements from this fall, but it should give you a sense of how rapidly the field is progressing—from technology demonstrations to identifying cases where quantum hardware has a real edge and exploring ways to sustain progress beyond those first successes.


John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.



Next-generation black hole imaging may help us understand gravity better

Right now, we probably don’t have the ability to detect these small changes in phenomena. However, that may change, as a next-generation version of the Event Horizon Telescope is being considered, along with a space-based telescope that would operate on similar principles. So the team (four researchers based in Shanghai and at CERN) decided to repeat an analysis they did shortly before the Event Horizon Telescope went operational, and consider whether the next-gen hardware might be able to pick up features of the environment around the black hole that could discriminate among different theorized versions of gravity.

Theorists have been busy, and there are a lot of potential replacements for general relativity out there. So, rather than working their way through the list, they used a model of gravity (the parametric Konoplya–Rezzolla–Zhidenko metric) that isn’t specific to any given hypothesis. Instead, it allows some of its parameters to be changed, letting the team vary the behavior of gravity within some limits. To get a sense of the sort of differences that might be present, the researchers swapped two different parameters between zero and one, giving them four different options. Those results were compared to the Kerr metric, the standard general relativity description of a rotating black hole.

Small but clear differences

Using those five versions of gravity, the researchers modeled the three-dimensional environment near the event horizon with hydrodynamic simulations, including infalling matter, the magnetic fields it produces, and the jets of matter that those magnetic fields power.

The results resemble the sorts of images that the Event Horizon Telescope produced. These include a bright ring with substantial asymmetry, where one side is significantly brighter due to the rotation of the black hole. And while the differences among the variations of gravity are subtle, they’re there. One extreme version produced the smallest but brightest ring; another had a reduced contrast between the bright and dim sides of the ring. There were also differences in the width of the jets produced in these models.
