Physics


Did Hilma af Klint draw inspiration from 19th century physics?


Diagrams from Thomas Young’s 1807 Lectures bear striking resemblance to abstract figures in af Klint’s work.

Hilma af Klint’s Group IX/SUW, The Swan, No. 17, 1915. Credit: Hilma af Klint Foundation

In 2019, astronomer Britt Lundgren of the University of North Carolina Asheville visited the Guggenheim Museum in New York City to take in an exhibit of the works of Swedish painter Hilma af Klint. Lundgren noted a striking similarity between the abstract geometric shapes in af Klint’s work and scientific diagrams in 19th century physicist Thomas Young‘s Lectures (1807). So began a four-year journey starting at the intersection of science and art that has culminated in a forthcoming paper in the journal Leonardo, making the case for the connection.

Af Klint was formally trained at the Royal Academy of Fine Arts and initially focused on portraits, botanical drawings, and landscapes, working from her Stockholm studio after graduating with honors. This provided her with income, but her true life’s work drew on af Klint’s interest in spiritualism and mysticism. She was one of “The Five,” a group of Swedish women artists who shared those interests. They regularly organized seances and were admirers of the theosophical teachings of the time.

It was through her work with The Five that af Klint began experimenting with automatic drawing, driving her to invent her own geometric visual language to conceptualize the invisible forces she believed influenced our world. She painted her first abstract series in 1906 at age 44. Yet she rarely exhibited this work because she believed the art world at the time wasn’t ready to appreciate it. Her will requested that the paintings stay hidden for at least 20 years after her death.

Even after the boxes containing her 1,200-plus abstract paintings were opened, their significance was not fully appreciated at first. Moderna Museet in Stockholm actually declined to accept them as a gift, although it now maintains a dedicated space for her work. It wasn’t until art historian Ake Fant presented af Klint’s work at a Helsinki conference that the art world finally took notice. The Guggenheim’s exhibit was af Klint’s American debut. “The exhibit seemed to realize af Klint’s documented dream of introducing her paintings to the world from inside a towering spiral temple and it was met roundly with acclaim, breaking all attendance records for the museum,” Lundgren wrote in her paper.

A pandemic project

Lundgren is the first person in her family to become a scientist; her mother studied art history, and her father is a photographer and a carpenter. But she always enjoyed art because of that home environment, and her Swedish heritage made af Klint an obvious artist of interest. It wasn’t until the year after she visited the Guggenheim exhibit, as she was updating her lectures for an astrophysics course, that Lundgren decided to investigate the striking similarities between Young’s diagrams and af Klint’s geometric paintings—in particular those series completed between 1914 and 1916. It proved to be the perfect research project during the COVID-19 lockdowns.

Lundgren acknowledges the inherent skepticism such an approach by an outsider might engender among the art community and is sympathetic, given that physics and astronomy both have their share of cranks. “As a professional scientist, I have in the past received handwritten letters about why Einstein is wrong,” she told Ars. “I didn’t want to be that person.”

That’s why her very first research step was to contact art professors at her institution to get their expert opinions on her insight. They were encouraging, so she dug in a little deeper, reading every book about af Klint she could get her hands on. She found no evidence that any art historians had made this connection before, which gave her the confidence to turn her work into a publishable paper.

The paper didn’t find a home right away, however; the usual art history journals rejected it, partly because Lundgren was an outsider with little expertise in that field. She needed someone more established to vouch for her. Enter Linda Dalrymple Henderson of the University of Texas at Austin, who has written extensively about scientific influences on abstract art, including that of af Klint. Henderson helped Lundgren refine the paper, encouraged her to submit it to Leonardo, and “it came back with the best review I’ve ever received, even inside astronomy,” said Lundgren.

Making the case

Young and af Klint were not contemporaries; Young died in 1829, and af Klint was born in 1862. Nor are there any specific references to Young or his work in the academic literature examining the sources known to have influenced the Swedish painter’s work. Yet af Klint had a well-documented interest in science, spanning everything from evolution and botany to color theory and physics. While those influences tended to be scientists who were her contemporaries, Lundgren points out that the artist’s personal library included a copy of an 1823 astronomy book.

Excerpt from Plate XXIX of Young’s Lectures. Credit: Niels Bohr Library and Archives/AIP

Af Klint was also commissioned in 1910 to paint a portrait of Swedish physicist Knut Ångström at Uppsala University, whose library includes a copy of Young’s Lectures. So it’s entirely possible that af Klint had access to the astronomy and physics of the previous century, and she would likely have been particularly intrigued by discoveries involving “invisible light” (electromagnetism, x-rays, radioactivity, etc.).

Young’s Lectures contain a speculative passage about the existence of a universal ether (since disproven), a concept that fascinated both scientists and those (like af Klint) with certain occult interests in the late 19th and early 20th centuries. In fact, Young’s passage was included in a popular 1875 spiritualist text, Unseen Universe by P.G. Tait and Balfour Stewart, that was heavily cited by Theosophical Society founder Helena Petrovna Blavatsky. Blavatsky in turn is known to have influenced af Klint around the time the artist created The Swan, The Dove, and Altarpieces series.

Lundgren found that “in several instances, the captions accompanying Young’s color figures [in the Lectures] even seem to decode elements of af Klint’s paintings or bring attention to details that might otherwise be overlooked.” For instance, the caption for Young’s Plate XXIX describes the “oblique stripes of color” that appear when candlelight is viewed through a prism, a description that “almost interchangeably describes features in af Klint’s Group X., No. 1, Altarpiece,” she wrote.


(a) Excerpt from Young’s Plate XXX. (b) af Klint, Parsifal Series No. 68. (c and d) af Klint, Group IX/UW, The Dove, No. 12 and No. 13. Credit: Niels Bohr Library/Hilma af Klint Foundation

Art historians had previously speculated about af Klint’s interest in color theory, as reflected in the annotated watercolor squares featured in her Parsifal Series (1916). Lundgren argues that those squares resemble Fig. 439 in the color plates of Young’s Lectures, demonstrating the inversion of color in human vision. Those diagrams also “appear almost like crude sketches of af Klint’s The Dove, Nos. 12 and 13,” Lundgren wrote. “Paired side by side, these paintings can produce the same visual effects described by Young, with even the same color palette.”

The geometric imagery of af Klint’s The Swan series is similar to Young’s illustrations of the production and perception of colors, while “black and white diagrams depicting the propagation of light through combinations of lenses and refractive surfaces, included in Young’s Lectures On the Theory of Optics, bear a particularly strong geometric resemblance to The Swan paintings No. 12 and No.13,” Lundgren wrote. Other pieces in The Swan series may have been inspired by engravings in Young’s Lectures.

This is admittedly circumstantial evidence, and Lundgren acknowledges as much. “Not being able to prove it is intriguing and frustrating at the same time,” she said. She continues to receive additional leads, most recently from an af Klint relative on the board of Moderna Museet. Once again, the evidence wasn’t direct, but it seems af Klint would have attended certain local lecture circuits about science, while several members of the Theosophical Society were familiar with modern physics and Young’s earlier work. “But none of these are nails in the coffin that really prove she had access to Young’s book,” said Lundgren.


Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.


Why solving crosswords is like a phase transition

There’s also the more recent concept of “explosive percolation,” whereby connectivity emerges not in a slow, continuous process but quite suddenly, simply by replacing the random node connections with predetermined criteria—say, choosing to connect whichever pair of nodes has the fewest pre-existing connections to other nodes. This introduces bias into the system and suppresses the growth of large dominant clusters. Instead, many large unconnected clusters grow until the critical threshold is reached. At that point, even adding just one or two more connections will trigger one global violent merger (instant uber-connectivity).
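
To make the contrast concrete, here is a minimal sketch of one biased, Achlioptas-style edge-selection rule (choosing, between two random candidate edges, the one joining smaller clusters). It is not the exact rule described above, and the network size, candidate rule, and seed are all illustrative choices; it just shows how bias delays and then sharpens the emergence of a giant cluster compared with purely random connections.

```python
import random

class DSU:
    """Union-find structure tracking cluster sizes."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]
    def largest_fraction(self, n):
        return max(self.size[self.find(i)] for i in range(n)) / n

def grow(n, biased, rng):
    """Add n edges one at a time and record the largest cluster's share of nodes."""
    dsu, history = DSU(n), []
    for _ in range(n):
        a, b = rng.randrange(n), rng.randrange(n)
        if biased:
            # Biased rule: of two random candidate edges, keep the one joining
            # smaller clusters, which suppresses a dominant cluster for longer.
            c, d = rng.randrange(n), rng.randrange(n)
            if dsu.size[dsu.find(c)] + dsu.size[dsu.find(d)] < \
               dsu.size[dsu.find(a)] + dsu.size[dsu.find(b)]:
                a, b = c, d
        dsu.union(a, b)
        history.append(dsu.largest_fraction(n))
    return history

rng = random.Random(1)
n = 2000
random_growth = grow(n, biased=False, rng=rng)
biased_growth = grow(n, biased=True, rng=rng)
# The biased rule keeps the largest cluster far smaller at the halfway point;
# by the time all n edges are added, both processes have formed a giant cluster.
print(random_growth[n // 2], biased_growth[n // 2])
print(random_growth[-1], biased_growth[-1])
```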

Puzzling over percolation

One might not immediately think of crossword puzzles as a network, although there have been a couple of relevant prior mathematical studies. For instance, John McSweeney of the Rose-Hulman Institute of Technology in Indiana employed a random graph network model for crossword puzzles in 2016. He factored in how a puzzle’s solvability is affected by the interactions between the structure of the puzzle’s cells (squares) and word difficulty, i.e., the fraction of letters you need to know in a given word in order to figure out what it is.

Answers represented nodes while answer crossings represented edges, and McSweeney assigned a random distribution of word difficulty levels to the clues. “This randomness in the clue difficulties is ultimately responsible for the wide variability in the solvability of a puzzle, which many solvers know well—a solver, presented with two puzzles of ostensibly equal difficulty, may solve one readily and be stumped by the other,” he wrote at the time. At some point, there has to be a phase transition, in which solving the easiest words enables the puzzler to solve the more difficult words until the critical threshold is reached and the puzzler can fill in many solutions in rapid succession—a dynamic process that resembles, say, the spread of diseases in social groups.


In this sample realization, black sites are shown in black; empty sites are white; and occupied sites contain symbols and letters. Credit: Alexander K. Hartmann, 2024

Hartmann’s new model incorporates elements of several nonstandard percolation models, including how much the solver benefits from partial knowledge of the answers. Letters correspond to sites (white squares) while words are segments of those sites, bordered by black squares. There is an a priori probability of being able to solve a given word if no letters are known. If some words are solved, the puzzler gains partial knowledge of neighboring unsolved words, which increases the probability of those words being solved as well.
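
The bootstrapping dynamic described above is easy to caricature in code. The sketch below is a toy version, not Hartmann's actual model: each word starts with the same a priori solve probability, every solved crossing word boosts the chance of the next attempt succeeding, and the tiny grid, probabilities, and boost factor are all made-up values.

```python
import random

def solve_fraction(words, crossings, p0, boost, rng):
    """Toy crossword solver: re-attempt a word only when new crossing letters appear."""
    solved, last_known = set(), {w: -1 for w in words}
    progress = True
    while progress:
        progress = False
        for w in words:
            if w in solved:
                continue
            known = sum(1 for c in crossings[w] if c in solved)
            if known == last_known[w]:
                continue          # no new information since the last attempt
            last_known[w] = known
            p = min(1.0, p0 + boost * known / len(crossings[w]))
            if rng.random() < p:
                solved.add(w)
                progress = True
    return len(solved) / len(words)

# A toy arrangement: three across words, three down words, all crossing each other.
words = ["A1", "A2", "A3", "D1", "D2", "D3"]
crossings = {w: [x for x in words if x[0] != w[0]] for w in words}
rng = random.Random(0)
for p0 in (0.1, 0.2, 0.3, 0.4):
    avg = sum(solve_fraction(words, crossings, p0, boost=0.5, rng=rng) for _ in range(2000)) / 2000
    print(f"a priori probability {p0}: average fraction solved {avg:.2f}")
```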


Delve into the physics of the Hula-Hoop

High-speed video of experiments on a robotic hula hooper, whose hourglass form holds the hoop up and in place.

Some version of the Hula-Hoop has been around for millennia, but the popular plastic version was introduced by Wham-O in the 1950s and quickly became a fad. Now, researchers have taken a closer look at the underlying physics of the toy, revealing that certain body types are better at keeping the spinning hoops elevated than others, according to a new paper published in the Proceedings of the National Academy of Sciences.

“We were surprised that an activity as popular, fun, and healthy as hula hooping wasn’t understood even at a basic physics level,” said co-author Leif Ristroph of New York University. “As we made progress on the research, we realized that the math and physics involved are very subtle, and the knowledge gained could be useful in inspiring engineering innovations, harvesting energy from vibrations, and improving robotic positioners and movers used in industrial processing and manufacturing.”

Ristroph’s lab frequently addresses these kinds of colorful real-world puzzles. For instance, in 2018, Ristroph and colleagues fine-tuned the recipe for the perfect bubble based on experiments with soapy thin films. In 2021, the Ristroph lab looked into the formation processes underlying so-called “stone forests” common in certain regions of China and Madagascar.

In 2021, his lab built a working Tesla valve, in accordance with the inventor’s design, and measured the flow of water through the valve in both directions at various pressures. They found the water flowed about two times slower in the nonpreferred direction. In 2022, Ristroph studied the surpassingly complex aerodynamics of what makes a good paper airplane—specifically, what is needed for smooth gliding.


Girl twirling a Hula-Hoop in 1958. Credit: George Garrigues/CC BY-SA 3.0

And last year, Ristroph’s lab cracked the conundrum of physicist Richard Feynman’s “reverse sprinkler” problem, concluding that the reverse sprinkler rotates a good 50 times slower than a regular sprinkler but operates along similar mechanisms. The secret is hidden inside the sprinkler, where there are jets that make it act like an inside-out rocket. The internal jets don’t collide head-on; rather, as water flows around the bends in the sprinkler arms, it is slung outward by centrifugal force, leading to asymmetric flow.


Ten cool science stories we almost missed


Bronze Age combat, moral philosophy and Reddit’s AITA, Mondrian’s fractal tree, and seven other fascinating papers.

There is rarely time to write about every cool science paper that comes our way; many worthy candidates sadly fall through the cracks over the course of the year. But as 2024 comes to a close, we’ve gathered ten of our favorite such papers at the intersection of science and culture as a special treat, covering a broad range of topics: from reenacting Bronze Age spear combat and applying network theory to the music of Johann Sebastian Bach, to Spider-Man inspired web-slinging tech and a mathematical connection between a turbulent phase transition and your morning cup of coffee. Enjoy!

Reenacting Bronze Age spear combat


An experiment with experienced fighters who spar freely using different styles. Credit: Valerio Gentile/CC BY

The European Bronze Age saw the rise of institutionalized warfare, evidenced by the many spearheads and similar weaponry archaeologists have unearthed. But how might these artifacts be used in actual combat? Dutch researchers decided to find out by constructing replicas of Bronze Age shields and spears and using them in realistic combat scenarios. They described their findings in an October paper published in the Journal of Archaeological Science.

There have been a couple of prior experimental studies on bronze spears, but per Valerio Gentile (now at the University of Gottingen) and coauthors, practical research to date has been quite narrow in scope, focusing on throwing weapons against static shields. Coauthors C.J. van Dijk of the National Military Museum in the Netherlands and independent researcher O. Ter Mors each had more than a decade of experience teaching traditional martial arts, specializing in medieval polearms and one-handed weapons. So they were ideal candidates for testing the replica spears and shields.

Of course, there is no direct information on prehistoric fighting styles, so van Dijk and Ter Mors relied on basic biomechanics of combat movements with similar weapons detailed in historic manuals. They ran three versions of the experiment: one focused on engagement and controlled collisions, another on delivering wounding body blows, and the third on free sparring. They then studied wear marks left on the spearheads and found they matched the marks found on similar genuine weapons excavated from Bronze Age sites. They also gleaned helpful clues to the skills required to use such weapons.

DOI: Journal of Archaeological Science, 2024. 10.1016/j.jas.2024.106044 (About DOIs).

Physics of Ned Kahn’s kinetic sculptures


Shimmer Wall, The Franklin Institute, Philadelphia, Pennsylvania. Credit: Ned Kahn

Environmental artist and sculptor Ned Kahn is famous for his kinematic building facades, inspired by his own background in science. An exterior wall on the Children’s Museum of Pittsburgh, for instance, consists of hundreds of flaps that move in response to wind, creating distinctive visual patterns. Kahn used the same method to create his Shimmer Wall at Philadelphia’s Franklin Institute, as well as several other similar projects.

Physicists at Sorbonne Universite in Paris have studied videos of Kahn’s kinetic facades and conducted experiments to measure the underlying physical mechanisms, outlined in a November paper published in the journal Physical Review Fluids. The authors analyzed 18 YouTube videos taken of six of Kahn’s kinematic facades, working with Kahn and building management to get the dimensions of the moving plates, scaling up from the video footage to get further information on spatial dimensions.

They also conducted their own wind tunnel experiments, using strings of pendulum plates. Their measurements confirmed that the kinetic patterns were propagating waves to create the flickering visual effects. The plates’ movement is driven primarily by their natural resonant frequencies at low speeds, and by pressure fluctuations from the wind at higher speeds.
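
For a rough sense of the low-speed regime, a facade flap can be idealized as a thin rigid plate swinging about a hinge along its top edge, a plain physical pendulum. This is only a back-of-the-envelope estimate of my own, not the authors' model, and the plate lengths below are hypothetical; the real panels also interact with their neighbors and the wind.

```python
import math

def hinged_plate_frequency(plate_length_m, g=9.81):
    """Natural frequency of a thin rigid plate hinged along its top edge.
    Physical pendulum: moment of inertia m*L^2/3, center of mass at L/2,
    so omega = sqrt(3*g / (2*L))."""
    return math.sqrt(3 * g / (2 * plate_length_m)) / (2 * math.pi)

for length in (0.05, 0.10, 0.20):   # hypothetical flap sizes, in meters
    print(f"{length * 100:.0f} cm flap: ~{hinged_plate_frequency(length):.1f} Hz")
```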

DOI: Physical Review Fluids, 2024. 10.1103/PhysRevFluids.9.114604 (About DOIs).

How brewing coffee connects to turbulence


Trajectories in time traced out by turbulent puffs as they move along a simulated pipe and in experiments, with blue regions indicating puff “traffic jams.” Credit: Grégoire Lemoult et al., 2024

Physicists have been studying turbulence for centuries, particularly the transitional period where flows shift from predictably smooth (laminar flow) to highly turbulent. That transition is marked by localized turbulent patches known as “puffs,” which often form in fluids flowing through a pipe or channel. In an October paper published in the journal Nature Physics, physicists used statistical mechanics to reveal an unexpected connection between the process of brewing coffee and the behavior of those puffs.

Traditional mathematical models of percolation date back to the 1940s. Directed percolation is when the flow occurs in a specific direction, akin to how water moves through freshly ground coffee beans, flowing down in the direction of gravity. There’s a sweet spot for the perfect cuppa, where the rate of flow is sufficiently slow to absorb most of the flavor from the beans, but also fast enough not to back up in the filter. That sweet spot in your coffee brewing process corresponds to the aforementioned laminar-turbulent transition in pipes.

Physicist Nigel Goldenfeld of the University of California, San Diego, and his coauthors used pressure sensors to monitor the formation of puffs in a pipe, focusing on how puff-to-puff interactions influenced each other’s motion. Next, they tried to mathematically model the relevant phase transitions to predict puff behavior. They found that the puffs behave much like cars moving on a freeway during rush hour: they are prone to traffic jams—i.e., when a turbulent patch matches the width of the pipe, causing other puffs to build up behind it—that form and dissipate on their own. And they tend to “melt” at the laminar-turbulent transition point.

DOI: Nature Physics, 2024. 10.1038/s41567-024-02513-0 (About DOIs).

Network theory and Bach’s music

In a network representation of music, notes are represented by nodes, and transitions between notes are represented by directed edges connecting the nodes. Credit: S. Kulkarni et al., 2024

When you listen to music, does your ability to remember or anticipate the piece tell you anything about its structure? Physicists at the University of Pennsylvania developed a model based on network theory to do just that, describing their work in a February paper published in the journal Physical Review Research. Johann Sebastian Bach’s works were an ideal choice given their highly mathematical structure; the composer was also prolific across many very different kinds of musical compositions—preludes, fugues, chorales, toccatas, concertos, suites, and cantatas—allowing for useful comparisons.

First, the authors built a simple “true” network for each composition, in which individual notes served as “nodes” and the transitions from note to note served as “edges” connecting them. Then they calculated the amount of information in each network. They found it was possible to tell the difference between compositional forms based on their information content (entropy). The more complex toccatas and fugues had the highest entropy, while simpler chorales had the lowest.
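
As a rough illustration of the idea (not the paper's exact information measure), one can treat a melody as a walk over its note-transition network and compute the entropy rate of that walk; the toy note sequences below are invented purely for the example.

```python
import math
from collections import Counter, defaultdict

def transition_entropy(notes):
    """Entropy rate, in bits per transition, of a first-order note-transition network."""
    edges = list(zip(notes, notes[1:]))
    out_counts = defaultdict(Counter)
    for a, b in edges:
        out_counts[a][b] += 1
    total = len(edges)
    entropy = 0.0
    for a, targets in out_counts.items():
        p_a = sum(targets.values()) / total            # how often the walk visits node a
        for count in targets.values():
            p_b_given_a = count / sum(targets.values())
            entropy -= p_a * p_b_given_a * math.log2(p_b_given_a)
    return entropy

repetitive = ["C", "D", "E", "C", "D", "E", "C", "D", "E", "C"]       # fully predictable
meandering = ["C", "E", "G", "C", "B", "G", "E", "C", "G", "B", "E"]  # more branching
print(transition_entropy(repetitive), transition_entropy(meandering))  # 0.0 vs ~1.35 bits
```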

Next, the team wanted to quantify how effectively this information was communicated to the listener, a task made more difficult by the innate subjectivity of human perception. They developed a fuzzier “inferred” network model for this purpose, capturing an essential aspect of our perception: we find a balance between accuracy and cost, simplifying some details so as to make it easier for our brains to process incoming information like music.

The results: There were fewer differences between the true and inferred networks for Bach’s compositions than for randomly generated networks, suggesting that clustering and the frequent repetition of transitions (represented by thicker edges) in Bach networks were key to effectively communicating information to the listener. The next step is to build a multi-layered network model that incorporates elements like rhythm, timbre, chords, or counterpoint (a Bach specialty).

DOI: Physical Review Research, 2024. 10.1103/PhysRevResearch.6.013136 (About DOIs).

The philosophy of Reddit’s AITA

Count me among the many people practically addicted to Reddit’s “Am I the Asshole” (AITA) forum. It’s such a fascinating window into the intricacies of how flawed human beings navigate different relationships, whether personal or professional. That’s also what makes it a fantastic source of illustrative, commonplace dilemmas of moral decision-making for philosophers like Daniel Yudkin of the University of Pennsylvania. Relational context matters, as Yudkin and several co-authors ably demonstrated in a PsyArXiv preprint earlier this year.

For their study, Yudkin et al. compiled a dataset of nearly 370,000 AITA posts, along with over 11 million comments, posted between 2018 and 2021. They used machine learning to analyze the language used to sort all those posts into different categories. They relied on an existing taxonomy identifying six basic areas of moral concern: fairness/proportionality, feelings, harm/offense, honesty, relational obligation, and social norms.

Yudkin et al. identified 29 of the most common dilemmas in the AITA dataset and grouped them according to moral theme. Two of the most common were relational transgression and relational omission (failure to do what was expected), followed by behavioral over-reaction and unintended harm. Cheating and deliberate misrepresentation/dishonesty were the moral dilemmas rated most negatively in the dataset—even more so than intentional harm. Being judgmental was also evaluated very negatively, as it was often perceived as being self-righteous or hypocritical. The least negatively evaluated dilemmas were relational omissions.

As for relational context, cheating and broken promise dilemmas typically involved romantic partners like boyfriends rather than one’s mother, for example, while mother-related dilemmas more frequently fell under relational omission. Essentially, “people tend to disappoint their mothers but be disappointed by their boyfriends,” the authors wrote. Less close relationships, by contrast, tend to be governed by “norms of politeness and procedural fairness.” Hence, Yudkin et al. prefer to think of morality “less as a set of abstract principles and more as a ‘relational toolkit,’ guiding and constraining behavior according to the demands of the social situation.”

DOI: PsyArXiv, 2024. 10.31234/osf.io/5pcew (About DOIs).

Fractal scaling of trees in art


De grijze boom (Gray tree) by Piet Mondrian, 1911. Credit: Public domain

Leonardo da Vinci famously invented a so-called “rule of trees” as a guide to realistically depicting trees in artistic representations according to their geometric proportions. In essence, if you took all the branches of a given tree, folded them up and compressed them into something resembling a trunk, that trunk would have the same thickness from top to bottom. That rule in turn implies a fractal branching pattern, with a scaling exponent of about 2 describing the proportions between the diameters of nearby boughs and the number of boughs with a given diameter.

According to the authors of a preprint posted to the physics arXiv in February, however, recent biological research suggests a higher scaling exponent of 3, known as Murray’s Law, for the rule of trees. Their analysis of 16th century Islamic architecture, Japanese paintings from the Edo period, and 20th century European art showed fractal scaling between 1.5 and 2.5. However, when they analyzed an abstract tree painting by Piet Mondrian, completed years before mathematicians had formulated Murray’s Law, they found it exhibited fractal scaling of 3, even though Mondrian’s tree did not feature explicit branching.
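
A quick toy calculation shows what these exponents mean in practice. Assuming perfectly symmetric binary branching (each bough splits into two equal daughters), conserving diameter to the power alpha fixes how fast branches thin out; the snippet below simply compares alpha = 2 (da Vinci's rule) with alpha = 3 (Murray's Law).

```python
def daughter_diameter(parent_diameter, n_daughters, alpha):
    """Diameter of each daughter if parent**alpha == sum of daughters**alpha."""
    return parent_diameter * n_daughters ** (-1.0 / alpha)

trunk = 1.0
for alpha, label in [(2, "da Vinci's rule"), (3, "Murray's Law")]:
    d = trunk
    for _ in range(4):                     # four generations of symmetric splits
        d = daughter_diameter(d, 2, alpha)
    print(f"{label} (alpha = {alpha}): diameter after 4 splits = {d:.3f}")
# alpha = 2 gives 0.250, alpha = 3 gives ~0.397: the higher exponent keeps
# outer branches proportionally thicker relative to the trunk.
```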

The findings intrigued physicist Richard Taylor of the University of Oregon, whose work over the last 20 years includes analyzing fractal patterns in the paintings of Jackson Pollock. “In particular, I thought the extension to Mondrian’s ‘trees’ was impressive,” he told Ars earlier this year. “I like that it establishes a connection between abstract and representational forms. It makes me wonder what would happen if the same idea were to be applied to Pollock’s poured branchings.”

Taylor himself published a 2022 paper about climate change and how nature’s stress-reducing fractals might disappear in the future. “If we are pessimistic for a moment, and assume that climate change will inevitably impact nature’s fractals, then our only future source of fractal aesthetics will be through art, design and architecture,” he said. “This brings a very practical element to studies like [this].”

DOI: arXiv, 2024. 10.48550/arXiv.2402.13520 (About DOIs).

IDing George Washington’s descendants


A DNA study identified descendants of George Washington from unmarked remains. Credit: Public domain

DNA profiling is an incredibly useful tool in forensics, but the most common method—short tandem repeat (STR) analysis—typically doesn’t work when remains are especially degraded, particularly if said remains have been preserved with embalming methods using formaldehyde. This includes the remains of US service members who died in such past conflicts as World War II, Korea, Vietnam, and the Cold War. That’s why scientists at the Armed Forces Medical Examiner System’s identification lab at the Dover Air Force Base have developed new DNA sequencing technologies.

They used those methods to identify the previously unmarked remains of descendants of George Washington, according to a March paper published in the journal iScience. The team tested three sets of remains and compared the results with those of a known living descendant, using methods for assessing paternal and maternal relationships, as well as a new method for next-generation sequencing data involving some 95,000 single-nucleotide polymorphisms (SNPs) in order to better predict more distant ancestry. The combined data confirmed that the remains belonged to Washington’s descendants and the new method should help do the same for the remains of as-yet-unidentified service members.

In related news, in July, forensic scientists successfully used descendant DNA to identify a victim of the 1921 Tulsa massacre in Oklahoma, buried in a mass grave containing more than a hundred victims. C.L. Daniel was a World War I veteran, still in his 20s when he was killed. More than 120 such graves have been found since 2020, with DNA collected from around 30 sets of remains, but this is the first time those remains have been directly linked to the massacre. There are at least 17 other victims in the grave where Daniel’s remains were found.

DOI: iScience, 2024. 10.1016/j.isci.2024.109353 (About DOIs).

Spidey-inspired web-slinging tech


A stream of liquid silk quickly turns to a strong fiber that sticks to and lifts objects. Credit: Marco Lo Presti et al., 2024

Over the years, researchers in Tufts University’s Silklab have come up with all kinds of ingenious bio-inspired uses for the sticky fibers found in silk moth cocoons: adhesive glues, printable sensors, edible coatings, and light-collecting materials for solar cells, to name a few. Their latest innovation is a web-slinging technology inspired by Spider-Man’s ability to shoot webbing from his wrists, described in an October paper published in the journal Advanced Functional Materials.

Coauthor Marco Lo Presti was cleaning glassware with acetone in the lab one day when he noticed something that looked a lot like webbing forming on the bottom of a glass. He realized this could be the key to better replicating spider threads for the purpose of shooting the fibers from a device like Spider-Man—something actual spiders don’t do. (They spin the silk, find a surface, and draw out lines of silk to build webs.)

The team boiled silk moth cocoons in a solution to break them down into proteins called fibroin. The fibroin was then extruded through bore needles into a stream. Spiking the fibroin solution with just the right additives will cause it to solidify into fiber once it comes into contact with air. For the web-slinging technology, they added dopamine to the fibroin solution and then shot it through a needle in which the solution was surrounded by a layer of acetone, which triggered solidification.

The acetone quickly evaporated, leaving just the webbing attached to whatever object it happened to hit. The team tested the resulting fibers and found they could lift a steel bolt, a tube floating on water, a partially buried scalpel, and a wooden block—all from as far away as 12 centimeters. Sure, natural spider silk is still about 1,000 times stronger than these fibers, but it’s still a significant step forward that paves the way for future novel technological applications.

DOI: Advanced Functional Materials, 2024. 10.1002/adfm.202414219

Solving a mystery of a 12th century supernova


Pa 30 is the supernova remnant of SN 1181. Credit: unWISE (D. Lang)/CC BY-SA 4.0

In 1181, astronomers in China and Japan recorded the appearance of a “guest star” that shone as bright as Saturn and was visible in the sky for six months. We now know it was a supernova (SN1181), one of only five such known events occurring in our Milky Way. Astronomers got a closer look at the remnant of that supernova and have determined the nature of strange filaments resembling dandelion petals that emanate from a “zombie star” at its center, according to an October paper published in The Astrophysical Journal Letters.

The Chinese and Japanese astronomers only recorded an approximate location for the unusual sighting, and for centuries no one managed to make a confirmed identification of a likely remnant from that supernova. Then, in 2021, astronomers measured the speed of expansion of a nebula known as Pa 30, which enabled them to determine its age: around 1,000 years, roughly coinciding with the recorded appearance of SN1181. Pa 30 is an unusual remnant because of its zombie star—most likely itself a remnant of the original white dwarf that produced the supernova.

This latest study relied on data collected by Caltech’s Keck Cosmic Web Imager, a spectrograph at the Keck Observatory in Hawaii. One of the unique features of this instrument is that it can measure the motion of matter in a supernova and use that data to create something akin to a 3D movie of the explosion. The authors were able to create such a 3D map of Pa 30 and calculated that the zombie star’s filaments have ballistic motion, moving at approximately 1,000 kilometers per second.

Nor has that velocity changed since the explosion, enabling them to date the event almost exactly to 1181. The findings also raised fresh questions: the ejected filament material is asymmetrical, which is unusual for a supernova remnant. The authors suggest that the asymmetry may originate with the initial explosion.

There’s also a weird inner gap around the zombie star. Both will be the focus of further research.
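
The ballistic dating itself is simple arithmetic: if the ejecta have coasted at a constant speed since the explosion, the remnant's age is just its size divided by its speed. The snippet below only turns the article's quoted ~1,000 km/s and the 1181 date into an implied filament extent; the resulting figure is an illustration, not a number from the paper.

```python
SECONDS_PER_YEAR = 3.156e7
METERS_PER_LIGHT_YEAR = 9.461e15

speed_m_per_s = 1.0e6                    # ~1,000 km/s, as quoted above
age_years = 2024 - 1181                  # time since the "guest star" appeared
radius_m = speed_m_per_s * age_years * SECONDS_PER_YEAR
print(radius_m / METERS_PER_LIGHT_YEAR)  # implied filament extent: ~2.8 light-years
```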

DOI: Astrophysical Journal Letters, 2024. 10.3847/2041-8213/ad713b (About DOIs).

Reviving a “lost” 16th century score


Fragment of music from The Aberdeen Breviary: Volume 1. Credit: National Library of Scotland/CC BY 4.0

Never underestimate the importance of marginalia in old manuscripts. Scholars from the University of Edinburgh and KU Leuven in Belgium can attest to that, having discovered a fragment of “lost” music from 16th-century pre-Reformation Scotland in a collection of worship texts. The team was even able to reconstruct the fragment and record it to get a sense of what music sounded like from that period in northeast Scotland, as detailed in a December paper published in the journal Music and Letters.

King James IV of Scotland commissioned the printing of several copies of The Aberdeen Breviary—a collection of prayers, hymns, readings, and psalms for daily worship—so that his subjects wouldn’t have to import such texts from England or Europe. One 1510 copy, known as the “Glamis copy,” is currently housed in the National Library of Scotland in Edinburgh. It was while examining handwritten annotations in this copy that the authors discovered the musical fragment on a page bound into the book—so it hadn’t been slipped between the pages at a later date.

The team figured out the piece was polyphonic, and then realized it was the tenor part from a harmonization for three or four voices of the hymn “Cultor Dei,” typically sung at night during Lent. (You can listen to a recording of the reconstructed composition here.) The authors also traced some of the history of this copy of The Aberdeen Breviary, including its use at one point by a rural chaplain at Aberdeen Cathedral, before a Scottish Catholic acquired it as a family heirloom.

“Identifying a piece of music is a real ‘Eureka’ moment for musicologists,” said coauthor David Coney of Edinburgh College of Art. “Better still, the fact that our tenor part is a harmony to a well-known melody means we can reconstruct the other missing parts. As a result, from just one line of music scrawled on a blank page, we can hear a hymn that had lain silent for nearly five centuries, a small but precious artifact of Scotland’s musical and religious traditions.”

DOI: Music and Letters, 2024. 10.1093/ml/gcae076 (About DOIs).


Jennifer is a senior reporter at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.


The physics of ugly Christmas sweaters

In 2018, a team of French physicists developed a rudimentary mathematical model to describe the deformation of a common type of knit. Their work was inspired when co-author Frédéric Lechenault watched his pregnant wife knitting baby booties and blankets, and he noted how the items would return to their original shape even after being stretched. With a few colleagues, he was able to boil the mechanics down to a few simple equations, adaptable to different stitch patterns. It all comes down to three factors: the “bendiness” of the yarn, the length of the yarn, and how many crossing points are in each stitch.

A simpler stitch


A simplified model of how yarns interact. Credit: J. Crassous/University of Rennes

One of the co-authors of that 2018 paper, Samuel Poincloux of Aoyama Gakuin University in Japan, also co-authored this latest study with two other colleagues, Jérôme Crassous (University of Rennes in France) and Audrey Steinberger (University of Lyon). This time around, Poincloux was interested in the knotty problem of predicting the rest shape of a knitted fabric, given the yarn’s length per stitch—an open question dating back at least to a 1959 paper.

It’s the complex geometry of all the friction-producing contact zones between the slender elastic fibers that makes such a system too difficult to model precisely, because the contact zones can rotate or change shape as the fabric moves. Poincloux and his colleagues came up with their own simplified model.

The team performed experiments with a Jersey stitch knit (aka a stockinette), a widely used and simple knit consisting of a single yarn (in this case, a nylon thread) forming interlocked loops. They also ran numerical simulations that modeled the yarn as discrete elastic rods coupled by dry contacts with a specific friction coefficient, forming meshes.

The results: Even when there were no external stresses applied to the fabric, the friction between the threads served as a stabilizing factor. And there was no single form of equilibrium for a knitted sweater’s resting shape; rather, there were multiple metastable states that were dependent on the fabric’s history—the different ways it had been folded, stretched, or rumpled. In short, “Knitted fabrics do not have a unique shape when no forces are applied, contrary to the relatively common belief in textile literature,” said Crassous.

DOI: Physical Review Letters, 2024. 10.1103/PhysRevLett.133.248201 (About DOIs).


Could microwaved grapes be used for quantum sensing?

The microwaved grape trick also shows the fruit’s promise as an alternative microwave resonator for quantum sensing applications, according to the authors of this latest paper. Those applications include satellite technology, masers, microwave photon detection, hunting for axions (a dark matter candidate), and driving spin in various quantum systems, including superconducting qubits for quantum computing, among others.

Prior research had specifically investigated the electrical fields behind the plasma effect. “We showed that grape pairs can also enhance magnetic fields which are crucial for quantum sensing applications,” said co-author Ali Fawaz, a graduate student at Macquarie University.

Fawaz and co-authors used specially fabricated nanodiamonds for their experiments. Unlike in pure diamonds, which are colorless, some of the carbon atoms in these nanodiamonds were replaced, creating defect centers that act like tiny magnets and make them ideal for quantum sensing. Sapphires are typically used for this purpose, but Fawaz et al. realized that water conducts microwave energy better than sapphires—and grapes are mostly water.

So the team set a nanodiamond atop a thin glass fiber and placed it between two grapes. Then they shone green laser light through the fiber, making the defect centers glow red. Measuring the brightness told them the strength of the magnetic field around the grapes, which turned out to be twice as strong with the grapes as without.
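
For context on how such a fluorescence measurement can translate into a field value: a standard approach with diamond defect centers of this sort is to read off the spin's Rabi oscillation frequency, which grows linearly with the microwave magnetic field amplitude through the electron gyromagnetic ratio (roughly 28 GHz per tesla). The sketch below uses that textbook relation with hypothetical Rabi frequencies, ignores polarization and geometry factors of order one, and is not a description of the authors' exact analysis.

```python
GAMMA_E_GHZ_PER_T = 28.0   # electron gyromagnetic ratio, ~28 GHz per tesla

def microwave_field_from_rabi(rabi_frequency_hz):
    """Microwave magnetic field amplitude implied by a measured Rabi frequency,
    using the linear relation f_Rabi ~ gamma_e * B (order-one factors ignored)."""
    return rabi_frequency_hz / (GAMMA_E_GHZ_PER_T * 1e9)

# Hypothetical readings: doubling the field doubles the Rabi frequency.
for f_rabi in (1.0e6, 2.0e6):            # 1 MHz and 2 MHz, illustrative values only
    print(f"{f_rabi / 1e6:.0f} MHz Rabi frequency -> "
          f"~{microwave_field_from_rabi(f_rabi) * 1e6:.0f} microtesla")
```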

The size and shape of the grapes used in the experiments proved crucial; they must be about 27 millimeters long to concentrate the microwave energy at just the right frequency for the quantum sensor. The biggest catch is that the grape-based setup proved less stable, with more energy loss. Future research may identify more reliable materials that achieve a similar effect.

DOI: Physical Review Applied, 2024. 10.1103/PhysRevApplied.22.064078 (About DOIs).


Google gets an error-corrected quantum bit to be stable for an hour


Using almost the entire chip for a logical qubit provides long-term stability.

Google’s new Willow chip is its first new generation of chips in about five years. Credit: Google

On Monday, Nature released a paper from Google’s quantum computing team that provides a key demonstration of the potential of quantum error correction. Thanks to an improved processor, Google’s team found that increasing the number of hardware qubits dedicated to an error-corrected logical qubit led to an exponential increase in performance. By the time the entire 105-qubit processor was dedicated to hosting a single error-corrected qubit, the system was stable for an average of an hour.

In fact, Google told Ars that errors on this single logical qubit were rare enough that it was difficult to study them. The work provides a significant validation that quantum error correction is likely to be capable of supporting the execution of complex algorithms that might require hours to execute.

A new fab

Google is making a number of announcements in association with the paper’s release (an earlier version of the paper has been up on the arXiv since August). One of those is that the company is committed enough to its quantum computing efforts that it has built its own fabrication facility for its superconducting processors.

“In the past, all the Sycamore devices that you’ve heard about were fabricated in a shared university clean room space next to graduate students and people doing kinds of crazy stuff,” Google’s Julian Kelly said. “And we’ve made this really significant investment in bringing this new facility online, hiring staff, filling it with tools, transferring their process over. And that enables us to have significantly more process control and dedicated tooling.”

That’s likely to be a critical step for the company, as the ability to fabricate smaller test devices can allow the exploration of lots of ideas on how to structure the hardware to limit the impact of noise. The first publicly announced product of this lab is the Willow processor, Google’s second design, which ups its qubit count to 105. Kelly said one of the changes that came with Willow actually involved making the individual pieces of the qubit larger, which makes them somewhat less susceptible to the influence of noise.

All of that led to a lower error rate, which was critical for the work done in the new paper. This was demonstrated by running Google’s favorite benchmark, one that it acknowledges is contrived in a way to make quantum computing look as good as possible. Still, people have figured out how to make algorithm improvements for classical computers that have kept them mostly competitive. But, with all the improvements, Google expects that the quantum hardware has moved firmly into the lead. “We think that the classical side will never outperform quantum in this benchmark because we’re now looking at something on our new chip that takes under five minutes [but] would take 10^25 years, which is way longer than the age of the Universe,” Kelly said.

Building logical qubits

The work focuses on the behavior of logical qubits, in which a collection of individual hardware qubits are grouped together in a way that enables errors to be detected and corrected. These are going to be essential for running any complex algorithms, since the hardware itself experiences errors often enough to make some inevitable during any complex calculations.

This naturally creates a key milestone. You can get better error correction by adding more hardware qubits to each logical qubit. If each of those hardware qubits produces errors at a sufficient rate, however, then you’ll experience errors faster than you can correct for them. You need to get hardware qubits of a sufficient quality before you start benefitting from larger logical qubits. Google’s earlier hardware had made it past that milestone, but only barely. Adding more hardware qubits to each logical qubit only made for a marginal improvement.

That’s no longer the case. Google’s processors have the hardware qubits laid out on a square grid, with each connected to its nearest neighbors (typically four except at the edges of the grid). And there’s a specific error correction code structure, called the surface code, that fits neatly into this grid. And you can use surface codes of different sizes by using progressively more of the grid. The size of the grid being used is measured by a term called distance, with larger distance meaning a bigger logical qubit, and thus better error correction.

(In addition to a standard surface code, Google includes a few qubits that handle a phenomenon called “leakage,” where a qubit ends up in a higher-energy state, instead of the two low-energy states defined as zero and one.)
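
For a sense of scale, a standard (rotated) surface code at distance d uses d-squared data qubits plus d-squared minus one measurement qubits. The count below ignores the extra leakage-handling qubits mentioned above, so it is only an approximation of how the grid sizes map onto the 105-qubit chip.

```python
def surface_code_qubits(distance):
    """Hardware qubits in a rotated surface code: d^2 data + (d^2 - 1) measure qubits."""
    return 2 * distance ** 2 - 1

for d in (3, 5, 7):
    print(f"distance {d}: {surface_code_qubits(d)} hardware qubits")
# distance 3 -> 17, distance 5 -> 49, distance 7 -> 97: the largest patch
# nearly fills Willow's 105 qubits, consistent with the article's description.
```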

The key result is that going from a distance of three to a distance of five more than doubled the ability of the system to catch and correct errors. Going from a distance of five to a distance of seven doubled it again. Which shows that the hardware qubits have reached a sufficient quality that putting more of them into a logical qubit has an exponential effect.

“As we increase the grid from three by three to five by five to seven by seven, the error rate is going down by a factor of two each time,” said Google’s Michael Newman. “And that’s that exponential error suppression that we want.”

Going big

The second thing they demonstrated is that, if you make the largest logical qubit that the hardware can support, with a distance of 15, it’s possible to hang onto the quantum information for an average of an hour. This is striking because Google’s earlier work had found that its processors experience widespread simultaneous errors that the team ascribed to cosmic ray impacts. (IBM, however, has indicated it doesn’t see anything similar, so it’s not clear whether this diagnosis is correct.) Those happened every 10 seconds or so. But this work shows that a sufficiently large error code can correct for these events, whatever their cause.

That said, these qubits don’t survive indefinitely, and the failures seem to come in two forms. One appears to be a localized, temporary increase in errors. The second, more difficult problem involves a widespread spike in error detection affecting an area that includes roughly 30 qubits. At this point, however, Google has only seen six of these events, so they told Ars that it’s difficult to really characterize them. “It’s so rare it actually starts to become a bit challenging to study because you have to gain a lot of statistics to even see those events at all,” said Kelly.

Beyond the relative durability of these logical qubits, the paper notes another advantage to going with larger code distances: it enhances the impact of further hardware improvements. Google estimates that at a distance of 15, improving hardware performance by a factor of two would drop errors in the logical qubit by a factor of 250. At a distance of 27, the same hardware improvement would lead to an improvement of over 10,000 in the logical qubit’s performance.
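
Those estimates follow from the standard surface code scaling, in which the logical error rate shrinks like (1/Lambda) raised to the power (d+1)/2, where Lambda is the suppression factor per step in distance. Assuming a twofold hardware improvement roughly doubles Lambda (an assumption, not a statement from the paper), the arithmetic reproduces the article's figures:

```python
def logical_error_improvement(distance, hardware_factor):
    """Factor by which the logical error rate drops if the per-step suppression
    factor Lambda improves by hardware_factor, given scaling ~ (1/Lambda)**((d+1)/2)."""
    return hardware_factor ** ((distance + 1) / 2)

print(logical_error_improvement(15, 2))   # 2**8  = 256    (article: ~250)
print(logical_error_improvement(27, 2))   # 2**14 = 16384  (article: "over 10,000")
```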

Note that none of this will ever get the error rate to zero. Instead, we just need to get the error rate to a level where an error is unlikely for a given calculation (more complex calculations will require a lower error rate). “It’s worth understanding that there’s always going to be some type of error floor and you just have to push it low enough to the point where it practically is irrelevant,” Kelly said. “So for example, we could get hit by an asteroid and the entire Earth could explode and that would be a correlated error that our quantum computer is not currently built to be robust to.”

Obviously, a lot of additional work will need to be done to both make logical qubits like this survive for even longer, and to ensure we have the hardware to host enough logical qubits to perform calculations. But the exponential improvements here, to Google, suggest that there’s nothing obvious standing in the way of that. “We woke up one morning and we kind of got these results and we were like, wow, this is going to work,” Newman said. “This is really it.”

Nature, 2024. DOI: 10.1038/s41586-024-08449-y  (About DOIs).


John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.


Cheerios effect inspires novel robot design

There’s a common popular science demonstration involving “soap boats,” in which liquid soap poured onto the surface of water creates a propulsive flow driven by gradients in surface tension. But it doesn’t last very long since the soapy surfactants rapidly saturate the water surface, eliminating that surface tension. Using ethanol to create similar “cocktail boats” can significantly extend the effect because the alcohol evaporates rather than saturating the water.

That simple classroom demonstration could also be used to propel tiny robotic devices across liquid surfaces to carry out various environmental or industrial tasks, according to a preprint posted to the physics arXiv. The authors also exploited the so-called “Cheerios effect” as a means of self-assembly to create clusters of tiny ethanol-powered robots.

As previously reported, those who love their Cheerios for breakfast are well acquainted with how those last few tasty little “O”s tend to clump together in the bowl: either drifting to the center or to the outer edges. The “Cheerios effect” is found throughout nature, such as in grains of pollen (or, alternatively, mosquito eggs or beetles) floating on top of a pond; small coins floating in a bowl of water; or fire ants clumping together to form life-saving rafts during floods. A 2005 paper in the American Journal of Physics outlined the underlying physics, identifying the culprit as a combination of buoyancy, surface tension, and the so-called “meniscus effect.”

It all adds up to a type of capillary action. Basically, the mass of the Cheerios is insufficient to break the milk’s surface tension. But it’s enough to put a tiny dent in the surface of the milk in the bowl, such that if two Cheerios are sufficiently close, the curved surface in the liquid (meniscus) will cause them to naturally drift toward each other. The “dents” merge and the “O”s clump together. Add another Cheerio into the mix, and it, too, will follow the curvature in the milk to drift toward its fellow “O”s.

Physicists made the first direct measurements of the various forces at work in the phenomenon in 2019. And they found one extra factor underlying the Cheerios effect: The disks tilted toward each other as they drifted closer in the water. So the disks pushed harder against the water’s surface, resulting in a pushback from the liquid. That’s what leads to an increase in the attraction between the two disks.
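
The range of that attraction is set by the capillary length, the distance over which the meniscus "dent" around a floating object relaxes back to a flat surface; beyond a few capillary lengths the pull becomes negligible. A quick estimate with water-like values (approximate numbers, not measurements from the studies above):

```python
import math

def capillary_length(surface_tension_n_per_m, density_kg_per_m3, g=9.81):
    """Length scale sqrt(gamma / (rho * g)) over which a floater's meniscus extends."""
    return math.sqrt(surface_tension_n_per_m / (density_kg_per_m3 * g))

# Water-like milk: surface tension ~0.07 N/m, density ~1000 kg/m^3.
print(f"capillary length ~ {capillary_length(0.07, 1000) * 1000:.1f} mm")  # ~2.7 mm
```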


Microsoft and Atom Computing combine for quantum error correction demo


New work provides a good view of where the field currently stands.

The first-generation tech demo of Atom’s hardware. Things have progressed considerably since. Credit: Atom Computing

In September, Microsoft made an unusual combination of announcements. It demonstrated progress with quantum error correction, something that will be needed for the technology to move much beyond the interesting demo phase, using hardware from a quantum computing startup called Quantinuum. At the same time, however, the company also announced that it was forming a partnership with a different startup, Atom Computing, which uses a different technology to make qubits available for computations.

Given that, it was probably inevitable that the folks in Redmond, Washington, would want to show that similar error correction techniques would also work with Atom Computing’s hardware. It didn’t take long, as the two companies are releasing a draft manuscript describing their work on error correction today. The paper serves as both a good summary of where things currently stand in the world of error correction, as well as a good look at some of the distinct features of computation using neutral atoms.

Atoms and errors

While we have various technologies that provide a way of storing and manipulating bits of quantum information, none of them can be operated error-free. At present, errors make it difficult to perform even the simplest computations that are clearly beyond the capabilities of classical computers. More sophisticated algorithms would inevitably encounter an error before they could be completed, a situation that would remain true even if we could somehow improve the hardware error rates of qubits by a factor of 1,000—something we’re unlikely to ever be able to do.

The solution to this is to use what are called logical qubits, which distribute quantum information across multiple hardware qubits and allow the detection and correction of errors when they occur. Since multiple qubits get linked together to operate as a single logical unit, the hardware error rate still matters. If it’s too high, then adding more hardware qubits just means that errors will pop up faster than they can possibly be corrected.

We’re now at the point where, for a number of technologies, hardware error rates have passed the break-even point, and adding more hardware qubits can lower the error rate of a logical qubit based on them. This was demonstrated using neutral atom qubits by an academic lab at Harvard University about a year ago. The new manuscript demonstrates that it also works on a commercial machine from Atom Computing.

Neutral atoms, which can be held in place using a lattice of laser light, have a number of distinct advantages when it comes to quantum computing. Every single atom will behave identically, meaning that you don’t have to manage the device-to-device variability that’s inevitable with fabricated electronic qubits. Atoms can also be moved around, allowing any atom to be entangled with any other. This any-to-any connectivity can enable more efficient algorithms and error-correction schemes. The quantum information is typically stored in the spin of the atom’s nucleus, which is shielded from environmental influences by the cloud of electrons that surround it, making them relatively long-lived qubits.

Operations, including gates and readout, are performed using lasers. The way the physics works, the spacing of the atoms determines how the laser affects them. If two atoms are within a critical distance of each other, the laser can perform a single operation, called a two-qubit gate, that affects both of their states. Outside this distance, a laser only affects each atom individually. This allows fine control over gate operations.
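As a rough illustration of that distance dependence, the toy sketch below takes a set of atom positions and a hypothetical interaction radius (the numbers are invented for the example, not Atom Computing's actual parameters) and sorts the illuminated atoms into pairs that would see a two-qubit gate and lone atoms that would only be driven individually.

```python
from itertools import combinations
from math import dist

# Hypothetical interaction radius, purely for illustration; the real value
# depends on the atomic species and the laser parameters used.
INTERACTION_RADIUS_UM = 3.0

def classify_pulse_targets(positions):
    """Split illuminated atoms into entangling pairs and lone single-qubit targets.

    `positions` maps atom names to (x, y) coordinates in micrometers.  Two atoms
    closer than the interaction radius respond to a pulse together (a two-qubit
    gate); everything else is driven as an individual qubit.
    """
    paired, pairs = set(), []
    for (a, pa), (b, pb) in combinations(positions.items(), 2):
        if dist(pa, pb) <= INTERACTION_RADIUS_UM and a not in paired and b not in paired:
            pairs.append((a, b))
            paired.update((a, b))
    singles = [name for name in positions if name not in paired]
    return pairs, singles

pairs, singles = classify_pulse_targets({
    "q0": (0.0, 0.0), "q1": (2.0, 0.0),   # close enough to interact as a pair
    "q2": (10.0, 0.0),                    # too far from anything to interact
})
print(pairs, singles)   # [('q0', 'q1')] ['q2']
```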

That said, operations are relatively slow compared to some electronic qubits, and atoms can occasionally be lost entirely. The optical traps that hold atoms in place also depend on the atom being in its ground state; if an atom ends up stuck in a different state, it can drift off and be lost. This is actually somewhat useful, in that it converts an unexpected state into a clear error.


Atom Computing’s system. Rows of atoms are held far enough apart so that a single laser sent across them (green bar) only operates on individual atoms. If the atoms are moved to the interaction zone (red bar), a laser can perform gates on pairs of atoms. Spaces where atoms can be held can be left empty to avoid performing unneeded operations. Credit: Reichardt, et al.

The machine used in the new demonstration hosts 256 of these neutral atoms. Atom Computing has them arranged in sets of parallel rows, with space in between to let the atoms be shuffled around. For single-qubit gates, it's possible to shine a laser across the rows, causing every atom it touches to undergo that operation. For two-qubit gates, pairs of atoms get moved to the end of the row and placed a specific distance apart, at which point a laser will cause the gate to be performed on every pair present.

Atom’s hardware also allows a constant supply of new atoms to be brought in to replace any that are lost. It’s also possible to image the atom array in between operations to determine whether any atoms have been lost and if any are in the wrong state.

It’s only logical

As a general rule, the more hardware qubits you dedicate to each logical qubit, the more simultaneous errors you can identify. That identification enables two ways of handling an error. In the first, you simply discard any calculation with an error and start over. In the second, you use information about the error to try to fix it, although the repair involves additional operations that can potentially trigger a separate error.
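A quick Monte Carlo makes the trade-off concrete. It uses a toy three-qubit repetition code and an invented error rate rather than anything measured in the paper: discarding flagged runs leaves a very low error rate among the survivors (at the cost of throwing runs away), while majority-vote correction keeps every run at a somewhat higher error rate. In this toy, that ordering comes purely from the statistics of the code; in the real experiments, described below, the correction step itself was also error-prone.

```python
import random

random.seed(0)
P_FLIP = 0.05        # invented physical error probability per copy
TRIALS = 200_000

raw_err = detect_kept = detect_err = correct_err = 0
for _ in range(TRIALS):
    flips = [random.random() < P_FLIP for _ in range(3)]

    # "Raw" hardware: a single unprotected qubit.
    raw_err += flips[0]

    # Detect and discard: keep the run only if all three copies agree.
    if all(flips) or not any(flips):
        detect_kept += 1
        detect_err += all(flips)          # undetectable case: every copy flipped

    # Detect and correct: majority vote, wrong whenever 2+ copies flipped.
    correct_err += sum(flips) >= 2

print(f"raw hardware:     {raw_err / TRIALS:.4f}")
print(f"detect + discard: {detect_err / detect_kept:.6f} "
      f"(kept {detect_kept / TRIALS:.0%} of runs)")
print(f"detect + correct: {correct_err / TRIALS:.4f}")
```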

For this work, the Microsoft/Atom team used relatively small logical qubits (meaning they used very few hardware qubits), which meant they could fit more of them within the 256 total hardware qubits the machine made available. They also checked the error rates of both error detection with discard and error detection with correction.

The research team did two main demonstrations. One was placing 24 of these logical qubits into what's called a cat state, named after Schrödinger's hypothetical feline. This is when a quantum object simultaneously has a non-zero probability of being in two mutually exclusive states. In this case, the researchers placed 24 logical qubits in an entangled cat state, the largest ensemble of this sort yet created. Separately, they implemented what's called the Bernstein-Vazirani algorithm. The classical version of this algorithm requires individual queries to identify each bit in a string of them; the quantum version obtains the entire string with a single query, so it's a notable case where a quantum speedup is possible.
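For the curious, here's what Bernstein-Vazirani looks like in a small, self-contained statevector simulation in Python. This is the generic textbook version of the algorithm, not the error-corrected circuit run on Atom's machine: a classical solver needs one oracle query per bit of the hidden string, while the quantum circuit recovers the whole string from a single call to the phase oracle.

```python
import numpy as np

def bernstein_vazirani(secret: str) -> str:
    """Recover a hidden bit string with one oracle call (toy statevector simulation).

    The oracle computes f(x) = s.x mod 2 for the hidden string s.  A classical
    solver needs len(secret) queries, one per bit; here the whole string drops
    out of a single application of the phase oracle.
    """
    n = len(secret)
    s = int(secret, 2)
    dim = 2 ** n

    # Hadamards on |0...0> give the uniform superposition over all bit strings.
    state = np.full(dim, 1.0 / np.sqrt(dim))

    # One call to the phase oracle: |x> -> (-1)^(s.x) |x>.
    signs = np.array([-1.0 if bin(x & s).count("1") % 2 else 1.0 for x in range(dim)])
    state *= signs

    # A second layer of Hadamards (a Walsh-Hadamard transform on the vector)
    # maps this state exactly onto the basis state |s>.
    for q in range(n):
        state = state.reshape(2 ** q, 2, -1)
        state = np.stack(
            [state[:, 0, :] + state[:, 1, :],
             state[:, 0, :] - state[:, 1, :]],
            axis=1,
        ) / np.sqrt(2)
    state = state.reshape(dim)

    # Measurement is deterministic: all of the amplitude sits on |s>.
    return format(int(np.argmax(np.abs(state))), f"0{n}b")

print(bernstein_vazirani("10110"))   # prints 10110
```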

Both demonstrations showed a similar pattern. When done directly on the hardware, with each qubit being a single atom, there was an appreciable error rate. By detecting errors and discarding those calculations where they occurred, it was possible to significantly improve the error rate of the remaining calculations. Note that this doesn't eliminate errors, as it's possible for multiple errors to occur simultaneously, altering the value of the qubit without leaving an indication that can be spotted with these small logical qubits.

Discarding has its limits; as calculations become increasingly complex, involving more qubits or operations, eventually every calculation will contain an error, and you'd end up discarding everything. That's why we'll ultimately need to correct the errors.
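The arithmetic behind that limit is simple: if each operation independently trips the error detector with some fixed probability, the fraction of runs that survive post-selection shrinks exponentially with the number of operations. A tiny sketch with an illustrative 1 percent detection rate per operation (not a measured figure) makes the point.

```python
def fraction_kept(p_detect_per_op, n_ops):
    """Share of runs with no detected error after n_ops operations, assuming each
    operation independently trips a detector with probability p_detect_per_op."""
    return (1 - p_detect_per_op) ** n_ops

# With an illustrative 1% detection rate per operation, post-selection is cheap
# for short circuits and hopeless for long ones.
for n_ops in (10, 100, 1_000, 10_000):
    print(f"{n_ops:>6} ops -> keep {fraction_kept(0.01, n_ops):.2e} of runs")
```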

In these experiments, however, the process of correcting the error (taking an entirely new atom and setting it into the appropriate state) was also error-prone. So, while it could be done, the overall error rate ended up intermediate between the detect-and-discard approach and running the operations directly on the hardware.

In the end, the current hardware has an error rate that's good enough that error correction actually improves the probability that a set of operations can be performed without producing an error, but not good enough that we can perform the sort of complex operations that would give quantum computers an advantage in useful calculations. And that's not just true for Atom's hardware; similar things can be said for other error-correction demonstrations done on different machines.

There are two ways to go beyond these current limits. One is simply to improve the error rates of the hardware qubits further, as fewer total errors make it more likely that we can catch and correct them. The second is to increase the qubit counts so that we can host larger, more robust logical qubits. We’re obviously going to need to do both, and Atom’s partnership with Microsoft was formed in the hope that it will help both companies get there faster.


Microsoft and Atom Computing combine for quantum error correction demo Read More »

scientist-behind-superconductivity-claims-ousted

Scientist behind superconductivity claims ousted

University of Rochester physicist Ranga Dias made headlines with his controversial claims of high-temperature superconductivity—and made headlines again when the two papers reporting the breakthroughs were later retracted under suspicion of scientific misconduct, although Dias denied any wrongdoing. The university conducted a formal investigation over the past year and has now terminated Dias’ employment, The Wall Street Journal reported.

“In the past year, the university completed a fair and thorough investigation—conducted by a panel of nationally and internationally known physicists—into data reliability concerns within several retracted papers in which Dias served as a senior and corresponding author,” a spokesperson for the University of Rochester said in a statement to the WSJ, confirming his termination. “The final report concluded that he engaged in research misconduct while a faculty member here.”

The spokesperson declined to elaborate further on the details of his departure, and Dias did not respond to the WSJ’s request for comment. Dias did not have tenure, so the final decision rested with the Board of Trustees after a recommendation from university President Sarah Mangelsdorf. Mangelsdorf had called for terminating his position in an August letter to the chair and vice chair of the Board of Trustees, so the decision should not come as a surprise. Dias’ lawsuit claiming that the investigation was biased was dismissed by a judge in April.

Ars has been following this story ever since Dias first burst onto the scene with reports of a high-pressure, room-temperature superconductor, published in Nature in 2020. Even as that paper was being retracted due to concerns about the validity of some of its data, Dias published a second paper in Nature claiming a similar breakthrough: a superconductor that works at high temperatures but somewhat lower pressures. Shortly afterward, that paper was retracted as well. As Ars Science Editor John Timmer reported previously:

Dias’ lab was focused on high-pressure superconductivity. At extreme pressures, the orbitals where electrons hang out get distorted, which can alter the chemistry and electronic properties of materials. This can mean the formation of chemical compounds that don’t exist at normal pressures, along with distinct conductivity. In a number of cases, these changes enabled superconductivity at unusually high temperatures, although still well below the freezing point of water.

Dias, however, supposedly found a combination of chemicals that would boost the transition to superconductivity to near room temperature, although only at extreme pressures. While the results were plausible, the details regarding how some of the data was processed to produce one of the paper’s key graphs were lacking, and Dias didn’t provide a clear explanation.

The ensuing investigation cleared Dias of misconduct for that first paper. Then came the second paper, which reported another high-temperature superconductor forming at less extreme pressures. However, potential problems soon became apparent, with many of the authors calling for its retraction, although Dias did not.

Scientist behind superconductivity claims ousted Read More »

ibm-boosts-the-amount-of-computation-you-can-get-done-on-quantum-hardware

IBM boosts the amount of computation you can get done on quantum hardware

By making small adjustments to the frequency at which the qubits operate, it's possible to avoid these problems. This can be done when the Heron chip is being calibrated before it's opened for general use.

Separately, the company has done a rewrite of the software that controls the system during operations. “After learning from the community, seeing how to run larger circuits, [we were able to] almost better define what it should be and rewrite the whole stack towards that,” Gambetta said. The result is a dramatic speed-up. “Something that took 122 hours now is down to a couple of hours,” he told Ars.

Since people are paying for time on this hardware, that's good for customers now. However, it could also pay off in the longer run: some errors occur randomly, so less time spent on a calculation can mean fewer errors.

Deeper computations

Despite all those improvements, errors are still likely during any significant calculations. While it continues to work toward developing error-corrected qubits, IBM is focusing on what it calls error mitigation, which it first detailed last year. As we described it then:

“The researchers turned to a method where they intentionally amplified and then measured the processor’s noise at different levels. These measurements are used to estimate a function that produces similar output to the actual measurements. That function can then have its noise set to zero to produce an estimate of what the processor would do without any noise at all.”

The problem here is that using the function is computationally difficult, and the difficulty increases with the qubit count. So, while it's still easier to do the error mitigation calculations than to fully simulate the quantum computer's behavior on the same classical hardware, there's still the risk of the mitigation becoming computationally intractable. But IBM has taken the time to optimize that, too. "They've got algorithmic improvements, and the method that uses tensor methods [now] uses the GPU," Gambetta told Ars. "So I think it's a combination of both."
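The basic recipe from the quote above, deliberately amplifying the noise, measuring at several amplification levels, and extrapolating back to zero, can be sketched in a few lines of Python. The data points and the quadratic fit here are purely illustrative; IBM's production approach uses far more sophisticated noise models and extrapolation, which is where the computationally demanding tensor methods come in.

```python
import numpy as np

# Zero-noise extrapolation in miniature: run the same circuit with the noise
# deliberately amplified by known factors, fit a curve to the results, and
# evaluate that curve at zero noise.  All numbers here are made up.
noise_scale = np.array([1.0, 1.5, 2.0, 3.0])      # how much the noise was amplified
measured = np.array([0.82, 0.74, 0.67, 0.55])     # hypothetical expectation values

coeffs = np.polyfit(noise_scale, measured, deg=2)  # fit a simple quadratic model
zero_noise_estimate = np.polyval(coeffs, 0.0)      # extrapolate to "no noise"

print(f"noisy value at scale 1: {measured[0]:.2f}")
print(f"extrapolated zero-noise estimate: {zero_noise_estimate:.2f}")
```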

IBM boosts the amount of computation you can get done on quantum hardware Read More »

what-makes-baseball’s-“magic-mud”-so-special?

What makes baseball’s “magic mud” so special?

“Magic mud” composition and microstructure: (top right) a clean baseball surface; (bottom right) a mudded baseball. Credit: S. Pradeep et al., 2024

Pradeep et al. found that magic mud’s particles are primarily silt and clay, with a bit of sand and organic material. The stickiness comes from the clay, silt, and organic matter, while the sand makes it gritty. So the mud “has the properties of skin cream,” they wrote. “This allows it to be held in the hand like a solid but also spread easily to penetrate pores and make a very thin coating on the baseball.”

When the mud dries on the baseball, however, the residue left behind is not like skin cream. That’s due to the angular sand particles bonded to the baseball by the clay, which can increase surface friction by as much as a factor of two. Meanwhile, the finer particles double the adhesion. “The relative proportions of cohesive particulates, frictional sand, and water conspire to make a material that flows like skin cream but grips like sandpaper,” they wrote.

Despite its relatively mundane components, the magic mud shows remarkable mechanical behaviors that the authors think could make it useful in other practical applications. For instance, it might replace synthetic materials as an effective lubricant, provided the gritty sand particles are removed. Or it could be used as a friction agent to improve traction on slippery surfaces, provided one could determine the optimal fraction of sand content that wouldn't diminish its spreadability. Or it might be used as a binding agent in locally sourced geomaterials for construction.

“As for the future of Rubbing Mud in Major League Baseball, unraveling the mystery of its behavior does not and should not necessarily lead to a synthetic replacement,” the authors concluded. “We rather believe the opposite; Rubbing Mud is a nature-based material that is replenished by the tides, and only small quantities are needed for great effect. In a world that is turning toward green solutions, this seemingly antiquated baseball tradition provides a glimpse of a future of Earth-inspired materials science.”

DOI: PNAS, 2024. 10.1073/pnas.241351412  (About DOIs).

What makes baseball’s “magic mud” so special? Read More »