There’s a common popular science demonstration involving “soap boats,” in which liquid soap poured onto the surface of water creates a propulsive flow driven by gradients in surface tension. But it doesn’t last very long since the soapy surfactants rapidly saturate the water surface, eliminating that surface tension. Using ethanol to create similar “cocktail boats” can significantly extend the effect because the alcohol evaporates rather than saturating the water.
That simple classroom demonstration could also be used to propel tiny robotic devices across liquid surfaces to carry out various environmental or industrial tasks, according to a preprint posted to the physics arXiv. The authors also exploited the so-called “Cheerios effect” as a means of self-assembly to create clusters of tiny ethanol-powered robots.
As previously reported, those who love their Cheerios for breakfast are well acquainted with how those last few tasty little “O”s tend to clump together in the bowl: either drifting to the center or to the outer edges. The “Cheerios effect is found throughout nature, such as in grains of pollen (or, alternatively, mosquito eggs or beetles) floating on top of a pond; small coins floating in a bowl of water; or fire ants clumping together to form life-saving rafts during floods. A 2005 paper in the American Journal of Physics outlined the underlying physics, identifying the culprit as a combination of buoyancy, surface tension, and the so-called “meniscus effect.”
It all adds up to a type of capillary action. Basically, the mass of the Cheerios is insufficient to break the milk’s surface tension. But it’s enough to put a tiny dent in the surface of the milk in the bowl, such that if two Cheerios are sufficiently close, the curved surface in the liquid (meniscus) will cause them to naturally drift toward each other. The “dents” merge and the “O”s clump together. Add another Cheerio into the mix, and it, too, will follow the curvature in the milk to drift toward its fellow “O”s.
Physicists made the first direct measurements of the various forces at work in the phenomenon in 2019. And they found one extra factor underlying the Cheerios effect: The disks tilted toward each other as they drifted closer in the water. So the disks pushed harder against the water’s surface, resulting in a pushback from the liquid. That’s what leads to an increase in the attraction between the two disks.
New work provides a good view of where the field currently stands.
The first-generation tech demo of Atom’s hardware. Things have progressed considerably since. Credit: Atom Computing
In September, Microsoft made an unusual combination of announcements. It demonstrated progress with quantum error correction, something that will be needed for the technology to move much beyond the interesting demo phase, using hardware from a quantum computing startup called Quantinuum. At the same time, however, the company also announced that it was forming a partnership with a different startup, Atom Computing, which uses a different technology to make qubits available for computations.
Given that, it was probably inevitable that the folks in Redmond, Washington, would want to show that similar error correction techniques would also work with Atom Computing’s hardware. It didn’t take long, as the two companies are releasing a draft manuscript describing their work on error correction today. The paper serves as both a good summary of where things currently stand in the world of error correction, as well as a good look at some of the distinct features of computation using neutral atoms.
Atoms and errors
While we have various technologies that provide a way of storing and manipulating bits of quantum information, none of them can be operated error-free. At present, errors make it difficult to perform even the simplest computations that are clearly beyond the capabilities of classical computers. More sophisticated algorithms would inevitably encounter an error before they could be completed, a situation that would remain true even if we could somehow improve the hardware error rates of qubits by a factor of 1,000—something we’re unlikely to ever be able to do.
The solution to this is to use what are called logical qubits, which distribute quantum information across multiple hardware qubits and allow the detection and correction of errors when they occur. Since multiple qubits get linked together to operate as a single logical unit, the hardware error rate still matters. If it’s too high, then adding more hardware qubits just means that errors will pop up faster than they can possibly be corrected.
We’re now at the point where, for a number of technologies, hardware error rates have passed the break-even point, and adding more hardware qubits can lower the error rate of a logical qubit based on them. This was demonstrated using neutral atom qubits by an academic lab at Harvard University about a year ago. The new manuscript demonstrates that it also works on a commercial machine from Atom Computing.
Neutral atoms, which can be held in place using a lattice of laser light, have a number of distinct advantages when it comes to quantum computing. Every single atom will behave identically, meaning that you don’t have to manage the device-to-device variability that’s inevitable with fabricated electronic qubits. Atoms can also be moved around, allowing any atom to be entangled with any other. This any-to-any connectivity can enable more efficient algorithms and error-correction schemes. The quantum information is typically stored in the spin of the atom’s nucleus, which is shielded from environmental influences by the cloud of electrons that surround it, making them relatively long-lived qubits.
Operations, including gates and readout, are performed using lasers. The way the physics works, the spacing of the atoms determines how the laser affects them. If two atoms are a critical distance apart, the laser can perform a single operation, called a two-qubit gate, that affects both of their states. Anywhere outside this distance, and a laser only affects each atom individually. This allows a fine control over gate operations.
That said, operations are relatively slow compared to some electronic qubits, and atoms can occasionally be lost entirely. The optical traps that hold atoms in place are also contingent upon the atom being in its ground state; if any atom ends up stuck in a different state, it will be able to drift off and be lost. This is actually somewhat useful, in that it converts an unexpected state into a clear error.
Atom Computing’s system. Rows of atoms are held far enough apart so that a single laser sent across them (green bar) only operates on individual atoms. If the atoms are moved to the interaction zone (red bar), a laser can perform gates on pairs of atoms. Spaces where atoms can be held can be left empty to avoid performing unneeded operations. Credit: Reichardt, et al.
The machine used in the new demonstration hosts 256 of these neutral atoms. Atom Computing has them arranged in sets of parallel rows, with space in between to let the atoms be shuffled around. For single-qubit gates, it’s possible to shine a laser across the rows, causing every atom it touches to undergo that operation. For two-qubit gates, pairs of atoms get moved to the end of the row and moved a specific distance apart, at which point a laser will cause the gate to be performed on every pair present.
Atom’s hardware also allows a constant supply of new atoms to be brought in to replace any that are lost. It’s also possible to image the atom array in between operations to determine whether any atoms have been lost and if any are in the wrong state.
It’s only logical
As a general rule, the more hardware qubits you dedicate to each logical qubit, the more simultaneous errors you can identify. This identification can enable two ways of handling the error. In the first, you simply discard any calculation with an error and start over. In the second, you can use information about the error to try to fix it, although the repair involves additional operations that can potentially trigger a separate error.
For this work, the Microsoft/Atom team used relatively small logical qubits (meaning they used very few hardware qubits), which meant they could fit more of them within 256 total hardware qubits the machine made available. They also checked the error rate of both error detection with discard and error detection with correction.
The research team did two main demonstrations. One was placing 24 of these logical qubits into what’s called a cat state, named after Schrödinger’s hypothetical feline. This is when a quantum object simultaneously has non-zero probability of being in two mutually exclusive states. In this case, the researchers placed 24 logical qubits in an entangled cat state, the largest ensemble of this sort yet created. Separately, they implemented what’s called the Bernstein-Vazirani algorithm. The classical version of this algorithm requires individual queries to identify each bit in a string of them; the quantum version obtains the entire string with a single query, so is a notable case of something where a quantum speedup is possible.
Both of these showed a similar pattern. When done directly on the hardware, with each qubit being a single atom, there was an appreciable error rate. By detecting errors and discarding those calculations where they occurred, it was possible to significantly improve the error rate of the remaining calculations. Note that this doesn’t eliminate errors, as it’s possible for multiple errors to occur simultaneously, altering the value of the qubit without leaving an indication that can be spotted with these small logical qubits.
Discarding has its limits; as calculations become increasingly complex, involving more qubits or operations, it will inevitably mean every calculation will have an error, so you’d end up wanting to discard everything. Which is why we’ll ultimately need to correct the errors.
In these experiments, however, the process of correcting the error—taking an entirely new atom and setting it into the appropriate state—was also error-prone. So, while it could be done, it ended up having an overall error rate that was intermediate between the approach of catching and discarding errors and the rate when operations were done directly on the hardware.
In the end, the current hardware has an error rate that’s good enough that error correction actually improves the probability that a set of operations can be performed without producing an error. But not good enough that we can perform the sort of complex operations that would lead quantum computers to have an advantage in useful calculations. And that’s not just true for Atom’s hardware; similar things can be said for other error-correction demonstrations done on different machines.
There are two ways to go beyond these current limits. One is simply to improve the error rates of the hardware qubits further, as fewer total errors make it more likely that we can catch and correct them. The second is to increase the qubit counts so that we can host larger, more robust logical qubits. We’re obviously going to need to do both, and Atom’s partnership with Microsoft was formed in the hope that it will help both companies get there faster.
John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.
University of Rochester physicist Ranga Dias made headlines with his controversial claims of high-temperature superconductivity—and made headlines again when the two papers reporting the breakthroughs were later retracted under suspicion of scientific misconduct, although Dias denied any wrongdoing. The university conducted a formal investigation over the past year and has now terminated Dias’ employment, The Wall Street Journal reported.
“In the past year, the university completed a fair and thorough investigation—conducted by a panel of nationally and internationally known physicists—into data reliability concerns within several retracted papers in which Dias served as a senior and corresponding author,” a spokesperson for the University of Rochester said in a statement to the WSJ, confirming his termination. “The final report concluded that he engaged in research misconduct while a faculty member here.”
The spokesperson declined to elaborate further on the details of his departure, and Dias did not respond to the WSJ’s request for comment. Dias did not have tenure, so the final decision rested with the Board of Trustees after a recommendation from university President Sarah Mangelsdorf. Mangelsdorf had called for terminating his position in an August letter to the chair and vice chair of the Board of Trustees, so the decision should not come as a surprise. Dias’ lawsuit claiming that the investigation was biased was dismissed by a judge in April.
Ars has been following this story ever since Dias first burst onto the scene with reports of a high-pressure, room-temperature superconductor, published in Nature in 2020. Even as that paper was being retracted due to concerns about the validity of some of its data, Dias published a second paper in Nature claiming a similar breakthrough: a superconductor that works at high temperatures but somewhat lower pressures. Shortly afterward, that paper was retracted as well. As Ars Science Editor John Timmer reported previously:
Dias’ lab was focused on high-pressure superconductivity. At extreme pressures, the orbitals where electrons hang out get distorted, which can alter the chemistry and electronic properties of materials. This can mean the formation of chemical compounds that don’t exist at normal pressures, along with distinct conductivity. In a number of cases, these changes enabled superconductivity at unusually high temperatures, although still well below the freezing point of water.
Dias, however, supposedly found a combination of chemicals that would boost the transition to superconductivity to near room temperature, although only at extreme pressures. While the results were plausible, the details regarding how some of the data was processed to produce one of the paper’s key graphs were lacking, and Dias didn’t provide a clear explanation.
The ensuing investigation cleared Dias of misconduct for that first paper. Then came the second paper, which reported another high-temperature superconductor forming at less extreme pressures. However, potential problems soon became apparent, with many of the authors calling for its retraction, although Dias did not.
By making small adjustments to the frequency that the qubits are operating at, it’s possible to avoid these problems. This can be done when the Heron chip is being calibrated before it’s opened for general use.
Separately, the company has done a rewrite of the software that controls the system during operations. “After learning from the community, seeing how to run larger circuits, [we were able to] almost better define what it should be and rewrite the whole stack towards that,” Gambetta said. The result is a dramatic speed-up. “Something that took 122 hours now is down to a couple of hours,” he told Ars.
Since people are paying for time on this hardware, that’s good for customers now. However, it could also pay off in the longer run, as some errors can occur randomly, so less time spent on a calculation can mean fewer errors.
Deeper computations
Despite all those improvements, errors are still likely during any significant calculations. While it continues to work toward developing error-corrected qubits, IBM is focusing on what it calls error mitigation, which it first detailed last year. As we described it then:
“The researchers turned to a method where they intentionally amplified and then measured the processor’s noise at different levels. These measurements are used to estimate a function that produces similar output to the actual measurements. That function can then have its noise set to zero to produce an estimate of what the processor would do without any noise at all.”
The problem here is that using the function is computationally difficult, and the difficulty increases with the qubit count. So, while it’s still easier to do error mitigation calculations than simulate the quantum computer’s behavior on the same hardware, there’s still the risk of it becoming computationally intractable. But IBM has also taken the time to optimize that, too. “They’ve got algorithmic improvements, and the method that uses tensor methods [now] uses the GPU,” Gambetta told Ars. “So I think it’s a combination of both.”
“Magic mud” composition and microstructure: (top right) a clean baseball surface; (bottom right) a mudded baseball.
Credit: S. Pradeep et al., 2024
“Magic mud” composition and microstructure: (top right) a clean baseball surface; (bottom right) a mudded baseball. Credit: S. Pradeep et al., 2024
Pradeep et al. found that magic mud’s particles are primarily silt and clay, with a bit of sand and organic material. The stickiness comes from the clay, silt, and organic matter, while the sand makes it gritty. So the mud “has the properties of skin cream,” they wrote. “This allows it to be held in the hand like a solid but also spread easily to penetrate pores and make a very thin coating on the baseball.”
When the mud dries on the baseball, however, the residue left behind is not like skin cream. That’s due to the angular sand particles bonded to the baseball by the clay, which can increase surface friction by as much as a factor of two. Meanwhile, the finer particles double the adhesion. “The relative proportions of cohesive particulates, frictional sand, and water conspire to make a material that flows like skin cream but grips like sandpaper,” they wrote.
Despite its relatively mundane components, the magic mud nonetheless shows remarkable mechanical behaviors that the authors think would make it useful in other practical applications. For instance, it might replace synthetic materials as an effective lubricant, provided the gritty sand particles are removed. Or it could be used as a friction agent to improve traction on slippery surfaces, provided one could define the optimal fraction of sand content that wouldn’t diminish its spreadability. Or it might be used as a binding agent in locally sourced geomaterials for construction.
“As for the future of Rubbing Mud in Major League Baseball, unraveling the mystery of its behavior does not and should not necessarily lead to a synthetic replacement,” the authors concluded. “We rather believe the opposite; Rubbing Mud is a nature-based material that is replenished by the tides, and only small quantities are needed for great effect. In a world that is turning toward green solutions, this seemingly antiquated baseball tradition provides a glimpse of a future of Earth-inspired materials science.”
When Zachary Lindsey, a physicist at Berry College in Georgia, decided to run an experiment on how to get the best speed and torque while playing disc golf (aka Frisbee golf), he had no trouble recruiting 24 eager participants keen on finding science-based tips on how to improve their game. Lindsey and his team determined the optimal thumb distance from the center of the disc to increase launch speed and distance, according to a new paper published in the journal AIP Advances.
Disc golf first emerged in the 1960s, but “Steady” Ed Hendrick, inventor of the modern Frisbee, is widely considered the “father” of the sport since it was he who coined and trademarked the name “disc golf” in 1975. He and his son founded their own company to manufacture the equipment used in the game. As of 2023, the Professional Disc Golf Association (PDGA) had over 107,000 registered members worldwide, with players hailing from 40 countries.
A disc golf course typically has either nine or 18 holes or targets, called “baskets.” There is a tee position for starting play, and players take turns throwing discs until they catch them in the basket, similar to how golfers work toward sinking a golf ball into a hole. The expected number of throws required of an experienced player to make the basket is considered “par.”
There are essentially three different disc types: drivers, mid-rangers, and putters. Driver discs are thin and sharp-edged, designed to reduce drag for long throws; they’re typically used for teeing off or other long-distance throws since a strong throw can cover as much as 500 feet. Putter discs, as the name implies, are better for playing close to the basket since they are thicker and thus have higher drag when in flight. Mid-range discs have elements of both drivers and putters, designed for distances of 200–300 feet—i.e., approaching the basket—where players want to optimize range and accuracy.
Enlarge/ Shark intestines are naturally occurring Tesla valves; scientists have figured out how to mimic their unique structure.
Sarah L. Keller/University of Washington
Scientists at the University of Washington have re-created the distinctive spiral shapes of shark intestines in 3D-printed pipes in order to study the unique fluid flow inside the spirals. Their prototypes kept fluids flowing in one preferred direction with no need for flaps to control that flow and performed significantly better than so-called “Tesla valves,” particularly when made of soft polymers, according to a new paper published in the Proceedings of the National Academy of Sciences.
As we’ve reported previously, in 1920, Serbian-born inventor Nikola Tesla designed and patented what he called a “valvular conduit“: a pipe whose internal design ensures that fluid will flow in one preferred direction, with no need for moving parts, making it ideal for microfluidics applications, among other uses. The key to Tesla’s ingenious valve design is a set of interconnected, asymmetric, tear-shaped loops.
In his patent application, Tesla described this series of 11 flow-control segments as being made of “enlargements, recessions, projections, baffles, or buckets which, while offering virtually no resistance to the passage of fluid in one direction, other than surface friction, constitute an almost impassable barrier to its flow in the opposite direction.” And because it achieves this with no moving parts, a Tesla valve is much more resistant to the wear and tear of frequent operation.
Tesla claimed that water would flow through his valve 200 times slower in one direction than another, which may have been an exaggeration. A team of scientists at New York University built a working Tesla valve in 2021, in accordance with the inventor’s design, and tested that claim by measuring the flow of water through the valve in both directions at various pressures. The scientists found the water only flowed about two times slower in the nonpreferred direction.
Flow rate proved to be a critical factor. The valve offered very little resistance at slow flow rates, but once that rate increased above a certain threshold, the valve’s resistance would increase as well, generating turbulent flows in the reverse direction, thereby “plugging” the pipe with vortices and disruptive currents. So it actually works more like a switch and can also help smooth out pulsing flows, akin to how AC/DC converters turn alternating currents into direct currents. That may even have been Tesla’s original intent in designing the valve, given that his biggest claim to fame is inventing both the AC motor and an AC/DC converter.
It helps to be a shark
Enlarge/ Different kinds of sharks have intestines with different spiral patterns that favor fluid flow in one direction.
Ido Levin
The Tesla valve also provides a useful model for how food moves through the digestive system of many species of shark. In 2020, Japanese researchers reconstructed micrographs of histological sections from a species of catshark into a three-dimensional model, offering a tantalizing glimpse of the anatomy of a scroll-type spiral intestine. The following year, scientists took CT scans of shark intestines and concluded that the intestines are naturally occurring Tesla valves.
That’s where the work of UW postdoc Ido Levin and his co-authors comes in. They had questions about the 2021 research in particular. “Flow asymmetry in a pipe with no moving flaps has tremendous technological potential, but the mechanism was puzzling,” said Levin. “It was not clear which parts of the shark’s intestinal structure contributed to the asymmetry and which served only to increase the surface area for nutrient uptake.”
Levin et al. 3D-printed several pipes with an internal helical structure mimicking that of shark intestines, varying certain geometrical parameters like the number of turns or the pitch angle of the helix. It was admittedly an idealized structure, so the team was delighted when the first batch, made from rigid materials, produced the hoped-for flow asymmetry. After further fine-tuning of the parameters, the rigid printed pipes produced flow asymmetries that matched or exceeded Tesla valves.
Enlarge/ Eight of the team’s 3D-printed prototypes with various interior helices.
Ido Levin/University of Washington
But the researchers weren’t done yet. “[Prior work] showed that if you connect these intestines in the same direction as a digestive tract, you get a faster flow of fluid than if you connect them the other way around. We thought this was very interesting from a physics perspective,” said Levin last year while presenting preliminary results at the 67th Annual Biophysical Society Meeting. “One of the theorems in physics actually states that if you take a pipe, and you flow fluid very slowly through it, you have the same flow if you invert it. So we were very surprised to see experiments that contradict the theory. But then you remember that the intestines are not made out of steel—they’re made of something soft, so while fluid flows through the pipe, it deforms it.”
That gave Levin et al. the idea to try making their pipes out of soft deformable polymers—the softest commercially available ones that could also be used for 3D printing. That batch of pipes performed seven times better on flow asymmetry than any prior measurements of Tesla valves. And since actual shark intestines are about 100 times softer than the polymers they used, the team thinks they can achieve even better performance, perhaps with hydrogels when they become more widely available as 3D printing continues to evolve. The biggest challenge, per the authors, is finding soft materials that can withstand high deformations.
Finally, because the pipes are three-dimensional, they can accommodate larger fluid volumes, opening up applications in larger commercial devices. “Chemists were already motivated to develop polymers that are simultaneously soft, strong and printable,” said co-author Alshakim Nelson, whose expertise lies in developing new types of polymers. “The potential use of these polymers to control flow in applications ranging from engineering to medicine strengthens that motivation.”
Enlarge/ Washington State University scientists conducted batting cage tests of wood and metal bats with young players.
There’s long been a debate in baseball circles about the respective benefits and drawbacks of using wood bats versus metal bats. However, there are relatively few scientific studies on the topic that focus specifically on young athletes, who are most likely to use metal bats. Scientists at Washington State University (WSU) conducted their own tests of wood and metal bats with young players. They found that while there are indeed performance differences between wooden and metal bats, a batter’s skill is still the biggest factor affecting how fast the ball comes off the bat, according to a new paper published in the Journal of Sports Engineering and Technology.
According to physicist and acoustician Daniel Russell of Penn State University—who was not involved in the study but has a long-standing interest in the physics of baseball ever since his faculty days at Kettering University in Michigan—metal bats were first introduced in 1974 and soon dominated NCAA college baseball, youth baseball, and adult amateur softball. Those programs liked the metal bats because they were less likely to break than traditional wooden bats, reducing costs.
Players liked them because it can be easier to control metal bats and swing faster, as the center of mass is closer to the balance point in the bat’s handle, resulting in a lower moment of inertia (or “swing weight”). A faster swing doesn’t mean that a hit ball will travel faster, however, since the lower moment of inertia is countered by a decreased collision efficiency. Metal bats are also more forgiving if players happen to hit the ball away from the proverbial “sweet spot” of the bat. (The definition of the sweet spot is a bit fuzzy because it is sometimes defined in different ways, but it’s commonly understood to be the area on the bat’s barrel that results in the highest batted ball speeds.)
“There’s more of a penalty when you’re not on the sweet spot with wood bats than with the other metal bats,” said Lloyd Smith, director of WSU’s Sport Science Laboratory and a co-author of the latest study. “[And] wood is still heavy. Part of baseball is hitting the ball far, but the other part is just hitting the ball. If you have a heavy bat, you’re going to have a harder time making contact because it’s harder to control.”
Metal bats may also improve performance via a kind of “trampoline effect.” Metal bats are hollow, while wood bats are solid. When a ball hits a wood bat, the bat barrel compresses by as much as 75 percent, such that internal friction forces decrease the initial energy by as much as 75 percent. A metal bat barrel behaves more like a spring when it compresses in response to a ball’s impact, so there is much less energy loss. Based on his own research back in 2004, Russell has found that improved performance of metal bats is linked to the frequency of the barrel’s mode of vibration, aka the “hoop mode.” (Bats with the lowest hoop frequency will have the highest performance.)
This electroactive polymer hydrogel “learned” to play Pong. Credit: Cell Reports Physical Science/Strong et al.
Pong will always hold a special place in the history of gaming as one of the earliest arcade video games. Introduced in 1972, it was a table tennis game featuring very simple graphics and gameplay. In fact, it’s simple enough that even non-living materials known as hydrogels can “learn” to play the game by “remembering” previous patterns of electrical stimulation, according to a new paper published in the journal Cell Reports Physical Science.
“Our research shows that even very simple materials can exhibit complex, adaptive behaviors typically associated with living systems or sophisticated AI,” said co-author Yoshikatsu Hayashi, a biomedical engineer at the University of Reading in the UK. “This opens up exciting possibilities for developing new types of ‘smart’ materials that can learn and adapt to their environment.”
Hydrogels are soft, flexible biphasic materials that swell but do not dissolve in water. So a hydrogel may contain a large amount of water but still maintain its shape, making it useful for a wide range of applications. Perhaps the best-known use is soft contact lenses, but various kinds of hydrogels are also used in breast implants, disposable diapers, EEG and ECG medical electrodes, glucose biosensors, encapsulating quantum dots, solar-powered water purification, cell cultures, tissue engineering scaffolds, water gel explosives, actuators for soft robotics, supersonic shock-absorbing materials, and sustained-release drug delivery systems, among other uses.
In April, Hayashi co-authored a paper showing that hydrogels can “learn” to beat in rhythm with an external pacemaker, something previously only achieved with living cells. They exploited the intrinsic ability of the hydrogels to convert chemical energy into mechanical oscillations, using the pacemaker to apply cyclic compressions. They found that when the oscillation of a gel sample matched the harmonic resonance of the pacemaker’s beat, the system kept a “memory” of that resonant oscillation period and could retain that memory even when the pacemaker was turned off. Such hydrogels might one day be a useful substitute for heart research using animals, providing new ways to research conditions like cardiac arrhythmia.
For this latest work, Hayashi and co-authors were partly inspired by a 2022 study in which brain cells in a dish—dubbed DishBrain—were electrically stimulated in such a way as to create useful feedback loops, enabling them to “learn” to play Pong (albeit badly). As Ars Science Editor John Timmer reported at the time:
Pong proved to be an excellent choice for the experiments. The environment only involves a couple of variables: the location of the paddle and the location of the ball. The paddle can only move along a single line, so the motor portion of things only needs two inputs: move up or move down. And there’s a clear reward for doing things well: you avoid an end state where the ball goes past the paddles and the game stops. It is a great setup for testing a simple neural network.
Put in Pong terms, the sensory portion of the network will take the positional inputs, determine an action (move the paddle up or down), and then generate an expectation for what the next state will be. If it’s interpreting the world correctly, that state will be similar to its prediction, and thus the sensory input will be its own reward. If it gets things wrong, then there will be a large mismatch, and the network will revise its connections and try again.
There were a few caveats—even the best systems didn’t play Pong all that well—but the approach mostly worked. Those systems comprising either mouse or human neurons saw the average length of Pong rallies increase over time, indicating they might be learning the game’s rules. Systems based on non-neural cells, or those lacking a reward system, didn’t see this sort of improvement. The findings provided some evidence that neural networks formed from actual neurons spontaneously develop the ability to learn. And that could explain some of the learning capabilities of actual brains, where smaller groups of neurons are organized into functional units.
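The feedback loop described above—predict the next state, act, and let prediction error drive revision of the internal model—can be sketched in a few lines of Python. This is a toy illustration of the principle, not the DishBrain setup; all names and values are invented for demonstration:

```python
class PongAgent:
    """Toy predictive agent: tracks an internal estimate of the ball's velocity."""

    def __init__(self):
        self.velocity_estimate = 0.0  # the agent's internal model of the world

    def act(self, paddle_y, ball_y):
        # Predict where the ball will be next, then move the paddle toward it.
        prediction = ball_y + self.velocity_estimate
        return 1 if prediction > paddle_y else -1  # motor output: up or down

    def learn(self, predicted_y, observed_y, rate=0.5):
        # A large mismatch between prediction and observation revises the model;
        # a good prediction leaves it (mostly) unchanged.
        self.velocity_estimate += rate * (observed_y - predicted_y)


# Toy environment: the ball drifts with a constant velocity unknown to the agent.
agent = PongAgent()
ball_y, true_velocity = 0.0, 0.8
for _ in range(20):
    predicted = ball_y + agent.velocity_estimate  # expectation of the next state
    ball_y += true_velocity                       # the world moves
    agent.learn(predicted, ball_y)                # prediction error drives the update

print(round(agent.velocity_estimate, 2))  # converges toward the true value, 0.8
```

After a handful of steps the mismatch shrinks and the estimate settles near the true velocity, the same "sensory input as its own reward" dynamic the experiments relied on.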
The Wow! signal, represented as “6EQUJ5,” was discovered in 1977 by astronomer Jerry Ehman.
Public domain
An unusually bright burst of radio waves—dubbed the Wow! signal—discovered in the 1970s has baffled astronomers ever since, given the tantalizing possibility that it just might be from an alien civilization trying to communicate with us. A team of astronomers think they might have a better explanation, according to a preprint posted to the physics arXiv: clouds of atomic hydrogen that essentially act like a naturally occurring galactic maser, emitting a beam of intense microwave radiation when zapped by a flare from a passing magnetar.
As previously reported, the Wow! signal was detected on August 18, 1977, by The Ohio State University Radio Observatory, known as “Big Ear.” Astronomy professor Jerry Ehman was analyzing Big Ear data in the form of printouts that, to the untrained eye, looked like someone had simply smashed the number row of a typewriter with a preference for lower digits. Numbers and letters in the Big Ear data indicated, essentially, the intensity of the electromagnetic signal picked up by the telescope over time, starting at ones and moving up to letters in the double digits (A was 10, B was 11, and so on). Most of the page was covered in ones and twos, with a stray six or seven sprinkled in.
But that day, Ehman found an anomaly: 6EQUJ5 (sometimes misinterpreted as a message encoded in the radio signal). This signal had started out at an intensity of six—already an outlier on the page—climbed to E, then Q, peaked at U—the highest power signal Big Ear had ever seen—then decreased again. Ehman circled the sequence in red pen and wrote “Wow!” next to it. The signal appeared to be coming from the direction of the Sagittarius constellation, and the entire signal lasted for about 72 seconds. Alas, SETI researchers have never been able to detect the so-called “Wow! signal” again, despite many tries with radio telescopes around the world.
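The intensity scale Ehman was reading can be reproduced in a few lines of Python—digits count directly, and letters continue where the digits leave off (a toy decoder for illustration, not Big Ear’s actual software):

```python
def big_ear_intensity(ch):
    """Decode one character of a Big Ear printout into a signal intensity.

    Digits 1-9 map to themselves; letters continue the scale upward
    (A = 10, B = 11, ..., U = 30). A blank meant below 1.
    """
    if ch == " ":
        return 0
    if ch.isdigit():
        return int(ch)
    return ord(ch.upper()) - ord("A") + 10


print([big_ear_intensity(c) for c in "6EQUJ5"])
# -> [6, 14, 26, 30, 19, 5]
```

Decoded, the sequence rises from six, peaks at 30 at the “U,” and falls away again—the rise-and-fall profile that made Ehman reach for his red pen.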
One reason for the excited reaction is that such a signal had been proposed as a possible communication from extraterrestrial civilizations in a 1959 paper by Cornell University physicists Philip Morrison and Giuseppe Cocconi. Morrison and Cocconi thought that such a civilization might use the 1420 megahertz frequency naturally emitted by hydrogen, the universe’s most abundant element and, therefore, something an alien civilization would be familiar with. In fact, the Big Ear had been reassigned to the SETI project in 1973 specifically to hunt for possible signals. Ehman himself was quite skeptical of the “it could be aliens” hypothesis for several decades, although he admitted in a 2019 interview that “the Wow! signal certainly has the potential of being the first signal from extraterrestrial intelligence.”
Several other alternative hypotheses have been suggested. For instance, Antonio Paris suggested in 2016 that the signal may have come from the hydrogen cloud surrounding a pair of comets, 266P/Christensen and 335P/Gibbs. This was rejected by most astronomers, however, in part because comets don’t emit strongly at the relevant frequencies. Others have suggested the signal was the result of interference from satellites orbiting the Earth, or a signal from Earth reflected off a piece of space debris.
Space maser!
Astrobiologist Abel Mendez of the University of Puerto Rico at Arecibo and his co-authors think they have the strongest astrophysical explanation to date with their cosmic maser hypothesis. The team was actually hunting for habitable exoplanets using signals from red dwarf stars. In some of the last archival data collected at the Arecibo radio telescope (which collapsed in 2020), they noticed several signals that were remarkably similar to the Wow! signal in frequency—just far less intense.
Mendez admitted to Science News that he had always viewed the Wow! signal as just a fluke—he certainly didn’t think it was aliens. But he realized that if the signals they were identifying had blazed brighter, even momentarily, they would be very much like the Wow! signal. As for the mechanism that caused such a brightening, Mendez et al. propose that a magnetar (a highly magnetic neutron star) passing behind a cloud of atomic hydrogen could have flared up with sufficient energy to produce stimulated emission in the form of a tightly focused beam of microwave radiation—a cosmic maser. (Masers are akin to lasers, except they emit microwave radiation rather than visible radiation.)
Proving their working hypothesis will be much more challenging, although there have been rare sightings of such naturally occurring masers from hydrogen molecules in space. But nobody has ever spotted an atomic hydrogen cloud with an associated maser, and that’s what would be needed to explain the intensity of the Wow! signal. That’s why other astronomers are opting for cautious skepticism. “A magnetar is going to produce [short] radio emissions as well. Do you really need this complicated maser stuff happening as well to explain the Wow! signal?” Michael Garrett of the University of Manchester told New Scientist. “Personally, I don’t think so. It just makes a complicated story even more complicated.”
Some cricket bowlers favor keeping the arm horizontal during delivery, the better to trick the batsmen.
Although the sport of cricket has been around in some form for centuries, its game strategy continues to evolve in the 21st century. Among the newer strategies employed by “bowlers”—the equivalent of the pitcher in baseball—is delivering the ball with the arm held horizontally, close to the shoulder line, which has proven remarkably effective at deceiving batsmen about the ball’s trajectory.
Scientists at Amity University Dubai in the United Arab Emirates were curious about the effectiveness of the approach, so they tested the aerodynamics of cricket balls in wind tunnel experiments. The team concluded that this style of bowling creates a high-speed spinning effect that shifts the ball’s trajectory mid-flight—an effect also seen in certain baseball pitches, according to a new paper published in the journal Physics of Fluids.
“The unique and unorthodox bowling styles demonstrated by cricketers have drawn significant attention, particularly emphasizing their proficiency with a new ball in early stages of a match,” said co-author Kizhakkelan Sudhakaran Siddharth, a mechanical engineer at Amity University Dubai. “Their bowling techniques frequently deceive batsmen, rendering these bowlers effective throughout all phases of a match in almost all formats of the game.”
As previously reported, any moving ball leaves a trail of air in its wake; the inevitable drag slows the ball down. The ball’s trajectory is affected by its diameter and speed, as well as by tiny irregularities on its surface. Baseballs, for example, are not completely smooth; they have stitching in a figure-eight pattern. Those stitches are bumpy enough to affect the airflow around the baseball as it’s thrown toward home plate. As a spinning baseball moves, it drags air around with it, deflecting its wake and producing a sideways force—commonly known as the Magnus effect. The raised seams churn the air around the ball, creating high-pressure zones in various locations (depending on the pitch type) that can cause deviations in its trajectory.
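For a rough sense of scale, the Magnus lift is commonly modeled as F_L = ½ C_L ρ A v². Here is a back-of-the-envelope Python estimate for a typical pitch; the lift coefficient C_L is an assumed ballpark value for a spinning baseball, not a measured one:

```python
import math

# Back-of-the-envelope Magnus lift on a pitched baseball, using the common
# model F_L = 0.5 * C_L * rho * A * v^2. C_L is an assumed rough value.
rho = 1.2          # air density, kg/m^3
radius = 0.0365    # baseball radius, m
area = math.pi * radius**2   # cross-sectional area, m^2
v = 35.0           # ~78 mph pitch speed, m/s
C_L = 0.2          # assumed lift coefficient for a typical curveball spin

magnus_force = 0.5 * C_L * rho * area * v**2
mass = 0.145       # baseball mass, kg
weight = mass * 9.81
print(f"Magnus force ~ {magnus_force:.2f} N, "
      f"about {magnus_force / weight:.0%} of the ball's weight")
```

Even with these rough numbers, the sideways force comes out at a substantial fraction of the ball’s weight—plenty to bend a pitch over the distance to home plate.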
Physicists have been enthusiastically studying baseballs since the 1940s, when Lyman Briggs became intrigued by whether a curveball actually curves. Initially, he enlisted the aid of the Washington Senators pitching staff at Griffith Stadium to measure the spin of a pitched ball; the idea was to determine how much the curve of a baseball depends on its spin and speed.
Briggs followed up with wind tunnel experiments at the National Bureau of Standards (now the National Institute of Standards and Technology) to make even more precise measurements since he could control most variables. He found that spin rather than speed was the key factor in causing a pitched ball to curve and that a curveball could dip up to 17.5 inches as it travels from the pitcher’s mound to home plate.
The first recorded photo of a cricket match taken on July 25, 1857, by Roger Fenton.
Public domain
In 2018, we reported on a Utah State University study to explain the fastball’s unexpected twist in experiments using Little League baseballs. The USU scientists fired the balls one by one through a smoke-filled chamber. Two red sensors detected the balls as they zoomed past, triggering lasers that acted as flashbulbs. They then used particle image velocimetry to calculate airflow at any given spot around the ball. Conclusion: It all comes down to spin speed, spin axis, and the orientation of the ball, and there is no meaningful aerodynamic difference between a two-seam fastball and a four-seam fastball.
In 2022, two physicists developed a laser-guided speed measurement system to measure the change in speed of a baseball mid-flight and then used that measurement to calculate the acceleration, the various forces acting on the ball, and the lift and drag. They suggested their approach could also be used for other ball sports like cricket and soccer.
The Armfield C15-15 Wake Survey Rake measured pressure downstream of the ball.
A.B. Faazil et al., 2024
Similarly, golf ball dimples reduce the drag flow by creating a turbulent boundary layer of air, while the ball’s spin generates lift by creating a higher air pressure area on the bottom of the ball than on the top. The surface patterns on volleyballs can also affect their trajectories. Conventional volleyballs have six panels, but more recent designs have eight panels, a hexagonal honeycomb pattern, or dimples. A 2019 study found that the surface panels on conventional volleyballs can give rise to unpredictable trajectories on float serves (which have no spin), and modifying the surface patterns could make for a more consistent flight.
From a physics standpoint, the float serve is similar to throwing a knuckleball in baseball, which is largely unaffected by the Magnus force because it has no spin. Its trajectory is determined entirely by how the seams affect the turbulent airflow around the baseball. The seams of a baseball can change the speed (velocity) of the air near the ball’s surface, speeding the ball up or slowing it down, depending on whether said seams are on the top or the bottom. The panels on conventional volleyballs have a similar effect.
Inertial confinement fusion is one method for generating energy through nuclear fusion, albeit one plagued by all manner of scientific challenges (although progress is being made). Researchers at Lehigh University are attempting to overcome one specific bugbear with this approach by conducting experiments with mayonnaise placed in a rotating figure-eight contraption. They described their most recent findings in a new paper published in the journal Physical Review E with an eye toward increasing energy yields from fusion.
The work builds on prior research in the Lehigh laboratory of mechanical engineer Arindam Banerjee, who focuses on investigating the dynamics of fluids and other materials in response to extremely high acceleration and centrifugal force. In this case, his team was exploring what’s known as the “instability threshold” of elastic/plastic materials. Scientists have debated whether this comes about because of initial conditions, or whether it’s the result of “more local catastrophic processes,” according to Banerjee. The question is relevant to a variety of fields, including geophysics, astrophysics, explosive welding, and yes, inertial confinement fusion.
The idea behind inertial confinement fusion is simple. To get two atoms to fuse together, you need to bring their nuclei into contact with each other. Both nuclei are positively charged, so they repel each other, which means that force is needed to convince two hydrogen nuclei to touch. In a hydrogen bomb, force is generated when a small fission bomb explodes, compressing a core of hydrogen. This fuses to create heavier elements, releasing a huge amount of energy.
Being killjoys, scientists prefer not to detonate nuclear weapons every time they want to study fusion or use it to generate electricity. Which brings us to inertial confinement fusion, in which the hydrogen core consists of a spherical pellet of hydrogen ice inside a heavy metal casing. The casing is illuminated by powerful lasers, which burn off a large portion of the material. The reaction force from the vaporized material exploding outward causes the remaining shell to implode. The resulting shockwave compresses the center of the hydrogen pellet so that it begins to fuse.
If confinement fusion ended there, the amount of energy released would be tiny. But the energy released due to the initial fusion burn in the center generates enough heat for the hydrogen on the outside of the pellet to reach the required temperature and pressure. So, in the end (at least in computer models), all of the hydrogen is consumed in a fiery death, and massive quantities of energy are released.
That’s the idea anyway. The problem is that hydrodynamic instabilities tend to form in the plasma state—Banerjee likens it to “two materials [that] penetrate one another like fingers” in the presence of gravity or any accelerating field—which in turn reduces energy yields. The technical term is a Rayleigh-Taylor instability, which occurs between two materials of different densities, where the density and pressure gradients move in opposite directions. Mayonnaise turns out to be an excellent analog for investigating this instability in accelerated solids, with no need for a lab setup with high temperature and pressure conditions, because it’s a non-Newtonian fluid.
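The classical result for this instability is that a small interface ripple of wavenumber k grows at a rate γ = √(A·g·k), where A is the Atwood number built from the two densities. A quick illustrative calculation (values chosen for demonstration, not taken from the paper):

```python
import math

def atwood_number(rho_heavy, rho_light):
    """Atwood number A = (rho_h - rho_l) / (rho_h + rho_l), between 0 and 1."""
    return (rho_heavy - rho_light) / (rho_heavy + rho_light)

def rt_growth_rate(rho_heavy, rho_light, g, wavelength):
    """Classical (inviscid, incompressible) Rayleigh-Taylor growth rate, 1/s."""
    k = 2 * math.pi / wavelength  # wavenumber of the interface perturbation
    return math.sqrt(atwood_number(rho_heavy, rho_light) * g * k)

# Example: water (1000 kg/m^3) accelerated into air (1.2 kg/m^3),
# with a 1 cm perturbation wavelength under ordinary gravity.
A = atwood_number(1000.0, 1.2)
gamma = rt_growth_rate(1000.0, 1.2, 9.81, 0.01)
print(f"Atwood number {A:.3f}, growth rate {gamma:.1f} per second")
```

Because the growth rate scales with the square root of the acceleration, the enormous accelerations in an imploding fusion capsule drive the "fingers" to grow far faster than in this tabletop example—which is exactly why the instability eats into energy yields.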
“We use mayonnaise because it behaves like a solid, but when subjected to a pressure gradient, it starts to flow,” said Banerjee. “As with a traditional molten metal, if you put a stress on mayonnaise, it will start to deform, but if you remove the stress, it goes back to its original shape. So there’s an elastic phase followed by a stable plastic phase. The next phase is when it starts flowing, and that’s where the instability kicks in.”
More mayo, please
2019 video showcasing the rotating wheel Rayleigh Taylor instability experiment at Lehigh University.
His team’s 2019 experiments involved pouring Hellmann’s Real Mayonnaise—no Miracle Whip for this crew—into a Plexiglass container and then creating wavelike perturbations in the mayo. One experiment involved placing the container on a rotating wheel in the shape of a figure eight and tracking the material with a high-speed camera, using an image processing algorithm to analyze the footage. Their results supported the claim that the instability threshold is dependent on initial conditions, namely amplitude and wavelength.
This latest paper sheds more light on the structural integrity of fusion capsules used in inertial confinement fusion, taking a closer look at the material properties, the amplitude and wavelength conditions, and the acceleration rate of such materials as they hit the Rayleigh-Taylor instability threshold. The more scientists know about the transition from the elastic phase to the stable plastic phase, the better they can control the conditions and maintain either an elastic or plastic phase, avoiding the instability. Banerjee et al. were able to identify the conditions to maintain the elastic phase, which could inform the design of future pellets for inertial confinement fusion.
That said, the mayonnaise experiments are an analog, orders of magnitude away from the real-world conditions of nuclear fusion, which Banerjee readily acknowledges. He is nonetheless hopeful that future research will improve the predictability of just what happens within the pellets in their high-temperature, high-pressure environments. “We’re another cog in this giant wheel of researchers,” he said. “And we’re all working towards making inertial fusion cheaper and therefore, attainable.”