Physics

For the strongest disc golf throws, it’s all in the thumbs

When Zachary Lindsey, a physicist at Berry College in Georgia, decided to run an experiment on how to get the best speed and torque while playing disc golf (aka Frisbee golf), he had no trouble recruiting 24 eager participants keen on finding science-based tips on how to improve their game. Lindsey and his team determined the optimal thumb distance from the center of the disc to increase launch speed and distance, according to a new paper published in the journal AIP Advances.

Disc golf first emerged in the 1960s, but “Steady” Ed Headrick, inventor of the modern Frisbee, is widely considered the “father” of the sport since it was he who coined and trademarked the name “disc golf” in 1975. He and his son founded their own company to manufacture the equipment used in the game. As of 2023, the Professional Disc Golf Association (PDGA) had over 107,000 registered members worldwide, with players hailing from 40 countries.

A disc golf course typically has either nine or 18 holes or targets, called “baskets.” There is a tee position for starting play, and players take turns throwing discs until they land them in the basket, similar to how golfers work toward sinking a golf ball into a hole. The expected number of throws required of an experienced player to make the basket is considered “par.”

There are essentially three different disc types: drivers, mid-rangers, and putters. Driver discs are thin and sharp-edged, designed to reduce drag for long throws; they’re typically used for teeing off or other long-distance throws since a strong throw can cover as much as 500 feet. Putter discs, as the name implies, are better for playing close to the basket since they are thicker and thus have higher drag when in flight. Mid-range discs have elements of both drivers and putters, designed for distances of 200–300 feet—i.e., approaching the basket—where players want to optimize range and accuracy.

These 3D-printed pipes inspired by shark intestines outperform Tesla valves

“You don’t get to beat Tesla every day” —

Prototypes control fluid flow in a preferred direction with no need for moving parts.

Some of the research team’s 3D-printed pipes alongside a plastic toy shark. Shark intestines are naturally occurring Tesla valves; scientists have figured out how to mimic their unique structure.

Sarah L. Keller/University of Washington

Scientists at the University of Washington have re-created the distinctive spiral shapes of shark intestines in 3D-printed pipes in order to study the unique fluid flow inside the spirals. Their prototypes kept fluids flowing in one preferred direction with no need for flaps to control that flow and performed significantly better than so-called “Tesla valves,” particularly when made of soft polymers, according to a new paper published in the Proceedings of the National Academy of Sciences.

As we’ve reported previously, in 1920, Serbian-born inventor Nikola Tesla designed and patented what he called a “valvular conduit”: a pipe whose internal design ensures that fluid will flow in one preferred direction, with no need for moving parts, making it ideal for microfluidics applications, among other uses. The key to Tesla’s ingenious valve design is a set of interconnected, asymmetric, tear-shaped loops.

In his patent application, Tesla described this series of 11 flow-control segments as being made of “enlargements, recessions, projections, baffles, or buckets which, while offering virtually no resistance to the passage of fluid in one direction, other than surface friction, constitute an almost impassable barrier to its flow in the opposite direction.” And because it achieves this with no moving parts, a Tesla valve is much more resistant to the wear and tear of frequent operation.

Tesla claimed that water would flow through his valve 200 times slower in one direction than another, which may have been an exaggeration. A team of scientists at New York University built a working Tesla valve in 2021, in accordance with the inventor’s design, and tested that claim by measuring the flow of water through the valve in both directions at various pressures. The scientists found the water only flowed about two times slower in the nonpreferred direction.

Flow rate proved to be a critical factor. The valve offered very little resistance at slow flow rates, but once that rate increased above a certain threshold, the valve’s resistance would increase as well, generating turbulent flows in the reverse direction, thereby “plugging” the pipe with vortices and disruptive currents. So it actually works more like a switch and can also help smooth out pulsing flows, akin to how AC/DC converters turn alternating currents into direct currents. That may even have been Tesla’s original intent in designing the valve, given that his biggest claim to fame is inventing both the AC motor and an AC/DC converter.

It helps to be a shark

Different kinds of sharks have intestines with different spiral patterns that favor fluid flow in one direction.

Ido Levin

The Tesla valve also provides a useful model for how food moves through the digestive system of many species of shark. In 2020, Japanese researchers reconstructed micrographs of histological sections from a species of catshark into a three-dimensional model, offering a tantalizing glimpse of the anatomy of a scroll-type spiral intestine. The following year, scientists took CT scans of shark intestines and concluded that the intestines are naturally occurring Tesla valves.

That’s where the work of UW postdoc Ido Levin and his co-authors comes in. They had questions about the 2021 research in particular. “Flow asymmetry in a pipe with no moving flaps has tremendous technological potential, but the mechanism was puzzling,” said Levin. “It was not clear which parts of the shark’s intestinal structure contributed to the asymmetry and which served only to increase the surface area for nutrient uptake.”

Levin et al. 3D-printed several pipes with an internal helical structure mimicking that of shark intestines, varying certain geometrical parameters like the number of turns or the pitch angle of the helix. It was admittedly an idealized structure, so the team was delighted when the first batch, made from rigid materials, produced the hoped-for flow asymmetry. After further fine-tuning of the parameters, the rigid printed pipes produced flow asymmetries that matched or exceeded Tesla valves.

Eight of the team’s 3D-printed prototypes with various interior helices.

Ido Levin/University of Washington

But the researchers weren’t done yet. “[Prior work] showed that if you connect these intestines in the same direction as a digestive tract, you get a faster flow of fluid than if you connect them the other way around. We thought this was very interesting from a physics perspective,” said Levin last year while presenting preliminary results at the 67th Annual Biophysical Society Meeting. “One of the theorems in physics actually states that if you take a pipe, and you flow fluid very slowly through it, you have the same flow if you invert it. So we were very surprised to see experiments that contradict the theory. But then you remember that the intestines are not made out of steel—they’re made of something soft, so while fluid flows through the pipe, it deforms it.”

That gave Levin et al. the idea to try making their pipes out of soft deformable polymers—the softest commercially available ones that could also be used for 3D printing. That batch of pipes performed seven times better on flow asymmetry than any prior measurements of Tesla valves. And since actual shark intestines are about 100 times softer than the polymers they used, the team thinks they can achieve even better performance, perhaps with hydrogels when they become more widely available as 3D printing continues to evolve. The biggest challenge, per the authors, is finding soft materials that can withstand high deformations.

Finally, because the pipes are three-dimensional, they can accommodate larger fluid volumes, opening up applications in larger commercial devices. “Chemists were already motivated to develop polymers that are simultaneously soft, strong and printable,” said co-author Alshakim Nelson, whose expertise lies in developing new types of polymers. “The potential use of these polymers to control flow in applications ranging from engineering to medicine strengthens that motivation.”

PNAS, 2024. DOI: 10.1073/pnas.2406481121 (About DOIs).

Metal bats have pluses for young players, but in the end it comes down to skill

Washington State University scientists conducted batting cage tests of wood and metal bats with young players.

There’s long been a debate in baseball circles about the respective benefits and drawbacks of using wood bats versus metal bats. However, there are relatively few scientific studies on the topic that focus specifically on young athletes, who are most likely to use metal bats. Scientists at Washington State University (WSU) conducted their own tests of wood and metal bats with young players. They found that while there are indeed performance differences between wooden and metal bats, a batter’s skill is still the biggest factor affecting how fast the ball comes off the bat, according to a new paper published in the Journal of Sports Engineering and Technology.

According to physicist and acoustician Daniel Russell of Penn State University—who was not involved in the study but has a long-standing interest in the physics of baseball ever since his faculty days at Kettering University in Michigan—metal bats were first introduced in 1974 and soon dominated NCAA college baseball, youth baseball, and adult amateur softball. Those programs liked the metal bats because they were less likely to break than traditional wooden bats, reducing costs.

Players liked them because it can be easier to control metal bats and swing faster, as the center of mass is closer to the balance point in the bat’s handle, resulting in a lower moment of inertia (or “swing weight”). A faster swing doesn’t mean that a hit ball will travel faster, however, since the lower moment of inertia is countered by a decreased collision efficiency. Metal bats are also more forgiving if players happen to hit the ball away from the proverbial “sweet spot” of the bat. (The definition of the sweet spot is a bit fuzzy because it is sometimes defined in different ways, but it’s commonly understood to be the area on the bat’s barrel that results in the highest batted ball speeds.)
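To see how shifting mass toward the hands lowers the swing weight, here is a minimal sketch that treats each bat as two lumped masses and computes the moment of inertia about the knob. The masses and positions are invented round numbers for illustration, not measurements of real bats.

```python
def swing_weight(masses_kg, positions_m, pivot_m=0.0):
    """Moment of inertia (kg*m^2) of lumped masses about the pivot point."""
    return sum(m * (x - pivot_m) ** 2 for m, x in zip(masses_kg, positions_m))

# Same total mass (0.85 kg); the "metal-like" bat keeps more of it near the hands.
wood_like = swing_weight([0.30, 0.55], [0.25, 0.60])
metal_like = swing_weight([0.45, 0.40], [0.25, 0.60])
print(f"wood-like: {wood_like:.3f}  metal-like: {metal_like:.3f} kg*m^2")
# The lower value is easier to accelerate, which is why metal bats feel quicker to swing.
```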

“There’s more of a penalty when you’re not on the sweet spot with wood bats than with the other metal bats,” said Lloyd Smith, director of WSU’s Sport Science Laboratory and a co-author of the latest study. “[And] wood is still heavy. Part of baseball is hitting the ball far, but the other part is just hitting the ball. If you have a heavy bat, you’re going to have a harder time making contact because it’s harder to control.”

Metal bats may also improve performance via a kind of “trampoline effect.” Metal bats are hollow, while wood bats are solid. When a ball hits a wood bat, the ball compresses by as much as 75 percent, and internal friction forces dissipate as much as 75 percent of its initial energy. A metal bat barrel behaves more like a spring when it compresses in response to a ball’s impact, so there is much less energy loss. Based on his own research back in 2004, Russell has found that improved performance of metal bats is linked to the frequency of the barrel’s mode of vibration, aka the “hoop mode.” (Bats with the lowest hoop frequency will have the highest performance.)

Hydrogels can learn to play Pong

It’s all about the feedback loops —

Work could lead to new “smart” materials that can learn and adapt to their environment.

This electroactive polymer hydrogel “learned” to play Pong. Credit: Cell Reports Physical Science/Strong et al.

Pong will always hold a special place in the history of gaming as one of the earliest arcade video games. Introduced in 1972, it was a table tennis game featuring very simple graphics and gameplay. In fact, it’s simple enough that even non-living materials known as hydrogels can “learn” to play the game by “remembering” previous patterns of electrical stimulation, according to a new paper published in the journal Cell Reports Physical Science.

“Our research shows that even very simple materials can exhibit complex, adaptive behaviors typically associated with living systems or sophisticated AI,” said co-author Yoshikatsu Hayashi, a biomedical engineer at the University of Reading in the UK. “This opens up exciting possibilities for developing new types of ‘smart’ materials that can learn and adapt to their environment.”

Hydrogels are soft, flexible biphasic materials that swell but do not dissolve in water. So a hydrogel may contain a large amount of water but still maintain its shape, making it useful for a wide range of applications. Perhaps the best-known use is soft contact lenses, but various kinds of hydrogels are also used in breast implants, disposable diapers, EEG and ECG medical electrodes, glucose biosensors, encapsulating quantum dots, solar-powered water purification, cell cultures, tissue engineering scaffolds, water gel explosives, actuators for soft robotics, supersonic shock-absorbing materials, and sustained-release drug delivery systems, among other uses.

In April, Hayashi co-authored a paper showing that hydrogels can “learn” to beat in rhythm with an external pacemaker, something previously only achieved with living cells. They exploited the intrinsic ability of the hydrogels to convert chemical energy into mechanical oscillations, using the pacemaker to apply cyclic compressions. They found that when the oscillation of a gel sample matched the harmonic resonance of the pacemaker’s beat, the system kept a “memory” of that resonant oscillation period and could retain that memory even when the pacemaker was turned off. Such hydrogels might one day be a useful substitute for heart research using animals, providing new ways to research conditions like cardiac arrhythmia.

For this latest work, Hayashi and co-authors were partly inspired by a 2022 study in which brain cells in a dish—dubbed DishBrain—were electrically stimulated in such a way as to create useful feedback loops, enabling them to “learn” to play Pong (albeit badly). As Ars Science Editor John Timmer reported at the time:

Pong proved to be an excellent choice for the experiments. The environment only involves a couple of variables: the location of the paddle and the location of the ball. The paddle can only move along a single line, so the motor portion of things only needs two inputs: move up or move down. And there’s a clear reward for doing things well: you avoid an end state where the ball goes past the paddles and the game stops. It is a great setup for testing a simple neural network.

Put in Pong terms, the sensory portion of the network will take the positional inputs, determine an action (move the paddle up or down), and then generate an expectation for what the next state will be. If it’s interpreting the world correctly, that state will be similar to its prediction, and thus the sensory input will be its own reward. If it gets things wrong, then there will be a large mismatch, and the network will revise its connections and try again.

There were a few caveats—even the best systems didn’t play Pong all that well—but the approach mostly worked. Those systems comprising either mouse or human neurons saw the average length of Pong rallies increase over time, indicating they might be learning the game’s rules. Systems based on non-neural cells, or those lacking a reward system, didn’t see this sort of improvement. The findings provided some evidence that neural networks formed from actual neurons spontaneously develop the ability to learn. And that could explain some of the learning capabilities of actual brains, where smaller groups of neurons are organized into functional units.
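As a rough illustration of that predict-act-compare loop, here is a plain-Python stand-in (not the DishBrain protocol, and certainly not what a hydrogel does) in which the only teaching signal is the mismatch between the predicted and actual ball position.

```python
# Toy loop: predict the ball's next position, act on the prediction, then use
# the prediction error to revise the internal model and try again.
velocity_estimate = 0.0    # the "model" to be learned
learning_rate = 0.1
paddle_y, ball_y = 0.5, 0.0
TRUE_VELOCITY = 0.02       # hidden from the model

for step in range(200):
    prediction = ball_y + velocity_estimate               # expected next state
    paddle_y += 0.05 if prediction > paddle_y else -0.05  # action: move up or down
    ball_y += TRUE_VELOCITY                                # the world actually updates
    mismatch = ball_y - prediction                         # small mismatch = good model
    velocity_estimate += learning_rate * mismatch          # revise and try again

print(round(velocity_estimate, 3))  # converges toward 0.02, so predictions improve
```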

Astronomers think they’ve found a plausible explanation of the Wow! signal

“I’m not saying it’s aliens…” —

Magnetars could zap clouds of atomic hydrogen, producing focused microwave beams.

The Wow! signal, represented as “6EQUJ5,” was discovered in 1977 by astronomer Jerry Ehman.

Public domain

An unusually bright burst of radio waves—dubbed the Wow! signal—discovered in the 1970s has baffled astronomers ever since, given the tantalizing possibility that it just might be from an alien civilization trying to communicate with us. A team of astronomers think they might have a better explanation, according to a preprint posted to the physics arXiv: clouds of atomic hydrogen that essentially act like a naturally occurring galactic maser, emitting a beam of intense microwave radiation when zapped by a flare from a passing magnetar.

As previously reported, the Wow! signal was detected on August 18, 1977, by The Ohio State University Radio Observatory, known as “Big Ear.” Astronomy professor Jerry Ehman was analyzing Big Ear data in the form of printouts that, to the untrained eye, looked like someone had simply smashed the number row of a typewriter with a preference for lower digits. Numbers and letters in the Big Ear data indicated, essentially, the intensity of the electromagnetic signal picked up by the telescope over time, starting at ones and moving up to letters in the double digits (A was 10, B was 11, and so on). Most of the page was covered in ones and twos, with a stray six or seven sprinkled in.

But that day, Ehman found an anomaly: 6EQUJ5 (sometimes misinterpreted as a message encoded in the radio signal). This signal had started out at an intensity of six—already an outlier on the page—climbed to E, then Q, peaked at U—the highest power signal Big Ear had ever seen—then decreased again. Ehman circled the sequence in red pen and wrote “Wow!” next to it. The signal appeared to be coming from the direction of the Sagittarius constellation, and the entire signal lasted for about 72 seconds. Alas, SETI researchers have never been able to detect the so-called “Wow! Signal” again, despite many tries with radio telescopes around the world.
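The printout scheme is easy to make concrete. Here is a tiny sketch that decodes the circled sequence using the scale described above (each character summarizes one sampling interval, which isn’t modeled here).

```python
def decode(ch: str) -> int:
    """Map one Big Ear printout character to a signal intensity."""
    return int(ch) if ch.isdigit() else 10 + ord(ch.upper()) - ord("A")

print([decode(c) for c in "6EQUJ5"])  # [6, 14, 26, 30, 19, 5]: rises to U, then falls
```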

One reason for the excited reaction is that such a signal had been proposed as a possible communication from extraterrestrial civilizations in a 1959 paper by Cornell University physicists Philip Morrison and Giuseppe Cocconi. Morrison and Cocconi thought that such a civilization might use the 1420 megahertz frequency naturally emitted by hydrogen, the universe’s most abundant element and, therefore, something an alien civilization would be familiar with. In fact, the Big Ear had been reassigned to the SETI project in 1973 specifically to hunt for possible signals. Ehman himself was quite skeptical of the “it could be aliens” hypothesis for several decades, although he admitted in a 2019 interview that “the Wow! signal certainly has the potential of being the first signal from extraterrestrial intelligence.”

Several other alternative hypotheses have been suggested. For instance, Antonio Paris suggested in 2016 that the signal may have come from the hydrogen cloud surrounding a pair of comets, 266P/Christensen and 335P/Gibbs. This was rejected by most astronomers, however, in part because comets don’t emit strongly at the relevant frequencies. Others have suggested the signal was the result of interference from satellites orbiting the Earth, or a signal from Earth reflected off a piece of space debris.

Space maser!

Astrobiologist Abel Mendez of the University of Puerto Rico at Arecibo and his co-authors think they have the strongest astrophysical explanation to date with their cosmic maser hypothesis. The team was actually hunting for habitable exoplanets using signals from red dwarf stars. In some of the last archival data collected at the Arecibo radio telescope (which collapsed in 2020), they noticed several signals that were remarkably similar to the Wow! signal in terms of frequency—just much less intense (bright).

Mendez admitted to Science News that he had always viewed the Wow! signal as just a fluke—he certainly didn’t think it was aliens. But he realized that if the signals they were identifying had blazed brighter, even momentarily, they would be very much like the Wow! signal. As for the mechanism that caused such a brightening, Mendez et al. propose that a magnetar (a highly magnetic neutron star) passing behind a cloud of atomic hydrogen could have flared up with sufficient energy to produce stimulated emission in the form of a tightly focused beam of microwave radiation—a cosmic maser. (Masers are akin to lasers, except they emit microwave radiation rather than visible radiation.)

Proving their working hypothesis will be much more challenging, although there have been rare sightings of such naturally occurring masers from hydrogen molecules in space. But nobody has ever spotted an atomic hydrogen cloud with an associated maser, and that’s what would be needed to explain the intensity of the Wow! signal. That’s why other astronomers are opting for cautious skepticism. “A magnetar is going to produce [short] radio emissions as well. Do you really need this complicated maser stuff happening as well to explain the Wow! signal?” Michael Garrett of the University of Manchester told New Scientist. “Personally, I don’t think so. It just makes a complicated story even more complicated.”

arXiv, 2024. DOI: 10.48550/arXiv.2408.08513  (About DOIs).

Why cricket’s latest bowling technique is so effective against batters

Some cricket bowlers favor keeping the arm horizontal during delivery, the better to trick the batsmen.

Although the sport of cricket has been around for centuries in some form, the game strategy continues to evolve in the 21st century. Among the newer strategies employed by “bowlers”—the equivalent of the pitcher in baseball—is delivering the ball with the arm horizontally positioned close to the shoulder line, which has proven remarkably effective in “tricking” batsmen in their perception of the ball’s trajectory.

Scientists at Amity University Dubai in the United Arab Emirates were curious about the effectiveness of the approach, so they tested the aerodynamics of cricket balls in wind tunnel experiments. The team concluded that this style of bowling creates a high-speed spinning effect that shifts the ball’s trajectory mid-flight—an effect also seen in certain baseball pitches, according to a new paper published in the journal Physics of Fluids.

“The unique and unorthodox bowling styles demonstrated by cricketers have drawn significant attention, particularly emphasizing their proficiency with a new ball in early stages of a match,” said co-author Kizhakkelan Sudhakaran Siddharth, a mechanical engineer at Amity University Dubai. “Their bowling techniques frequently deceive batsmen, rendering these bowlers effective throughout all phases of a match in almost all formats of the game.”

As previously reported, any moving ball leaves a trail of air as it travels; the inevitable drag slows the ball down. The ball’s trajectory is affected by diameter and speed and by tiny irregularities on the surface. Baseballs, for example, are not completely smooth; they have stitching in a figure-eight pattern. Those stitches are bumpy enough to affect the airflow around the baseball as it’s thrown toward home plate. As a spinning baseball moves, it drags a thin layer of air around with it, and the resulting pressure imbalance is known as the Magnus effect. The raised seams churn the air around the ball, creating high-pressure zones in various locations (depending on the pitch type) that can cause deviations in its trajectory.

Physicists have been enthusiastically studying baseballs since the 1940s, when Lyman Briggs became intrigued by whether a curveball actually curves. Initially, he enlisted the aid of the Washington Senators pitching staff at Griffith Stadium to measure the spin of a pitched ball; the idea was to determine how much the curve of a baseball depends on its spin and speed.

Briggs followed up with wind tunnel experiments at the National Bureau of Standards (now the National Institute of Standards and Technology) to make even more precise measurements since he could control most variables. He found that spin rather than speed was the key factor in causing a pitched ball to curve and that a curveball could dip up to 17.5 inches as it travels from the pitcher’s mound to home plate.
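A back-of-the-envelope Magnus estimate lands in the same ballpark as Briggs’ number. The lift coefficient and release distance below are assumed typical values for a sharp curveball, not figures from his experiments.

```python
import math

m = 0.145        # baseball mass, kg
r = 0.037        # baseball radius, m
rho = 1.2        # air density, kg/m^3
v = 33.5         # ~75 mph pitch speed, m/s
Cl = 0.2         # assumed Magnus lift coefficient
d_flight = 17.0  # assumed release-to-plate distance, m

A = math.pi * r**2             # cross-sectional area
F = 0.5 * Cl * rho * A * v**2  # Magnus force, treated as constant
a = F / m                      # sideways/downward acceleration from spin
t = d_flight / v               # time of flight
deflection = 0.5 * a * t**2    # constant-acceleration deflection
print(f"{deflection / 0.0254:.0f} inches")  # roughly 20 inches, near Briggs' 17.5
```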

The first recorded photo of a cricket match taken on July 25, 1857, by Roger Fenton.

Public domain

In 2018, we reported on a Utah State University study to explain the fastball’s unexpected twist in experiments using Little League baseballs. The USU scientists fired the balls one by one through a smoke-filled chamber. Two red sensors detected the balls as they zoomed past, triggering lasers that acted as flashbulbs. They then used particle image velocimetry to calculate airflow at any given spot around the ball. Conclusion: It all comes down to spin speed, spin axis, and the orientation of the ball, and there is no meaningful aerodynamical difference between a two-seam fastball and a four-seam fastball.

In 2022, two physicists developed a laser-guided speed measurement system to measure the change in speed of a baseball mid-flight and then used that measurement to calculate the acceleration, the various forces acting on the ball, and the lift and drag. They suggested their approach could also be used for other ball sports like cricket and soccer.

The Armfield C15-15 Wake Survey Rake measured pressure downstream of the ball.

A.B. Faazil et al., 2024

Similarly, golf ball dimples reduce drag by creating a turbulent boundary layer of air, while the ball’s spin generates lift by creating a higher air pressure area on the bottom of the ball than on the top. The surface patterns on volleyballs can also affect their trajectories. Conventional volleyballs have six panels, but more recent designs have eight panels, a hexagonal honeycomb pattern, or dimples. A 2019 study found that the surface panels on conventional volleyballs can give rise to unpredictable trajectories on float serves (which have no spin), and modifying the surface patterns could make for a more consistent flight.

From a physics standpoint, the float serve is similar to throwing a knuckleball in baseball, which is largely unaffected by the Magnus force because it has no spin. Its trajectory is determined entirely by how the seams affect the turbulent airflow around the baseball. The seams of a baseball can change the speed (velocity) of the air near the ball’s surface, speeding the ball up or slowing it down, depending on whether said seams are on the top or the bottom. The panels on conventional volleyballs have a similar effect.

Pass the mayo: Condiment could help improve fusion energy yields

Don’t hold the mayo —

Controlling a problematic instability could lead to cheaper inertial fusion.

A jar of homemade mayonnaise

Inertial confinement fusion is one method for generating energy through nuclear fusion, albeit one plagued by all manner of scientific challenges (although progress is being made). Researchers at Lehigh University are attempting to overcome one specific bugbear with this approach by conducting experiments with mayonnaise placed in a rotating figure-eight contraption. They described their most recent findings in a new paper published in the journal Physical Review E with an eye toward increasing energy yields from fusion.

The work builds on prior research in the Lehigh laboratory of mechanical engineer Arindam Banerjee, who focuses on investigating the dynamics of fluids and other materials in response to extremely high acceleration and centrifugal force. In this case, his team was exploring what’s known as the “instability threshold” of elastic/plastic materials. Scientists have debated whether this comes about because of initial conditions, or whether it’s the result of “more local catastrophic processes,” according to Banerjee. The question is relevant to a variety of fields, including geophysics, astrophysics, explosive welding, and yes, inertial confinement fusion.

How exactly does inertial confinement fusion work? As Chris Lee explained for Ars back in 2016:

The idea behind inertial confinement fusion is simple. To get two atoms to fuse together, you need to bring their nuclei into contact with each other. Both nuclei are positively charged, so they repel each other, which means that force is needed to convince two hydrogen nuclei to touch. In a hydrogen bomb, force is generated when a small fission bomb explodes, compressing a core of hydrogen. This fuses to create heavier elements, releasing a huge amount of energy.

Being killjoys, scientists prefer not to detonate nuclear weapons every time they want to study fusion or use it to generate electricity. Which brings us to inertial confinement fusion. In inertial confinement fusion, the hydrogen core consists of a spherical pellet of hydrogen ice inside a heavy metal casing. The casing is illuminated by powerful lasers, which burn off a large portion of the material. The reaction force from the vaporized material exploding outward causes the remaining shell to implode. The resulting shockwave compresses the center of the core of the hydrogen pellet so that it begins to fuse.

If confinement fusion ended there, the amount of energy released would be tiny. But the energy released due to the initial fusion burn in the center generates enough heat for the hydrogen on the outside of the pellet to reach the required temperature and pressure. So, in the end (at least in computer models), all of the hydrogen is consumed in a fiery death, and massive quantities of energy are released.

That’s the idea anyway. The problem is that hydrodynamic instabilities tend to form in the plasma state—Banerjee likens it to “two materials [that] penetrate one another like fingers” in the presence of gravity or any accelerating field—which in turn reduces energy yields. The technical term is a Rayleigh-Taylor instability, which occurs between two materials of different densities, where the density and pressure gradients point in opposite directions. Mayonnaise turns out to be an excellent analog for investigating this instability in accelerated solids, with no need for a lab setup with high temperature and pressure conditions, because it’s a non-Newtonian fluid.
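For a sense of the numbers, the classic result for an inviscid Rayleigh-Taylor interface gives a growth rate equal to the square root of the Atwood number times the acceleration times the wavenumber. The sketch below uses illustrative densities and a gentle 1 g; the paper’s elastic-plastic threshold adds material strength on top of this simple picture.

```python
import math

rho_heavy = 1000.0   # kg/m^3, heavy layer on top (illustrative)
rho_light = 1.2      # kg/m^3, light layer below (illustrative)
g = 9.81             # m/s^2; the rotating-wheel experiments reach far higher accelerations
wavelength = 0.05    # m, perturbation wavelength

atwood = (rho_heavy - rho_light) / (rho_heavy + rho_light)
k = 2 * math.pi / wavelength              # wavenumber of the perturbation
growth_rate = math.sqrt(atwood * g * k)   # classic linear RT growth rate, 1/s
print(f"Atwood = {atwood:.3f}, growth rate = {growth_rate:.1f} per second")
```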

“We use mayonnaise because it behaves like a solid, but when subjected to a pressure gradient, it starts to flow,” said Banerjee. “As with a traditional molten metal, if you put a stress on mayonnaise, it will start to deform, but if you remove the stress, it goes back to its original shape. So there’s an elastic phase followed by a stable plastic phase. The next phase is when it starts flowing, and that’s where the instability kicks in.”

More mayo, please

A 2019 video showcasing the rotating-wheel Rayleigh-Taylor instability experiment at Lehigh University.

His team’s 2019 experiments involved pouring Hellmann’s Real Mayonnaise—no Miracle Whip for this crew—into a Plexiglass container and then creating wavelike perturbations in the mayo. One experiment involved placing the container on a rotating wheel in the shape of a figure eight and tracking the material with a high-speed camera, using an image processing algorithm to analyze the footage. Their results supported the claim that the instability threshold is dependent on initial conditions, namely amplitude and wavelength.

This latest paper sheds more light on the structural integrity of fusion capsules used in inertial confinement fusion, taking a closer look at the material properties, the amplitude and wavelength conditions, and the acceleration rate of such materials as they hit the Rayleigh-Taylor instability threshold. The more scientists know about the phase transition from the elastic to the stable plastic phase, the better they can control the conditions and maintain either an elastic or plastic phase, avoiding the instability. Banerjee et al. were able to identify the conditions to maintain the elastic phase, which could inform the design of future pellets for inertial confinement fusion.

That said, the mayonnaise experiments are an analog, orders of magnitude away from the real-world conditions of nuclear fusion, which Banerjee readily acknowledges. He is nonetheless hopeful that future research will improve the predictability of just what happens within the pellets in their high-temperature, high-pressure environments. “We’re another cog in this giant wheel of researchers,” he said. “And we’re all working towards making inertial fusion cheaper and therefore, attainable.”

Physical Review E, 2024. DOI: 10.1103/PhysRevE.109.055103 (About DOIs).

Broadway embraces particle physics with musical about Higgs boson discovery

Catch the fever —

The 2013 documentary Particle Fever is being turned into a Broadway musical.

A collision between subatomic particles in the Large Hadron Collider’s CMS detector.

Particle physics is poised to hit the bright lights of Broadway with the adaptation into a musical of the 2013 documentary Particle Fever, which charts the journey to detect the Higgs boson at the world’s largest particle accelerator. According to Deadline Hollywood, the creators described their musical as being filled with “heart, humor, and hope,” calling it an “exploration of the very nature of exploration itself… Particle Fever proves that even the very best theories are often no match for reality.”

(Spoiler: Physicists discovered the Higgs boson in 2012.)

Johns Hopkins University’s David Kaplan was a film student turned theoretical physicist when he came up with the idea for a documentary on the search for the Higgs boson—at the time, the last remaining piece of the Standard Model of Particle Physics yet to be detected. The Large Hadron Collider at CERN was designed for that purpose, although the physics community hoped (in vain thus far) to also discover exciting new physics.

Kaplan has said he originally planned to make the film himself, but his Los Angeles-based sister talked him out of it. Mark Levinson (a physicist turned filmmaker) ended up directing, with Oscar winner Walter Murch handling the editing, sifting through nearly 500 hours of footage—including amateur video footage shot by CERN physicists themselves.

Physicist David Kaplan interviews Fabiola Gianotti, head of one of the two teams that found the Higgs boson at CERN, in a still from Particle Fever.

Anthos Media

The project took seven years to complete and made its debut at various small film festivals before enjoying a limited US release in March 2014. It received critical acclaim, and for fans of popular physics, it was delightful to see working physicists like Monica Dunford—then a post-doc working on the ATLAS experiment, now a professor at Heidelberg University—and Nima Arkani-Hamed of the Institute for Advanced Study front and center, highlighting the give-and-take between experiment and theory as they sought to detect the elusive Higgs boson.

Kaplan and his crew were there in July 2012 when the momentous discovery was announced, capturing the standing ovation for an emotional Peter Higgs. It was physics in action, right down to the theorists’ disappointment that the Higgs mass turned out to be about 125 GeV, a value that offered no clear support for the many models predicting new physics.

Still, it’s hardly the first documentary that comes to mind when one thinks “musical.” But ROCO Films CEO Annie Roney, whose company distributed the film, had that vision. “It’s already infused with the elements that make a musical memorable and desirable,” she told The New York Times. “It has universal themes of humankind trying to understand the meaning of our lives and our place in the universe. The story celebrates the best in humanity—collaboration, curiosity.” And while she liked the explanations of the heady physics concepts in the film, “I thought that the bigger concepts can be best communicated by music nonverbally.”

Roney has been working to bring that vision to life ever since, tapping noted Broadway playwright David Henry Hwang (M. Butterfly) to write, with music and lyrics by Bear McCreary (Battlestar Galactica, Rings of Power) and Zoe Sarnak (Galileo: A Rock Musical). Leigh Silverman, who just won a Tony for the Broadway musical Suffs, will direct. There’s no word on when we’ll be seeing Particle Fever: The Musical on Broadway, but the group just held the first private reading: a basement industry-only performance featuring songs about particle physics.

Trailer for Particle Fever.

How Kepler’s 400-year-old sunspot sketches helped solve a modern mystery

A naked-eye sunspot group on May 11, 2024. There are typically 40,000 to 50,000 sunspots observed in ~11-year solar cycles.

E. T. H. Teague

A team of Japanese and Belgian astronomers has re-examined the sunspot drawings made by 17th century astronomer Johannes Kepler with modern analytical techniques. By doing so, they resolved a long-standing mystery about solar cycles during that period, according to a recent paper published in The Astrophysical Journal Letters.

Precisely who first observed sunspots was a matter of heated debate in the early 17th century. We now know that ancient Chinese astronomers between 364 and 28 BCE observed these features and included them in their official records. A Benedictine monk in 807 thought he’d observed Mercury passing in front of the Sun when, in reality, he had witnessed a sunspot; similar mistaken interpretations were also common in the 12th century. (An English monk made the first known drawings of sunspots in December 1128.)

English astronomer Thomas Harriot made the first telescope observations of sunspots in late 1610 and recorded them in his notebooks, as did Galileo around the same time, although the latter did not publish a scientific paper on sunspots (accompanied by sketches) until 1613. Galileo also argued that the spots were not, as some believed, solar satellites but more like clouds in the atmosphere or the surface of the Sun. But he was not the first to suggest this; that credit belongs to Dutch astronomer Johannes Fabricius, who published his scientific treatise on sunspots in 1611.

Kepler read that particular treatise and admired it, having made his sunspot observations using a camera obscura in 1607 (published in a 1609 treatise), which he initially thought was a transit of Mercury. He retracted that report in 1618, concluding that he had actually seen a group of sunspots. Kepler made his solar drawings based on observations conducted both in his own house and in the workshop of court mechanic Justus Burgi in Prague.  In the first case, he reported “a small spot in the size of a small fly”; in the second, “a small spot of deep darkness toward the center… in size and appearance like a thin flea.”

The earliest datable sunspot drawings based on Kepler’s solar observations with camera obscura in May 1607.

Public domain

The long-standing debate that is the subject of this latest paper concerns the period from around 1645 to 1715, during which there were very few recorded observations of sunspots despite the best efforts of astronomers. This was a unique event in astronomical history. Despite only observing some 59 sunspots during this time—compared to the 40,000 to 50,000 sunspots over a similar time span in our current age—astronomers were nonetheless able to determine that sunspots seemed to occur in 11-year cycles.

German astronomer Gustav Spörer noted the steep decline in 1887 and 1889 papers, and his British colleagues, Edward and Annie Maunder, expanded on that work to study how the latitudes of sunspots changed over time. That period became known as the “Maunder Minimum.” Spörer also came up with “Spörer’s law,” which holds that spots at the start of a cycle appear at higher latitudes in the Sun’s northern hemisphere, moving to successively lower latitudes in the southern hemisphere as the cycle runs its course until a new cycle of sunspots begins in the higher latitudes.

But precisely how the solar cycle transitioned to the Maunder Minimum has been far from clear. Reconstructions based on tree rings have produced conflicting data. For instance, one such reconstruction concluded that the gradual transition was preceded either by an extremely short solar cycle of about five years or an extremely long solar cycle of about 16 years. Another tree ring reconstruction concluded the solar cycle would have been of normal 11-year duration.

Independent observational records can help resolve the discrepancy. That’s why Hisashi Hayakawa of Nagoya University in Japan and co-authors turned to Kepler’s drawings of sunspots for additional insight, which predate existing telescopic observations by several years.

Astronomers find first emission spectra in brightest GRB of all time

shine on, you beautiful BOAT —

Chance that first detected emission line is a noise fluctuation is one in half a billion.

A jet of particles moving at nearly light speed emerges from a massive star in this artist’s concept of the BOAT.

NASA’s Goddard Space Flight Center Conceptual Image Lab

Scientists have been all aflutter since several space-based detectors picked up a powerful gamma-ray burst (GRB) in October 2022—a burst so energetic that astronomers nicknamed it the BOAT (Brightest Of All Time). Now an international team of astronomers has analyzed an unusual energy peak detected by NASA’s Fermi Gamma-ray Space Telescope and concluded that it was an emission line, according to a new paper published in the journal Science. Per the authors, it’s the first high-confidence emission line ever seen in 50 years of studying GRBs.

As reported previously, gamma-ray bursts are extremely high-energy explosions in distant galaxies lasting from mere milliseconds to several hours. There are two classes of gamma-ray bursts. Most (70 percent) are long bursts lasting more than two seconds, often with a bright afterglow. These are usually linked to galaxies with rapid star formation. Astronomers think that long bursts are tied to the deaths of massive stars collapsing to form a neutron star or black hole (or, alternatively, a newly formed magnetar). The baby black hole would produce jets of highly energetic particles moving near the speed of light, powerful enough to pierce through the remains of the progenitor star, emitting X-rays and gamma rays.

Those gamma-ray bursts lasting less than two seconds (about 30 percent) are deemed short bursts, usually emanating from regions with very little star formation. Astronomers think these gamma-ray bursts are the result of mergers between two neutron stars, or a neutron star merging with a black hole, producing a “kilonova.” That hypothesis was confirmed in 2017 when the LIGO collaboration picked up the gravitational wave signal of two neutron stars merging, accompanied by the powerful gamma-ray bursts associated with a kilonova.

Several papers were published last year reporting on the analytical results of all the observational data. Those findings confirmed that GRB 221009A was indeed the BOAT, appearing especially bright because its narrow jet was pointing directly at Earth. But the various analyses also yielded several surprising results that puzzled astronomers. Most notably, a supernova should have occurred a few weeks after the initial burst, but astronomers didn’t detect one, perhaps because it was very faint, and thick dust clouds in that part of the sky were dimming any incoming light.

Earlier this year, astronomers confirmed that the BOAT came from a supernova, thanks to the telltale signatures of key elements like calcium and oxygen that one would expect to find with a supernova. However, they did not find evidence of the expected heavy elements like platinum and gold, which bears on the longstanding question of the origin of such elements in the universe. The BOAT might just be special in that regard; further data will tell us more.

“It gave me goosebumps”

A few minutes after the BOAT erupted, Fermi’s Gamma-ray Burst Monitor recorded an unusual energy peak. Scientists now say this feature is the first high-confidence emission line ever seen in 50 years of studying GRBs.

The newly detected spectral emission line was likely caused by the collision of matter and anti-matter, according to the authors, producing a pair of gamma rays that are blue-shifted toward higher energies because we are looking into the jet. Having a spectral emission associated with a GRB is important because it can shed light on the specific chemicals involved in the interactions. There have been prior studies reporting possible evidence for absorption or emission lines in other GRBs, but they have usually turned out likely to be statistical noise.

That’s not the case with this latest detection, according to co-author Om Sharan Salafia at INAF-Brera Observatory in Milan, Italy, who added that the odds of this turning out to be a statistical fluctuation “are less than one chance in half a billion.” His INAF colleague and co-author, Maria Edvige Ravasio, said that when she first saw the signal, “it gave me goosebumps.”

Why did astronomers take so long to detect it? When the BOAT first erupted in 2022, it saturated most of the space-based gamma-ray detectors, including the Fermi Space Telescope, making them unable to measure the most intense part of that blast. The emission line didn’t appear until a good five minutes after the burst when it had sufficiently dimmed for Fermi to make a measurement. The spectral emission lasted for about 40 seconds and reached a peak energy of about 12 MeV, compared to the 2 or 3 electron volts of visible light, per the authors.
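A little arithmetic shows why that blueshift points to a fast jet. Electron-positron annihilation emits 0.511 MeV photons in the emitting gas’s own frame, so a line observed near 12 MeV implies a Doppler boost of a few tens. This is only a rough estimate using the commonly quoted redshift for this burst, not the paper’s full analysis.

```python
E_rest = 0.511     # MeV, electron-positron annihilation line
E_observed = 12.0  # MeV, reported peak energy of the BOAT's emission line
z = 0.15           # approximate redshift of GRB 221009A

doppler_factor = E_observed * (1 + z) / E_rest
print(f"implied Doppler boost ~ {doppler_factor:.0f}")  # about 27
```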

Science, 2024. DOI: 10.1126/science.adj3638  (About DOIs).

Scientists unlock more secrets of Rembrandt’s pigments in The Night Watch

More from operation night watch —

Use of arsenic sulfides for yellow, orange/red hues adds to artist’s known pigment palette.

Rembrandt’s The Night Watch, or Militia Company of District II under the Command of Captain Frans Banninck Cocq (1642), underwent many chemical and mechanical alterations over the last 400 years.

Public domain

Since 2019, researchers have been analyzing the chemical composition of the materials used to create Rembrandt’s masterpiece, The Night Watch, as part of the Rijksmuseum’s ongoing Operation Night Watch, devoted to its long-term preservation. Chemists at the Rijksmuseum and the University of Amsterdam have now detected unusual arsenic-based yellow and orange/red pigments used to paint the buff coat of one of the central figures in the painting, according to a recent paper in the journal Heritage Science. It’s a new addition to Rembrandt’s known pigment palette, expanding our knowledge of the materials he used.

As previously reported, past analyses of Rembrandt’s paintings identified many pigments the Dutch master used in his work, including lead white, multiple ochres, bone black, vermilion, madder lake, azurite, ultramarine, yellow lake, and lead-tin yellow, among others. The artist rarely used pure blue or green pigments, with Belshazzar’s Feast being a notable exception. (The Rembrandt Database is the best resource for a comprehensive chronicling of the many different investigative reports.)

Early last year, the researchers at Operation Night Watch found rare traces of a compound called lead formate in the painting—surprising in itself, but the team also identified those formates in areas where there was no lead pigment, white or yellow. It’s possible that lead formates disappear fairly quickly, which could explain why they have not been detected in paintings by the Dutch Masters until now. But if that is the case, why didn’t the lead formate disappear in The Night Watch? And where did it come from in the first place?

Hoping to answer these questions, the team whipped up a model of “cooked oils” from a 17th-century recipe and analyzed those model oils with synchrotron radiation. The results supported their hypothesis that the oil used for light parts of the painting was treated with an alkaline lead drier. The fact that The Night Watch was revarnished with an oil-based varnish in the 18th century complicates matters, as this may have provided a fresh source of formic acid, such that different regions of the painting rich in lead formates may have formed at different times in the painting’s history.

Last December, the team turned its attention to the preparatory layers applied to the canvas. It’s known that Rembrandt used a quartz-clay ground for The Night Watch—the first time he had done so, perhaps because the colossal size of the painting “motivated him to look for a cheaper, less heavy and more flexible alternative for the ground layer” than the red earth, lead white, and cerussite he was known to use on earlier paintings.

(a) Rembrandt’s The Night Watch. (b) Detail of figure’s embroidered gold buff coat. (c) X-ray diffraction image of coat detail showing arsenic. (d) Stereomicroscope image showing arsenic hot spot.

N. De Keyser et al., 2024

They used 3D X-ray methods to capture more detail, revealing the presence of an unknown (and unexpected) lead-containing layer located just underneath the ground layer. This could be due to using a lead compound added to the oil used to prepare the canvas as a drying additive—perhaps to protect the painting from the damaging effects of humidity. (Usually a glue sizing was used before applying the ground layer.) The lead layer discovered last year could be the reason for the unusual lead protrusions in areas of The Night Watch, since there are no other lead-containing compounds in the paint. It’s possible that lead migrated into the painting’s ground layer from the lead-oil preparatory layer below.

An intentional combination

The presence of arsenic sulfides in The Night Watch appears to be an intentional pigment combination by Rembrandt, according to the authors of this latest paper. Artists throughout history have used naturally occurring orpiment and realgar, as well as artificial arsenic sulfide pigments, to get yellow, orange, and red hues in their paints. Orpiment was also used for medicinal purposes, in hair removal creams and oils, in wax seals, yellow ink, bookbinder green (mixed with indigo), and for the treatment or coating of metals like silver.

However, the use of artificial arsenic sulfides has rarely been reported in artworks, although they are mentioned in multiple artists’ treatises dating back to the 15th century. Earlier work using advanced analytical techniques such as Raman spectroscopy and X-ray powder diffraction revealed that Rembrandt used arsenic sulfide pigments (artificial orpiment) in two late paintings: The Jewish Bride (c 1665) and The Man in a Red Cap (c 1665).

For this latest work, Nouchka De Keyser of the Rijksmuseum and co-authors used macroscopic X-ray fluorescence imaging to map The Night Watch, which revealed the presence of arsenic and sulfur in the doublet sleeves and embroidered buff coat worn by Lt. Willem Van Ruytenburch, i.e., the central figure to the right of Captain Frans Banninck Cocq in the painting. The researchers initially assumed that this was due to Rembrandt’s use of orpiment for yellow hues and realgar for red hues.

(a, b) Pages from Johann Kunckel’s Ars Vitraria Experimentalis, 1679. (c) Page from the Weimar taxa of 1674 including prices for white, yellow, and red arsenic.

N. De Keyser et al., 2024

To learn more, they took tiny samples and analyzed them with light microscopy, micro-Raman spectroscopy, electron microscopy, and X-ray powder diffraction. They found the yellow particles were actually pararealgar while the orange to red particles were semi-amorphous pararealgar. These are more unusual arsenic sulfide components, typically associated with degradation products from either the natural minerals or their artificial equivalents as they age.

But De Keyser et al. concluded that the presence of these components was actually an intentional mixture, based on their perusal of multiple historical sources and catalogs of collection cabinets with long lists of various arsenic sulfides. There was clearly contemporary knowledge of manipulating both natural and artificial arsenic sulfides to get different shades of yellow, orange, and red.

They also found vermilion and lead-tin yellow in the paint mixture; Rembrandt was known to use these to add brightness and intensity to his paintings. In the case of The Night Watch, “Rembrandt clearly aimed for a bright orange tone with a high color strength that allowed him to create an illusion of the gold thread embroidery in Van Ruytenburch’s costume,” the authors wrote. “The artificial orange to red arsenic sulfide might have offered different optical and rheological paint properties as compared to the mineral form of orpiment and realgar.”

In addition, the team examined paint samples from different artists known to use arsenic sulfides—whose works are also part of the Rijksmuseum collection—and found a similar mixture of pigments in a painting by Rembrandt’s contemporary, Willem Kalf. “It is evidence that a variety of natural and artificial arsenic sulfides were manufactured and traded during Rembrandt’s time and were available in Amsterdam,” the authors wrote—most likely imported, since the Dutch Republic did not have considerable mining resources.

Heritage Science, 2024. DOI: 10.1186/s40494-024-01350-x  (About DOIs).

Researchers build ultralight drone that flies with onboard solar

Where does it go? It goes up! —

Bizarre design uses a solar-powered motor that’s optimized for weight.

The CoulombFly doing its thing: from top to bottom, a propeller, a large cylinder with metallic panels, a stalk, and a flat slab with solar panels and electronics.

On Wednesday, researchers reported that they had developed a drone they’re calling the CoulombFly, which is capable of self-powered hovering for as long as the Sun is shining. The drone, which is shaped like no aerial vehicle you’ve ever seen before, combines solar cells, a voltage converter, and an electrostatic motor to drive a helicopter-like propeller—with all components having been optimized for a balance of efficiency and light weight.

Before people get excited about buying one, the list of caveats is extensive. There’s no onboard control hardware, and the drone isn’t capable of directed flight anyway, meaning it would drift on the breeze if ever set loose outdoors. Lots of the components appear quite fragile, as well. However, the design can be miniaturized, and the researchers built a version that weighs only 9 milligrams.

Built around a motor

One key to this development was the researchers’ recognition that most drones use electromagnetic motors, which involve lots of metal coils that add significant weight to any system. So, the team behind the work decided to focus on developing a lightweight electrostatic motor. These rely on charge attraction and repulsion to power the motor, as opposed to magnetic interactions.

The motor the researchers developed is quite large relative to the size of the drone. It consists of an inner ring of stationary charged plates called the stator. These plates are composed of a thin carbon-fiber plate covered in aluminum foil. When in operation, neighboring plates have opposite charges. A ring of 64 rotating plates surrounds that.

The motor starts operating when the plates in the outer ring are charged. Since one of the nearby plates on the stator will be guaranteed to have the opposite charge, the pull will start the rotating ring turning. When the plates of the stator and rotor reach their closest approach, thin wires will make contact, allowing charges to transfer between them. This ensures that the stator and rotor plates now have the same charge, converting the attraction to a repulsion. This keeps the rotor moving, and guarantees that the rotor’s plate now has the opposite charge from the next stator plate down the line.

These systems typically require very little in the way of amperage to operate. But they do require a large voltage difference between the plates (something we’ll come back to).

When hooked up to a 10-centimeter, eight-bladed propeller, the system could produce a maximum lift of 5.8 grams. This gave the researchers clear weight targets when designing the remaining components.

Ready to hover

The solar power cells were made of a thin film of gallium arsenide, which is far more expensive than other photovoltaic materials, but offers a higher efficiency (30 percent conversion compared to numbers that are typically in the mid-20s). This tends to provide the opposite of what the system needs: reasonable current at a relatively low voltage. So, the system also needed a high-voltage power converter.

Here, the researchers sacrificed efficiency for low weight, arranging a bunch of voltage converters in series to create a system that weighs just 1.13 grams, but steps the voltage up from 4.5 V all the way to 9.0 kV. But it does so with a power conversion efficiency of just 24 percent.

The resulting CoulombFly is dominated by the large cylindrical motor, which is topped by the propeller. Suspended below that is a platform with the solar cells on one side, balanced out by the long, thin power converter on the other.

Meet the CoulombFly.

To test their system, the researchers simply opened a window on a sunny day in Beijing. Starting at noon, the drone took off and hovered for over an hour, and all indications are that it would have continued to do so for as long as the sunlight provided enough power.

The total system required just over half a watt of power to stay aloft. Given a total mass of 4 grams, that works out to a lift-to-power efficiency of 7.6 grams per watt. But a lot of that power is lost during the voltage conversion. If you focus on the motor alone, it only requires 0.14 watts, giving it a lift-to-power efficiency of over 30 grams per watt.
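Rearranging the quoted figures gives a quick power budget, showing that the voltage converter eats most of the harvested power before it reaches the motor. This is just arithmetic on the numbers reported above.

```python
mass_g = 4.0
lift_per_watt_system = 7.6   # g/W for the whole vehicle, as reported
converter_efficiency = 0.24  # reported power-conversion efficiency

system_power_W = mass_g / lift_per_watt_system         # ~0.53 W total draw
motor_power_W = system_power_W * converter_efficiency  # ~0.13 W, close to the quoted 0.14 W
print(f"total draw ~ {system_power_W:.2f} W, motor gets ~ {motor_power_W:.2f} W")
```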

The researchers provide a long list of things they could do to optimize the design, including increasing the motor’s torque and propeller’s lift, placing the solar cells on structural components, and boosting the efficiency of the voltage converter. But one thing they don’t have to optimize is the vehicle’s size since they already built a miniaturized version that’s only 8 millimeters high and weighs just 9 milligrams but is able to generate a milliwatt of power that turns its propeller at over 15,000 rpm.

Again, all this is done without any onboard control circuitry or the hardware needed to move the machine anywhere—they’re basically flying these in cages to keep them from wandering off on the breeze. But there seems to be enough leeway in the weight that some additional hardware should be possible, especially if they manage some of the potential optimizations they mentioned.

Nature, 2024. DOI: 10.1038/s41586-024-07609-4  (About DOIs).
